Oct 02 07:12:03 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 02 07:12:03 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 02 07:12:03 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 07:12:03 localhost kernel: BIOS-provided physical RAM map:
Oct 02 07:12:03 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 02 07:12:03 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 02 07:12:03 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 02 07:12:03 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 02 07:12:03 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 02 07:12:03 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 02 07:12:03 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 02 07:12:03 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 02 07:12:03 localhost kernel: NX (Execute Disable) protection: active
Oct 02 07:12:03 localhost kernel: APIC: Static calls initialized
Oct 02 07:12:03 localhost kernel: SMBIOS 2.8 present.
Oct 02 07:12:03 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 02 07:12:03 localhost kernel: Hypervisor detected: KVM
Oct 02 07:12:03 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 02 07:12:03 localhost kernel: kvm-clock: using sched offset of 4481226092 cycles
Oct 02 07:12:03 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 02 07:12:03 localhost kernel: tsc: Detected 2799.886 MHz processor
Oct 02 07:12:03 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 02 07:12:03 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 02 07:12:03 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 02 07:12:03 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 02 07:12:03 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 02 07:12:03 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 02 07:12:03 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 02 07:12:03 localhost kernel: Using GB pages for direct mapping
Oct 02 07:12:03 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 02 07:12:03 localhost kernel: ACPI: Early table checksum verification disabled
Oct 02 07:12:03 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 02 07:12:03 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 07:12:03 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 07:12:03 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 07:12:03 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 02 07:12:03 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 07:12:03 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 07:12:03 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 02 07:12:03 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 02 07:12:03 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 02 07:12:03 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 02 07:12:03 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 02 07:12:03 localhost kernel: No NUMA configuration found
Oct 02 07:12:03 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 02 07:12:03 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 02 07:12:03 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 02 07:12:03 localhost kernel: Zone ranges:
Oct 02 07:12:03 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 02 07:12:03 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 02 07:12:03 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 07:12:03 localhost kernel:   Device   empty
Oct 02 07:12:03 localhost kernel: Movable zone start for each node
Oct 02 07:12:03 localhost kernel: Early memory node ranges
Oct 02 07:12:03 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 02 07:12:03 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 02 07:12:03 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 07:12:03 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 02 07:12:03 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 02 07:12:03 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 02 07:12:03 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 02 07:12:03 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 02 07:12:03 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 02 07:12:03 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 02 07:12:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 02 07:12:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 02 07:12:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 02 07:12:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 02 07:12:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 02 07:12:03 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 02 07:12:03 localhost kernel: TSC deadline timer available
Oct 02 07:12:03 localhost kernel: CPU topo: Max. logical packages:   8
Oct 02 07:12:03 localhost kernel: CPU topo: Max. logical dies:       8
Oct 02 07:12:03 localhost kernel: CPU topo: Max. dies per package:   1
Oct 02 07:12:03 localhost kernel: CPU topo: Max. threads per core:   1
Oct 02 07:12:03 localhost kernel: CPU topo: Num. cores per package:     1
Oct 02 07:12:03 localhost kernel: CPU topo: Num. threads per package:   1
Oct 02 07:12:03 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 02 07:12:03 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 02 07:12:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 02 07:12:03 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 02 07:12:03 localhost kernel: Booting paravirtualized kernel on KVM
Oct 02 07:12:03 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 02 07:12:03 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 02 07:12:03 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 02 07:12:03 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 02 07:12:03 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 02 07:12:03 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 02 07:12:03 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 07:12:03 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 02 07:12:03 localhost kernel: random: crng init done
Oct 02 07:12:03 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 02 07:12:03 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 02 07:12:03 localhost kernel: Fallback order for Node 0: 0 
Oct 02 07:12:03 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 02 07:12:03 localhost kernel: Policy zone: Normal
Oct 02 07:12:03 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 02 07:12:03 localhost kernel: software IO TLB: area num 8.
Oct 02 07:12:03 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 02 07:12:03 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 02 07:12:03 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 02 07:12:03 localhost kernel: Dynamic Preempt: voluntary
Oct 02 07:12:03 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 02 07:12:03 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 02 07:12:03 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 02 07:12:03 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 02 07:12:03 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 02 07:12:03 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 02 07:12:03 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 02 07:12:03 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 02 07:12:03 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 07:12:03 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 07:12:03 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 07:12:03 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 02 07:12:03 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 02 07:12:03 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 02 07:12:03 localhost kernel: Console: colour VGA+ 80x25
Oct 02 07:12:03 localhost kernel: printk: console [ttyS0] enabled
Oct 02 07:12:03 localhost kernel: ACPI: Core revision 20230331
Oct 02 07:12:03 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 02 07:12:03 localhost kernel: x2apic enabled
Oct 02 07:12:03 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 02 07:12:03 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 02 07:12:03 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.77 BogoMIPS (lpj=2799886)
Oct 02 07:12:03 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 02 07:12:03 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 02 07:12:03 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 02 07:12:03 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 02 07:12:03 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 02 07:12:03 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 02 07:12:03 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 02 07:12:03 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 02 07:12:03 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 02 07:12:03 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 02 07:12:03 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 02 07:12:03 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 02 07:12:03 localhost kernel: x86/bugs: return thunk changed
Oct 02 07:12:03 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 02 07:12:03 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 02 07:12:03 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 02 07:12:03 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 02 07:12:03 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 02 07:12:03 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 02 07:12:03 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 02 07:12:03 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 02 07:12:03 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 02 07:12:03 localhost kernel: landlock: Up and running.
Oct 02 07:12:03 localhost kernel: Yama: becoming mindful.
Oct 02 07:12:03 localhost kernel: SELinux:  Initializing.
Oct 02 07:12:03 localhost kernel: LSM support for eBPF active
Oct 02 07:12:03 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 07:12:03 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 07:12:03 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 02 07:12:03 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 02 07:12:03 localhost kernel: ... version:                0
Oct 02 07:12:03 localhost kernel: ... bit width:              48
Oct 02 07:12:03 localhost kernel: ... generic registers:      6
Oct 02 07:12:03 localhost kernel: ... value mask:             0000ffffffffffff
Oct 02 07:12:03 localhost kernel: ... max period:             00007fffffffffff
Oct 02 07:12:03 localhost kernel: ... fixed-purpose events:   0
Oct 02 07:12:03 localhost kernel: ... event mask:             000000000000003f
Oct 02 07:12:03 localhost kernel: signal: max sigframe size: 1776
Oct 02 07:12:03 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 02 07:12:03 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 02 07:12:03 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 02 07:12:03 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 02 07:12:03 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 02 07:12:03 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 02 07:12:03 localhost kernel: smpboot: Total of 8 processors activated (44798.17 BogoMIPS)
Oct 02 07:12:03 localhost kernel: node 0 deferred pages initialised in 23ms
Oct 02 07:12:03 localhost kernel: Memory: 7765540K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct 02 07:12:03 localhost kernel: devtmpfs: initialized
Oct 02 07:12:03 localhost kernel: x86/mm: Memory block size: 128MB
Oct 02 07:12:03 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 02 07:12:03 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 02 07:12:03 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 02 07:12:03 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 02 07:12:03 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 02 07:12:03 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 02 07:12:03 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 02 07:12:03 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 02 07:12:03 localhost kernel: audit: type=2000 audit(1759389121.466:1): state=initialized audit_enabled=0 res=1
Oct 02 07:12:03 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 02 07:12:03 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 02 07:12:03 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 02 07:12:03 localhost kernel: cpuidle: using governor menu
Oct 02 07:12:03 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 02 07:12:03 localhost kernel: PCI: Using configuration type 1 for base access
Oct 02 07:12:03 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 02 07:12:03 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 02 07:12:03 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 02 07:12:03 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 02 07:12:03 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 02 07:12:03 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 02 07:12:03 localhost kernel: Demotion targets for Node 0: null
Oct 02 07:12:03 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 02 07:12:03 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 02 07:12:03 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 02 07:12:03 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 02 07:12:03 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 02 07:12:03 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 02 07:12:03 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 02 07:12:03 localhost kernel: ACPI: Interpreter enabled
Oct 02 07:12:03 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 02 07:12:03 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 02 07:12:03 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 02 07:12:03 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 02 07:12:03 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 02 07:12:03 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 02 07:12:03 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [3] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [4] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [5] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [6] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [7] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [8] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [9] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [10] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [11] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [12] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [13] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [14] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [15] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [16] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [17] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [18] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [19] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [20] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [21] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [22] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [23] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [24] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [25] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [26] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [27] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [28] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [29] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [30] registered
Oct 02 07:12:03 localhost kernel: acpiphp: Slot [31] registered
Oct 02 07:12:03 localhost kernel: PCI host bridge to bus 0000:00
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 02 07:12:03 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 02 07:12:03 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 02 07:12:03 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 02 07:12:03 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 02 07:12:03 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 02 07:12:03 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 02 07:12:03 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 02 07:12:03 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 02 07:12:03 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 02 07:12:03 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 02 07:12:03 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 07:12:03 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 02 07:12:03 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 02 07:12:03 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 02 07:12:03 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 02 07:12:03 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 02 07:12:03 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 02 07:12:03 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 02 07:12:03 localhost kernel: iommu: Default domain type: Translated
Oct 02 07:12:03 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 02 07:12:03 localhost kernel: SCSI subsystem initialized
Oct 02 07:12:03 localhost kernel: ACPI: bus type USB registered
Oct 02 07:12:03 localhost kernel: usbcore: registered new interface driver usbfs
Oct 02 07:12:03 localhost kernel: usbcore: registered new interface driver hub
Oct 02 07:12:03 localhost kernel: usbcore: registered new device driver usb
Oct 02 07:12:03 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 02 07:12:03 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 02 07:12:03 localhost kernel: PTP clock support registered
Oct 02 07:12:03 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 02 07:12:03 localhost kernel: NetLabel: Initializing
Oct 02 07:12:03 localhost kernel: NetLabel:  domain hash size = 128
Oct 02 07:12:03 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 02 07:12:03 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 02 07:12:03 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 02 07:12:03 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 02 07:12:03 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 02 07:12:03 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 02 07:12:03 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 02 07:12:03 localhost kernel: vgaarb: loaded
Oct 02 07:12:03 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 02 07:12:03 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 02 07:12:03 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 02 07:12:03 localhost kernel: pnp: PnP ACPI init
Oct 02 07:12:03 localhost kernel: pnp 00:03: [dma 2]
Oct 02 07:12:03 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 02 07:12:03 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 02 07:12:03 localhost kernel: NET: Registered PF_INET protocol family
Oct 02 07:12:03 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 02 07:12:03 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 02 07:12:03 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 02 07:12:03 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 02 07:12:03 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 02 07:12:03 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 02 07:12:03 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 02 07:12:03 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 07:12:03 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 07:12:03 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 02 07:12:03 localhost kernel: NET: Registered PF_XDP protocol family
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 02 07:12:03 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 02 07:12:03 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 02 07:12:03 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 02 07:12:03 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 73431 usecs
Oct 02 07:12:03 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 02 07:12:03 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 02 07:12:03 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 02 07:12:03 localhost kernel: ACPI: bus type thunderbolt registered
Oct 02 07:12:03 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 02 07:12:03 localhost kernel: Initialise system trusted keyrings
Oct 02 07:12:03 localhost kernel: Key type blacklist registered
Oct 02 07:12:03 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 02 07:12:03 localhost kernel: zbud: loaded
Oct 02 07:12:03 localhost kernel: integrity: Platform Keyring initialized
Oct 02 07:12:03 localhost kernel: integrity: Machine keyring initialized
Oct 02 07:12:03 localhost kernel: Freeing initrd memory: 86104K
Oct 02 07:12:03 localhost kernel: NET: Registered PF_ALG protocol family
Oct 02 07:12:03 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 02 07:12:03 localhost kernel: Key type asymmetric registered
Oct 02 07:12:03 localhost kernel: Asymmetric key parser 'x509' registered
Oct 02 07:12:03 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 02 07:12:03 localhost kernel: io scheduler mq-deadline registered
Oct 02 07:12:03 localhost kernel: io scheduler kyber registered
Oct 02 07:12:03 localhost kernel: io scheduler bfq registered
Oct 02 07:12:03 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 02 07:12:03 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 02 07:12:03 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 02 07:12:03 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 02 07:12:03 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 02 07:12:03 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 02 07:12:03 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 02 07:12:03 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 02 07:12:03 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 02 07:12:03 localhost kernel: Non-volatile memory driver v1.3
Oct 02 07:12:03 localhost kernel: rdac: device handler registered
Oct 02 07:12:03 localhost kernel: hp_sw: device handler registered
Oct 02 07:12:03 localhost kernel: emc: device handler registered
Oct 02 07:12:03 localhost kernel: alua: device handler registered
Oct 02 07:12:03 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 02 07:12:03 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 02 07:12:03 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 02 07:12:03 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 02 07:12:03 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 02 07:12:03 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 02 07:12:03 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 02 07:12:03 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 02 07:12:03 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 02 07:12:03 localhost kernel: hub 1-0:1.0: USB hub found
Oct 02 07:12:03 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 02 07:12:03 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 02 07:12:03 localhost kernel: usbserial: USB Serial support registered for generic
Oct 02 07:12:03 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 02 07:12:03 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 02 07:12:03 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 02 07:12:03 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 02 07:12:03 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 02 07:12:03 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 02 07:12:03 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 02 07:12:03 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T07:12:02 UTC (1759389122)
Oct 02 07:12:03 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 02 07:12:03 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 02 07:12:03 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 02 07:12:03 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 02 07:12:03 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 02 07:12:03 localhost kernel: usbcore: registered new interface driver usbhid
Oct 02 07:12:03 localhost kernel: usbhid: USB HID core driver
Oct 02 07:12:03 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 02 07:12:03 localhost kernel: Initializing XFRM netlink socket
Oct 02 07:12:03 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 02 07:12:03 localhost kernel: Segment Routing with IPv6
Oct 02 07:12:03 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 02 07:12:03 localhost kernel: mpls_gso: MPLS GSO support
Oct 02 07:12:03 localhost kernel: IPI shorthand broadcast: enabled
Oct 02 07:12:03 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 02 07:12:03 localhost kernel: AES CTR mode by8 optimization enabled
Oct 02 07:12:03 localhost kernel: sched_clock: Marking stable (1205003172, 143160197)->(1470572017, -122408648)
Oct 02 07:12:03 localhost kernel: registered taskstats version 1
Oct 02 07:12:03 localhost kernel: Loading compiled-in X.509 certificates
Oct 02 07:12:03 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 07:12:03 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 02 07:12:03 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 02 07:12:03 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 02 07:12:03 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 02 07:12:03 localhost kernel: Demotion targets for Node 0: null
Oct 02 07:12:03 localhost kernel: page_owner is disabled
Oct 02 07:12:03 localhost kernel: Key type .fscrypt registered
Oct 02 07:12:03 localhost kernel: Key type fscrypt-provisioning registered
Oct 02 07:12:03 localhost kernel: Key type big_key registered
Oct 02 07:12:03 localhost kernel: Key type encrypted registered
Oct 02 07:12:03 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 02 07:12:03 localhost kernel: Loading compiled-in module X.509 certificates
Oct 02 07:12:03 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 07:12:03 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 02 07:12:03 localhost kernel: ima: No architecture policies found
Oct 02 07:12:03 localhost kernel: evm: Initialising EVM extended attributes:
Oct 02 07:12:03 localhost kernel: evm: security.selinux
Oct 02 07:12:03 localhost kernel: evm: security.SMACK64 (disabled)
Oct 02 07:12:03 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 02 07:12:03 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 02 07:12:03 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 02 07:12:03 localhost kernel: evm: security.apparmor (disabled)
Oct 02 07:12:03 localhost kernel: evm: security.ima
Oct 02 07:12:03 localhost kernel: evm: security.capability
Oct 02 07:12:03 localhost kernel: evm: HMAC attrs: 0x1
Oct 02 07:12:03 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 02 07:12:03 localhost kernel: Running certificate verification RSA selftest
Oct 02 07:12:03 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 02 07:12:03 localhost kernel: Running certificate verification ECDSA selftest
Oct 02 07:12:03 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 02 07:12:03 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 02 07:12:03 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 02 07:12:03 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 02 07:12:03 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 02 07:12:03 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 02 07:12:03 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 02 07:12:03 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 02 07:12:03 localhost kernel: clk: Disabling unused clocks
Oct 02 07:12:03 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 02 07:12:03 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 02 07:12:03 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 02 07:12:03 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 02 07:12:03 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 02 07:12:03 localhost kernel: Run /init as init process
Oct 02 07:12:03 localhost kernel:   with arguments:
Oct 02 07:12:03 localhost kernel:     /init
Oct 02 07:12:03 localhost kernel:   with environment:
Oct 02 07:12:03 localhost kernel:     HOME=/
Oct 02 07:12:03 localhost kernel:     TERM=linux
Oct 02 07:12:03 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 02 07:12:03 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 07:12:03 localhost systemd[1]: Detected virtualization kvm.
Oct 02 07:12:03 localhost systemd[1]: Detected architecture x86-64.
Oct 02 07:12:03 localhost systemd[1]: Running in initrd.
Oct 02 07:12:03 localhost systemd[1]: No hostname configured, using default hostname.
Oct 02 07:12:03 localhost systemd[1]: Hostname set to <localhost>.
Oct 02 07:12:03 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 02 07:12:03 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 02 07:12:03 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 07:12:03 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 07:12:03 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 02 07:12:03 localhost systemd[1]: Reached target Local File Systems.
Oct 02 07:12:03 localhost systemd[1]: Reached target Path Units.
Oct 02 07:12:03 localhost systemd[1]: Reached target Slice Units.
Oct 02 07:12:03 localhost systemd[1]: Reached target Swaps.
Oct 02 07:12:03 localhost systemd[1]: Reached target Timer Units.
Oct 02 07:12:03 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 07:12:03 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 02 07:12:03 localhost systemd[1]: Listening on Journal Socket.
Oct 02 07:12:03 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 07:12:03 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 07:12:03 localhost systemd[1]: Reached target Socket Units.
Oct 02 07:12:03 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 07:12:03 localhost systemd[1]: Starting Journal Service...
Oct 02 07:12:03 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 07:12:03 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 07:12:03 localhost systemd[1]: Starting Create System Users...
Oct 02 07:12:03 localhost systemd[1]: Starting Setup Virtual Console...
Oct 02 07:12:03 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 07:12:03 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 07:12:03 localhost systemd-journald[311]: Journal started
Oct 02 07:12:03 localhost systemd-journald[311]: Runtime Journal (/run/log/journal/8d603fb8984b47fa8e454729d5224ba1) is 8.0M, max 153.5M, 145.5M free.
Oct 02 07:12:03 localhost systemd-sysusers[314]: Creating group 'users' with GID 100.
Oct 02 07:12:03 localhost systemd[1]: Started Journal Service.
Oct 02 07:12:03 localhost systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Oct 02 07:12:03 localhost systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 02 07:12:03 localhost systemd[1]: Finished Create System Users.
Oct 02 07:12:03 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 07:12:03 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 07:12:03 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 07:12:04 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 07:12:04 localhost systemd[1]: Finished Setup Virtual Console.
Oct 02 07:12:04 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 02 07:12:04 localhost systemd[1]: Starting dracut cmdline hook...
Oct 02 07:12:04 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Oct 02 07:12:04 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 07:12:04 localhost systemd[1]: Finished dracut cmdline hook.
Oct 02 07:12:04 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 02 07:12:04 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 02 07:12:04 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 02 07:12:04 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 02 07:12:04 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 02 07:12:04 localhost kernel: RPC: Registered udp transport module.
Oct 02 07:12:04 localhost kernel: RPC: Registered tcp transport module.
Oct 02 07:12:04 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 02 07:12:04 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 02 07:12:04 localhost rpc.statd[446]: Version 2.5.4 starting
Oct 02 07:12:04 localhost rpc.statd[446]: Initializing NSM state
Oct 02 07:12:04 localhost rpc.idmapd[451]: Setting log level to 0
Oct 02 07:12:04 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 02 07:12:04 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 07:12:04 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 07:12:04 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 07:12:04 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 02 07:12:04 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 02 07:12:04 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 07:12:04 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 02 07:12:04 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 07:12:04 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 07:12:04 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 07:12:04 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 07:12:04 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 07:12:04 localhost systemd[1]: Reached target Network.
Oct 02 07:12:04 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 07:12:04 localhost systemd[1]: Starting dracut initqueue hook...
Oct 02 07:12:04 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 02 07:12:04 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 02 07:12:04 localhost systemd[1]: Reached target System Initialization.
Oct 02 07:12:04 localhost systemd[1]: Reached target Basic System.
Oct 02 07:12:04 localhost kernel: libata version 3.00 loaded.
Oct 02 07:12:04 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 02 07:12:04 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 02 07:12:04 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 02 07:12:04 localhost kernel:  vda: vda1
Oct 02 07:12:04 localhost kernel: scsi host0: ata_piix
Oct 02 07:12:04 localhost kernel: scsi host1: ata_piix
Oct 02 07:12:04 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 02 07:12:04 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 02 07:12:04 localhost systemd-udevd[487]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 07:12:05 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 07:12:05 localhost systemd[1]: Reached target Initrd Root Device.
Oct 02 07:12:05 localhost kernel: ata1: found unknown device (class 0)
Oct 02 07:12:05 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 02 07:12:05 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 02 07:12:05 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 02 07:12:05 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 02 07:12:05 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 02 07:12:05 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 02 07:12:05 localhost systemd[1]: Finished dracut initqueue hook.
Oct 02 07:12:05 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 07:12:05 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 02 07:12:05 localhost systemd[1]: Reached target Remote File Systems.
Oct 02 07:12:05 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 02 07:12:05 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 02 07:12:05 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 02 07:12:05 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct 02 07:12:05 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 07:12:05 localhost systemd[1]: Mounting /sysroot...
Oct 02 07:12:05 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 02 07:12:05 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 02 07:12:05 localhost kernel: XFS (vda1): Ending clean mount
Oct 02 07:12:05 localhost systemd[1]: Mounted /sysroot.
Oct 02 07:12:06 localhost systemd[1]: Reached target Initrd Root File System.
Oct 02 07:12:06 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 02 07:12:06 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 02 07:12:06 localhost systemd[1]: Reached target Initrd File Systems.
Oct 02 07:12:06 localhost systemd[1]: Reached target Initrd Default Target.
Oct 02 07:12:06 localhost systemd[1]: Starting dracut mount hook...
Oct 02 07:12:06 localhost systemd[1]: Finished dracut mount hook.
Oct 02 07:12:06 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 02 07:12:06 localhost rpc.idmapd[451]: exiting on signal 15
Oct 02 07:12:06 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 02 07:12:06 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 02 07:12:06 localhost systemd[1]: Stopped target Network.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Timer Units.
Oct 02 07:12:06 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 02 07:12:06 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Basic System.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Path Units.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Remote File Systems.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Slice Units.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Socket Units.
Oct 02 07:12:06 localhost systemd[1]: Stopped target System Initialization.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Local File Systems.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Swaps.
Oct 02 07:12:06 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped dracut mount hook.
Oct 02 07:12:06 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 02 07:12:06 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 02 07:12:06 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 02 07:12:06 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 02 07:12:06 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 02 07:12:06 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 02 07:12:06 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 02 07:12:06 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 02 07:12:06 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 02 07:12:06 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 02 07:12:06 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 02 07:12:06 localhost systemd[1]: systemd-udevd.service: Consumed 1.001s CPU time.
Oct 02 07:12:06 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Closed udev Control Socket.
Oct 02 07:12:06 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Closed udev Kernel Socket.
Oct 02 07:12:06 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 02 07:12:06 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 02 07:12:06 localhost systemd[1]: Starting Cleanup udev Database...
Oct 02 07:12:06 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 02 07:12:06 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 02 07:12:06 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Stopped Create System Users.
Oct 02 07:12:06 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 02 07:12:06 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 02 07:12:06 localhost systemd[1]: Finished Cleanup udev Database.
Oct 02 07:12:06 localhost systemd[1]: Reached target Switch Root.
Oct 02 07:12:06 localhost systemd[1]: Starting Switch Root...
Oct 02 07:12:06 localhost systemd[1]: Switching root.
Oct 02 07:12:06 localhost systemd-journald[311]: Journal stopped
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:23:38 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:23:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.986 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'env', 'PROCESS_TAG=haproxy-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8aeabca-6b5c-477a-9156-9f9592c20b93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:23:38 compute-0 nova_compute[260603]: 2025-10-02 08:23:38.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:39 compute-0 podman[293032]: 2025-10-02 08:23:39.382697377 +0000 UTC m=+0.066013641 container create 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:23:39 compute-0 systemd[1]: Started libpod-conmon-002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952.scope.
Oct 02 08:23:39 compute-0 podman[293032]: 2025-10-02 08:23:39.34278165 +0000 UTC m=+0.026097924 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:23:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc34d9a9787cb780b9ef5d6332cf0d5803f416d2f51f980b7daa992b30f3c9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:39 compute-0 podman[293032]: 2025-10-02 08:23:39.483126665 +0000 UTC m=+0.166442929 container init 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:23:39 compute-0 podman[293032]: 2025-10-02 08:23:39.488784847 +0000 UTC m=+0.172101111 container start 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:23:39 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [NOTICE]   (293051) : New worker (293053) forked
Oct 02 08:23:39 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [NOTICE]   (293051) : Loading success.
Oct 02 08:23:39 compute-0 nova_compute[260603]: 2025-10-02 08:23:39.695 2 DEBUG nova.compute.manager [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:39 compute-0 nova_compute[260603]: 2025-10-02 08:23:39.695 2 DEBUG oslo_concurrency.lockutils [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:39 compute-0 nova_compute[260603]: 2025-10-02 08:23:39.696 2 DEBUG oslo_concurrency.lockutils [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:39 compute-0 nova_compute[260603]: 2025-10-02 08:23:39.696 2 DEBUG oslo_concurrency.lockutils [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:39 compute-0 nova_compute[260603]: 2025-10-02 08:23:39.696 2 DEBUG nova.compute.manager [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Processing event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:23:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 02 08:23:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 02 08:23:40 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 02 08:23:40 compute-0 ceph-mon[74477]: pgmap v1224: 305 pgs: 305 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.1 MiB/s wr, 185 op/s
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.508 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393420.5076969, 7c5d0818-0647-43a7-aaa1-b875b8b8424d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.509 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] VM Started (Lifecycle Event)
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.511 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.516 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.521 2 INFO nova.virt.libvirt.driver [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Instance spawned successfully.
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.521 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.546 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.551 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.561 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.562 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.562 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.563 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.564 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.565 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.601 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.601 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393420.5089803, 7c5d0818-0647-43a7-aaa1-b875b8b8424d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.602 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] VM Paused (Lifecycle Event)
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.633 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.637 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393420.5157433, 7c5d0818-0647-43a7-aaa1-b875b8b8424d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.638 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] VM Resumed (Lifecycle Event)
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.649 2 INFO nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Took 8.26 seconds to spawn the instance on the hypervisor.
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.649 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.660 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.664 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.695 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.720 2 INFO nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Took 9.14 seconds to build instance.
Oct 02 08:23:40 compute-0 nova_compute[260603]: 2025-10-02 08:23:40.737 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.6 MiB/s wr, 123 op/s
Oct 02 08:23:41 compute-0 podman[293104]: 2025-10-02 08:23:41.010225622 +0000 UTC m=+0.077749128 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:23:41 compute-0 ceph-mon[74477]: osdmap e149: 3 total, 3 up, 3 in
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.142 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.142 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.143 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.143 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.143 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.145 2 INFO nova.compute.manager [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Terminating instance
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.146 2 DEBUG nova.compute.manager [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:23:41 compute-0 kernel: tap368af35e-e9 (unregistering): left promiscuous mode
Oct 02 08:23:41 compute-0 NetworkManager[45129]: <info>  [1759393421.1931] device (tap368af35e-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 ovn_controller[152344]: 2025-10-02T08:23:41Z|00161|binding|INFO|Releasing lport 368af35e-e91a-4677-ac5f-ee6169dbd8f0 from this chassis (sb_readonly=0)
Oct 02 08:23:41 compute-0 ovn_controller[152344]: 2025-10-02T08:23:41Z|00162|binding|INFO|Setting lport 368af35e-e91a-4677-ac5f-ee6169dbd8f0 down in Southbound
Oct 02 08:23:41 compute-0 ovn_controller[152344]: 2025-10-02T08:23:41Z|00163|binding|INFO|Removing iface tap368af35e-e9 ovn-installed in OVS
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.227 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:03:c3 10.100.0.5'], port_security=['fa:16:3e:36:03:c3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '284aaef3-b320-49e2-a541-b17eb4eb208f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=368af35e-e91a-4677-ac5f-ee6169dbd8f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.229 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 368af35e-e91a-4677-ac5f-ee6169dbd8f0 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.230 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc3e46d-c877-4576-9973-1f4a78d0a471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.231 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 02 08:23:41 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Consumed 3.051s CPU time.
Oct 02 08:23:41 compute-0 systemd-machined[214636]: Machine qemu-29-instance-00000019 terminated.
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.392 2 INFO nova.virt.libvirt.driver [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Instance destroyed successfully.
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.393 2 DEBUG nova.objects.instance [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 284aaef3-b320-49e2-a541-b17eb4eb208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.409 2 DEBUG nova.virt.libvirt.vif [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-924493590',display_name='tempest-ImagesTestJSON-server-924493590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-924493590',id=25,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:23:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-f9z2qxia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:23:38Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=284aaef3-b320-49e2-a541-b17eb4eb208f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.411 2 DEBUG nova.network.os_vif_util [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.413 2 DEBUG nova.network.os_vif_util [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.414 2 DEBUG os_vif [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap368af35e-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:23:41 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [NOTICE]   (292447) : haproxy version is 2.8.14-c23fe91
Oct 02 08:23:41 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [NOTICE]   (292447) : path to executable is /usr/sbin/haproxy
Oct 02 08:23:41 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [WARNING]  (292447) : Exiting Master process...
Oct 02 08:23:41 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [WARNING]  (292447) : Exiting Master process...
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.430 2 INFO os_vif [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9')
Oct 02 08:23:41 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [ALERT]    (292447) : Current worker (292449) exited with code 143 (Terminated)
Oct 02 08:23:41 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [WARNING]  (292447) : All workers exited. Exiting... (0)
Oct 02 08:23:41 compute-0 systemd[1]: libpod-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204.scope: Deactivated successfully.
Oct 02 08:23:41 compute-0 conmon[292443]: conmon f66990656dc36436cbcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204.scope/container/memory.events
Oct 02 08:23:41 compute-0 podman[293146]: 2025-10-02 08:23:41.441519928 +0000 UTC m=+0.060932075 container died f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204-userdata-shm.mount: Deactivated successfully.
Oct 02 08:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bff6eb4a3f402d26d136c7162ad878ef5efad0cc6f70e47e24c61d9d48c5b98-merged.mount: Deactivated successfully.
Oct 02 08:23:41 compute-0 podman[293146]: 2025-10-02 08:23:41.49058513 +0000 UTC m=+0.109997277 container cleanup f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:23:41 compute-0 systemd[1]: libpod-conmon-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204.scope: Deactivated successfully.
Oct 02 08:23:41 compute-0 podman[293198]: 2025-10-02 08:23:41.580026204 +0000 UTC m=+0.061081020 container remove f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[919b852c-af2e-4fb3-ad58-06f4eb2f98eb]: (4, ('Thu Oct  2 08:23:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204)\nf66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204\nThu Oct  2 08:23:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204)\nf66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.587 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a10aa023-f855-4bdb-9313-74fb5450408f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.588 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 kernel: tap897d7abf-90: left promiscuous mode
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c37a31c9-aeac-49f9-9086-62ab3dbb0747]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[508f8864-7e64-4841-8d39-4653876047a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9049c0-55b6-4c31-92c8-0a1167fc5d7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.690 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc10053-b537-4fe8-af08-bb64db0510f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428578, 'reachable_time': 15769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293212, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.693 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:23:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.693 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6e69cf00-8987-4e61-98d5-3f6ee1280344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.844 2 INFO nova.virt.libvirt.driver [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Deleting instance files /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f_del
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.845 2 INFO nova.virt.libvirt.driver [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Deletion of /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f_del complete
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.890 2 INFO nova.compute.manager [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.891 2 DEBUG oslo.service.loopingcall [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.891 2 DEBUG nova.compute.manager [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:23:41 compute-0 nova_compute[260603]: 2025-10-02 08:23:41.891 2 DEBUG nova.network.neutron [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:23:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:23:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 02 08:23:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 02 08:23:41 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 02 08:23:42 compute-0 ceph-mon[74477]: pgmap v1226: 305 pgs: 305 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.6 MiB/s wr, 123 op/s
Oct 02 08:23:42 compute-0 ceph-mon[74477]: osdmap e150: 3 total, 3 up, 3 in
Oct 02 08:23:42 compute-0 nova_compute[260603]: 2025-10-02 08:23:42.356 2 DEBUG nova.compute.manager [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:42 compute-0 nova_compute[260603]: 2025-10-02 08:23:42.357 2 DEBUG oslo_concurrency.lockutils [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:42 compute-0 nova_compute[260603]: 2025-10-02 08:23:42.358 2 DEBUG oslo_concurrency.lockutils [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:42 compute-0 nova_compute[260603]: 2025-10-02 08:23:42.358 2 DEBUG oslo_concurrency.lockutils [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:42 compute-0 nova_compute[260603]: 2025-10-02 08:23:42.359 2 DEBUG nova.compute.manager [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] No waiting events found dispatching network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:23:42 compute-0 nova_compute[260603]: 2025-10-02 08:23:42.359 2 WARNING nova.compute.manager [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received unexpected event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 for instance with vm_state active and task_state None.
Oct 02 08:23:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 108 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.5 MiB/s wr, 279 op/s
Oct 02 08:23:44 compute-0 ceph-mon[74477]: pgmap v1228: 305 pgs: 305 active+clean; 108 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.5 MiB/s wr, 279 op/s
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.753 2 DEBUG nova.network.neutron [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.777 2 INFO nova.compute.manager [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Took 2.89 seconds to deallocate network for instance.
Oct 02 08:23:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 292 op/s
Oct 02 08:23:44 compute-0 sudo[293217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:44 compute-0 sudo[293217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:44 compute-0 sudo[293217]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.840 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.841 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:44 compute-0 sudo[293242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:23:44 compute-0 sudo[293242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:44 compute-0 sudo[293242]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.911 2 DEBUG nova.compute.manager [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-vif-unplugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.912 2 DEBUG oslo_concurrency.lockutils [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.913 2 DEBUG oslo_concurrency.lockutils [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.913 2 DEBUG oslo_concurrency.lockutils [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.913 2 DEBUG nova.compute.manager [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] No waiting events found dispatching network-vif-unplugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.914 2 WARNING nova.compute.manager [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received unexpected event network-vif-unplugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 for instance with vm_state deleted and task_state None.
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.925 2 DEBUG oslo_concurrency.processutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:44 compute-0 sudo[293267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:44 compute-0 sudo[293267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:44 compute-0 sudo[293267]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:44 compute-0 nova_compute[260603]: 2025-10-02 08:23:44.974 2 DEBUG nova.compute.manager [req-ea2e9b60-4c29-42af-a08d-b0b4521220dc req-e457bf60-2de2-46f6-ac05-dada35f291a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-vif-deleted-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:45 compute-0 sudo[293293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:23:45 compute-0 sudo[293293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:45 compute-0 podman[293317]: 2025-10-02 08:23:45.15654253 +0000 UTC m=+0.089486817 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd)
Oct 02 08:23:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:23:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120033406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.451 2 DEBUG oslo_concurrency.processutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.459 2 DEBUG nova.compute.provider_tree [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.490 2 DEBUG nova.scheduler.client.report [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.519 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.555 2 INFO nova.scheduler.client.report [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 284aaef3-b320-49e2-a541-b17eb4eb208f
Oct 02 08:23:45 compute-0 sudo[293293]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:45.620 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:23:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:45.621 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:23:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:23:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:23:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.640 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:23:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:23:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 53045044-d76a-4afe-8566-56f0651d222a does not exist
Oct 02 08:23:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9a603c03-4c82-409d-a6cb-3b7f9aeab2cb does not exist
Oct 02 08:23:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c0e96ea5-d285-4f12-b0df-e274fd9a580e does not exist
Oct 02 08:23:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:23:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:23:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:23:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:23:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:23:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:23:45 compute-0 sudo[293389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:45 compute-0 sudo[293389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.745 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.746 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:45 compute-0 sudo[293389]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.768 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:23:45 compute-0 sudo[293414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:23:45 compute-0 sudo[293414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:45 compute-0 sudo[293414]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.858 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.859 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.868 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:23:45 compute-0 nova_compute[260603]: 2025-10-02 08:23:45.869 2 INFO nova.compute.claims [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:23:45 compute-0 sudo[293439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:45 compute-0 sudo[293439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:45 compute-0 sudo[293439]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.015 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:46 compute-0 sudo[293464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:23:46 compute-0 sudo[293464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:46 compute-0 ceph-mon[74477]: pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 292 op/s
Oct 02 08:23:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1120033406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:23:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:23:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:23:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:23:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:23:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.302 2 DEBUG nova.compute.manager [None req-add5470f-8a10-444c-8c81-b9770446aeda 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.352 2 INFO nova.compute.manager [None req-add5470f-8a10-444c-8c81-b9770446aeda 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] instance snapshotting
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:23:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3566674992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:46 compute-0 podman[293552]: 2025-10-02 08:23:46.468468259 +0000 UTC m=+0.064186100 container create 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.481 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.488 2 DEBUG nova.compute.provider_tree [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.510 2 DEBUG nova.scheduler.client.report [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:23:46 compute-0 podman[293552]: 2025-10-02 08:23:46.430978771 +0000 UTC m=+0.026696692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:23:46 compute-0 systemd[1]: Started libpod-conmon-1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125.scope.
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.555 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.556 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:23:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:23:46 compute-0 podman[293552]: 2025-10-02 08:23:46.591676603 +0000 UTC m=+0.187394514 container init 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.592 2 WARNING nova.compute.manager [None req-add5470f-8a10-444c-8c81-b9770446aeda 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Image not found during snapshot: nova.exception.ImageNotFound: Image ce2d4960-dbf0-47ba-910e-94d22e6c3390 could not be found.
Oct 02 08:23:46 compute-0 podman[293552]: 2025-10-02 08:23:46.603946958 +0000 UTC m=+0.199664799 container start 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 08:23:46 compute-0 podman[293552]: 2025-10-02 08:23:46.607254084 +0000 UTC m=+0.202971955 container attach 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:23:46 compute-0 focused_snyder[293570]: 167 167
Oct 02 08:23:46 compute-0 systemd[1]: libpod-1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125.scope: Deactivated successfully.
Oct 02 08:23:46 compute-0 podman[293552]: 2025-10-02 08:23:46.609207237 +0000 UTC m=+0.204925098 container died 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.620 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.621 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-703d15ead9784e0364b9a4b33932111a1a3ba0f21293a80968b7ce03acb96a80-merged.mount: Deactivated successfully.
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.646 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.664 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:23:46 compute-0 podman[293552]: 2025-10-02 08:23:46.698638011 +0000 UTC m=+0.294355882 container remove 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 08:23:46 compute-0 systemd[1]: libpod-conmon-1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125.scope: Deactivated successfully.
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.755 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.757 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.757 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Creating image(s)
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.782 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.809 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 194 op/s
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.837 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.850 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:46 compute-0 podman[293647]: 2025-10-02 08:23:46.889681801 +0000 UTC m=+0.040062052 container create ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.908 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.910 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.910 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.911 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:46 compute-0 systemd[1]: Started libpod-conmon-ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425.scope.
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.943 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:46 compute-0 nova_compute[260603]: 2025-10-02 08:23:46.947 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:46 compute-0 podman[293647]: 2025-10-02 08:23:46.87260082 +0000 UTC m=+0.022981101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:23:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:23:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 02 08:23:46 compute-0 podman[293647]: 2025-10-02 08:23:46.981709998 +0000 UTC m=+0.132090269 container init ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:23:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 02 08:23:46 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 02 08:23:46 compute-0 podman[293647]: 2025-10-02 08:23:46.988605481 +0000 UTC m=+0.138985742 container start ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:23:46 compute-0 podman[293647]: 2025-10-02 08:23:46.993806979 +0000 UTC m=+0.144187250 container attach ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 08:23:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3566674992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:47 compute-0 ceph-mon[74477]: osdmap e151: 3 total, 3 up, 3 in
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.189 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.277 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.408 2 DEBUG nova.objects.instance [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.428 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.428 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Ensure instance console log exists: /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.429 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.429 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.430 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.643 2 DEBUG nova.policy [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.941 2 DEBUG nova.compute.manager [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.943 2 DEBUG oslo_concurrency.lockutils [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.944 2 DEBUG oslo_concurrency.lockutils [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.944 2 DEBUG oslo_concurrency.lockutils [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.944 2 DEBUG nova.compute.manager [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] No waiting events found dispatching network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:23:47 compute-0 nova_compute[260603]: 2025-10-02 08:23:47.945 2 WARNING nova.compute.manager [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received unexpected event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 for instance with vm_state deleted and task_state None.
Oct 02 08:23:48 compute-0 trusting_ishizaka[293683]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:23:48 compute-0 trusting_ishizaka[293683]: --> relative data size: 1.0
Oct 02 08:23:48 compute-0 trusting_ishizaka[293683]: --> All data devices are unavailable
Oct 02 08:23:48 compute-0 systemd[1]: libpod-ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425.scope: Deactivated successfully.
Oct 02 08:23:48 compute-0 systemd[1]: libpod-ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425.scope: Consumed 1.014s CPU time.
Oct 02 08:23:48 compute-0 podman[293806]: 2025-10-02 08:23:48.120473115 +0000 UTC m=+0.030118963 container died ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 08:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73-merged.mount: Deactivated successfully.
Oct 02 08:23:48 compute-0 ceph-mon[74477]: pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 194 op/s
Oct 02 08:23:48 compute-0 podman[293806]: 2025-10-02 08:23:48.189863192 +0000 UTC m=+0.099508950 container remove ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 08:23:48 compute-0 systemd[1]: libpod-conmon-ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425.scope: Deactivated successfully.
Oct 02 08:23:48 compute-0 sudo[293464]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:48 compute-0 sudo[293821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:48 compute-0 sudo[293821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:48 compute-0 sudo[293821]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:48 compute-0 sudo[293846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:23:48 compute-0 sudo[293846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:48 compute-0 sudo[293846]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.479 2 INFO nova.compute.manager [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Terminating instance
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.480 2 DEBUG nova.compute.manager [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:23:48 compute-0 sudo[293871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:48 compute-0 sudo[293871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:48 compute-0 sudo[293871]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:48 compute-0 kernel: tap6cf06b33-eb (unregistering): left promiscuous mode
Oct 02 08:23:48 compute-0 NetworkManager[45129]: <info>  [1759393428.5273] device (tap6cf06b33-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:23:48 compute-0 ovn_controller[152344]: 2025-10-02T08:23:48Z|00164|binding|INFO|Releasing lport 6cf06b33-eb42-471e-9c4b-9119daeb78c1 from this chassis (sb_readonly=0)
Oct 02 08:23:48 compute-0 ovn_controller[152344]: 2025-10-02T08:23:48Z|00165|binding|INFO|Setting lport 6cf06b33-eb42-471e-9c4b-9119daeb78c1 down in Southbound
Oct 02 08:23:48 compute-0 ovn_controller[152344]: 2025-10-02T08:23:48Z|00166|binding|INFO|Removing iface tap6cf06b33-eb ovn-installed in OVS
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.545 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:42:57 10.100.0.5'], port_security=['fa:16:3e:78:42:57 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7c5d0818-0647-43a7-aaa1-b875b8b8424d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6cf06b33-eb42-471e-9c4b-9119daeb78c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.547 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6cf06b33-eb42-471e-9c4b-9119daeb78c1 in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 unbound from our chassis
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.550 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8aeabca-6b5c-477a-9156-9f9592c20b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.551 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[623dc11d-551e-408b-acb2-f1e235eaf86c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.552 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace which is not needed anymore
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 02 08:23:48 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 9.603s CPU time.
Oct 02 08:23:48 compute-0 systemd-machined[214636]: Machine qemu-30-instance-0000001a terminated.
Oct 02 08:23:48 compute-0 sudo[293896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:23:48 compute-0 sudo[293896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.721 2 INFO nova.virt.libvirt.driver [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Instance destroyed successfully.
Oct 02 08:23:48 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [NOTICE]   (293051) : haproxy version is 2.8.14-c23fe91
Oct 02 08:23:48 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [NOTICE]   (293051) : path to executable is /usr/sbin/haproxy
Oct 02 08:23:48 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [WARNING]  (293051) : Exiting Master process...
Oct 02 08:23:48 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [WARNING]  (293051) : Exiting Master process...
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.722 2 DEBUG nova.objects.instance [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'resources' on Instance uuid 7c5d0818-0647-43a7-aaa1-b875b8b8424d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:23:48 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [ALERT]    (293051) : Current worker (293053) exited with code 143 (Terminated)
Oct 02 08:23:48 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [WARNING]  (293051) : All workers exited. Exiting... (0)
Oct 02 08:23:48 compute-0 systemd[1]: libpod-002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952.scope: Deactivated successfully.
Oct 02 08:23:48 compute-0 podman[293944]: 2025-10-02 08:23:48.735783614 +0000 UTC m=+0.078415580 container died 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.743 2 DEBUG nova.virt.libvirt.vif [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-571471940',display_name='tempest-ImagesOneServerNegativeTestJSON-server-571471940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-571471940',id=26,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:23:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-ntyvpu68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:23:46Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=7c5d0818-0647-43a7-aaa1-b875b8b8424d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.744 2 DEBUG nova.network.os_vif_util [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.746 2 DEBUG nova.network.os_vif_util [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.746 2 DEBUG os_vif [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.750 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cf06b33-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.760 2 INFO os_vif [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb')
Oct 02 08:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952-userdata-shm.mount: Deactivated successfully.
Oct 02 08:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fc34d9a9787cb780b9ef5d6332cf0d5803f416d2f51f980b7daa992b30f3c9a-merged.mount: Deactivated successfully.
Oct 02 08:23:48 compute-0 podman[293944]: 2025-10-02 08:23:48.810340948 +0000 UTC m=+0.152972874 container cleanup 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:23:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 130 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 233 op/s
Oct 02 08:23:48 compute-0 systemd[1]: libpod-conmon-002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952.scope: Deactivated successfully.
Oct 02 08:23:48 compute-0 podman[294022]: 2025-10-02 08:23:48.88328615 +0000 UTC m=+0.048889387 container remove 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.891 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dc126534-85c7-40c7-a027-efb00446c3f2]: (4, ('Thu Oct  2 08:23:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952)\n002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952\nThu Oct  2 08:23:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952)\n002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb33c94-1d19-4bcf-8601-e914e42477da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.894 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 kernel: tapc8aeabca-60: left promiscuous mode
Oct 02 08:23:48 compute-0 nova_compute[260603]: 2025-10-02 08:23:48.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.914 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8230b8b-b8ec-424e-b3e3-13147b262820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.936 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[58fcdd5c-1ab9-4c92-b93d-257fbca10b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.938 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb1f110-531e-4422-ad3a-5807bbbc9da0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.956 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4cbb72-de34-4905-860e-546c377ba57b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429726, 'reachable_time': 37791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294059, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:48 compute-0 systemd[1]: run-netns-ovnmeta\x2dc8aeabca\x2d6b5c\x2d477a\x2d9156\x2d9f9592c20b93.mount: Deactivated successfully.
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.961 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:23:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.961 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[48daa443-08e8-40ac-ba6f-e435873f72fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:49 compute-0 podman[294061]: 2025-10-02 08:23:49.042839724 +0000 UTC m=+0.057986640 container create 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 08:23:49 compute-0 systemd[1]: Started libpod-conmon-86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2.scope.
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.105 2 INFO nova.virt.libvirt.driver [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Deleting instance files /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d_del
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.107 2 INFO nova.virt.libvirt.driver [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Deletion of /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d_del complete
Oct 02 08:23:49 compute-0 podman[294061]: 2025-10-02 08:23:49.01603905 +0000 UTC m=+0.031185996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.111 2 DEBUG nova.compute.manager [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-unplugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.112 2 DEBUG oslo_concurrency.lockutils [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.112 2 DEBUG oslo_concurrency.lockutils [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.113 2 DEBUG oslo_concurrency.lockutils [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.113 2 DEBUG nova.compute.manager [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] No waiting events found dispatching network-vif-unplugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.113 2 DEBUG nova.compute.manager [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-unplugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:23:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:23:49 compute-0 podman[294061]: 2025-10-02 08:23:49.134731047 +0000 UTC m=+0.149877953 container init 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 08:23:49 compute-0 podman[294061]: 2025-10-02 08:23:49.141999082 +0000 UTC m=+0.157145988 container start 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 08:23:49 compute-0 podman[294061]: 2025-10-02 08:23:49.144615386 +0000 UTC m=+0.159762302 container attach 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:23:49 compute-0 confident_shamir[294078]: 167 167
Oct 02 08:23:49 compute-0 systemd[1]: libpod-86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2.scope: Deactivated successfully.
Oct 02 08:23:49 compute-0 podman[294061]: 2025-10-02 08:23:49.150811755 +0000 UTC m=+0.165958661 container died 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.176 2 INFO nova.compute.manager [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.178 2 DEBUG oslo.service.loopingcall [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c59f23936012ba7a6bdedf9448271b0a88465c0085adf40058df1a6a6593e19-merged.mount: Deactivated successfully.
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.182 2 DEBUG nova.compute.manager [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.183 2 DEBUG nova.network.neutron [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:23:49 compute-0 podman[294061]: 2025-10-02 08:23:49.188885343 +0000 UTC m=+0.204032239 container remove 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 08:23:49 compute-0 systemd[1]: libpod-conmon-86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2.scope: Deactivated successfully.
Oct 02 08:23:49 compute-0 nova_compute[260603]: 2025-10-02 08:23:49.423 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Successfully created port: b546f674-89d3-44f3-82c4-9426c5910017 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:23:49 compute-0 podman[294103]: 2025-10-02 08:23:49.443832854 +0000 UTC m=+0.059101087 container create a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:23:49 compute-0 systemd[1]: Started libpod-conmon-a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba.scope.
Oct 02 08:23:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:49 compute-0 podman[294103]: 2025-10-02 08:23:49.420450329 +0000 UTC m=+0.035718572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:49 compute-0 podman[294103]: 2025-10-02 08:23:49.529203996 +0000 UTC m=+0.144472299 container init a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:23:49 compute-0 podman[294103]: 2025-10-02 08:23:49.545963336 +0000 UTC m=+0.161231569 container start a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:23:49 compute-0 podman[294103]: 2025-10-02 08:23:49.549598943 +0000 UTC m=+0.164867206 container attach a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 08:23:50 compute-0 ceph-mon[74477]: pgmap v1232: 305 pgs: 305 active+clean; 130 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 233 op/s
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]: {
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:     "0": [
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:         {
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "devices": [
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "/dev/loop3"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             ],
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_name": "ceph_lv0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_size": "21470642176",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "name": "ceph_lv0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "tags": {
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cluster_name": "ceph",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.crush_device_class": "",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.encrypted": "0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osd_id": "0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.type": "block",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.vdo": "0"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             },
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "type": "block",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "vg_name": "ceph_vg0"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:         }
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:     ],
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:     "1": [
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:         {
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "devices": [
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "/dev/loop4"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             ],
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_name": "ceph_lv1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_size": "21470642176",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "name": "ceph_lv1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "tags": {
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cluster_name": "ceph",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.crush_device_class": "",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.encrypted": "0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osd_id": "1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.type": "block",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.vdo": "0"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             },
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "type": "block",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "vg_name": "ceph_vg1"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:         }
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:     ],
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:     "2": [
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:         {
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "devices": [
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "/dev/loop5"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             ],
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_name": "ceph_lv2",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_size": "21470642176",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "name": "ceph_lv2",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "tags": {
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.cluster_name": "ceph",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.crush_device_class": "",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.encrypted": "0",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osd_id": "2",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.type": "block",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:                 "ceph.vdo": "0"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             },
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "type": "block",
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:             "vg_name": "ceph_vg2"
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:         }
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]:     ]
Oct 02 08:23:50 compute-0 crazy_lovelace[294119]: }
Oct 02 08:23:50 compute-0 systemd[1]: libpod-a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba.scope: Deactivated successfully.
Oct 02 08:23:50 compute-0 podman[294103]: 2025-10-02 08:23:50.278582548 +0000 UTC m=+0.893850861 container died a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898-merged.mount: Deactivated successfully.
Oct 02 08:23:50 compute-0 podman[294103]: 2025-10-02 08:23:50.329961044 +0000 UTC m=+0.945229267 container remove a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 08:23:50 compute-0 systemd[1]: libpod-conmon-a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba.scope: Deactivated successfully.
Oct 02 08:23:50 compute-0 sudo[293896]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:50 compute-0 sudo[294141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:50 compute-0 sudo[294141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:50 compute-0 sudo[294141]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:50 compute-0 sudo[294166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:23:50 compute-0 sudo[294166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:50 compute-0 sudo[294166]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:50 compute-0 sudo[294191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:50 compute-0 sudo[294191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:50 compute-0 sudo[294191]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:50 compute-0 sudo[294216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:23:50 compute-0 sudo[294216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 130 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 211 op/s
Oct 02 08:23:50 compute-0 nova_compute[260603]: 2025-10-02 08:23:50.858 2 DEBUG nova.network.neutron [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:23:50 compute-0 nova_compute[260603]: 2025-10-02 08:23:50.886 2 INFO nova.compute.manager [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Took 1.70 seconds to deallocate network for instance.
Oct 02 08:23:50 compute-0 nova_compute[260603]: 2025-10-02 08:23:50.939 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:50 compute-0 nova_compute[260603]: 2025-10-02 08:23:50.940 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:50 compute-0 nova_compute[260603]: 2025-10-02 08:23:50.979 2 DEBUG nova.compute.manager [req-723000c3-9a8f-4678-9af1-c0ad02dfe188 req-a60d4b8a-0495-4ff5-a68b-0489679f0565 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-deleted-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.001 2 DEBUG oslo_concurrency.processutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:51 compute-0 podman[294282]: 2025-10-02 08:23:51.033891711 +0000 UTC m=+0.070377270 container create 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:23:51 compute-0 systemd[1]: Started libpod-conmon-15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091.scope.
Oct 02 08:23:51 compute-0 podman[294282]: 2025-10-02 08:23:50.993380725 +0000 UTC m=+0.029866294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:23:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:23:51 compute-0 podman[294282]: 2025-10-02 08:23:51.121060711 +0000 UTC m=+0.157546280 container init 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:23:51 compute-0 podman[294282]: 2025-10-02 08:23:51.133286175 +0000 UTC m=+0.169771704 container start 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:23:51 compute-0 romantic_tesla[294299]: 167 167
Oct 02 08:23:51 compute-0 systemd[1]: libpod-15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091.scope: Deactivated successfully.
Oct 02 08:23:51 compute-0 podman[294282]: 2025-10-02 08:23:51.13870385 +0000 UTC m=+0.175189369 container attach 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:23:51 compute-0 podman[294282]: 2025-10-02 08:23:51.140053753 +0000 UTC m=+0.176539292 container died 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-53e7f1d8f17e8b844cdaaeb80e789ec06b5f439a8a14ccd17a8b076b356edd9e-merged.mount: Deactivated successfully.
Oct 02 08:23:51 compute-0 podman[294282]: 2025-10-02 08:23:51.172356715 +0000 UTC m=+0.208842224 container remove 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:23:51 compute-0 systemd[1]: libpod-conmon-15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091.scope: Deactivated successfully.
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.234 2 DEBUG nova.compute.manager [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.235 2 DEBUG oslo_concurrency.lockutils [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.236 2 DEBUG oslo_concurrency.lockutils [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.236 2 DEBUG oslo_concurrency.lockutils [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.236 2 DEBUG nova.compute.manager [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] No waiting events found dispatching network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.236 2 WARNING nova.compute.manager [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received unexpected event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 for instance with vm_state deleted and task_state None.
Oct 02 08:23:51 compute-0 podman[294341]: 2025-10-02 08:23:51.318724375 +0000 UTC m=+0.038612916 container create 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:23:51 compute-0 systemd[1]: Started libpod-conmon-412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8.scope.
Oct 02 08:23:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:51 compute-0 podman[294341]: 2025-10-02 08:23:51.39550895 +0000 UTC m=+0.115397511 container init 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:23:51 compute-0 podman[294341]: 2025-10-02 08:23:51.301299593 +0000 UTC m=+0.021188144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:23:51 compute-0 podman[294341]: 2025-10-02 08:23:51.402691242 +0000 UTC m=+0.122579823 container start 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:23:51 compute-0 podman[294341]: 2025-10-02 08:23:51.406054 +0000 UTC m=+0.125942561 container attach 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 08:23:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:23:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591484094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.443 2 DEBUG oslo_concurrency.processutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.451 2 DEBUG nova.compute.provider_tree [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.467 2 DEBUG nova.scheduler.client.report [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.486 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.521 2 INFO nova.scheduler.client.report [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Deleted allocations for instance 7c5d0818-0647-43a7-aaa1-b875b8b8424d
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.600 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.670 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Successfully updated port: b546f674-89d3-44f3-82c4-9426c5910017 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.691 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.692 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.692 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:23:51 compute-0 nova_compute[260603]: 2025-10-02 08:23:51.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:23:52 compute-0 nova_compute[260603]: 2025-10-02 08:23:52.205 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:23:52 compute-0 ceph-mon[74477]: pgmap v1233: 305 pgs: 305 active+clean; 130 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 211 op/s
Oct 02 08:23:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3591484094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:52 compute-0 practical_bouman[294357]: {
Oct 02 08:23:52 compute-0 practical_bouman[294357]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "osd_id": 2,
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "type": "bluestore"
Oct 02 08:23:52 compute-0 practical_bouman[294357]:     },
Oct 02 08:23:52 compute-0 practical_bouman[294357]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "osd_id": 1,
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "type": "bluestore"
Oct 02 08:23:52 compute-0 practical_bouman[294357]:     },
Oct 02 08:23:52 compute-0 practical_bouman[294357]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "osd_id": 0,
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:23:52 compute-0 practical_bouman[294357]:         "type": "bluestore"
Oct 02 08:23:52 compute-0 practical_bouman[294357]:     }
Oct 02 08:23:52 compute-0 practical_bouman[294357]: }
Oct 02 08:23:52 compute-0 systemd[1]: libpod-412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8.scope: Deactivated successfully.
Oct 02 08:23:52 compute-0 podman[294341]: 2025-10-02 08:23:52.427735942 +0000 UTC m=+1.147624523 container died 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 08:23:52 compute-0 systemd[1]: libpod-412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8.scope: Consumed 1.027s CPU time.
Oct 02 08:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922-merged.mount: Deactivated successfully.
Oct 02 08:23:52 compute-0 podman[294341]: 2025-10-02 08:23:52.489696079 +0000 UTC m=+1.209584620 container remove 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:23:52 compute-0 systemd[1]: libpod-conmon-412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8.scope: Deactivated successfully.
Oct 02 08:23:52 compute-0 sudo[294216]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:23:52 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:23:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:23:52 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:23:52 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b4cf33c9-d46e-4423-8b5c-2060fa1266fd does not exist
Oct 02 08:23:52 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a01f15da-954d-43d5-9078-587f651368c2 does not exist
Oct 02 08:23:52 compute-0 sudo[294404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:23:52 compute-0 sudo[294404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:52 compute-0 sudo[294404]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:52 compute-0 sudo[294429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:23:52 compute-0 sudo[294429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:23:52 compute-0 sudo[294429]: pam_unix(sudo:session): session closed for user root
Oct 02 08:23:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 109 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.261 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updating instance_info_cache with network_info: [{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.289 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.290 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance network_info: |[{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.297 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start _get_guest_xml network_info=[{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.305 2 WARNING nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.311 2 DEBUG nova.virt.libvirt.host [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.312 2 DEBUG nova.virt.libvirt.host [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.318 2 DEBUG nova.virt.libvirt.host [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.318 2 DEBUG nova.virt.libvirt.host [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.319 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.320 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.320 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.321 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.321 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.322 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.322 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.323 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.324 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.324 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.325 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.325 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.331 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:53 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:23:53 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:23:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329157866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.804 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.837 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:53 compute-0 nova_compute[260603]: 2025-10-02 08:23:53.842 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.154 2 DEBUG nova.compute.manager [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-changed-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.155 2 DEBUG nova.compute.manager [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Refreshing instance network info cache due to event network-changed-b546f674-89d3-44f3-82c4-9426c5910017. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.156 2 DEBUG oslo_concurrency.lockutils [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.156 2 DEBUG oslo_concurrency.lockutils [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.157 2 DEBUG nova.network.neutron [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Refreshing network info cache for port b546f674-89d3-44f3-82c4-9426c5910017 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:23:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:23:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/212109316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.308 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.311 2 DEBUG nova.virt.libvirt.vif [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-798194672',display_name='tempest-ImagesTestJSON-server-798194672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-798194672',id=27,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-t2xpcyan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,ta
sk_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:46Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=95b47d23-06fb-460c-b8d9-7b7213dae4c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.312 2 DEBUG nova.network.os_vif_util [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.313 2 DEBUG nova.network.os_vif_util [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.315 2 DEBUG nova.objects.instance [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.330 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <uuid>95b47d23-06fb-460c-b8d9-7b7213dae4c7</uuid>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <name>instance-0000001b</name>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <nova:name>tempest-ImagesTestJSON-server-798194672</nova:name>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:23:53</nova:creationTime>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <nova:port uuid="b546f674-89d3-44f3-82c4-9426c5910017">
Oct 02 08:23:54 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <system>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <entry name="serial">95b47d23-06fb-460c-b8d9-7b7213dae4c7</entry>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <entry name="uuid">95b47d23-06fb-460c-b8d9-7b7213dae4c7</entry>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </system>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <os>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   </os>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <features>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   </features>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk">
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       </source>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config">
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       </source>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:23:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:f0:3b:c8"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <target dev="tapb546f674-89"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/console.log" append="off"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <video>
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </video>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:23:54 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:23:54 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:23:54 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:23:54 compute-0 nova_compute[260603]: </domain>
Oct 02 08:23:54 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.332 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Preparing to wait for external event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.332 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.332 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.333 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.334 2 DEBUG nova.virt.libvirt.vif [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-798194672',display_name='tempest-ImagesTestJSON-server-798194672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-798194672',id=27,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-t2xpcyan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=
TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:46Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=95b47d23-06fb-460c-b8d9-7b7213dae4c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.334 2 DEBUG nova.network.os_vif_util [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.335 2 DEBUG nova.network.os_vif_util [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.336 2 DEBUG os_vif [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb546f674-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.344 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb546f674-89, col_values=(('external_ids', {'iface-id': 'b546f674-89d3-44f3-82c4-9426c5910017', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:3b:c8', 'vm-uuid': '95b47d23-06fb-460c-b8d9-7b7213dae4c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:54 compute-0 NetworkManager[45129]: <info>  [1759393434.3878] manager: (tapb546f674-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.397 2 INFO os_vif [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89')
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.461 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.462 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.462 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:f0:3b:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.462 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Using config drive
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.482 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:54 compute-0 ceph-mon[74477]: pgmap v1234: 305 pgs: 305 active+clean; 109 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 02 08:23:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3329157866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:23:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/212109316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:23:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 88 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.954 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.955 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:54 compute-0 nova_compute[260603]: 2025-10-02 08:23:54.977 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.007 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Creating config drive at /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.017 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbq5ihx9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.077 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.078 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.089 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.089 2 INFO nova.compute.claims [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.157 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbq5ihx9" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.191 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.195 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.279 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.373 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.374 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Deleting local config drive /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config because it was imported into RBD.
Oct 02 08:23:55 compute-0 kernel: tapb546f674-89: entered promiscuous mode
Oct 02 08:23:55 compute-0 NetworkManager[45129]: <info>  [1759393435.4606] manager: (tapb546f674-89): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Oct 02 08:23:55 compute-0 ovn_controller[152344]: 2025-10-02T08:23:55Z|00167|binding|INFO|Claiming lport b546f674-89d3-44f3-82c4-9426c5910017 for this chassis.
Oct 02 08:23:55 compute-0 ovn_controller[152344]: 2025-10-02T08:23:55Z|00168|binding|INFO|b546f674-89d3-44f3-82c4-9426c5910017: Claiming fa:16:3e:f0:3b:c8 10.100.0.10
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:55 compute-0 systemd-udevd[294607]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.510 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:3b:c8 10.100.0.10'], port_security=['fa:16:3e:f0:3b:c8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '95b47d23-06fb-460c-b8d9-7b7213dae4c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b546f674-89d3-44f3-82c4-9426c5910017) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.512 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b546f674-89d3-44f3-82c4-9426c5910017 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.514 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:23:55 compute-0 NetworkManager[45129]: <info>  [1759393435.5328] device (tapb546f674-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:23:55 compute-0 NetworkManager[45129]: <info>  [1759393435.5342] device (tapb546f674-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.534 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ace38a1-ef35-44a3-98a5-10cf70fac092]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.536 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap897d7abf-91 in ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.539 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap897d7abf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.539 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0e9628-f184-430f-8869-338d91ac69b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_controller[152344]: 2025-10-02T08:23:55Z|00169|binding|INFO|Setting lport b546f674-89d3-44f3-82c4-9426c5910017 ovn-installed in OVS
Oct 02 08:23:55 compute-0 ovn_controller[152344]: 2025-10-02T08:23:55Z|00170|binding|INFO|Setting lport b546f674-89d3-44f3-82c4-9426c5910017 up in Southbound
Oct 02 08:23:55 compute-0 systemd-machined[214636]: New machine qemu-31-instance-0000001b.
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2e131e4b-8903-4e83-84e6-949971632323]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.568 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ab20c0-ac03-4d4f-bc1b-3061102e1e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.600 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13c2f8d3-f856-4182-b7fb-48b912f9987a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.623 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.650 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bba22cd6-756d-4062-81ab-1ef19af30ff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 systemd-udevd[294613]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:23:55 compute-0 NetworkManager[45129]: <info>  [1759393435.6570] manager: (tap897d7abf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.656 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10b3bd6a-96da-41d7-83b9-a966edb7be78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.697 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fc5a67-8e00-4119-830b-c670438d8a9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.702 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d40174db-5f57-496d-8f98-929749bc0c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 NetworkManager[45129]: <info>  [1759393435.7273] device (tap897d7abf-90): carrier: link connected
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.737 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a272f655-e4f7-4bc6-8e21-0a8cdf4d5160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f23bfb26-af0e-4743-aa68-843daf4294a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431432, 'reachable_time': 31805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294643, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.784 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6f2019-0b12-43d5-b2eb-6a29ed8a230a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:18ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431432, 'tstamp': 431432}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294644, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:23:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228177581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cde60cba-7460-4ee7-9848-f29943aca275]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431432, 'reachable_time': 31805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294645, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.819 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.825 2 DEBUG nova.compute.provider_tree [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.836 2 DEBUG nova.network.neutron [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updated VIF entry in instance network info cache for port b546f674-89d3-44f3-82c4-9426c5910017. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.837 2 DEBUG nova.network.neutron [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updating instance_info_cache with network_info: [{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.841 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95c6722a-cbf3-40d4-bc20-6252b1595736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.857 2 DEBUG nova.scheduler.client.report [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.862 2 DEBUG oslo_concurrency.lockutils [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.883 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.884 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.913 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a221d77-0fa5-44b3-a735-c5e3279a6deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.915 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.915 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.916 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:55 compute-0 kernel: tap897d7abf-90: entered promiscuous mode
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:55 compute-0 NetworkManager[45129]: <info>  [1759393435.9214] manager: (tap897d7abf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.929 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:23:55 compute-0 ovn_controller[152344]: 2025-10-02T08:23:55Z|00171|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.935 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.935 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.952 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.952 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.953 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6d1765-fcc6-4643-b216-649ec1eea347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.957 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:23:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.958 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'env', 'PROCESS_TAG=haproxy-897d7abf-9e23-43cd-8f60-7156792a4360', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/897d7abf-9e23-43cd-8f60-7156792a4360.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:55 compute-0 nova_compute[260603]: 2025-10-02 08:23:55.972 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.112 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.113 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.114 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Creating image(s)
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.137 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.167 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.185 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.189 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.267 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.268 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.268 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.268 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.286 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.289 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:56 compute-0 podman[294778]: 2025-10-02 08:23:56.327509751 +0000 UTC m=+0.048068810 container create 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:23:56 compute-0 systemd[1]: Started libpod-conmon-3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49.scope.
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.389 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393421.38837, 284aaef3-b320-49e2-a541-b17eb4eb208f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.390 2 INFO nova.compute.manager [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] VM Stopped (Lifecycle Event)
Oct 02 08:23:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ec9518e036099ec816e19e9d3a64de8fb07b1fecfb5f7dfe22b9c3e6c79aa9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:23:56 compute-0 podman[294778]: 2025-10-02 08:23:56.304392976 +0000 UTC m=+0.024952035 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:23:56 compute-0 podman[294778]: 2025-10-02 08:23:56.413062109 +0000 UTC m=+0.133621188 container init 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 08:23:56 compute-0 podman[294778]: 2025-10-02 08:23:56.418260507 +0000 UTC m=+0.138819566 container start 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.419 2 DEBUG nova.compute.manager [None req-8c0e8ce0-5439-45a9-a1c5-7c88dd2f0801 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:56 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [NOTICE]   (294836) : New worker (294838) forked
Oct 02 08:23:56 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [NOTICE]   (294836) : Loading success.
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.541 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:56 compute-0 ceph-mon[74477]: pgmap v1235: 305 pgs: 305 active+clean; 88 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:23:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4228177581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.570 2 DEBUG nova.policy [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a117ecad98d493d8782539545db5ac9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff4066c489424391bd4a75b195bd5011', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.605 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] resizing rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.668 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393436.6520765, 95b47d23-06fb-460c-b8d9-7b7213dae4c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.668 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] VM Started (Lifecycle Event)
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.673 2 DEBUG nova.objects.instance [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'migration_context' on Instance uuid ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.700 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.701 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.701 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Ensure instance console log exists: /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.702 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.702 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.702 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.706 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393436.6522737, 95b47d23-06fb-460c-b8d9-7b7213dae4c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.706 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] VM Paused (Lifecycle Event)
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.731 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.776 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:56 compute-0 nova_compute[260603]: 2025-10-02 08:23:56.796 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:23:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 88 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:23:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.220 2 DEBUG nova.compute.manager [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.221 2 DEBUG oslo_concurrency.lockutils [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.221 2 DEBUG oslo_concurrency.lockutils [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.222 2 DEBUG oslo_concurrency.lockutils [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.222 2 DEBUG nova.compute.manager [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Processing event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.223 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.228 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393437.2282119, 95b47d23-06fb-460c-b8d9-7b7213dae4c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.229 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] VM Resumed (Lifecycle Event)
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.231 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.236 2 INFO nova.virt.libvirt.driver [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance spawned successfully.
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.236 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.269 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.279 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.285 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.286 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.287 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.287 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.288 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.288 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.303 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Successfully created port: 17b2c50e-61e3-4e6e-853a-81c49befe22b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.312 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.342 2 INFO nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 10.59 seconds to spawn the instance on the hypervisor.
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.343 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.403 2 INFO nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 11.56 seconds to build instance.
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.418 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.495 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.496 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.537 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.618 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.618 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.628 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.628 2 INFO nova.compute.claims [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:23:57 compute-0 nova_compute[260603]: 2025-10-02 08:23:57.774 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:23:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:23:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2256363663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.273 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.282 2 DEBUG nova.compute.provider_tree [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.302 2 DEBUG nova.scheduler.client.report [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.331 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.332 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.379 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.380 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.398 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.413 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.487 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.488 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.489 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Creating image(s)
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.513 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.540 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:58 compute-0 ceph-mon[74477]: pgmap v1236: 305 pgs: 305 active+clean; 88 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:23:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2256363663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.565 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.570 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.640 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.641 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.642 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.642 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.663 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.666 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d50db9c-0731-468a-81da-6762d68cda94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.687 2 DEBUG nova.policy [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0a15988bafc4a03bd5b08291a4cc14c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3665ae0e483545e2aaa658ac8a3949aa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:23:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 644 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.838 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Successfully updated port: 17b2c50e-61e3-4e6e-853a-81c49befe22b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.855 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.856 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquired lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.856 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.876 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d50db9c-0731-468a-81da-6762d68cda94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:23:58 compute-0 nova_compute[260603]: 2025-10-02 08:23:58.941 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] resizing rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.054 2 DEBUG nova.objects.instance [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lazy-loading 'migration_context' on Instance uuid 5d50db9c-0731-468a-81da-6762d68cda94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.068 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.069 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Ensure instance console log exists: /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.070 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.070 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.071 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.114 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.782 2 DEBUG oslo_concurrency.lockutils [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.783 2 DEBUG oslo_concurrency.lockutils [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.784 2 DEBUG nova.compute.manager [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.790 2 DEBUG nova.compute.manager [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.791 2 DEBUG nova.objects.instance [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'flavor' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.811 2 DEBUG nova.compute.manager [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-changed-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.812 2 DEBUG nova.compute.manager [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Refreshing instance network info cache due to event network-changed-17b2c50e-61e3-4e6e-853a-81c49befe22b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.813 2 DEBUG oslo_concurrency.lockutils [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.832 2 DEBUG nova.virt.libvirt.driver [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.895 2 DEBUG nova.compute.manager [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.896 2 DEBUG oslo_concurrency.lockutils [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.897 2 DEBUG oslo_concurrency.lockutils [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.897 2 DEBUG oslo_concurrency.lockutils [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.898 2 DEBUG nova.compute.manager [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] No waiting events found dispatching network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:23:59 compute-0 nova_compute[260603]: 2025-10-02 08:23:59.899 2 WARNING nova.compute.manager [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received unexpected event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 for instance with vm_state active and task_state powering-off.
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.188 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Successfully created port: 4e1ca975-9579-4bcd-8e92-5c169484f7fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:24:00 compute-0 ceph-mon[74477]: pgmap v1237: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 644 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.679 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Updating instance_info_cache with network_info: [{"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.700 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Releasing lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.701 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance network_info: |[{"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.702 2 DEBUG oslo_concurrency.lockutils [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.702 2 DEBUG nova.network.neutron [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Refreshing network info cache for port 17b2c50e-61e3-4e6e-853a-81c49befe22b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.708 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start _get_guest_xml network_info=[{"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.713 2 WARNING nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.720 2 DEBUG nova.virt.libvirt.host [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.721 2 DEBUG nova.virt.libvirt.host [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.730 2 DEBUG nova.virt.libvirt.host [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.731 2 DEBUG nova.virt.libvirt.host [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.732 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.732 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.733 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.734 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.734 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.735 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.736 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.736 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.737 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.737 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.738 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.739 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:24:00 compute-0 nova_compute[260603]: 2025-10-02 08:24:00.743 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 02 08:24:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:24:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400921510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.270 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.306 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.312 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1400921510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:24:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173100061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.789 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.791 2 DEBUG nova.virt.libvirt.vif [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1190225091',id=28,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-i2hcy206',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:56Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.793 2 DEBUG nova.network.os_vif_util [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.794 2 DEBUG nova.network.os_vif_util [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.796 2 DEBUG nova.objects.instance [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'pci_devices' on Instance uuid ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.815 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <uuid>ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48</uuid>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <name>instance-0000001c</name>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1190225091</nova:name>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:24:00</nova:creationTime>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <nova:user uuid="3a117ecad98d493d8782539545db5ac9">tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member</nova:user>
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <nova:project uuid="ff4066c489424391bd4a75b195bd5011">tempest-ImagesOneServerNegativeTestJSON-1267478522</nova:project>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <nova:port uuid="17b2c50e-61e3-4e6e-853a-81c49befe22b">
Oct 02 08:24:01 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <system>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <entry name="serial">ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48</entry>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <entry name="uuid">ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48</entry>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </system>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <os>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   </os>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <features>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   </features>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk">
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       </source>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config">
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       </source>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:24:01 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:19:4a:61"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <target dev="tap17b2c50e-61"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/console.log" append="off"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <video>
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </video>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:24:01 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:24:01 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:24:01 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:24:01 compute-0 nova_compute[260603]: </domain>
Oct 02 08:24:01 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.822 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Preparing to wait for external event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.822 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.823 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.823 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.824 2 DEBUG nova.virt.libvirt.vif [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1190225091',id=28,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-i2hcy206',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:56Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.824 2 DEBUG nova.network.os_vif_util [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.825 2 DEBUG nova.network.os_vif_util [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.825 2 DEBUG os_vif [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17b2c50e-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17b2c50e-61, col_values=(('external_ids', {'iface-id': '17b2c50e-61e3-4e6e-853a-81c49befe22b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:4a:61', 'vm-uuid': 'ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:01 compute-0 NetworkManager[45129]: <info>  [1759393441.8346] manager: (tap17b2c50e-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.848 2 INFO os_vif [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61')
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.938 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.939 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.940 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No VIF found with MAC fa:16:3e:19:4a:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.941 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Using config drive
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.974 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:01 compute-0 nova_compute[260603]: 2025-10-02 08:24:01.996 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Successfully updated port: 4e1ca975-9579-4bcd-8e92-5c169484f7fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.014 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.014 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquired lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.015 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.277 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.548 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Creating config drive at /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.553 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvbkxrso5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:02 compute-0 ceph-mon[74477]: pgmap v1238: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 02 08:24:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/173100061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.695 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvbkxrso5" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.733 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.738 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.768 2 DEBUG nova.network.neutron [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Updated VIF entry in instance network info cache for port 17b2c50e-61e3-4e6e-853a-81c49befe22b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.770 2 DEBUG nova.network.neutron [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Updating instance_info_cache with network_info: [{"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.789 2 DEBUG oslo_concurrency.lockutils [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:24:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 157 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 138 op/s
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.901 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.902 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Deleting local config drive /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config because it was imported into RBD.
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.924 2 DEBUG nova.compute.manager [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-changed-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.925 2 DEBUG nova.compute.manager [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Refreshing instance network info cache due to event network-changed-4e1ca975-9579-4bcd-8e92-5c169484f7fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:24:02 compute-0 nova_compute[260603]: 2025-10-02 08:24:02.928 2 DEBUG oslo_concurrency.lockutils [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:24:02 compute-0 NetworkManager[45129]: <info>  [1759393442.9751] manager: (tap17b2c50e-61): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Oct 02 08:24:02 compute-0 kernel: tap17b2c50e-61: entered promiscuous mode
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:03 compute-0 ovn_controller[152344]: 2025-10-02T08:24:03Z|00172|binding|INFO|Claiming lport 17b2c50e-61e3-4e6e-853a-81c49befe22b for this chassis.
Oct 02 08:24:03 compute-0 ovn_controller[152344]: 2025-10-02T08:24:03Z|00173|binding|INFO|17b2c50e-61e3-4e6e-853a-81c49befe22b: Claiming fa:16:3e:19:4a:61 10.100.0.11
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.024 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:4a:61 10.100.0.11'], port_security=['fa:16:3e:19:4a:61 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=17b2c50e-61e3-4e6e-853a-81c49befe22b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.026 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 17b2c50e-61e3-4e6e-853a-81c49befe22b in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 bound to our chassis
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.028 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.049 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b059c1-19b2-4239-b202-09638a03a641]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.051 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8aeabca-61 in ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:24:03 compute-0 ovn_controller[152344]: 2025-10-02T08:24:03Z|00174|binding|INFO|Setting lport 17b2c50e-61e3-4e6e-853a-81c49befe22b up in Southbound
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.054 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8aeabca-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.054 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50201702-52a4-4b3c-9a3b-5d03e3f3564f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:03 compute-0 ovn_controller[152344]: 2025-10-02T08:24:03Z|00175|binding|INFO|Setting lport 17b2c50e-61e3-4e6e-853a-81c49befe22b ovn-installed in OVS
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.058 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9529780-844b-4fdb-95f0-b97a9338823a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.074 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c349d3a7-da43-4f60-9452-c364a8b958de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 systemd-machined[214636]: New machine qemu-32-instance-0000001c.
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.094 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb4fc92-82b9-4757-9e4f-14f53a2fb63b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Oct 02 08:24:03 compute-0 systemd-udevd[295247]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.129 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ed29c10c-0d31-4056-adb6-9ef0e2755f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 NetworkManager[45129]: <info>  [1759393443.1339] device (tap17b2c50e-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:24:03 compute-0 NetworkManager[45129]: <info>  [1759393443.1385] device (tap17b2c50e-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:24:03 compute-0 NetworkManager[45129]: <info>  [1759393443.1484] manager: (tapc8aeabca-60): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.148 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8927ee8d-6b8d-49bb-bb0d-618f20f8bd60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 systemd-udevd[295250]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.192 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ec30e42f-5e4a-4083-842a-9fbdef0a0d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.196 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4079d55-0a70-4400-9f47-d31d2b862720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 NetworkManager[45129]: <info>  [1759393443.2273] device (tapc8aeabca-60): carrier: link connected
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.233 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f5b34c-eec9-461a-b69c-08485fa3d719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.252 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93330427-0e46-48a1-9021-b085e39e7686]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8aeabca-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:61:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432182, 'reachable_time': 43742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295276, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.271 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc5ded7-0099-4703-bc23-28061747f0c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:61ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432182, 'tstamp': 432182}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295277, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.288 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd827f8f-8a51-4fc0-9057-861b20298fe6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8aeabca-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:61:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432182, 'reachable_time': 43742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295278, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.321 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bed5eec6-f396-43ae-989c-7bc90bb7d12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.421 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e87a47c5-fa88-4a8d-aff8-0512045ab310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.423 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.423 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.424 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8aeabca-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:03 compute-0 NetworkManager[45129]: <info>  [1759393443.4269] manager: (tapc8aeabca-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct 02 08:24:03 compute-0 kernel: tapc8aeabca-60: entered promiscuous mode
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.432 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8aeabca-60, col_values=(('external_ids', {'iface-id': '4db7ab1a-8260-4cf1-9ae1-c00a351cfe04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:03 compute-0 ovn_controller[152344]: 2025-10-02T08:24:03Z|00176|binding|INFO|Releasing lport 4db7ab1a-8260-4cf1-9ae1-c00a351cfe04 from this chassis (sb_readonly=0)
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.457 2 DEBUG nova.compute.manager [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.458 2 DEBUG oslo_concurrency.lockutils [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.459 2 DEBUG oslo_concurrency.lockutils [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.460 2 DEBUG oslo_concurrency.lockutils [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.460 2 DEBUG nova.compute.manager [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Processing event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.461 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3a2a22-1d71-498c-9a71-adba88f2fb1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.464 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:24:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.465 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'env', 'PROCESS_TAG=haproxy-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8aeabca-6b5c-477a-9156-9f9592c20b93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.480 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Updating instance_info_cache with network_info: [{"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.505 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Releasing lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.506 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance network_info: |[{"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.507 2 DEBUG oslo_concurrency.lockutils [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.508 2 DEBUG nova.network.neutron [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Refreshing network info cache for port 4e1ca975-9579-4bcd-8e92-5c169484f7fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.511 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start _get_guest_xml network_info=[{"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.518 2 WARNING nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.528 2 DEBUG nova.virt.libvirt.host [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.529 2 DEBUG nova.virt.libvirt.host [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.565 2 DEBUG nova.virt.libvirt.host [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.566 2 DEBUG nova.virt.libvirt.host [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.567 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.567 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.567 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.567 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.569 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.569 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.569 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.572 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.718 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393428.7174823, 7c5d0818-0647-43a7-aaa1-b875b8b8424d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.719 2 INFO nova.compute.manager [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] VM Stopped (Lifecycle Event)
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.746 2 DEBUG nova.compute.manager [None req-24427bf8-e94b-45ac-b292-4e06c0d7747c - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:03 compute-0 podman[295371]: 2025-10-02 08:24:03.883107853 +0000 UTC m=+0.066040100 container create 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:24:03 compute-0 systemd[1]: Started libpod-conmon-6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2.scope.
Oct 02 08:24:03 compute-0 podman[295371]: 2025-10-02 08:24:03.848334152 +0000 UTC m=+0.031266469 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:24:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb557c89a09e3c98f6728bab7f7254a61a1ae88dabc0772575bffd2a9b2ec5c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:03 compute-0 podman[295371]: 2025-10-02 08:24:03.971174782 +0000 UTC m=+0.154107009 container init 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:24:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:24:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365269458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:03 compute-0 podman[295371]: 2025-10-02 08:24:03.983219082 +0000 UTC m=+0.166151309 container start 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 08:24:03 compute-0 nova_compute[260603]: 2025-10-02 08:24:03.998 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:04 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [NOTICE]   (295392) : New worker (295409) forked
Oct 02 08:24:04 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [NOTICE]   (295392) : Loading success.
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.019 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.022 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.057 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393444.0371478, ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.058 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] VM Started (Lifecycle Event)
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.061 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.063 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.066 2 INFO nova.virt.libvirt.driver [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance spawned successfully.
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.066 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.084 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.089 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.092 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.092 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.093 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.093 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.094 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.094 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.124 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.124 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393444.0374107, ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.124 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] VM Paused (Lifecycle Event)
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.147 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.150 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393444.0630102, ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.150 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] VM Resumed (Lifecycle Event)
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.158 2 INFO nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Took 8.04 seconds to spawn the instance on the hypervisor.
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.158 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.165 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.167 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.189 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.222 2 INFO nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Took 9.17 seconds to build instance.
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.237 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:24:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593587089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.517 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.520 2 DEBUG nova.virt.libvirt.vif [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1904255359',display_name='tempest-ImagesOneServerTestJSON-server-1904255359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1904255359',id=29,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3665ae0e483545e2aaa658ac8a3949aa',ramdisk_id='',reservation_id='r-qijl0plc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-2127991989',owner_user_name='tempest-ImagesOneServerTest
JSON-2127991989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:58Z,user_data=None,user_id='e0a15988bafc4a03bd5b08291a4cc14c',uuid=5d50db9c-0731-468a-81da-6762d68cda94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.521 2 DEBUG nova.network.os_vif_util [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converting VIF {"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.522 2 DEBUG nova.network.os_vif_util [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.524 2 DEBUG nova.objects.instance [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d50db9c-0731-468a-81da-6762d68cda94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.543 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <uuid>5d50db9c-0731-468a-81da-6762d68cda94</uuid>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <name>instance-0000001d</name>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <nova:name>tempest-ImagesOneServerTestJSON-server-1904255359</nova:name>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:24:03</nova:creationTime>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <nova:user uuid="e0a15988bafc4a03bd5b08291a4cc14c">tempest-ImagesOneServerTestJSON-2127991989-project-member</nova:user>
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <nova:project uuid="3665ae0e483545e2aaa658ac8a3949aa">tempest-ImagesOneServerTestJSON-2127991989</nova:project>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <nova:port uuid="4e1ca975-9579-4bcd-8e92-5c169484f7fe">
Oct 02 08:24:04 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <system>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <entry name="serial">5d50db9c-0731-468a-81da-6762d68cda94</entry>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <entry name="uuid">5d50db9c-0731-468a-81da-6762d68cda94</entry>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </system>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <os>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   </os>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <features>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   </features>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5d50db9c-0731-468a-81da-6762d68cda94_disk">
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       </source>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5d50db9c-0731-468a-81da-6762d68cda94_disk.config">
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       </source>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:24:04 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:82:e7:88"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <target dev="tap4e1ca975-95"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/console.log" append="off"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <video>
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </video>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:24:04 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:24:04 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:24:04 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:24:04 compute-0 nova_compute[260603]: </domain>
Oct 02 08:24:04 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.555 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Preparing to wait for external event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.556 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.557 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.557 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.558 2 DEBUG nova.virt.libvirt.vif [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1904255359',display_name='tempest-ImagesOneServerTestJSON-server-1904255359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1904255359',id=29,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3665ae0e483545e2aaa658ac8a3949aa',ramdisk_id='',reservation_id='r-qijl0plc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-2127991989',owner_user_name='tempest-ImagesOne
ServerTestJSON-2127991989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:58Z,user_data=None,user_id='e0a15988bafc4a03bd5b08291a4cc14c',uuid=5d50db9c-0731-468a-81da-6762d68cda94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.559 2 DEBUG nova.network.os_vif_util [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converting VIF {"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.561 2 DEBUG nova.network.os_vif_util [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.562 2 DEBUG os_vif [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e1ca975-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e1ca975-95, col_values=(('external_ids', {'iface-id': '4e1ca975-9579-4bcd-8e92-5c169484f7fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:e7:88', 'vm-uuid': '5d50db9c-0731-468a-81da-6762d68cda94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:04 compute-0 NetworkManager[45129]: <info>  [1759393444.5775] manager: (tap4e1ca975-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.591 2 INFO os_vif [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95')
Oct 02 08:24:04 compute-0 ceph-mon[74477]: pgmap v1239: 305 pgs: 305 active+clean; 157 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 138 op/s
Oct 02 08:24:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/365269458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3593587089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.656 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.657 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.658 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No VIF found with MAC fa:16:3e:82:e7:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.659 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Using config drive
Oct 02 08:24:04 compute-0 nova_compute[260603]: 2025-10-02 08:24:04.695 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.160 2 DEBUG nova.network.neutron [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Updated VIF entry in instance network info cache for port 4e1ca975-9579-4bcd-8e92-5c169484f7fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.161 2 DEBUG nova.network.neutron [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Updating instance_info_cache with network_info: [{"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.177 2 DEBUG oslo_concurrency.lockutils [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.385 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Creating config drive at /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.394 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9txbqzo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.532 2 DEBUG nova.compute.manager [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.533 2 DEBUG oslo_concurrency.lockutils [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.534 2 DEBUG oslo_concurrency.lockutils [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.535 2 DEBUG oslo_concurrency.lockutils [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.535 2 DEBUG nova.compute.manager [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] No waiting events found dispatching network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.536 2 WARNING nova.compute.manager [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received unexpected event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b for instance with vm_state active and task_state None.
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.546 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9txbqzo" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.588 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.595 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config 5d50db9c-0731-468a-81da-6762d68cda94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.683 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.684 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.685 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.685 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.686 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.688 2 INFO nova.compute.manager [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Terminating instance
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.690 2 DEBUG nova.compute.manager [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:24:05 compute-0 kernel: tap17b2c50e-61 (unregistering): left promiscuous mode
Oct 02 08:24:05 compute-0 NetworkManager[45129]: <info>  [1759393445.7349] device (tap17b2c50e-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:05 compute-0 ovn_controller[152344]: 2025-10-02T08:24:05Z|00177|binding|INFO|Releasing lport 17b2c50e-61e3-4e6e-853a-81c49befe22b from this chassis (sb_readonly=0)
Oct 02 08:24:05 compute-0 ovn_controller[152344]: 2025-10-02T08:24:05Z|00178|binding|INFO|Setting lport 17b2c50e-61e3-4e6e-853a-81c49befe22b down in Southbound
Oct 02 08:24:05 compute-0 ovn_controller[152344]: 2025-10-02T08:24:05Z|00179|binding|INFO|Removing iface tap17b2c50e-61 ovn-installed in OVS
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.759 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:4a:61 10.100.0.11'], port_security=['fa:16:3e:19:4a:61 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=17b2c50e-61e3-4e6e-853a-81c49befe22b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.760 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 17b2c50e-61e3-4e6e-853a-81c49befe22b in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 unbound from our chassis
Oct 02 08:24:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.762 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8aeabca-6b5c-477a-9156-9f9592c20b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:24:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e32af0e9-6bd8-4d82-83f0-dfa358f8400c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.764 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace which is not needed anymore
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:05 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 02 08:24:05 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 2.572s CPU time.
Oct 02 08:24:05 compute-0 systemd-machined[214636]: Machine qemu-32-instance-0000001c terminated.
Oct 02 08:24:05 compute-0 podman[295507]: 2025-10-02 08:24:05.838945884 +0000 UTC m=+0.072119986 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.857 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config 5d50db9c-0731-468a-81da-6762d68cda94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.857 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Deleting local config drive /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config because it was imported into RBD.
Oct 02 08:24:05 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [NOTICE]   (295392) : haproxy version is 2.8.14-c23fe91
Oct 02 08:24:05 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [NOTICE]   (295392) : path to executable is /usr/sbin/haproxy
Oct 02 08:24:05 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [WARNING]  (295392) : Exiting Master process...
Oct 02 08:24:05 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [ALERT]    (295392) : Current worker (295409) exited with code 143 (Terminated)
Oct 02 08:24:05 compute-0 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [WARNING]  (295392) : All workers exited. Exiting... (0)
Oct 02 08:24:05 compute-0 systemd[1]: libpod-6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2.scope: Deactivated successfully.
Oct 02 08:24:05 compute-0 podman[295503]: 2025-10-02 08:24:05.896999716 +0000 UTC m=+0.137002188 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:24:05 compute-0 podman[295564]: 2025-10-02 08:24:05.900123877 +0000 UTC m=+0.040012440 container died 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:24:05 compute-0 kernel: tap4e1ca975-95: entered promiscuous mode
Oct 02 08:24:05 compute-0 NetworkManager[45129]: <info>  [1759393445.9129] manager: (tap4e1ca975-95): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct 02 08:24:05 compute-0 systemd-udevd[295270]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:05 compute-0 ovn_controller[152344]: 2025-10-02T08:24:05Z|00180|binding|INFO|Claiming lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe for this chassis.
Oct 02 08:24:05 compute-0 systemd-udevd[295266]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:24:05 compute-0 kernel: tap17b2c50e-61: entered promiscuous mode
Oct 02 08:24:05 compute-0 kernel: tap17b2c50e-61 (unregistering): left promiscuous mode
Oct 02 08:24:05 compute-0 ovn_controller[152344]: 2025-10-02T08:24:05Z|00181|binding|INFO|4e1ca975-9579-4bcd-8e92-5c169484f7fe: Claiming fa:16:3e:82:e7:88 10.100.0.7
Oct 02 08:24:05 compute-0 NetworkManager[45129]: <info>  [1759393445.9295] device (tap4e1ca975-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:24:05 compute-0 NetworkManager[45129]: <info>  [1759393445.9302] device (tap4e1ca975-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:24:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.934 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:e7:88 10.100.0.7'], port_security=['fa:16:3e:82:e7:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5d50db9c-0731-468a-81da-6762d68cda94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3665ae0e483545e2aaa658ac8a3949aa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '927e44a9-fbe2-4210-9cfb-52bbd0657d4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33e27bca-0333-46b3-bd98-5e7fb62b735e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4e1ca975-9579-4bcd-8e92-5c169484f7fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2-userdata-shm.mount: Deactivated successfully.
Oct 02 08:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb557c89a09e3c98f6728bab7f7254a61a1ae88dabc0772575bffd2a9b2ec5c4-merged.mount: Deactivated successfully.
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.950 2 INFO nova.virt.libvirt.driver [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance destroyed successfully.
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.951 2 DEBUG nova.objects.instance [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'resources' on Instance uuid ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:05 compute-0 podman[295564]: 2025-10-02 08:24:05.959334976 +0000 UTC m=+0.099223529 container cleanup 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.966 2 DEBUG nova.virt.libvirt.vif [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1190225091',id=28,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:24:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-i2hcy206',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:24:04Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.966 2 DEBUG nova.network.os_vif_util [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.967 2 DEBUG nova.network.os_vif_util [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.967 2 DEBUG os_vif [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.969 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17b2c50e-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:05 compute-0 systemd-machined[214636]: New machine qemu-33-instance-0000001d.
Oct 02 08:24:05 compute-0 systemd[1]: libpod-conmon-6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2.scope: Deactivated successfully.
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:05 compute-0 nova_compute[260603]: 2025-10-02 08:24:05.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:24:05 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.008 2 INFO os_vif [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61')
Oct 02 08:24:06 compute-0 ovn_controller[152344]: 2025-10-02T08:24:06Z|00182|binding|INFO|Setting lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe ovn-installed in OVS
Oct 02 08:24:06 compute-0 ovn_controller[152344]: 2025-10-02T08:24:06Z|00183|binding|INFO|Setting lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe up in Southbound
Oct 02 08:24:06 compute-0 podman[295611]: 2025-10-02 08:24:06.034064996 +0000 UTC m=+0.049243598 container remove 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.042 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[36918d96-5d59-4ea0-98e9-07248b6d27b6]: (4, ('Thu Oct  2 08:24:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2)\n6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2\nThu Oct  2 08:24:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2)\n6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.049 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc8467e-3746-4761-ba54-d3068022938f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.050 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 kernel: tapc8aeabca-60: left promiscuous mode
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.057 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e674660-6929-4846-bdfa-50279fb0cd87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.089 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4643909-c18a-431b-b2d2-a9feec89b70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.090 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbd96c5-ce94-466f-8882-3757f76e0949]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.114 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe7d8b0-562d-4fe8-9be1-9bc224615980]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432172, 'reachable_time': 22828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295644, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 systemd[1]: run-netns-ovnmeta\x2dc8aeabca\x2d6b5c\x2d477a\x2d9156\x2d9f9592c20b93.mount: Deactivated successfully.
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.119 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.120 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[878136ba-bc45-437a-b657-9867278b9726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.121 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4e1ca975-9579-4bcd-8e92-5c169484f7fe in datapath 281b57e5-e0d2-447a-8a70-0fab75a8117e unbound from our chassis
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.122 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 281b57e5-e0d2-447a-8a70-0fab75a8117e
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.136 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6982660c-9a14-4172-8436-c9ed6b4f4209]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.137 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap281b57e5-e1 in ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.138 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap281b57e5-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.138 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19639d5a-5138-431b-88dc-2969079f80ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.139 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3abfc36f-2c8b-4ad2-973a-3bc01190dcbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.156 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6c7d6e-5fd3-490e-87b5-dade1a4912df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a72967da-0959-4f58-95b6-f79a89479063]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.230 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8de8a3c6-36d8-4621-93d3-5e8a355bd04e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.241 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3074dc87-2c60-4d44-bee3-f2d09a295622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 NetworkManager[45129]: <info>  [1759393446.2429] manager: (tap281b57e5-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Oct 02 08:24:06 compute-0 systemd-udevd[295645]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.287 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b5df1085-fe4e-408f-9c22-951247ba3ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.289 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab0fcec-66c1-4486-9aa9-08ae030ce084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 NetworkManager[45129]: <info>  [1759393446.3133] device (tap281b57e5-e0): carrier: link connected
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.318 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[98db480e-0ea0-469c-81de-692fdd662490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.337 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e10f8ac-0555-4f39-986d-c264f50c7b5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap281b57e5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:b3:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432491, 'reachable_time': 41599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295674, 'error': None, 'target': 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.353 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d47ce93-d57f-4322-9d37-4a85d2e96d2a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:b397'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432491, 'tstamp': 432491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295675, 'error': None, 'target': 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2adb0e88-7155-4e11-9d85-9e17ad10ce38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap281b57e5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:b3:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432491, 'reachable_time': 41599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295676, 'error': None, 'target': 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e3b063-c974-4a50-a269-dc9805b5a561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.469 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[49f29c64-e041-4853-bde9-79674a9edd9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.470 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap281b57e5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.471 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.472 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap281b57e5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:06 compute-0 NetworkManager[45129]: <info>  [1759393446.5172] manager: (tap281b57e5-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Oct 02 08:24:06 compute-0 kernel: tap281b57e5-e0: entered promiscuous mode
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.520 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap281b57e5-e0, col_values=(('external_ids', {'iface-id': 'a1c00715-c815-46c4-9348-33448964f617'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:06 compute-0 ovn_controller[152344]: 2025-10-02T08:24:06Z|00184|binding|INFO|Releasing lport a1c00715-c815-46c4-9348-33448964f617 from this chassis (sb_readonly=0)
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.533 2 INFO nova.virt.libvirt.driver [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Deleting instance files /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_del
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.533 2 INFO nova.virt.libvirt.driver [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Deletion of /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_del complete
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.543 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/281b57e5-e0d2-447a-8a70-0fab75a8117e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/281b57e5-e0d2-447a-8a70-0fab75a8117e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.544 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f9ba66-92e5-4e97-958d-29cb3e0f585c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.545 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-281b57e5-e0d2-447a-8a70-0fab75a8117e
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/281b57e5-e0d2-447a-8a70-0fab75a8117e.pid.haproxy
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 281b57e5-e0d2-447a-8a70-0fab75a8117e
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:24:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.546 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'env', 'PROCESS_TAG=haproxy-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/281b57e5-e0d2-447a-8a70-0fab75a8117e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:24:06 compute-0 ceph-mon[74477]: pgmap v1240: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.818 2 INFO nova.compute.manager [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Took 1.13 seconds to destroy the instance on the hypervisor.
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.819 2 DEBUG oslo.service.loopingcall [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.819 2 DEBUG nova.compute.manager [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:24:06 compute-0 nova_compute[260603]: 2025-10-02 08:24:06.819 2 DEBUG nova.network.neutron [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:24:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct 02 08:24:06 compute-0 podman[295708]: 2025-10-02 08:24:06.89271171 +0000 UTC m=+0.051879593 container create a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:24:06 compute-0 systemd[1]: Started libpod-conmon-a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8.scope.
Oct 02 08:24:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abcc92a4aba871794cf5916850bf03d08984036d0d311577b9f0f2209f683ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:06 compute-0 podman[295708]: 2025-10-02 08:24:06.86756963 +0000 UTC m=+0.026737533 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:24:06 compute-0 podman[295708]: 2025-10-02 08:24:06.97208006 +0000 UTC m=+0.131247943 container init a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:24:06 compute-0 podman[295708]: 2025-10-02 08:24:06.977212156 +0000 UTC m=+0.136380039 container start a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:24:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:07 compute-0 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [NOTICE]   (295728) : New worker (295730) forked
Oct 02 08:24:07 compute-0 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [NOTICE]   (295728) : Loading success.
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.631 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393447.6310706, 5d50db9c-0731-468a-81da-6762d68cda94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.631 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] VM Started (Lifecycle Event)
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.671 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.676 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393447.633374, 5d50db9c-0731-468a-81da-6762d68cda94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.676 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] VM Paused (Lifecycle Event)
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.717 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.721 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.780 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.808 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-unplugged-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] No waiting events found dispatching network-vif-unplugged-17b2c50e-61e3-4e6e-853a-81c49befe22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-unplugged-17b2c50e-61e3-4e6e-853a-81c49befe22b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] No waiting events found dispatching network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.810 2 WARNING nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received unexpected event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b for instance with vm_state active and task_state deleting.
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Processing event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.812 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] No waiting events found dispatching network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.812 2 WARNING nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received unexpected event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe for instance with vm_state building and task_state spawning.
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.812 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.818 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393447.8180413, 5d50db9c-0731-468a-81da-6762d68cda94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.818 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] VM Resumed (Lifecycle Event)
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.820 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.824 2 INFO nova.virt.libvirt.driver [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance spawned successfully.
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.824 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.884 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.889 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.893 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.894 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.894 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.895 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.896 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.896 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:07 compute-0 nova_compute[260603]: 2025-10-02 08:24:07.962 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.002 2 INFO nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 9.51 seconds to spawn the instance on the hypervisor.
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.002 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.160 2 INFO nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 10.57 seconds to build instance.
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.189 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.593 2 DEBUG nova.network.neutron [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.617 2 INFO nova.compute.manager [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Took 1.80 seconds to deallocate network for instance.
Oct 02 08:24:08 compute-0 ceph-mon[74477]: pgmap v1241: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.677 2 DEBUG nova.compute.manager [req-f08d490d-6448-4d29-abd0-adab22fc6eec req-4c21c98a-ef7c-46a7-94aa-965fd9334ede 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-deleted-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.680 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.681 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.714 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.731 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.732 2 DEBUG nova.compute.provider_tree [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.746 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.769 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 08:24:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 146 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 251 op/s
Oct 02 08:24:08 compute-0 nova_compute[260603]: 2025-10-02 08:24:08.855 2 DEBUG oslo_concurrency.processutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:09 compute-0 ovn_controller[152344]: 2025-10-02T08:24:09Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:3b:c8 10.100.0.10
Oct 02 08:24:09 compute-0 ovn_controller[152344]: 2025-10-02T08:24:09Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:3b:c8 10.100.0.10
Oct 02 08:24:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:24:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3668573006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.263 2 DEBUG oslo_concurrency.processutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.273 2 DEBUG nova.compute.provider_tree [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.297 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.325 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.351 2 INFO nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Deleted allocations for instance ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.419 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.533 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.533 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:24:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3668573006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:09 compute-0 nova_compute[260603]: 2025-10-02 08:24:09.888 2 DEBUG nova.virt.libvirt.driver [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:24:10 compute-0 ceph-mon[74477]: pgmap v1242: 305 pgs: 305 active+clean; 146 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 251 op/s
Oct 02 08:24:10 compute-0 nova_compute[260603]: 2025-10-02 08:24:10.750 2 DEBUG nova.compute.manager [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:10 compute-0 nova_compute[260603]: 2025-10-02 08:24:10.805 2 INFO nova.compute.manager [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] instance snapshotting
Oct 02 08:24:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 146 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 196 op/s
Oct 02 08:24:10 compute-0 nova_compute[260603]: 2025-10-02 08:24:10.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:11 compute-0 nova_compute[260603]: 2025-10-02 08:24:11.176 2 INFO nova.virt.libvirt.driver [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Beginning live snapshot process
Oct 02 08:24:11 compute-0 nova_compute[260603]: 2025-10-02 08:24:11.486 2 DEBUG nova.virt.libvirt.imagebackend [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:24:11 compute-0 nova_compute[260603]: 2025-10-02 08:24:11.672 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] creating snapshot(243c57363d5f474cb81b45cfe5573907) on rbd image(5d50db9c-0731-468a-81da-6762d68cda94_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:24:11 compute-0 nova_compute[260603]: 2025-10-02 08:24:11.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:12 compute-0 podman[295855]: 2025-10-02 08:24:12.033531455 +0000 UTC m=+0.090078176 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 08:24:12 compute-0 kernel: tapb546f674-89 (unregistering): left promiscuous mode
Oct 02 08:24:12 compute-0 NetworkManager[45129]: <info>  [1759393452.2024] device (tapb546f674-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:24:12 compute-0 ovn_controller[152344]: 2025-10-02T08:24:12Z|00185|binding|INFO|Releasing lport b546f674-89d3-44f3-82c4-9426c5910017 from this chassis (sb_readonly=0)
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:12 compute-0 ovn_controller[152344]: 2025-10-02T08:24:12Z|00186|binding|INFO|Setting lport b546f674-89d3-44f3-82c4-9426c5910017 down in Southbound
Oct 02 08:24:12 compute-0 ovn_controller[152344]: 2025-10-02T08:24:12Z|00187|binding|INFO|Removing iface tapb546f674-89 ovn-installed in OVS
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.231 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:3b:c8 10.100.0.10'], port_security=['fa:16:3e:f0:3b:c8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '95b47d23-06fb-460c-b8d9-7b7213dae4c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b546f674-89d3-44f3-82c4-9426c5910017) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.232 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b546f674-89d3-44f3-82c4-9426c5910017 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.233 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.234 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14183482-4b14-4cd1-b4e0-9551f6d6ee61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.237 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:12 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 02 08:24:12 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 12.592s CPU time.
Oct 02 08:24:12 compute-0 systemd-machined[214636]: Machine qemu-31-instance-0000001b terminated.
Oct 02 08:24:12 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [NOTICE]   (294836) : haproxy version is 2.8.14-c23fe91
Oct 02 08:24:12 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [NOTICE]   (294836) : path to executable is /usr/sbin/haproxy
Oct 02 08:24:12 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [WARNING]  (294836) : Exiting Master process...
Oct 02 08:24:12 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [WARNING]  (294836) : Exiting Master process...
Oct 02 08:24:12 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [ALERT]    (294836) : Current worker (294838) exited with code 143 (Terminated)
Oct 02 08:24:12 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [WARNING]  (294836) : All workers exited. Exiting... (0)
Oct 02 08:24:12 compute-0 systemd[1]: libpod-3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49.scope: Deactivated successfully.
Oct 02 08:24:12 compute-0 podman[295898]: 2025-10-02 08:24:12.419702115 +0000 UTC m=+0.055341605 container died 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49-userdata-shm.mount: Deactivated successfully.
Oct 02 08:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-98ec9518e036099ec816e19e9d3a64de8fb07b1fecfb5f7dfe22b9c3e6c79aa9-merged.mount: Deactivated successfully.
Oct 02 08:24:12 compute-0 podman[295898]: 2025-10-02 08:24:12.498133985 +0000 UTC m=+0.133773495 container cleanup 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:24:12 compute-0 systemd[1]: libpod-conmon-3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49.scope: Deactivated successfully.
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.540 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.542 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.543 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:12 compute-0 podman[295935]: 2025-10-02 08:24:12.596349381 +0000 UTC m=+0.058202308 container remove 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.603 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e632038a-1716-406a-9147-09a83271115f]: (4, ('Thu Oct  2 08:24:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49)\n3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49\nThu Oct  2 08:24:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49)\n3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.605 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[801778b7-db8c-4bf6-9b73-08b4482ffe07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.606 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:12 compute-0 kernel: tap897d7abf-90: left promiscuous mode
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 02 08:24:12 compute-0 ceph-mon[74477]: pgmap v1243: 305 pgs: 305 active+clean; 146 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 196 op/s
Oct 02 08:24:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 02 08:24:12 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.679 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff50bf7-1c25-4e95-a95f-1fd8477eb1e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.681 2 DEBUG nova.compute.manager [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-unplugged-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.682 2 DEBUG oslo_concurrency.lockutils [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.682 2 DEBUG oslo_concurrency.lockutils [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.682 2 DEBUG oslo_concurrency.lockutils [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.682 2 DEBUG nova.compute.manager [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] No waiting events found dispatching network-vif-unplugged-b546f674-89d3-44f3-82c4-9426c5910017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.683 2 WARNING nova.compute.manager [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received unexpected event network-vif-unplugged-b546f674-89d3-44f3-82c4-9426c5910017 for instance with vm_state active and task_state powering-off.
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[751bb59b-cb52-4f0f-907c-bba885c22100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.718 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e321462a-5e81-4ded-ad0d-e176856020ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.737 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[56f8f13b-3636-4af2-af91-5b4ccb6c2975]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431424, 'reachable_time': 43153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295953, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.740 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:24:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.740 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b8c0de-ced0-4e19-8ead-798b1706e4f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.753 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] cloning vms/5d50db9c-0731-468a-81da-6762d68cda94_disk@243c57363d5f474cb81b45cfe5573907 to images/ed5c9663-363d-4483-aa94-11714a93ce2f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:24:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 165 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.3 MiB/s wr, 272 op/s
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.903 2 INFO nova.virt.libvirt.driver [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance shutdown successfully after 13 seconds.
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.920 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] flattening images/ed5c9663-363d-4483-aa94-11714a93ce2f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.979 2 INFO nova.virt.libvirt.driver [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance destroyed successfully.
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.979 2 DEBUG nova.objects.instance [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'numa_topology' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:12 compute-0 nova_compute[260603]: 2025-10-02 08:24:12.997 2 DEBUG nova.compute.manager [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:13 compute-0 nova_compute[260603]: 2025-10-02 08:24:13.052 2 DEBUG oslo_concurrency.lockutils [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:13 compute-0 nova_compute[260603]: 2025-10-02 08:24:13.164 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] removing snapshot(243c57363d5f474cb81b45cfe5573907) on rbd image(5d50db9c-0731-468a-81da-6762d68cda94_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:24:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 02 08:24:13 compute-0 ceph-mon[74477]: osdmap e152: 3 total, 3 up, 3 in
Oct 02 08:24:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 02 08:24:13 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 02 08:24:13 compute-0 nova_compute[260603]: 2025-10-02 08:24:13.733 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] creating snapshot(snap) on rbd image(ed5c9663-363d-4483-aa94-11714a93ce2f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:24:13 compute-0 nova_compute[260603]: 2025-10-02 08:24:13.870 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updating instance_info_cache with network_info: [{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:13 compute-0 nova_compute[260603]: 2025-10-02 08:24:13.892 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:24:13 compute-0 nova_compute[260603]: 2025-10-02 08:24:13.893 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:24:13 compute-0 nova_compute[260603]: 2025-10-02 08:24:13.894 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:14 compute-0 ovn_controller[152344]: 2025-10-02T08:24:14Z|00188|binding|INFO|Releasing lport a1c00715-c815-46c4-9348-33448964f617 from this chassis (sb_readonly=0)
Oct 02 08:24:14 compute-0 nova_compute[260603]: 2025-10-02 08:24:14.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:14 compute-0 nova_compute[260603]: 2025-10-02 08:24:14.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:14 compute-0 nova_compute[260603]: 2025-10-02 08:24:14.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:14 compute-0 nova_compute[260603]: 2025-10-02 08:24:14.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:14 compute-0 nova_compute[260603]: 2025-10-02 08:24:14.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:14 compute-0 nova_compute[260603]: 2025-10-02 08:24:14.542 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:24:14 compute-0 nova_compute[260603]: 2025-10-02 08:24:14.542 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 02 08:24:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 02 08:24:14 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 02 08:24:14 compute-0 ceph-mon[74477]: pgmap v1245: 305 pgs: 305 active+clean; 165 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.3 MiB/s wr, 272 op/s
Oct 02 08:24:14 compute-0 ceph-mon[74477]: osdmap e153: 3 total, 3 up, 3 in
Oct 02 08:24:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 182 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.5 MiB/s wr, 271 op/s
Oct 02 08:24:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:24:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690967593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:14 compute-0 nova_compute[260603]: 2025-10-02 08:24:14.995 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.015 2 DEBUG nova.compute.manager [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.016 2 DEBUG oslo_concurrency.lockutils [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.016 2 DEBUG oslo_concurrency.lockutils [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.017 2 DEBUG oslo_concurrency.lockutils [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.018 2 DEBUG nova.compute.manager [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] No waiting events found dispatching network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.018 2 WARNING nova.compute.manager [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received unexpected event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 for instance with vm_state stopped and task_state image_snapshot_pending.
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.022 2 DEBUG nova.compute.manager [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.103 2 INFO nova.compute.manager [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] instance snapshotting
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.103 2 WARNING nova.compute.manager [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.123 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.123 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.128 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.129 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.352 2 INFO nova.virt.libvirt.driver [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Beginning cold snapshot process
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.356 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.357 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4361MB free_disk=59.92191696166992GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.357 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.358 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.449 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 95b47d23-06fb-460c-b8d9-7b7213dae4c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5d50db9c-0731-468a-81da-6762d68cda94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.512 2 DEBUG nova.virt.libvirt.imagebackend [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.539 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:15 compute-0 ceph-mon[74477]: osdmap e154: 3 total, 3 up, 3 in
Oct 02 08:24:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/690967593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.750 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(8a6ab14365794246a4643e95735b5aa4) on rbd image(95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:24:15 compute-0 nova_compute[260603]: 2025-10-02 08:24:15.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:24:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267837788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:16 compute-0 podman[296138]: 2025-10-02 08:24:16.017657283 +0000 UTC m=+0.083710211 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.038 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.046 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.072 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.097 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.097 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.153 2 INFO nova.virt.libvirt.driver [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Snapshot image upload complete
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.153 2 INFO nova.compute.manager [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 5.35 seconds to snapshot the instance on the hypervisor.
Oct 02 08:24:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 02 08:24:16 compute-0 ceph-mon[74477]: pgmap v1248: 305 pgs: 305 active+clean; 182 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.5 MiB/s wr, 271 op/s
Oct 02 08:24:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1267837788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 02 08:24:16 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.783 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning vms/95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk@8a6ab14365794246a4643e95735b5aa4 to images/dcbb3886-223f-481d-a1b9-0a1d2d27080f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 182 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 142 op/s
Oct 02 08:24:16 compute-0 nova_compute[260603]: 2025-10-02 08:24:16.912 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] flattening images/dcbb3886-223f-481d-a1b9-0a1d2d27080f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:24:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:17 compute-0 nova_compute[260603]: 2025-10-02 08:24:17.095 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:17 compute-0 nova_compute[260603]: 2025-10-02 08:24:17.096 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:17 compute-0 nova_compute[260603]: 2025-10-02 08:24:17.121 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:17 compute-0 nova_compute[260603]: 2025-10-02 08:24:17.252 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(8a6ab14365794246a4643e95735b5aa4) on rbd image(95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:24:17 compute-0 nova_compute[260603]: 2025-10-02 08:24:17.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:24:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 02 08:24:17 compute-0 ceph-mon[74477]: osdmap e155: 3 total, 3 up, 3 in
Oct 02 08:24:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 02 08:24:17 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 02 08:24:17 compute-0 nova_compute[260603]: 2025-10-02 08:24:17.760 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(snap) on rbd image(dcbb3886-223f-481d-a1b9-0a1d2d27080f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:24:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 02 08:24:18 compute-0 ceph-mon[74477]: pgmap v1250: 305 pgs: 305 active+clean; 182 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 142 op/s
Oct 02 08:24:18 compute-0 ceph-mon[74477]: osdmap e156: 3 total, 3 up, 3 in
Oct 02 08:24:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 02 08:24:18 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 02 08:24:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 285 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 15 MiB/s rd, 13 MiB/s wr, 355 op/s
Oct 02 08:24:19 compute-0 ovn_controller[152344]: 2025-10-02T08:24:19Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:e7:88 10.100.0.7
Oct 02 08:24:19 compute-0 ovn_controller[152344]: 2025-10-02T08:24:19Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:e7:88 10.100.0.7
Oct 02 08:24:19 compute-0 ceph-mon[74477]: osdmap e157: 3 total, 3 up, 3 in
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.022 2 DEBUG nova.compute.manager [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.073 2 INFO nova.compute.manager [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] instance snapshotting
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.272 2 INFO nova.virt.libvirt.driver [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Beginning live snapshot process
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.388 2 INFO nova.virt.libvirt.driver [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Snapshot image upload complete
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.389 2 INFO nova.compute.manager [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 5.28 seconds to snapshot the instance on the hypervisor.
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.478 2 DEBUG nova.virt.libvirt.imagebackend [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:24:20 compute-0 ceph-mon[74477]: pgmap v1253: 305 pgs: 305 active+clean; 285 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 15 MiB/s rd, 13 MiB/s wr, 355 op/s
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.801 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] creating snapshot(8bfb15f943e943a49d11c7009e10679a) on rbd image(5d50db9c-0731-468a-81da-6762d68cda94_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:24:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 285 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 9.2 MiB/s wr, 245 op/s
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.935 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393445.9346998, ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.936 2 INFO nova.compute.manager [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] VM Stopped (Lifecycle Event)
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.973 2 DEBUG nova.compute.manager [None req-22db1ee5-5dd6-4876-ac94-ef93b6fceeae - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:20 compute-0 nova_compute[260603]: 2025-10-02 08:24:20.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 02 08:24:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 02 08:24:21 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 02 08:24:21 compute-0 nova_compute[260603]: 2025-10-02 08:24:21.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:21 compute-0 nova_compute[260603]: 2025-10-02 08:24:21.818 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] cloning vms/5d50db9c-0731-468a-81da-6762d68cda94_disk@8bfb15f943e943a49d11c7009e10679a to images/775ace25-87e6-4cc4-80cb-b697eaf54c22 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:24:21 compute-0 nova_compute[260603]: 2025-10-02 08:24:21.968 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] flattening images/775ace25-87e6-4cc4-80cb-b697eaf54c22 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:24:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 02 08:24:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 02 08:24:22 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 02 08:24:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:24:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190666196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:24:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:24:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190666196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.243 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] removing snapshot(8bfb15f943e943a49d11c7009e10679a) on rbd image(5d50db9c-0731-468a-81da-6762d68cda94_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.563 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.564 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.564 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.565 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.565 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.567 2 INFO nova.compute.manager [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Terminating instance
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.569 2 DEBUG nova.compute.manager [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.577 2 INFO nova.virt.libvirt.driver [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance destroyed successfully.
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.577 2 DEBUG nova.objects.instance [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.594 2 DEBUG nova.virt.libvirt.vif [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-798194672',display_name='tempest-ImagesTestJSON-server-798194672',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-798194672',id=27,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:23:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-t2xpcyan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:24:20Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=95b47d23-06fb-460c-b8d9-7b7213dae4c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.595 2 DEBUG nova.network.os_vif_util [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.596 2 DEBUG nova.network.os_vif_util [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.597 2 DEBUG os_vif [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb546f674-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:22 compute-0 nova_compute[260603]: 2025-10-02 08:24:22.650 2 INFO os_vif [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89')
Oct 02 08:24:22 compute-0 ceph-mon[74477]: pgmap v1254: 305 pgs: 305 active+clean; 285 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 9.2 MiB/s wr, 245 op/s
Oct 02 08:24:22 compute-0 ceph-mon[74477]: osdmap e158: 3 total, 3 up, 3 in
Oct 02 08:24:22 compute-0 ceph-mon[74477]: osdmap e159: 3 total, 3 up, 3 in
Oct 02 08:24:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4190666196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:24:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4190666196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:24:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 270 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 403 op/s
Oct 02 08:24:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 02 08:24:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 02 08:24:23 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 02 08:24:23 compute-0 nova_compute[260603]: 2025-10-02 08:24:23.033 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] creating snapshot(snap) on rbd image(775ace25-87e6-4cc4-80cb-b697eaf54c22) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:24:23 compute-0 nova_compute[260603]: 2025-10-02 08:24:23.093 2 INFO nova.virt.libvirt.driver [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Deleting instance files /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7_del
Oct 02 08:24:23 compute-0 nova_compute[260603]: 2025-10-02 08:24:23.094 2 INFO nova.virt.libvirt.driver [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Deletion of /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7_del complete
Oct 02 08:24:23 compute-0 nova_compute[260603]: 2025-10-02 08:24:23.141 2 INFO nova.compute.manager [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 0.57 seconds to destroy the instance on the hypervisor.
Oct 02 08:24:23 compute-0 nova_compute[260603]: 2025-10-02 08:24:23.142 2 DEBUG oslo.service.loopingcall [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:24:23 compute-0 nova_compute[260603]: 2025-10-02 08:24:23.143 2 DEBUG nova.compute.manager [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:24:23 compute-0 nova_compute[260603]: 2025-10-02 08:24:23.144 2 DEBUG nova.network.neutron [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:24:23 compute-0 nova_compute[260603]: 2025-10-02 08:24:23.987 2 DEBUG nova.network.neutron [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 02 08:24:24 compute-0 ceph-mon[74477]: pgmap v1257: 305 pgs: 305 active+clean; 270 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 403 op/s
Oct 02 08:24:24 compute-0 ceph-mon[74477]: osdmap e160: 3 total, 3 up, 3 in
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.005 2 INFO nova.compute.manager [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 0.86 seconds to deallocate network for instance.
Oct 02 08:24:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 02 08:24:24 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.050 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.050 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.148 2 DEBUG oslo_concurrency.processutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:24:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140571578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.602 2 DEBUG oslo_concurrency.processutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.609 2 DEBUG nova.compute.provider_tree [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.630 2 DEBUG nova.scheduler.client.report [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.660 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.686 2 INFO nova.scheduler.client.report [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 95b47d23-06fb-460c-b8d9-7b7213dae4c7
Oct 02 08:24:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 234 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 18 MiB/s wr, 588 op/s
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.946 2 DEBUG nova.compute.manager [req-86091690-24ba-4a1f-b7bb-1a5d83250e1f req-e1a4b607-bc52-4f37-9603-28aec973756e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-deleted-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:24 compute-0 nova_compute[260603]: 2025-10-02 08:24:24.950 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:25 compute-0 ceph-mon[74477]: osdmap e161: 3 total, 3 up, 3 in
Oct 02 08:24:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2140571578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:25 compute-0 nova_compute[260603]: 2025-10-02 08:24:25.899 2 INFO nova.virt.libvirt.driver [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Snapshot image upload complete
Oct 02 08:24:25 compute-0 nova_compute[260603]: 2025-10-02 08:24:25.900 2 INFO nova.compute.manager [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 5.83 seconds to snapshot the instance on the hypervisor.
Oct 02 08:24:26 compute-0 ceph-mon[74477]: pgmap v1260: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 234 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 18 MiB/s wr, 588 op/s
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.219 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.219 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.242 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.306 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.307 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.316 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.316 2 INFO nova.compute.claims [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.435 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 234 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 14 MiB/s wr, 463 op/s
Oct 02 08:24:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:24:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264070693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.907 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.915 2 DEBUG nova.compute.provider_tree [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.937 2 DEBUG nova.scheduler.client.report [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.967 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:26 compute-0 nova_compute[260603]: 2025-10-02 08:24:26.968 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:24:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 02 08:24:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 02 08:24:27 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.018 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.019 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.037 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:24:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/264070693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:27 compute-0 ceph-mon[74477]: osdmap e162: 3 total, 3 up, 3 in
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.056 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.135 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.136 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.137 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Creating image(s)
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.178 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.206 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.233 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.236 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.330 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.331 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.332 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.332 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.359 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.363 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7e111a93-618c-4a35-a412-f1e8a975a139_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.400 2 DEBUG nova.policy [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.467 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393452.466161, 95b47d23-06fb-460c-b8d9-7b7213dae4c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.468 2 INFO nova.compute.manager [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] VM Stopped (Lifecycle Event)
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.494 2 DEBUG nova.compute.manager [None req-d6d28382-7ae4-4c3d-a746-22b82c46a7bd - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.630 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7e111a93-618c-4a35-a412-f1e8a975a139_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.746 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.848 2 DEBUG nova.objects.instance [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e111a93-618c-4a35-a412-f1e8a975a139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.864 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.865 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Ensure instance console log exists: /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.865 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.866 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:27 compute-0 nova_compute[260603]: 2025-10-02 08:24:27.866 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:24:27
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms']
Oct 02 08:24:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:24:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 02 08:24:28 compute-0 ceph-mon[74477]: pgmap v1261: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 234 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 14 MiB/s wr, 463 op/s
Oct 02 08:24:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 02 08:24:28 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:24:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 222 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.9 MiB/s wr, 265 op/s
Oct 02 08:24:29 compute-0 ceph-mon[74477]: osdmap e163: 3 total, 3 up, 3 in
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.314 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.315 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.316 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.316 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.317 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.319 2 INFO nova.compute.manager [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Terminating instance
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.321 2 DEBUG nova.compute.manager [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.349 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Successfully created port: cd52f5bd-f296-4867-b590-680a2c8a2870 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:24:29 compute-0 kernel: tap4e1ca975-95 (unregistering): left promiscuous mode
Oct 02 08:24:29 compute-0 NetworkManager[45129]: <info>  [1759393469.3796] device (tap4e1ca975-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:24:29 compute-0 ovn_controller[152344]: 2025-10-02T08:24:29Z|00189|binding|INFO|Releasing lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe from this chassis (sb_readonly=0)
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 ovn_controller[152344]: 2025-10-02T08:24:29Z|00190|binding|INFO|Setting lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe down in Southbound
Oct 02 08:24:29 compute-0 ovn_controller[152344]: 2025-10-02T08:24:29Z|00191|binding|INFO|Removing iface tap4e1ca975-95 ovn-installed in OVS
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.403 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:e7:88 10.100.0.7'], port_security=['fa:16:3e:82:e7:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5d50db9c-0731-468a-81da-6762d68cda94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3665ae0e483545e2aaa658ac8a3949aa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '927e44a9-fbe2-4210-9cfb-52bbd0657d4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33e27bca-0333-46b3-bd98-5e7fb62b735e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4e1ca975-9579-4bcd-8e92-5c169484f7fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.405 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4e1ca975-9579-4bcd-8e92-5c169484f7fe in datapath 281b57e5-e0d2-447a-8a70-0fab75a8117e unbound from our chassis
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.407 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 281b57e5-e0d2-447a-8a70-0fab75a8117e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.408 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7972399-f042-4025-bff1-7c0adc7da600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.409 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e namespace which is not needed anymore
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 02 08:24:29 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 13.899s CPU time.
Oct 02 08:24:29 compute-0 systemd-machined[214636]: Machine qemu-33-instance-0000001d terminated.
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.562 2 INFO nova.virt.libvirt.driver [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance destroyed successfully.
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.563 2 DEBUG nova.objects.instance [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lazy-loading 'resources' on Instance uuid 5d50db9c-0731-468a-81da-6762d68cda94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.575 2 DEBUG nova.virt.libvirt.vif [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1904255359',display_name='tempest-ImagesOneServerTestJSON-server-1904255359',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1904255359',id=29,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:24:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3665ae0e483545e2aaa658ac8a3949aa',ramdisk_id='',reservation_id='r-qijl0plc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-2127991989',owner_user_name='tempest-ImagesOneServerTestJSON-2127991989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:24:25Z,user_data=None,user_id='e0a15988bafc4a03bd5b08291a4cc14c',uuid=5d50db9c-0731-468a-81da-6762d68cda94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.576 2 DEBUG nova.network.os_vif_util [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converting VIF {"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.576 2 DEBUG nova.network.os_vif_util [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.577 2 DEBUG os_vif [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e1ca975-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [NOTICE]   (295728) : haproxy version is 2.8.14-c23fe91
Oct 02 08:24:29 compute-0 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [NOTICE]   (295728) : path to executable is /usr/sbin/haproxy
Oct 02 08:24:29 compute-0 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [WARNING]  (295728) : Exiting Master process...
Oct 02 08:24:29 compute-0 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [ALERT]    (295728) : Current worker (295730) exited with code 143 (Terminated)
Oct 02 08:24:29 compute-0 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [WARNING]  (295728) : All workers exited. Exiting... (0)
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 systemd[1]: libpod-a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8.scope: Deactivated successfully.
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.588 2 INFO os_vif [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95')
Oct 02 08:24:29 compute-0 podman[296650]: 2025-10-02 08:24:29.593431781 +0000 UTC m=+0.056001236 container died a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 08:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8-userdata-shm.mount: Deactivated successfully.
Oct 02 08:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1abcc92a4aba871794cf5916850bf03d08984036d0d311577b9f0f2209f683ce-merged.mount: Deactivated successfully.
Oct 02 08:24:29 compute-0 podman[296650]: 2025-10-02 08:24:29.638764402 +0000 UTC m=+0.101333867 container cleanup a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:24:29 compute-0 systemd[1]: libpod-conmon-a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8.scope: Deactivated successfully.
Oct 02 08:24:29 compute-0 podman[296710]: 2025-10-02 08:24:29.705578087 +0000 UTC m=+0.043936047 container remove a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.711 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a13c69-2093-4d25-a9da-ea770c25b0e3]: (4, ('Thu Oct  2 08:24:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e (a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8)\na9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8\nThu Oct  2 08:24:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e (a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8)\na9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.713 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d42c908a-8aba-42df-9fb6-c8d6bbd2e23e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.714 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap281b57e5-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 kernel: tap281b57e5-e0: left promiscuous mode
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.759 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9da63c-58ca-44c3-a563-4656d83fe856]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6a011701-63eb-4ec8-acb3-f6deb961b937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.792 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0248e198-200a-4acf-942f-c9ab80820b80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec6becc-3575-48bf-837b-fb8bf7a991c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432482, 'reachable_time': 24965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296726, 'error': None, 'target': 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.815 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:24:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.815 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1f084aec-6d2d-4c26-a465-6575945a65b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d281b57e5\x2de0d2\x2d447a\x2d8a70\x2d0fab75a8117e.mount: Deactivated successfully.
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.926 2 INFO nova.virt.libvirt.driver [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Deleting instance files /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94_del
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.927 2 INFO nova.virt.libvirt.driver [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Deletion of /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94_del complete
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.968 2 INFO nova.compute.manager [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 0.65 seconds to destroy the instance on the hypervisor.
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.968 2 DEBUG oslo.service.loopingcall [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.969 2 DEBUG nova.compute.manager [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:24:29 compute-0 nova_compute[260603]: 2025-10-02 08:24:29.969 2 DEBUG nova.network.neutron [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:24:30 compute-0 ceph-mon[74477]: pgmap v1264: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 222 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.9 MiB/s wr, 265 op/s
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.297 2 DEBUG nova.compute.manager [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-unplugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG oslo_concurrency.lockutils [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG oslo_concurrency.lockutils [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG oslo_concurrency.lockutils [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG nova.compute.manager [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] No waiting events found dispatching network-vif-unplugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG nova.compute.manager [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-unplugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.575 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Successfully updated port: cd52f5bd-f296-4867-b590-680a2c8a2870 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.596 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.596 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.597 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.700 2 DEBUG nova.compute.manager [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-changed-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.700 2 DEBUG nova.compute.manager [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Refreshing instance network info cache due to event network-changed-cd52f5bd-f296-4867-b590-680a2c8a2870. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.701 2 DEBUG oslo_concurrency.lockutils [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.718 2 DEBUG nova.network.neutron [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.741 2 INFO nova.compute.manager [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 0.77 seconds to deallocate network for instance.
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.807 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.808 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 222 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 08:24:30 compute-0 nova_compute[260603]: 2025-10-02 08:24:30.880 2 DEBUG oslo_concurrency.processutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:24:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3322198750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:31 compute-0 nova_compute[260603]: 2025-10-02 08:24:31.336 2 DEBUG oslo_concurrency.processutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:31 compute-0 nova_compute[260603]: 2025-10-02 08:24:31.344 2 DEBUG nova.compute.provider_tree [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:24:31 compute-0 nova_compute[260603]: 2025-10-02 08:24:31.368 2 DEBUG nova.scheduler.client.report [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:24:31 compute-0 nova_compute[260603]: 2025-10-02 08:24:31.399 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:31 compute-0 nova_compute[260603]: 2025-10-02 08:24:31.420 2 INFO nova.scheduler.client.report [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Deleted allocations for instance 5d50db9c-0731-468a-81da-6762d68cda94
Oct 02 08:24:31 compute-0 nova_compute[260603]: 2025-10-02 08:24:31.549 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:24:31 compute-0 nova_compute[260603]: 2025-10-02 08:24:31.607 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:31 compute-0 nova_compute[260603]: 2025-10-02 08:24:31.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 02 08:24:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 02 08:24:32 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 02 08:24:32 compute-0 ceph-mon[74477]: pgmap v1265: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 222 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 08:24:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3322198750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:32 compute-0 ceph-mon[74477]: osdmap e164: 3 total, 3 up, 3 in
Oct 02 08:24:32 compute-0 nova_compute[260603]: 2025-10-02 08:24:32.429 2 DEBUG nova.compute.manager [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:32 compute-0 nova_compute[260603]: 2025-10-02 08:24:32.429 2 DEBUG oslo_concurrency.lockutils [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:32 compute-0 nova_compute[260603]: 2025-10-02 08:24:32.430 2 DEBUG oslo_concurrency.lockutils [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:32 compute-0 nova_compute[260603]: 2025-10-02 08:24:32.430 2 DEBUG oslo_concurrency.lockutils [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:32 compute-0 nova_compute[260603]: 2025-10-02 08:24:32.430 2 DEBUG nova.compute.manager [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] No waiting events found dispatching network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:32 compute-0 nova_compute[260603]: 2025-10-02 08:24:32.431 2 WARNING nova.compute.manager [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received unexpected event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe for instance with vm_state deleted and task_state None.
Oct 02 08:24:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 7 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 292 active+clean; 145 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 4.0 MiB/s wr, 237 op/s
Oct 02 08:24:32 compute-0 nova_compute[260603]: 2025-10-02 08:24:32.920 2 DEBUG nova.compute.manager [req-aeadd090-9f79-4aad-b3d2-c43379652254 req-f7470724-a9d1-4685-9528-2bbe9aa837bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-deleted-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.697 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Updating instance_info_cache with network_info: [{"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.724 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.724 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance network_info: |[{"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.725 2 DEBUG oslo_concurrency.lockutils [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.725 2 DEBUG nova.network.neutron [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Refreshing network info cache for port cd52f5bd-f296-4867-b590-680a2c8a2870 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.730 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start _get_guest_xml network_info=[{"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.736 2 WARNING nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.742 2 DEBUG nova.virt.libvirt.host [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.743 2 DEBUG nova.virt.libvirt.host [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.752 2 DEBUG nova.virt.libvirt.host [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.753 2 DEBUG nova.virt.libvirt.host [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.753 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.754 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.755 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.755 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.756 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.756 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.757 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.757 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.758 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.758 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.759 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.759 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:24:33 compute-0 nova_compute[260603]: 2025-10-02 08:24:33.763 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:34 compute-0 ceph-mon[74477]: pgmap v1267: 305 pgs: 7 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 292 active+clean; 145 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 4.0 MiB/s wr, 237 op/s
Oct 02 08:24:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:24:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400793247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.205 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.235 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.240 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:24:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817994304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.699 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.702 2 DEBUG nova.virt.libvirt.vif [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:24:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-902237424',display_name='tempest-ImagesTestJSON-server-902237424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-902237424',id=30,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-cf2bt1a3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:24:27Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=7e111a93-618c-4a35-a412-f1e8a975a139,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.703 2 DEBUG nova.network.os_vif_util [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.704 2 DEBUG nova.network.os_vif_util [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.706 2 DEBUG nova.objects.instance [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e111a93-618c-4a35-a412-f1e8a975a139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.722 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <uuid>7e111a93-618c-4a35-a412-f1e8a975a139</uuid>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <name>instance-0000001e</name>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <nova:name>tempest-ImagesTestJSON-server-902237424</nova:name>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:24:33</nova:creationTime>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <nova:port uuid="cd52f5bd-f296-4867-b590-680a2c8a2870">
Oct 02 08:24:34 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <system>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <entry name="serial">7e111a93-618c-4a35-a412-f1e8a975a139</entry>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <entry name="uuid">7e111a93-618c-4a35-a412-f1e8a975a139</entry>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </system>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <os>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   </os>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <features>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   </features>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7e111a93-618c-4a35-a412-f1e8a975a139_disk">
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       </source>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7e111a93-618c-4a35-a412-f1e8a975a139_disk.config">
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       </source>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:24:34 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:cb:0f:7a"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <target dev="tapcd52f5bd-f2"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/console.log" append="off"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <video>
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </video>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:24:34 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:24:34 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:24:34 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:24:34 compute-0 nova_compute[260603]: </domain>
Oct 02 08:24:34 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.725 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Preparing to wait for external event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.726 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.726 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.727 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.728 2 DEBUG nova.virt.libvirt.vif [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:24:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-902237424',display_name='tempest-ImagesTestJSON-server-902237424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-902237424',id=30,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-cf2bt1a3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=
TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:24:27Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=7e111a93-618c-4a35-a412-f1e8a975a139,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.728 2 DEBUG nova.network.os_vif_util [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.730 2 DEBUG nova.network.os_vif_util [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.730 2 DEBUG os_vif [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd52f5bd-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd52f5bd-f2, col_values=(('external_ids', {'iface-id': 'cd52f5bd-f296-4867-b590-680a2c8a2870', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:0f:7a', 'vm-uuid': '7e111a93-618c-4a35-a412-f1e8a975a139'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:34 compute-0 NetworkManager[45129]: <info>  [1759393474.7417] manager: (tapcd52f5bd-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.750 2 INFO os_vif [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2')
Oct 02 08:24:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:34.811 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:34.811 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:34.812 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 3.1 MiB/s wr, 221 op/s
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.865 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.866 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.866 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:cb:0f:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.867 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Using config drive
Oct 02 08:24:34 compute-0 nova_compute[260603]: 2025-10-02 08:24:34.899 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2400793247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/817994304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:24:35 compute-0 nova_compute[260603]: 2025-10-02 08:24:35.583 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Creating config drive at /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config
Oct 02 08:24:35 compute-0 nova_compute[260603]: 2025-10-02 08:24:35.588 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hecl41g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:35 compute-0 nova_compute[260603]: 2025-10-02 08:24:35.735 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hecl41g" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:35 compute-0 nova_compute[260603]: 2025-10-02 08:24:35.759 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:35 compute-0 nova_compute[260603]: 2025-10-02 08:24:35.762 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:35 compute-0 nova_compute[260603]: 2025-10-02 08:24:35.923 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:35 compute-0 nova_compute[260603]: 2025-10-02 08:24:35.924 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Deleting local config drive /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config because it was imported into RBD.
Oct 02 08:24:35 compute-0 kernel: tapcd52f5bd-f2: entered promiscuous mode
Oct 02 08:24:35 compute-0 NetworkManager[45129]: <info>  [1759393475.9887] manager: (tapcd52f5bd-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct 02 08:24:35 compute-0 ovn_controller[152344]: 2025-10-02T08:24:35Z|00192|binding|INFO|Claiming lport cd52f5bd-f296-4867-b590-680a2c8a2870 for this chassis.
Oct 02 08:24:35 compute-0 ovn_controller[152344]: 2025-10-02T08:24:35Z|00193|binding|INFO|cd52f5bd-f296-4867-b590-680a2c8a2870: Claiming fa:16:3e:cb:0f:7a 10.100.0.8
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:35.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.014 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0f:7a 10.100.0.8'], port_security=['fa:16:3e:cb:0f:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7e111a93-618c-4a35-a412-f1e8a975a139', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=cd52f5bd-f296-4867-b590-680a2c8a2870) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.015 162357 INFO neutron.agent.ovn.metadata.agent [-] Port cd52f5bd-f296-4867-b590-680a2c8a2870 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.016 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:24:36 compute-0 systemd-machined[214636]: New machine qemu-34-instance-0000001e.
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.035 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dafa140f-d1a6-41a6-b4c3-594861c3ca01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.036 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap897d7abf-91 in ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.038 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap897d7abf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60061c05-0245-41ad-a693-e22c79168632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9db66d-ef28-4d0d-b416-2a0b565bec22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Oct 02 08:24:36 compute-0 podman[296873]: 2025-10-02 08:24:36.050764223 +0000 UTC m=+0.107094645 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.050 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[3063cbad-c2b9-419a-820a-31cb49b8c283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 systemd-udevd[296934]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 podman[296872]: 2025-10-02 08:24:36.077004309 +0000 UTC m=+0.135692697 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 08:24:36 compute-0 ovn_controller[152344]: 2025-10-02T08:24:36Z|00194|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 ovn-installed in OVS
Oct 02 08:24:36 compute-0 ovn_controller[152344]: 2025-10-02T08:24:36Z|00195|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 up in Southbound
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 NetworkManager[45129]: <info>  [1759393476.0832] device (tapcd52f5bd-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:24:36 compute-0 NetworkManager[45129]: <info>  [1759393476.0841] device (tapcd52f5bd-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.084 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4070ff39-2dbe-4ef1-b5bb-b8dd8c21a529]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ceph-mon[74477]: pgmap v1268: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 3.1 MiB/s wr, 221 op/s
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.112 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4184ae1-16ab-4a31-b2c0-c353c995fe82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 NetworkManager[45129]: <info>  [1759393476.1273] manager: (tap897d7abf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Oct 02 08:24:36 compute-0 systemd-udevd[296936]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.128 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa0fd45-cf04-49fc-bd40-0a64c83bfdd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.156 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c693e42-b39a-4dba-b912-56b604d9cdd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.158 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2a3235-143c-452a-85af-925c5dd5b957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 NetworkManager[45129]: <info>  [1759393476.1762] device (tap897d7abf-90): carrier: link connected
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.182 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[12b74236-0067-47cf-90d5-ed047764f732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.197 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01acb120-7192-46aa-b818-59b6bb776158]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435477, 'reachable_time': 23844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296964, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.211 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30d24624-f264-4b78-be2e-72325124d4e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:18ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 435477, 'tstamp': 435477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296965, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.229 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[28c19370-5a80-4717-ab7b-0a149d3ebf86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435477, 'reachable_time': 23844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296966, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.263 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10ccca31-ddcb-4d24-8bcd-70f87ec75b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.326 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb61f75d-c40a-43e5-bd8d-bb9636868431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 NetworkManager[45129]: <info>  [1759393476.3750] manager: (tap897d7abf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Oct 02 08:24:36 compute-0 kernel: tap897d7abf-90: entered promiscuous mode
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.378 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 ovn_controller[152344]: 2025-10-02T08:24:36Z|00196|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.381 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.382 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf472b14-0a94-4ab6-a856-fd73b14b3c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.383 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:24:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.383 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'env', 'PROCESS_TAG=haproxy-897d7abf-9e23-43cd-8f60-7156792a4360', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/897d7abf-9e23-43cd-8f60-7156792a4360.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 podman[297040]: 2025-10-02 08:24:36.780429909 +0000 UTC m=+0.052609807 container create b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.795 2 DEBUG nova.compute.manager [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.795 2 DEBUG oslo_concurrency.lockutils [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.796 2 DEBUG oslo_concurrency.lockutils [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.796 2 DEBUG oslo_concurrency.lockutils [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.797 2 DEBUG nova.compute.manager [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Processing event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:36 compute-0 systemd[1]: Started libpod-conmon-b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980.scope.
Oct 02 08:24:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.4 MiB/s wr, 134 op/s
Oct 02 08:24:36 compute-0 podman[297040]: 2025-10-02 08:24:36.756075123 +0000 UTC m=+0.028255031 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:24:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472410631e18c72411482e11524827d3001f58363217869b8650f65927ac6239/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:36 compute-0 podman[297040]: 2025-10-02 08:24:36.891726537 +0000 UTC m=+0.163906435 container init b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:24:36 compute-0 podman[297040]: 2025-10-02 08:24:36.90266682 +0000 UTC m=+0.174846718 container start b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:24:36 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [NOTICE]   (297060) : New worker (297062) forked
Oct 02 08:24:36 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [NOTICE]   (297060) : Loading success.
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.977 2 DEBUG nova.network.neutron [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Updated VIF entry in instance network info cache for port cd52f5bd-f296-4867-b590-680a2c8a2870. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:24:36 compute-0 nova_compute[260603]: 2025-10-02 08:24:36.978 2 DEBUG nova.network.neutron [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Updating instance_info_cache with network_info: [{"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 02 08:24:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.004 2 DEBUG oslo_concurrency.lockutils [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:24:37 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.009 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393477.0084705, 7e111a93-618c-4a35-a412-f1e8a975a139 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.009 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Started (Lifecycle Event)
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.014 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.020 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.024 2 INFO nova.virt.libvirt.driver [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance spawned successfully.
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.025 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.034 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.040 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.056 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.057 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.057 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.058 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.059 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.060 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.067 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.067 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393477.0132246, 7e111a93-618c-4a35-a412-f1e8a975a139 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.068 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Paused (Lifecycle Event)
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.105 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.110 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393477.0170038, 7e111a93-618c-4a35-a412-f1e8a975a139 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.110 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Resumed (Lifecycle Event)
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.384 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.389 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.400 2 INFO nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 10.26 seconds to spawn the instance on the hypervisor.
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.400 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.413 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.472 2 INFO nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 11.18 seconds to build instance.
Oct 02 08:24:37 compute-0 nova_compute[260603]: 2025-10-02 08:24:37.494 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:38 compute-0 ceph-mon[74477]: pgmap v1269: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.4 MiB/s wr, 134 op/s
Oct 02 08:24:38 compute-0 ceph-mon[74477]: osdmap e165: 3 total, 3 up, 3 in
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:24:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 965 KiB/s rd, 1.5 MiB/s wr, 170 op/s
Oct 02 08:24:39 compute-0 nova_compute[260603]: 2025-10-02 08:24:39.005 2 DEBUG nova.compute.manager [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:39 compute-0 nova_compute[260603]: 2025-10-02 08:24:39.006 2 DEBUG oslo_concurrency.lockutils [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:39 compute-0 nova_compute[260603]: 2025-10-02 08:24:39.007 2 DEBUG oslo_concurrency.lockutils [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:39 compute-0 nova_compute[260603]: 2025-10-02 08:24:39.007 2 DEBUG oslo_concurrency.lockutils [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:39 compute-0 nova_compute[260603]: 2025-10-02 08:24:39.008 2 DEBUG nova.compute.manager [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] No waiting events found dispatching network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:39 compute-0 nova_compute[260603]: 2025-10-02 08:24:39.008 2 WARNING nova.compute.manager [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received unexpected event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 for instance with vm_state active and task_state None.
Oct 02 08:24:39 compute-0 ovn_controller[152344]: 2025-10-02T08:24:39Z|00197|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct 02 08:24:39 compute-0 nova_compute[260603]: 2025-10-02 08:24:39.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:39 compute-0 nova_compute[260603]: 2025-10-02 08:24:39.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:40 compute-0 ceph-mon[74477]: pgmap v1271: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 965 KiB/s rd, 1.5 MiB/s wr, 170 op/s
Oct 02 08:24:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 874 KiB/s rd, 1.4 MiB/s wr, 154 op/s
Oct 02 08:24:41 compute-0 nova_compute[260603]: 2025-10-02 08:24:41.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 02 08:24:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 02 08:24:42 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 02 08:24:42 compute-0 ceph-mon[74477]: pgmap v1272: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 874 KiB/s rd, 1.4 MiB/s wr, 154 op/s
Oct 02 08:24:42 compute-0 ceph-mon[74477]: osdmap e166: 3 total, 3 up, 3 in
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.410 2 DEBUG nova.objects.instance [None req-9194375b-afb7-4af8-9e3c-0a71542e979f 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e111a93-618c-4a35-a412-f1e8a975a139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.432 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393482.432187, 7e111a93-618c-4a35-a412-f1e8a975a139 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.433 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Paused (Lifecycle Event)
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.453 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.487 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 08:24:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 93 op/s
Oct 02 08:24:42 compute-0 kernel: tapcd52f5bd-f2 (unregistering): left promiscuous mode
Oct 02 08:24:42 compute-0 NetworkManager[45129]: <info>  [1759393482.9097] device (tapcd52f5bd-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:24:42 compute-0 ovn_controller[152344]: 2025-10-02T08:24:42Z|00198|binding|INFO|Releasing lport cd52f5bd-f296-4867-b590-680a2c8a2870 from this chassis (sb_readonly=0)
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:42 compute-0 ovn_controller[152344]: 2025-10-02T08:24:42Z|00199|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 down in Southbound
Oct 02 08:24:42 compute-0 ovn_controller[152344]: 2025-10-02T08:24:42Z|00200|binding|INFO|Removing iface tapcd52f5bd-f2 ovn-installed in OVS
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.934 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0f:7a 10.100.0.8'], port_security=['fa:16:3e:cb:0f:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7e111a93-618c-4a35-a412-f1e8a975a139', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=cd52f5bd-f296-4867-b590-680a2c8a2870) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.937 162357 INFO neutron.agent.ovn.metadata.agent [-] Port cd52f5bd-f296-4867-b590-680a2c8a2870 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:24:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.939 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:24:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e17496b6-f725-4835-a46f-fde5b9b4d4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.943 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore
Oct 02 08:24:42 compute-0 nova_compute[260603]: 2025-10-02 08:24:42.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:42 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 02 08:24:42 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 6.556s CPU time.
Oct 02 08:24:42 compute-0 systemd-machined[214636]: Machine qemu-34-instance-0000001e terminated.
Oct 02 08:24:43 compute-0 podman[297075]: 2025-10-02 08:24:43.034721212 +0000 UTC m=+0.091504720 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:24:43 compute-0 kernel: tapcd52f5bd-f2: entered promiscuous mode
Oct 02 08:24:43 compute-0 systemd-udevd[297085]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:24:43 compute-0 ovn_controller[152344]: 2025-10-02T08:24:43Z|00201|binding|INFO|Claiming lport cd52f5bd-f296-4867-b590-680a2c8a2870 for this chassis.
Oct 02 08:24:43 compute-0 ovn_controller[152344]: 2025-10-02T08:24:43Z|00202|binding|INFO|cd52f5bd-f296-4867-b590-680a2c8a2870: Claiming fa:16:3e:cb:0f:7a 10.100.0.8
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:43 compute-0 kernel: tapcd52f5bd-f2 (unregistering): left promiscuous mode
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.119 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0f:7a 10.100.0.8'], port_security=['fa:16:3e:cb:0f:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7e111a93-618c-4a35-a412-f1e8a975a139', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=cd52f5bd-f296-4867-b590-680a2c8a2870) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:43 compute-0 NetworkManager[45129]: <info>  [1759393483.1240] manager: (tapcd52f5bd-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: hostname: compute-0
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct 02 08:24:43 compute-0 ovn_controller[152344]: 2025-10-02T08:24:43Z|00203|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 ovn-installed in OVS
Oct 02 08:24:43 compute-0 ovn_controller[152344]: 2025-10-02T08:24:43Z|00204|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 up in Southbound
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:43 compute-0 ovn_controller[152344]: 2025-10-02T08:24:43Z|00205|binding|INFO|Releasing lport cd52f5bd-f296-4867-b590-680a2c8a2870 from this chassis (sb_readonly=0)
Oct 02 08:24:43 compute-0 ovn_controller[152344]: 2025-10-02T08:24:43Z|00206|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 down in Southbound
Oct 02 08:24:43 compute-0 ovn_controller[152344]: 2025-10-02T08:24:43Z|00207|binding|INFO|Removing iface tapcd52f5bd-f2 ovn-installed in OVS
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.149 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0f:7a 10.100.0.8'], port_security=['fa:16:3e:cb:0f:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7e111a93-618c-4a35-a412-f1e8a975a139', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=cd52f5bd-f296-4867-b590-680a2c8a2870) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.152 2 DEBUG nova.compute.manager [None req-9194375b-afb7-4af8-9e3c-0a71542e979f 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct 02 08:24:43 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [NOTICE]   (297060) : haproxy version is 2.8.14-c23fe91
Oct 02 08:24:43 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [NOTICE]   (297060) : path to executable is /usr/sbin/haproxy
Oct 02 08:24:43 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [WARNING]  (297060) : Exiting Master process...
Oct 02 08:24:43 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [WARNING]  (297060) : Exiting Master process...
Oct 02 08:24:43 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [ALERT]    (297060) : Current worker (297062) exited with code 143 (Terminated)
Oct 02 08:24:43 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [WARNING]  (297060) : All workers exited. Exiting... (0)
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct 02 08:24:43 compute-0 systemd[1]: libpod-b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980.scope: Deactivated successfully.
Oct 02 08:24:43 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct 02 08:24:43 compute-0 podman[297119]: 2025-10-02 08:24:43.190536067 +0000 UTC m=+0.055149590 container died b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980-userdata-shm.mount: Deactivated successfully.
Oct 02 08:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-472410631e18c72411482e11524827d3001f58363217869b8650f65927ac6239-merged.mount: Deactivated successfully.
Oct 02 08:24:43 compute-0 podman[297119]: 2025-10-02 08:24:43.230353411 +0000 UTC m=+0.094966944 container cleanup b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:24:43 compute-0 systemd[1]: libpod-conmon-b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980.scope: Deactivated successfully.
Oct 02 08:24:43 compute-0 podman[297170]: 2025-10-02 08:24:43.305438732 +0000 UTC m=+0.045370234 container remove b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.315 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d66c3e-a029-4ca2-989f-a68f84831096]: (4, ('Thu Oct  2 08:24:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980)\nb553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980\nThu Oct  2 08:24:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980)\nb553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.317 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df409489-7203-4cce-84b2-1f1198fae82d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.318 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:43 compute-0 kernel: tap897d7abf-90: left promiscuous mode
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.342 2 DEBUG nova.compute.manager [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-unplugged-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.343 2 DEBUG oslo_concurrency.lockutils [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.343 2 DEBUG oslo_concurrency.lockutils [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.343 2 DEBUG oslo_concurrency.lockutils [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.344 2 DEBUG nova.compute.manager [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] No waiting events found dispatching network-vif-unplugged-cd52f5bd-f296-4867-b590-680a2c8a2870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.344 2 WARNING nova.compute.manager [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received unexpected event network-vif-unplugged-cd52f5bd-f296-4867-b590-680a2c8a2870 for instance with vm_state suspended and task_state None.
Oct 02 08:24:43 compute-0 nova_compute[260603]: 2025-10-02 08:24:43.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.346 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0c39d775-b212-410c-88c7-5ca90af17134]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.370 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[716cb9ad-9902-441e-81c8-03d6c527f452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.375 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b055945-986c-46f4-bd1c-718b1892748a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.393 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd13d8ae-4c2a-4267-abf9-02c69cdf254a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435470, 'reachable_time': 38668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297189, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.396 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.396 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a82f76f1-d7d7-4ab1-b186-c7255e705821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.396 162357 INFO neutron.agent.ovn.metadata.agent [-] Port cd52f5bd-f296-4867-b590-680a2c8a2870 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.397 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.398 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4f34e67e-e6e7-409e-b20b-6386e0837fcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.398 162357 INFO neutron.agent.ovn.metadata.agent [-] Port cd52f5bd-f296-4867-b590-680a2c8a2870 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.399 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:24:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.399 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[269dc7c4-5171-4605-9431-71fd4fba93ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:24:44 compute-0 ceph-mon[74477]: pgmap v1274: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 93 op/s
Oct 02 08:24:44 compute-0 nova_compute[260603]: 2025-10-02 08:24:44.561 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393469.56035, 5d50db9c-0731-468a-81da-6762d68cda94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:44 compute-0 nova_compute[260603]: 2025-10-02 08:24:44.562 2 INFO nova.compute.manager [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] VM Stopped (Lifecycle Event)
Oct 02 08:24:44 compute-0 nova_compute[260603]: 2025-10-02 08:24:44.587 2 DEBUG nova.compute.manager [None req-90d3cbad-face-4e67-9010-3cf6eea9b622 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:44 compute-0 nova_compute[260603]: 2025-10-02 08:24:44.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 110 op/s
Oct 02 08:24:45 compute-0 nova_compute[260603]: 2025-10-02 08:24:45.583 2 DEBUG nova.compute.manager [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:45 compute-0 nova_compute[260603]: 2025-10-02 08:24:45.583 2 DEBUG oslo_concurrency.lockutils [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:45 compute-0 nova_compute[260603]: 2025-10-02 08:24:45.584 2 DEBUG oslo_concurrency.lockutils [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:45 compute-0 nova_compute[260603]: 2025-10-02 08:24:45.584 2 DEBUG oslo_concurrency.lockutils [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:45 compute-0 nova_compute[260603]: 2025-10-02 08:24:45.585 2 DEBUG nova.compute.manager [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] No waiting events found dispatching network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:24:45 compute-0 nova_compute[260603]: 2025-10-02 08:24:45.585 2 WARNING nova.compute.manager [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received unexpected event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 for instance with vm_state suspended and task_state None.
Oct 02 08:24:46 compute-0 ceph-mon[74477]: pgmap v1275: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 110 op/s
Oct 02 08:24:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:46.197 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:24:46 compute-0 nova_compute[260603]: 2025-10-02 08:24:46.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:46.198 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:24:46 compute-0 nova_compute[260603]: 2025-10-02 08:24:46.511 2 DEBUG nova.compute.manager [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:46 compute-0 nova_compute[260603]: 2025-10-02 08:24:46.576 2 INFO nova.compute.manager [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] instance snapshotting
Oct 02 08:24:46 compute-0 nova_compute[260603]: 2025-10-02 08:24:46.577 2 WARNING nova.compute.manager [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 02 08:24:46 compute-0 nova_compute[260603]: 2025-10-02 08:24:46.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Oct 02 08:24:46 compute-0 nova_compute[260603]: 2025-10-02 08:24:46.880 2 INFO nova.virt.libvirt.driver [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Beginning cold snapshot process
Oct 02 08:24:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:47 compute-0 podman[297190]: 2025-10-02 08:24:47.027846312 +0000 UTC m=+0.087547434 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:24:47 compute-0 nova_compute[260603]: 2025-10-02 08:24:47.252 2 DEBUG nova.virt.libvirt.imagebackend [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:24:47 compute-0 nova_compute[260603]: 2025-10-02 08:24:47.516 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(2d672cc0cec84643a40ae2d55a034eed) on rbd image(7e111a93-618c-4a35-a412-f1e8a975a139_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:24:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 02 08:24:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 02 08:24:48 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 02 08:24:48 compute-0 ceph-mon[74477]: pgmap v1276: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Oct 02 08:24:48 compute-0 nova_compute[260603]: 2025-10-02 08:24:48.125 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning vms/7e111a93-618c-4a35-a412-f1e8a975a139_disk@2d672cc0cec84643a40ae2d55a034eed to images/ba24338b-7147-4fdf-8929-8fac7c4161b9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:24:48 compute-0 nova_compute[260603]: 2025-10-02 08:24:48.281 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] flattening images/ba24338b-7147-4fdf-8929-8fac7c4161b9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:24:48 compute-0 nova_compute[260603]: 2025-10-02 08:24:48.563 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(2d672cc0cec84643a40ae2d55a034eed) on rbd image(7e111a93-618c-4a35-a412-f1e8a975a139_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:24:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1023 B/s wr, 83 op/s
Oct 02 08:24:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 02 08:24:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 02 08:24:49 compute-0 ceph-mon[74477]: osdmap e167: 3 total, 3 up, 3 in
Oct 02 08:24:49 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 02 08:24:49 compute-0 nova_compute[260603]: 2025-10-02 08:24:49.123 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(snap) on rbd image(ba24338b-7147-4fdf-8929-8fac7c4161b9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:24:49 compute-0 nova_compute[260603]: 2025-10-02 08:24:49.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 02 08:24:50 compute-0 ceph-mon[74477]: pgmap v1278: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1023 B/s wr, 83 op/s
Oct 02 08:24:50 compute-0 ceph-mon[74477]: osdmap e168: 3 total, 3 up, 3 in
Oct 02 08:24:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 02 08:24:50 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 02 08:24:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Oct 02 08:24:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 02 08:24:51 compute-0 ceph-mon[74477]: osdmap e169: 3 total, 3 up, 3 in
Oct 02 08:24:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 02 08:24:51 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 02 08:24:51 compute-0 nova_compute[260603]: 2025-10-02 08:24:51.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:52 compute-0 ceph-mon[74477]: pgmap v1281: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Oct 02 08:24:52 compute-0 ceph-mon[74477]: osdmap e170: 3 total, 3 up, 3 in
Oct 02 08:24:52 compute-0 nova_compute[260603]: 2025-10-02 08:24:52.558 2 INFO nova.virt.libvirt.driver [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Snapshot image upload complete
Oct 02 08:24:52 compute-0 nova_compute[260603]: 2025-10-02 08:24:52.558 2 INFO nova.compute.manager [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 5.98 seconds to snapshot the instance on the hypervisor.
Oct 02 08:24:52 compute-0 sudo[297351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:24:52 compute-0 sudo[297351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:52 compute-0 sudo[297351]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:52 compute-0 sudo[297376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:24:52 compute-0 sudo[297376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:52 compute-0 sudo[297376]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 130 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 218 op/s
Oct 02 08:24:52 compute-0 sudo[297401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:24:52 compute-0 sudo[297401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:52 compute-0 sudo[297401]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:52 compute-0 sudo[297426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:24:52 compute-0 sudo[297426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:53 compute-0 sudo[297426]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:24:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:24:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:24:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:24:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:24:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:24:53 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2e028815-07d8-43ca-b6f5-b585ac0bc83b does not exist
Oct 02 08:24:53 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 624315fd-5c1a-429c-a2ab-12df106c20bb does not exist
Oct 02 08:24:53 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f828179d-3ce1-4534-b9ea-4206f90c292b does not exist
Oct 02 08:24:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:24:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:24:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:24:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:24:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:24:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:24:53 compute-0 sudo[297482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:24:53 compute-0 sudo[297482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:53 compute-0 sudo[297482]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:53 compute-0 sudo[297507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:24:53 compute-0 sudo[297507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:53 compute-0 sudo[297507]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:53 compute-0 sudo[297532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:24:53 compute-0 sudo[297532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:53 compute-0 sudo[297532]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:53 compute-0 sudo[297557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:24:53 compute-0 sudo[297557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:54 compute-0 ceph-mon[74477]: pgmap v1283: 305 pgs: 305 active+clean; 130 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 218 op/s
Oct 02 08:24:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:24:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:24:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:24:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:24:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:24:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:24:54 compute-0 podman[297622]: 2025-10-02 08:24:54.237983635 +0000 UTC m=+0.068594844 container create e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:24:54 compute-0 systemd[1]: Started libpod-conmon-e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97.scope.
Oct 02 08:24:54 compute-0 podman[297622]: 2025-10-02 08:24:54.209356111 +0000 UTC m=+0.039967370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:24:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:54 compute-0 podman[297622]: 2025-10-02 08:24:54.345275374 +0000 UTC m=+0.175886593 container init e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:24:54 compute-0 podman[297622]: 2025-10-02 08:24:54.357107755 +0000 UTC m=+0.187718954 container start e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:24:54 compute-0 podman[297622]: 2025-10-02 08:24:54.362971505 +0000 UTC m=+0.193582764 container attach e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:24:54 compute-0 blissful_wu[297638]: 167 167
Oct 02 08:24:54 compute-0 systemd[1]: libpod-e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97.scope: Deactivated successfully.
Oct 02 08:24:54 compute-0 podman[297622]: 2025-10-02 08:24:54.366023963 +0000 UTC m=+0.196635162 container died e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-af5aa4bc84ab77a0abdc3e7283eb938e509edb496e40c01be07672129fc3c3c6-merged.mount: Deactivated successfully.
Oct 02 08:24:54 compute-0 podman[297622]: 2025-10-02 08:24:54.440027879 +0000 UTC m=+0.270639088 container remove e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 08:24:54 compute-0 systemd[1]: libpod-conmon-e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97.scope: Deactivated successfully.
Oct 02 08:24:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 02 08:24:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 02 08:24:54 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 02 08:24:54 compute-0 podman[297662]: 2025-10-02 08:24:54.65810402 +0000 UTC m=+0.052268796 container create c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:24:54 compute-0 systemd[1]: Started libpod-conmon-c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2.scope.
Oct 02 08:24:54 compute-0 podman[297662]: 2025-10-02 08:24:54.631042957 +0000 UTC m=+0.025207783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:24:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:54 compute-0 nova_compute[260603]: 2025-10-02 08:24:54.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:54 compute-0 podman[297662]: 2025-10-02 08:24:54.777170449 +0000 UTC m=+0.171335205 container init c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 08:24:54 compute-0 podman[297662]: 2025-10-02 08:24:54.799933153 +0000 UTC m=+0.194097919 container start c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:24:54 compute-0 podman[297662]: 2025-10-02 08:24:54.806905478 +0000 UTC m=+0.201070214 container attach c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 08:24:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Oct 02 08:24:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:24:55.201 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.508 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.509 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.510 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.510 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.511 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.513 2 INFO nova.compute.manager [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Terminating instance
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.514 2 DEBUG nova.compute.manager [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.524 2 INFO nova.virt.libvirt.driver [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance destroyed successfully.
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.524 2 DEBUG nova.objects.instance [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 7e111a93-618c-4a35-a412-f1e8a975a139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.541 2 DEBUG nova.virt.libvirt.vif [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:24:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-902237424',display_name='tempest-ImagesTestJSON-server-902237424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-902237424',id=30,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:24:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-cf2bt1a3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:24:52Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=7e111a93-618c-4a35-a412-f1e8a975a139,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.542 2 DEBUG nova.network.os_vif_util [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.543 2 DEBUG nova.network.os_vif_util [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.544 2 DEBUG os_vif [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd52f5bd-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.555 2 INFO os_vif [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2')
Oct 02 08:24:55 compute-0 ceph-mon[74477]: osdmap e171: 3 total, 3 up, 3 in
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.938 2 INFO nova.virt.libvirt.driver [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Deleting instance files /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139_del
Oct 02 08:24:55 compute-0 nova_compute[260603]: 2025-10-02 08:24:55.939 2 INFO nova.virt.libvirt.driver [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Deletion of /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139_del complete
Oct 02 08:24:55 compute-0 youthful_almeida[297679]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:24:55 compute-0 youthful_almeida[297679]: --> relative data size: 1.0
Oct 02 08:24:55 compute-0 youthful_almeida[297679]: --> All data devices are unavailable
Oct 02 08:24:55 compute-0 systemd[1]: libpod-c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2.scope: Deactivated successfully.
Oct 02 08:24:55 compute-0 podman[297662]: 2025-10-02 08:24:55.990671985 +0000 UTC m=+1.384836721 container died c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 08:24:55 compute-0 systemd[1]: libpod-c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2.scope: Consumed 1.128s CPU time.
Oct 02 08:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5-merged.mount: Deactivated successfully.
Oct 02 08:24:56 compute-0 nova_compute[260603]: 2025-10-02 08:24:56.027 2 INFO nova.compute.manager [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 0.51 seconds to destroy the instance on the hypervisor.
Oct 02 08:24:56 compute-0 nova_compute[260603]: 2025-10-02 08:24:56.029 2 DEBUG oslo.service.loopingcall [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:24:56 compute-0 nova_compute[260603]: 2025-10-02 08:24:56.029 2 DEBUG nova.compute.manager [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:24:56 compute-0 nova_compute[260603]: 2025-10-02 08:24:56.029 2 DEBUG nova.network.neutron [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:24:56 compute-0 podman[297662]: 2025-10-02 08:24:56.040497912 +0000 UTC m=+1.434662648 container remove c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:24:56 compute-0 systemd[1]: libpod-conmon-c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2.scope: Deactivated successfully.
Oct 02 08:24:56 compute-0 sudo[297557]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:56 compute-0 sudo[297739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:24:56 compute-0 sudo[297739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:56 compute-0 sudo[297739]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:56 compute-0 sudo[297764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:24:56 compute-0 sudo[297764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:56 compute-0 sudo[297764]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:56 compute-0 sudo[297789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:24:56 compute-0 sudo[297789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:56 compute-0 sudo[297789]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:56 compute-0 sudo[297814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:24:56 compute-0 sudo[297814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:56 compute-0 ceph-mon[74477]: pgmap v1285: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Oct 02 08:24:56 compute-0 nova_compute[260603]: 2025-10-02 08:24:56.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:24:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Oct 02 08:24:56 compute-0 podman[297879]: 2025-10-02 08:24:56.858736244 +0000 UTC m=+0.069175421 container create cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:24:56 compute-0 systemd[1]: Started libpod-conmon-cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b.scope.
Oct 02 08:24:56 compute-0 podman[297879]: 2025-10-02 08:24:56.829209772 +0000 UTC m=+0.039649009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:24:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:56 compute-0 podman[297879]: 2025-10-02 08:24:56.96961362 +0000 UTC m=+0.180052857 container init cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 08:24:56 compute-0 podman[297879]: 2025-10-02 08:24:56.981105789 +0000 UTC m=+0.191544976 container start cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:24:56 compute-0 podman[297879]: 2025-10-02 08:24:56.985584044 +0000 UTC m=+0.196023271 container attach cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 08:24:56 compute-0 gallant_feynman[297895]: 167 167
Oct 02 08:24:56 compute-0 systemd[1]: libpod-cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b.scope: Deactivated successfully.
Oct 02 08:24:56 compute-0 podman[297879]: 2025-10-02 08:24:56.989343396 +0000 UTC m=+0.199782573 container died cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:24:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:24:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 02 08:24:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 02 08:24:57 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 02 08:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce4441261993a3ecbd9d55ad35400e3934e5cfd797815cce65a829e096c1fbe1-merged.mount: Deactivated successfully.
Oct 02 08:24:57 compute-0 podman[297879]: 2025-10-02 08:24:57.050115574 +0000 UTC m=+0.260554761 container remove cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:24:57 compute-0 systemd[1]: libpod-conmon-cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b.scope: Deactivated successfully.
Oct 02 08:24:57 compute-0 podman[297920]: 2025-10-02 08:24:57.294698521 +0000 UTC m=+0.068042455 container create 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:24:57 compute-0 systemd[1]: Started libpod-conmon-033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb.scope.
Oct 02 08:24:57 compute-0 podman[297920]: 2025-10-02 08:24:57.270523631 +0000 UTC m=+0.043867645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:24:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:57 compute-0 podman[297920]: 2025-10-02 08:24:57.392780433 +0000 UTC m=+0.166124457 container init 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:24:57 compute-0 podman[297920]: 2025-10-02 08:24:57.410596297 +0000 UTC m=+0.183940221 container start 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:24:57 compute-0 podman[297920]: 2025-10-02 08:24:57.414429411 +0000 UTC m=+0.187773395 container attach 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:24:58 compute-0 ceph-mon[74477]: pgmap v1286: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Oct 02 08:24:58 compute-0 ceph-mon[74477]: osdmap e172: 3 total, 3 up, 3 in
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.073 2 DEBUG nova.network.neutron [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.103 2 INFO nova.compute.manager [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 2.07 seconds to deallocate network for instance.
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.154 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393483.1521792, 7e111a93-618c-4a35-a412-f1e8a975a139 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:24:58 compute-0 fervent_easley[297936]: {
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.154 2 INFO nova.compute.manager [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Stopped (Lifecycle Event)
Oct 02 08:24:58 compute-0 fervent_easley[297936]:     "0": [
Oct 02 08:24:58 compute-0 fervent_easley[297936]:         {
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "devices": [
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "/dev/loop3"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             ],
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_name": "ceph_lv0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_size": "21470642176",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "name": "ceph_lv0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "tags": {
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cluster_name": "ceph",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.crush_device_class": "",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.encrypted": "0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osd_id": "0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.type": "block",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.vdo": "0"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             },
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "type": "block",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "vg_name": "ceph_vg0"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:         }
Oct 02 08:24:58 compute-0 fervent_easley[297936]:     ],
Oct 02 08:24:58 compute-0 fervent_easley[297936]:     "1": [
Oct 02 08:24:58 compute-0 fervent_easley[297936]:         {
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "devices": [
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "/dev/loop4"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             ],
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_name": "ceph_lv1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_size": "21470642176",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "name": "ceph_lv1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "tags": {
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cluster_name": "ceph",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.crush_device_class": "",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.encrypted": "0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osd_id": "1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.type": "block",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.vdo": "0"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             },
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "type": "block",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "vg_name": "ceph_vg1"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:         }
Oct 02 08:24:58 compute-0 fervent_easley[297936]:     ],
Oct 02 08:24:58 compute-0 fervent_easley[297936]:     "2": [
Oct 02 08:24:58 compute-0 fervent_easley[297936]:         {
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "devices": [
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "/dev/loop5"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             ],
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_name": "ceph_lv2",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_size": "21470642176",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "name": "ceph_lv2",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "tags": {
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.cluster_name": "ceph",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.crush_device_class": "",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.encrypted": "0",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osd_id": "2",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.type": "block",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:                 "ceph.vdo": "0"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             },
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "type": "block",
Oct 02 08:24:58 compute-0 fervent_easley[297936]:             "vg_name": "ceph_vg2"
Oct 02 08:24:58 compute-0 fervent_easley[297936]:         }
Oct 02 08:24:58 compute-0 fervent_easley[297936]:     ]
Oct 02 08:24:58 compute-0 fervent_easley[297936]: }
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.190 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.191 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.193 2 DEBUG nova.compute.manager [None req-f1e14f0a-5c4d-4084-b935-22d3fcee806d - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:24:58 compute-0 systemd[1]: libpod-033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb.scope: Deactivated successfully.
Oct 02 08:24:58 compute-0 podman[297920]: 2025-10-02 08:24:58.202461349 +0000 UTC m=+0.975805313 container died 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25-merged.mount: Deactivated successfully.
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.264 2 DEBUG oslo_concurrency.processutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:58 compute-0 podman[297920]: 2025-10-02 08:24:58.292474741 +0000 UTC m=+1.065818705 container remove 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:24:58 compute-0 systemd[1]: libpod-conmon-033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb.scope: Deactivated successfully.
Oct 02 08:24:58 compute-0 sudo[297814]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.397 2 DEBUG nova.compute.manager [req-291ad518-7ef6-4b7f-9cc1-aba07bd38176 req-645328ca-9272-4f3f-aee3-37574bf365b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-deleted-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:24:58 compute-0 sudo[297958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:24:58 compute-0 sudo[297958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:58 compute-0 sudo[297958]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:58 compute-0 sudo[298002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:24:58 compute-0 sudo[298002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:58 compute-0 sudo[298002]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:58 compute-0 sudo[298027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:24:58 compute-0 sudo[298027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:58 compute-0 sudo[298027]: pam_unix(sudo:session): session closed for user root
Oct 02 08:24:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:24:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195898992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.711 2 DEBUG oslo_concurrency.processutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.723 2 DEBUG nova.compute.provider_tree [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:24:58 compute-0 sudo[298052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:24:58 compute-0 sudo[298052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.753 2 DEBUG nova.scheduler.client.report [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.781 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.782 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.784 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.818 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.833 2 INFO nova.scheduler.client.report [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 7e111a93-618c-4a35-a412-f1e8a975a139
Oct 02 08:24:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 41 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.901 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.902 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.914 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.915 2 INFO nova.compute.claims [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:24:58 compute-0 nova_compute[260603]: 2025-10-02 08:24:58.939 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2195898992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.056 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:24:59 compute-0 podman[298121]: 2025-10-02 08:24:59.21169749 +0000 UTC m=+0.069306626 container create a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 08:24:59 compute-0 systemd[1]: Started libpod-conmon-a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a.scope.
Oct 02 08:24:59 compute-0 podman[298121]: 2025-10-02 08:24:59.194434513 +0000 UTC m=+0.052043639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:24:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:59 compute-0 podman[298121]: 2025-10-02 08:24:59.314842065 +0000 UTC m=+0.172451181 container init a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 08:24:59 compute-0 podman[298121]: 2025-10-02 08:24:59.322300635 +0000 UTC m=+0.179909741 container start a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:24:59 compute-0 podman[298121]: 2025-10-02 08:24:59.325124177 +0000 UTC m=+0.182733303 container attach a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:24:59 compute-0 festive_lederberg[298156]: 167 167
Oct 02 08:24:59 compute-0 systemd[1]: libpod-a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a.scope: Deactivated successfully.
Oct 02 08:24:59 compute-0 conmon[298156]: conmon a06f38b42262e39d9303 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a.scope/container/memory.events
Oct 02 08:24:59 compute-0 podman[298121]: 2025-10-02 08:24:59.330512741 +0000 UTC m=+0.188121857 container died a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-64ec5e2bfd126fd5675b8ea43a6016f3febe0e5e3e0350909158a6d0db57ca0a-merged.mount: Deactivated successfully.
Oct 02 08:24:59 compute-0 podman[298121]: 2025-10-02 08:24:59.361108307 +0000 UTC m=+0.218717413 container remove a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 08:24:59 compute-0 systemd[1]: libpod-conmon-a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a.scope: Deactivated successfully.
Oct 02 08:24:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:24:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885705688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:24:59 compute-0 podman[298180]: 2025-10-02 08:24:59.523266186 +0000 UTC m=+0.048269878 container create 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.530 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.537 2 DEBUG nova.compute.provider_tree [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.554 2 DEBUG nova.scheduler.client.report [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:24:59 compute-0 systemd[1]: Started libpod-conmon-4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5.scope.
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.590 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.591 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:24:59 compute-0 podman[298180]: 2025-10-02 08:24:59.504950785 +0000 UTC m=+0.029954477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:24:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:24:59 compute-0 podman[298180]: 2025-10-02 08:24:59.640345401 +0000 UTC m=+0.165349133 container init 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 08:24:59 compute-0 podman[298180]: 2025-10-02 08:24:59.656552423 +0000 UTC m=+0.181556155 container start 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 08:24:59 compute-0 podman[298180]: 2025-10-02 08:24:59.660470269 +0000 UTC m=+0.185473981 container attach 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.664 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.665 2 DEBUG nova.network.neutron [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.702 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.722 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.856 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.858 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.859 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Creating image(s)
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.889 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.924 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.955 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:24:59 compute-0 nova_compute[260603]: 2025-10-02 08:24:59.960 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.008 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.009 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.036 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:25:00 compute-0 ceph-mon[74477]: pgmap v1288: 305 pgs: 305 active+clean; 41 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Oct 02 08:25:00 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/885705688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.061 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.062 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.063 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.063 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.089 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.094 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.149 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.150 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.162 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.163 2 INFO nova.compute.claims [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.233 2 DEBUG nova.network.neutron [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.233 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.321 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.344 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.427 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] resizing rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.535 2 DEBUG nova.objects.instance [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'migration_context' on Instance uuid 9ea31984-a45e-4154-9df9-3c4e8ce69309 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.552 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.552 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Ensure instance console log exists: /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.552 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.553 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.553 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.554 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.558 2 WARNING nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.572 2 DEBUG nova.virt.libvirt.host [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.573 2 DEBUG nova.virt.libvirt.host [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.575 2 DEBUG nova.virt.libvirt.host [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.576 2 DEBUG nova.virt.libvirt.host [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.576 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.576 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.577 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.577 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.577 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.577 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.579 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.581 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.634 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.635 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.650 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:25:00 compute-0 determined_torvalds[298199]: {
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "osd_id": 2,
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "type": "bluestore"
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:     },
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "osd_id": 1,
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "type": "bluestore"
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:     },
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "osd_id": 0,
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:         "type": "bluestore"
Oct 02 08:25:00 compute-0 determined_torvalds[298199]:     }
Oct 02 08:25:00 compute-0 determined_torvalds[298199]: }
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.733 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:00 compute-0 systemd[1]: libpod-4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5.scope: Deactivated successfully.
Oct 02 08:25:00 compute-0 systemd[1]: libpod-4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5.scope: Consumed 1.060s CPU time.
Oct 02 08:25:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3408356856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.783 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.791 2 DEBUG nova.compute.provider_tree [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:00 compute-0 podman[298438]: 2025-10-02 08:25:00.793202281 +0000 UTC m=+0.035324950 container died 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.805 2 DEBUG nova.scheduler.client.report [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.824 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b-merged.mount: Deactivated successfully.
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.825 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.827 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.835 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.836 2 INFO nova.compute.claims [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:25:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 41 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 70 KiB/s wr, 95 op/s
Oct 02 08:25:00 compute-0 podman[298438]: 2025-10-02 08:25:00.86140691 +0000 UTC m=+0.103529579 container remove 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:25:00 compute-0 systemd[1]: libpod-conmon-4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5.scope: Deactivated successfully.
Oct 02 08:25:00 compute-0 sudo[298052]: pam_unix(sudo:session): session closed for user root
Oct 02 08:25:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:25:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:25:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:25:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:25:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8646470f-064a-4a6e-8c0a-39d9568b6651 does not exist
Oct 02 08:25:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 91b2f16f-819c-477e-be2e-ca3dd1bcf061 does not exist
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.927 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.928 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.953 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:25:00 compute-0 nova_compute[260603]: 2025-10-02 08:25:00.971 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:25:00 compute-0 sudo[298455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:25:00 compute-0 sudo[298455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:25:00 compute-0 sudo[298455]: pam_unix(sudo:session): session closed for user root
Oct 02 08:25:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3408356856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:25:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:25:01 compute-0 sudo[298480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:25:01 compute-0 sudo[298480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:25:01 compute-0 sudo[298480]: pam_unix(sudo:session): session closed for user root
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.069 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658148916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.101 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.103 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.104 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Creating image(s)
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.129 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.155 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.181 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.185 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.218 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.250 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.255 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.292 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.294 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.295 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.296 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.326 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.331 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804454195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.512 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.522 2 DEBUG nova.compute.provider_tree [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.615 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.696 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:25:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225650474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.732 2 DEBUG nova.scheduler.client.report [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.738 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.741 2 DEBUG nova.objects.instance [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ea31984-a45e-4154-9df9-3c4e8ce69309 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.762 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <uuid>9ea31984-a45e-4154-9df9-3c4e8ce69309</uuid>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <name>instance-0000001f</name>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1042949854</nova:name>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:25:00</nova:creationTime>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <nova:user uuid="cf69d2cc0a684734b9d2d6da2fee6bf7">tempest-ListImageFiltersTestJSON-1675507372-project-member</nova:user>
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <nova:project uuid="1d3020619cd44cbd964c169ee848da8e">tempest-ListImageFiltersTestJSON-1675507372</nova:project>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <entry name="serial">9ea31984-a45e-4154-9df9-3c4e8ce69309</entry>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <entry name="uuid">9ea31984-a45e-4154-9df9-3c4e8ce69309</entry>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9ea31984-a45e-4154-9df9-3c4e8ce69309_disk">
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config">
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:01 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/console.log" append="off"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:25:01 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:25:01 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:01 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:01 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:01 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.813 2 DEBUG nova.policy [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.817 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.818 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.832 2 DEBUG nova.objects.instance [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid c00d9d50-5c81-4dc9-8316-c654d4802b4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.855 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.856 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Ensure instance console log exists: /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.857 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.857 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.858 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.881 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.882 2 DEBUG nova.network.neutron [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.899 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.900 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.901 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Using config drive
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.925 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.933 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:25:01 compute-0 nova_compute[260603]: 2025-10-02 08:25:01.957 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:25:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 02 08:25:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 02 08:25:02 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 02 08:25:02 compute-0 ceph-mon[74477]: pgmap v1289: 305 pgs: 305 active+clean; 41 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 70 KiB/s wr, 95 op/s
Oct 02 08:25:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2658148916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2804454195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/225650474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:02 compute-0 ceph-mon[74477]: osdmap e173: 3 total, 3 up, 3 in
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.085 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.086 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.087 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Creating image(s)
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.130 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.159 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.188 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.192 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.275 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Creating config drive at /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.281 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7xgd4hf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.307 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.308 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.309 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.309 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.334 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.338 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 aebef537-a40c-45aa-98b5-ebdd7c27028b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.375 2 DEBUG nova.network.neutron [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.376 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.413 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7xgd4hf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.435 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.440 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.615 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 aebef537-a40c-45aa-98b5-ebdd7c27028b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.651 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.652 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Deleting local config drive /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config because it was imported into RBD.
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.690 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] resizing rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:25:02 compute-0 systemd-machined[214636]: New machine qemu-35-instance-0000001f.
Oct 02 08:25:02 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.812 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Successfully created port: c606d1e1-b27f-498f-989e-2cce97a7589d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.825 2 DEBUG nova.objects.instance [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'migration_context' on Instance uuid aebef537-a40c-45aa-98b5-ebdd7c27028b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 106 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 3.7 MiB/s wr, 134 op/s
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.852 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.853 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Ensure instance console log exists: /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.853 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.854 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.855 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.857 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.863 2 WARNING nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.869 2 DEBUG nova.virt.libvirt.host [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.870 2 DEBUG nova.virt.libvirt.host [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.876 2 DEBUG nova.virt.libvirt.host [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.876 2 DEBUG nova.virt.libvirt.host [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.877 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.878 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.879 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.879 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.880 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.880 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.881 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.881 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.882 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.882 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.883 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.883 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:25:02 compute-0 nova_compute[260603]: 2025-10-02 08:25:02.888 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375461221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.307 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.342 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.347 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257391125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.851 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.854 2 DEBUG nova.objects.instance [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'pci_devices' on Instance uuid aebef537-a40c-45aa-98b5-ebdd7c27028b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.873 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <uuid>aebef537-a40c-45aa-98b5-ebdd7c27028b</uuid>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <name>instance-00000021</name>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <nova:name>tempest-ListImageFiltersTestJSON-server-868586593</nova:name>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:25:02</nova:creationTime>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <nova:user uuid="cf69d2cc0a684734b9d2d6da2fee6bf7">tempest-ListImageFiltersTestJSON-1675507372-project-member</nova:user>
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <nova:project uuid="1d3020619cd44cbd964c169ee848da8e">tempest-ListImageFiltersTestJSON-1675507372</nova:project>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <entry name="serial">aebef537-a40c-45aa-98b5-ebdd7c27028b</entry>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <entry name="uuid">aebef537-a40c-45aa-98b5-ebdd7c27028b</entry>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/aebef537-a40c-45aa-98b5-ebdd7c27028b_disk">
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config">
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:03 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/console.log" append="off"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:25:03 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:25:03 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:03 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:03 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:03 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.959 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.960 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.961 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Using config drive
Oct 02 08:25:03 compute-0 nova_compute[260603]: 2025-10-02 08:25:03.992 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:04 compute-0 ceph-mon[74477]: pgmap v1291: 305 pgs: 305 active+clean; 106 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 3.7 MiB/s wr, 134 op/s
Oct 02 08:25:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2375461221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3257391125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.161 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Creating config drive at /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.166 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuan1t9d4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.314 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuan1t9d4" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.350 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.355 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.428 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393504.4273956, 9ea31984-a45e-4154-9df9-3c4e8ce69309 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.430 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] VM Resumed (Lifecycle Event)
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.436 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.437 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.442 2 INFO nova.virt.libvirt.driver [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance spawned successfully.
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.443 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.469 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.477 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.478 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.479 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.480 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.481 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.482 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.490 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.491 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393504.4296277, 9ea31984-a45e-4154-9df9-3c4e8ce69309 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.492 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] VM Started (Lifecycle Event)
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.515 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.519 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.536 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.537 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Deleting local config drive /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config because it was imported into RBD.
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.542 2 INFO nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 4.69 seconds to spawn the instance on the hypervisor.
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.543 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.544 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:04 compute-0 systemd-machined[214636]: New machine qemu-36-instance-00000021.
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.625 2 INFO nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 5.75 seconds to build instance.
Oct 02 08:25:04 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000021.
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.645 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.658 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Successfully updated port: c606d1e1-b27f-498f-989e-2cce97a7589d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.674 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.674 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.675 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 165 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 7.2 MiB/s wr, 196 op/s
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.864 2 DEBUG nova.compute.manager [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-changed-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.867 2 DEBUG nova.compute.manager [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Refreshing instance network info cache due to event network-changed-c606d1e1-b27f-498f-989e-2cce97a7589d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:25:04 compute-0 nova_compute[260603]: 2025-10-02 08:25:04.867 2 DEBUG oslo_concurrency.lockutils [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.092 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.539 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393505.5396528, aebef537-a40c-45aa-98b5-ebdd7c27028b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.540 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] VM Resumed (Lifecycle Event)
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.542 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.543 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.546 2 INFO nova.virt.libvirt.driver [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance spawned successfully.
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.546 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.585 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.589 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.589 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.590 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.590 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.591 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.591 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.595 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.619 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.620 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393505.5422864, aebef537-a40c-45aa-98b5-ebdd7c27028b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.620 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] VM Started (Lifecycle Event)
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.646 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.650 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.667 2 INFO nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 3.58 seconds to spawn the instance on the hypervisor.
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.668 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.677 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.751 2 INFO nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 5.06 seconds to build instance.
Oct 02 08:25:05 compute-0 nova_compute[260603]: 2025-10-02 08:25:05.769 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:06 compute-0 ceph-mon[74477]: pgmap v1292: 305 pgs: 305 active+clean; 165 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 7.2 MiB/s wr, 196 op/s
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.681 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Updating instance_info_cache with network_info: [{"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.707 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.708 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance network_info: |[{"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.710 2 DEBUG oslo_concurrency.lockutils [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.710 2 DEBUG nova.network.neutron [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Refreshing network info cache for port c606d1e1-b27f-498f-989e-2cce97a7589d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.715 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start _get_guest_xml network_info=[{"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.722 2 WARNING nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.736 2 DEBUG nova.virt.libvirt.host [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.738 2 DEBUG nova.virt.libvirt.host [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.745 2 DEBUG nova.virt.libvirt.host [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.746 2 DEBUG nova.virt.libvirt.host [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.748 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.749 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.750 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.751 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.752 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.752 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.753 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.754 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.755 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.756 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.757 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.758 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.764 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:06 compute-0 nova_compute[260603]: 2025-10-02 08:25:06.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 165 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 5.9 MiB/s wr, 159 op/s
Oct 02 08:25:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:07 compute-0 podman[299215]: 2025-10-02 08:25:07.045231203 +0000 UTC m=+0.099309194 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 08:25:07 compute-0 podman[299214]: 2025-10-02 08:25:07.099507373 +0000 UTC m=+0.153581254 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:25:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220696812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.201 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.219 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.224 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700383951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.634 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.636 2 DEBUG nova.virt.libvirt.vif [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1903679195',display_name='tempest-ImagesTestJSON-server-1903679195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1903679195',id=32,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-lm5g3lue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:01Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=c00d9d50-5c81-4dc9-8316-c654d4802b4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.637 2 DEBUG nova.network.os_vif_util [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.637 2 DEBUG nova.network.os_vif_util [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.638 2 DEBUG nova.objects.instance [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid c00d9d50-5c81-4dc9-8316-c654d4802b4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.653 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <uuid>c00d9d50-5c81-4dc9-8316-c654d4802b4f</uuid>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <name>instance-00000020</name>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <nova:name>tempest-ImagesTestJSON-server-1903679195</nova:name>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:25:06</nova:creationTime>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <nova:port uuid="c606d1e1-b27f-498f-989e-2cce97a7589d">
Oct 02 08:25:07 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <entry name="serial">c00d9d50-5c81-4dc9-8316-c654d4802b4f</entry>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <entry name="uuid">c00d9d50-5c81-4dc9-8316-c654d4802b4f</entry>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk">
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config">
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:07 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:48:5d:05"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <target dev="tapc606d1e1-b2"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/console.log" append="off"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:25:07 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:25:07 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:07 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:07 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:07 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.658 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Preparing to wait for external event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.658 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.659 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.659 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.659 2 DEBUG nova.virt.libvirt.vif [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1903679195',display_name='tempest-ImagesTestJSON-server-1903679195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1903679195',id=32,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-lm5g3lue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:01Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=c00d9d50-5c81-4dc9-8316-c654d4802b4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.660 2 DEBUG nova.network.os_vif_util [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.660 2 DEBUG nova.network.os_vif_util [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.661 2 DEBUG os_vif [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc606d1e1-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc606d1e1-b2, col_values=(('external_ids', {'iface-id': 'c606d1e1-b27f-498f-989e-2cce97a7589d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:5d:05', 'vm-uuid': 'c00d9d50-5c81-4dc9-8316-c654d4802b4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:07 compute-0 NetworkManager[45129]: <info>  [1759393507.6681] manager: (tapc606d1e1-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.675 2 INFO os_vif [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2')
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.735 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.736 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.736 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:48:5d:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.737 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Using config drive
Oct 02 08:25:07 compute-0 nova_compute[260603]: 2025-10-02 08:25:07.758 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:08 compute-0 ceph-mon[74477]: pgmap v1293: 305 pgs: 305 active+clean; 165 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 5.9 MiB/s wr, 159 op/s
Oct 02 08:25:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2220696812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2700383951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 181 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 274 op/s
Oct 02 08:25:08 compute-0 nova_compute[260603]: 2025-10-02 08:25:08.920 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Creating config drive at /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config
Oct 02 08:25:08 compute-0 nova_compute[260603]: 2025-10-02 08:25:08.928 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzs1obi9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.078 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzs1obi9x" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.118 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.130 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.341 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.342 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Deleting local config drive /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config because it was imported into RBD.
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.365 2 DEBUG nova.compute.manager [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.410 2 INFO nova.compute.manager [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] instance snapshotting
Oct 02 08:25:09 compute-0 kernel: tapc606d1e1-b2: entered promiscuous mode
Oct 02 08:25:09 compute-0 NetworkManager[45129]: <info>  [1759393509.4232] manager: (tapc606d1e1-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:09 compute-0 ovn_controller[152344]: 2025-10-02T08:25:09Z|00208|binding|INFO|Claiming lport c606d1e1-b27f-498f-989e-2cce97a7589d for this chassis.
Oct 02 08:25:09 compute-0 ovn_controller[152344]: 2025-10-02T08:25:09Z|00209|binding|INFO|c606d1e1-b27f-498f-989e-2cce97a7589d: Claiming fa:16:3e:48:5d:05 10.100.0.11
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.439 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:5d:05 10.100.0.11'], port_security=['fa:16:3e:48:5d:05 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c00d9d50-5c81-4dc9-8316-c654d4802b4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c606d1e1-b27f-498f-989e-2cce97a7589d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.442 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c606d1e1-b27f-498f-989e-2cce97a7589d in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.444 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:09 compute-0 ovn_controller[152344]: 2025-10-02T08:25:09Z|00210|binding|INFO|Setting lport c606d1e1-b27f-498f-989e-2cce97a7589d ovn-installed in OVS
Oct 02 08:25:09 compute-0 ovn_controller[152344]: 2025-10-02T08:25:09Z|00211|binding|INFO|Setting lport c606d1e1-b27f-498f-989e-2cce97a7589d up in Southbound
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.468 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65e27730-58a8-439c-b66a-a833c1ddd2f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.470 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap897d7abf-91 in ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.472 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap897d7abf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a35a9e7-dd7b-446f-a702-c0ca762c09f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 systemd-udevd[299372]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:25:09 compute-0 systemd-machined[214636]: New machine qemu-37-instance-00000020.
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.481 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00afc06b-92b8-4144-94d3-8cd2ad8dead5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.498 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b617d16f-e97d-42b7-9fee-258eab1e0a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Oct 02 08:25:09 compute-0 NetworkManager[45129]: <info>  [1759393509.5064] device (tapc606d1e1-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:25:09 compute-0 NetworkManager[45129]: <info>  [1759393509.5095] device (tapc606d1e1-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4e597c4a-9b9c-4058-9d8a-a2eabf7ac24c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.573 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[24c2bd31-5167-4d47-896f-a55fbd3a40cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 systemd-udevd[299375]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:25:09 compute-0 NetworkManager[45129]: <info>  [1759393509.5837] manager: (tap897d7abf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.579 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91456dd7-c04e-4506-b7d7-1dcfe1ddca93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.637 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0ef112-3567-4f43-89e6-43d208eeef9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.641 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6148a35c-0276-44f4-a822-56f06dbbb900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 NetworkManager[45129]: <info>  [1759393509.6675] device (tap897d7abf-90): carrier: link connected
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.679 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f56a3af2-d80f-47fc-b748-762b1fda6fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.697 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[311bbdf7-18df-47de-82a7-a9f9992f5ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299404, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.724 2 INFO nova.virt.libvirt.driver [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Beginning live snapshot process
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.728 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d392b163-7551-4c63-8fa5-44bece86a2cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:18ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438826, 'tstamp': 438826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299405, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.746 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bfab71aa-0269-4cc7-8dc9-826b0db7a5e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299406, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[066900ae-9a41-4507-8b52-bbb71c8d1bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.853 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0da67ee8-4209-417a-96a2-a83958e74993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.854 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.854 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.855 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:09 compute-0 NetworkManager[45129]: <info>  [1759393509.8576] manager: (tap897d7abf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 02 08:25:09 compute-0 kernel: tap897d7abf-90: entered promiscuous mode
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.860 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:09 compute-0 ovn_controller[152344]: 2025-10-02T08:25:09Z|00212|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.879 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.880 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c84f8d2-ffbf-4207-888b-41029d97092b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.881 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:25:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.881 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'env', 'PROCESS_TAG=haproxy-897d7abf-9e23-43cd-8f60-7156792a4360', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/897d7abf-9e23-43cd-8f60-7156792a4360.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.907 2 DEBUG nova.virt.libvirt.imagebackend [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:25:09 compute-0 nova_compute[260603]: 2025-10-02 08:25:09.998 2 DEBUG nova.network.neutron [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Updated VIF entry in instance network info cache for port c606d1e1-b27f-498f-989e-2cce97a7589d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.005 2 DEBUG nova.network.neutron [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Updating instance_info_cache with network_info: [{"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.021 2 DEBUG oslo_concurrency.lockutils [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:10 compute-0 ceph-mon[74477]: pgmap v1294: 305 pgs: 305 active+clean; 181 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 274 op/s
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.126 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(0474e051fbd9447f81cf1fe9fff50e99) on rbd image(9ea31984-a45e-4154-9df9-3c4e8ce69309_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.169 2 DEBUG nova.compute.manager [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.170 2 DEBUG oslo_concurrency.lockutils [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.171 2 DEBUG oslo_concurrency.lockutils [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.171 2 DEBUG oslo_concurrency.lockutils [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.171 2 DEBUG nova.compute.manager [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Processing event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:25:10 compute-0 podman[299531]: 2025-10-02 08:25:10.266860976 +0000 UTC m=+0.050226000 container create c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:25:10 compute-0 systemd[1]: Started libpod-conmon-c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598.scope.
Oct 02 08:25:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:25:10 compute-0 podman[299531]: 2025-10-02 08:25:10.244499126 +0000 UTC m=+0.027864170 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dec502266d477323c4c3cfd6e09d38867b92c6da0907359bbdddd0555e1b6f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:25:10 compute-0 podman[299531]: 2025-10-02 08:25:10.360364922 +0000 UTC m=+0.143729976 container init c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:25:10 compute-0 podman[299531]: 2025-10-02 08:25:10.367847413 +0000 UTC m=+0.151212447 container start c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.396 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393510.3952932, c00d9d50-5c81-4dc9-8316-c654d4802b4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.396 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] VM Started (Lifecycle Event)
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.399 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:25:10 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [NOTICE]   (299548) : New worker (299550) forked
Oct 02 08:25:10 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [NOTICE]   (299548) : Loading success.
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.403 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.407 2 INFO nova.virt.libvirt.driver [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance spawned successfully.
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.408 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.428 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.432 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.433 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.433 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.433 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.434 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.434 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.438 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.487 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.488 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393510.395485, c00d9d50-5c81-4dc9-8316-c654d4802b4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.488 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] VM Paused (Lifecycle Event)
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.504 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.506 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393510.401984, c00d9d50-5c81-4dc9-8316-c654d4802b4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.507 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] VM Resumed (Lifecycle Event)
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.509 2 INFO nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 9.41 seconds to spawn the instance on the hypervisor.
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.510 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.532 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.538 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.560 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.569 2 INFO nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 10.45 seconds to build instance.
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.592 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 181 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 274 op/s
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.929 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.930 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:10 compute-0 nova_compute[260603]: 2025-10-02 08:25:10.951 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.027 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.028 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.037 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.038 2 INFO nova.compute.claims [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:25:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 02 08:25:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 02 08:25:11 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.165 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] cloning vms/9ea31984-a45e-4154-9df9-3c4e8ce69309_disk@0474e051fbd9447f81cf1fe9fff50e99 to images/e94191cf-f3bc-48ee-b095-e0640354f9ca clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.242 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.314 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] flattening images/e94191cf-f3bc-48ee-b095-e0640354f9ca flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.526 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] removing snapshot(0474e051fbd9447f81cf1fe9fff50e99) on rbd image(9ea31984-a45e-4154-9df9-3c4e8ce69309_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:25:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162021905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.771 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.777 2 DEBUG nova.compute.provider_tree [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.793 2 DEBUG nova.scheduler.client.report [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.812 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.813 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.880 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.881 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.898 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:25:11 compute-0 nova_compute[260603]: 2025-10-02 08:25:11.918 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:25:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.068 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.073 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.073 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Creating image(s)
Oct 02 08:25:12 compute-0 ceph-mon[74477]: pgmap v1295: 305 pgs: 305 active+clean; 181 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 274 op/s
Oct 02 08:25:12 compute-0 ceph-mon[74477]: osdmap e174: 3 total, 3 up, 3 in
Oct 02 08:25:12 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3162021905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 02 08:25:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.106 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:12 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.146 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.172 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.176 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.237 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(snap) on rbd image(e94191cf-f3bc-48ee-b095-e0640354f9ca) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.273 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.275 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.276 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.276 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.300 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.303 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c28cb03c-6207-4ec5-9156-03252350561c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.334 2 DEBUG nova.policy [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.397 2 DEBUG nova.compute.manager [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.398 2 DEBUG oslo_concurrency.lockutils [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.399 2 DEBUG oslo_concurrency.lockutils [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.399 2 DEBUG oslo_concurrency.lockutils [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.401 2 DEBUG nova.compute.manager [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] No waiting events found dispatching network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.401 2 WARNING nova.compute.manager [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received unexpected event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d for instance with vm_state active and task_state None.
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.537 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c28cb03c-6207-4ec5-9156-03252350561c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.614 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] resizing rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.710 2 DEBUG nova.objects.instance [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'migration_context' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.733 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.734 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Ensure instance console log exists: /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.735 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.735 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.736 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 219 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 9.8 MiB/s rd, 3.4 MiB/s wr, 364 op/s
Oct 02 08:25:12 compute-0 nova_compute[260603]: 2025-10-02 08:25:12.978 2 DEBUG nova.compute.manager [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:13 compute-0 nova_compute[260603]: 2025-10-02 08:25:13.032 2 INFO nova.compute.manager [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] instance snapshotting
Oct 02 08:25:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 02 08:25:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 02 08:25:13 compute-0 ceph-mon[74477]: osdmap e175: 3 total, 3 up, 3 in
Oct 02 08:25:13 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 02 08:25:13 compute-0 nova_compute[260603]: 2025-10-02 08:25:13.341 2 INFO nova.virt.libvirt.driver [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Beginning live snapshot process
Oct 02 08:25:13 compute-0 nova_compute[260603]: 2025-10-02 08:25:13.493 2 DEBUG nova.virt.libvirt.imagebackend [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:25:13 compute-0 nova_compute[260603]: 2025-10-02 08:25:13.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:13 compute-0 nova_compute[260603]: 2025-10-02 08:25:13.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:25:13 compute-0 nova_compute[260603]: 2025-10-02 08:25:13.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:25:13 compute-0 nova_compute[260603]: 2025-10-02 08:25:13.601 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Successfully created port: 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:25:13 compute-0 nova_compute[260603]: 2025-10-02 08:25:13.775 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(a00d013737de492cb26f2e5326f5769e) on rbd image(c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:13 compute-0 podman[299888]: 2025-10-02 08:25:13.9915475 +0000 UTC m=+0.052920498 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:25:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 02 08:25:14 compute-0 ceph-mon[74477]: pgmap v1298: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 219 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 9.8 MiB/s rd, 3.4 MiB/s wr, 364 op/s
Oct 02 08:25:14 compute-0 ceph-mon[74477]: osdmap e176: 3 total, 3 up, 3 in
Oct 02 08:25:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 02 08:25:14 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.185 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning vms/c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk@a00d013737de492cb26f2e5326f5769e to images/82efe28e-fddc-4d67-b244-5e7b89266ac0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.279 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] flattening images/82efe28e-fddc-4d67-b244-5e7b89266ac0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.478 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(a00d013737de492cb26f2e5326f5769e) on rbd image(c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.517 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.641 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Successfully updated port: 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.678 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.678 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.678 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.752 2 DEBUG nova.compute.manager [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-changed-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.752 2 DEBUG nova.compute.manager [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing instance network info cache due to event network-changed-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.753 2 DEBUG oslo_concurrency.lockutils [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.763 2 INFO nova.virt.libvirt.driver [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Snapshot image upload complete
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.763 2 INFO nova.compute.manager [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 5.35 seconds to snapshot the instance on the hypervisor.
Oct 02 08:25:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 240 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 6.1 MiB/s wr, 405 op/s
Oct 02 08:25:14 compute-0 nova_compute[260603]: 2025-10-02 08:25:14.932 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 02 08:25:15 compute-0 ceph-mon[74477]: osdmap e177: 3 total, 3 up, 3 in
Oct 02 08:25:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 02 08:25:15 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 02 08:25:15 compute-0 nova_compute[260603]: 2025-10-02 08:25:15.173 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(snap) on rbd image(82efe28e-fddc-4d67-b244-5e7b89266ac0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 02 08:25:16 compute-0 ceph-mon[74477]: pgmap v1301: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 240 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 6.1 MiB/s wr, 405 op/s
Oct 02 08:25:16 compute-0 ceph-mon[74477]: osdmap e178: 3 total, 3 up, 3 in
Oct 02 08:25:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 02 08:25:16 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.255 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.292 2 DEBUG nova.compute.manager [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.294 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.294 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance network_info: |[{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.295 2 DEBUG oslo_concurrency.lockutils [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.295 2 DEBUG nova.network.neutron [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing network info cache for port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.301 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start _get_guest_xml network_info=[{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.310 2 WARNING nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.315 2 DEBUG nova.virt.libvirt.host [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.315 2 DEBUG nova.virt.libvirt.host [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.318 2 DEBUG nova.virt.libvirt.host [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.318 2 DEBUG nova.virt.libvirt.host [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.319 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.319 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.319 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.320 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.320 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.320 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.320 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.321 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.321 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.321 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.321 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.322 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.324 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.361 2 INFO nova.compute.manager [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] instance snapshotting
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.542 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.543 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.709 2 INFO nova.virt.libvirt.driver [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Beginning live snapshot process
Oct 02 08:25:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940798642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.750 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.769 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.774 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 240 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 922 KiB/s wr, 152 op/s
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.902 2 DEBUG nova.virt.libvirt.imagebackend [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:25:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487517691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:16 compute-0 nova_compute[260603]: 2025-10-02 08:25:16.991 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.070 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.071 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.074 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.074 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.077 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.077 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.119 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(fe0fd5c254114774b112faf3f8e46435) on rbd image(aebef537-a40c-45aa-98b5-ebdd7c27028b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:17 compute-0 ceph-mon[74477]: osdmap e179: 3 total, 3 up, 3 in
Oct 02 08:25:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2940798642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1487517691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046858845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 02 08:25:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 02 08:25:17 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.226 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.227 2 DEBUG nova.virt.libvirt.vif [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.228 2 DEBUG nova.network.os_vif_util [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.229 2 DEBUG nova.network.os_vif_util [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.230 2 DEBUG nova.objects.instance [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_devices' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.252 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <uuid>c28cb03c-6207-4ec5-9156-03252350561c</uuid>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <name>instance-00000022</name>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <nova:name>tempest-AttachInterfacesTestJSON-server-458247204</nova:name>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:25:16</nova:creationTime>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <nova:port uuid="8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627">
Oct 02 08:25:17 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <entry name="serial">c28cb03c-6207-4ec5-9156-03252350561c</entry>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <entry name="uuid">c28cb03c-6207-4ec5-9156-03252350561c</entry>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/c28cb03c-6207-4ec5-9156-03252350561c_disk">
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/c28cb03c-6207-4ec5-9156-03252350561c_disk.config">
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:02:cf:d8"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <target dev="tap8b4ce2c0-9e"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/console.log" append="off"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:25:17 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:25:17 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:17 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:17 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:17 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.254 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Preparing to wait for external event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.254 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.255 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.255 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.256 2 DEBUG nova.virt.libvirt.vif [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.256 2 DEBUG nova.network.os_vif_util [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.256 2 DEBUG nova.network.os_vif_util [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.257 2 DEBUG os_vif [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b4ce2c0-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b4ce2c0-9e, col_values=(('external_ids', {'iface-id': '8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:cf:d8', 'vm-uuid': 'c28cb03c-6207-4ec5-9156-03252350561c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:17 compute-0 NetworkManager[45129]: <info>  [1759393517.2892] manager: (tap8b4ce2c0-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.297 2 INFO os_vif [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e')
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.322 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] cloning vms/aebef537-a40c-45aa-98b5-ebdd7c27028b_disk@fe0fd5c254114774b112faf3f8e46435 to images/5fbebdfe-1f15-443e-9ff4-2429c6d3b3f3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.376 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.377 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3892MB free_disk=59.92285919189453GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.377 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.377 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.417 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.418 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.418 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:02:cf:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.418 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Using config drive
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.438 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.449 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] flattening images/5fbebdfe-1f15-443e-9ff4-2429c6d3b3f3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.546 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 9ea31984-a45e-4154-9df9-3c4e8ce69309 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.547 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance c00d9d50-5c81-4dc9-8316-c654d4802b4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.547 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance aebef537-a40c-45aa-98b5-ebdd7c27028b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.547 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance c28cb03c-6207-4ec5-9156-03252350561c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.654 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.710 2 INFO nova.virt.libvirt.driver [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Snapshot image upload complete
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.711 2 INFO nova.compute.manager [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 4.68 seconds to snapshot the instance on the hypervisor.
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.778 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] removing snapshot(fe0fd5c254114774b112faf3f8e46435) on rbd image(aebef537-a40c-45aa-98b5-ebdd7c27028b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:25:17 compute-0 podman[300246]: 2025-10-02 08:25:17.985501796 +0000 UTC m=+0.051016436 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.996 2 DEBUG nova.network.neutron [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updated VIF entry in instance network info cache for port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:25:17 compute-0 nova_compute[260603]: 2025-10-02 08:25:17.997 2 DEBUG nova.network.neutron [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.015 2 DEBUG oslo_concurrency.lockutils [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220638138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.120 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Creating config drive at /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.125 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6b_fgwh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.167 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:18 compute-0 ceph-mon[74477]: pgmap v1304: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 240 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 922 KiB/s wr, 152 op/s
Oct 02 08:25:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3046858845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:18 compute-0 ceph-mon[74477]: osdmap e180: 3 total, 3 up, 3 in
Oct 02 08:25:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4220638138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.177 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.195 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 02 08:25:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.216 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.217 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:18 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.242 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(snap) on rbd image(5fbebdfe-1f15-443e-9ff4-2429c6d3b3f3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.285 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6b_fgwh" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.317 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.321 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config c28cb03c-6207-4ec5-9156-03252350561c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.546 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config c28cb03c-6207-4ec5-9156-03252350561c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.548 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Deleting local config drive /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config because it was imported into RBD.
Oct 02 08:25:18 compute-0 kernel: tap8b4ce2c0-9e: entered promiscuous mode
Oct 02 08:25:18 compute-0 NetworkManager[45129]: <info>  [1759393518.6331] manager: (tap8b4ce2c0-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Oct 02 08:25:18 compute-0 ovn_controller[152344]: 2025-10-02T08:25:18Z|00213|binding|INFO|Claiming lport 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 for this chassis.
Oct 02 08:25:18 compute-0 ovn_controller[152344]: 2025-10-02T08:25:18Z|00214|binding|INFO|8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627: Claiming fa:16:3e:02:cf:d8 10.100.0.3
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.645 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:cf:d8 10.100.0.3'], port_security=['fa:16:3e:02:cf:d8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c28cb03c-6207-4ec5-9156-03252350561c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c4b338b0-15ff-4ccb-801c-e865bb41224d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.646 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.647 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.659 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5072cc7-c17d-41ee-b36f-b9927fe943d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.660 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa1bff6d-11 in ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.664 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa1bff6d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.664 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d09bfcd-1a2e-4493-b680-094743b5666f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11921612-bc6d-4181-954d-ac9027bcf0f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.675 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[df045daa-618d-47ec-9a7c-2b97efde9499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 systemd-udevd[300340]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:25:18 compute-0 systemd-machined[214636]: New machine qemu-38-instance-00000022.
Oct 02 08:25:18 compute-0 NetworkManager[45129]: <info>  [1759393518.7001] device (tap8b4ce2c0-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:25:18 compute-0 NetworkManager[45129]: <info>  [1759393518.7012] device (tap8b4ce2c0-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ad50cbf1-3ba2-44a9-87b4-aad10e364665]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:18 compute-0 ovn_controller[152344]: 2025-10-02T08:25:18Z|00215|binding|INFO|Setting lport 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 ovn-installed in OVS
Oct 02 08:25:18 compute-0 ovn_controller[152344]: 2025-10-02T08:25:18Z|00216|binding|INFO|Setting lport 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 up in Southbound
Oct 02 08:25:18 compute-0 nova_compute[260603]: 2025-10-02 08:25:18.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.758 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7fee1221-9501-41a5-98e7-180c6a2053a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.775 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c22b1d97-e142-45c9-9a7c-fc4e4b941579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 NetworkManager[45129]: <info>  [1759393518.7759] manager: (tapfa1bff6d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.820 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5101f2c6-10a8-4f97-9262-6180e4561e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.825 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[baca19eb-90e9-42de-921d-9d41d6f10b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 399 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 25 MiB/s wr, 692 op/s
Oct 02 08:25:18 compute-0 NetworkManager[45129]: <info>  [1759393518.8584] device (tapfa1bff6d-10): carrier: link connected
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.862 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[16290e92-5eb7-4ac5-a7c1-efa40142226b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.888 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df8c5041-5ccc-4472-b268-90eddc46bc7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439746, 'reachable_time': 37526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300371, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.904 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2919a6d5-ca04-4bcb-9133-74f9aaac4053]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:c92f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439746, 'tstamp': 439746}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300372, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.925 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48f9f101-e466-4a2d-bddc-ae7e3cedf6f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439746, 'reachable_time': 37526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300373, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13afd0c4-71a1-44d3-872a-612998b46b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.024 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7198b983-790f-4f83-a6ad-0b3b9adeb60c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.025 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.025 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:19 compute-0 kernel: tapfa1bff6d-10: entered promiscuous mode
Oct 02 08:25:19 compute-0 NetworkManager[45129]: <info>  [1759393519.0292] manager: (tapfa1bff6d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.034 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:19 compute-0 ovn_controller[152344]: 2025-10-02T08:25:19Z|00217|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.038 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.039 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[080c5faa-598d-40d6-a5fe-c8178f999a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.039 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:25:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.041 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'env', 'PROCESS_TAG=haproxy-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa1bff6d-19fb-4792-a261-4da1165d95a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.213 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.213 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.214 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:25:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 02 08:25:19 compute-0 ceph-mon[74477]: osdmap e181: 3 total, 3 up, 3 in
Oct 02 08:25:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 02 08:25:19 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 02 08:25:19 compute-0 podman[300446]: 2025-10-02 08:25:19.557677536 +0000 UTC m=+0.057256647 container create 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:25:19 compute-0 systemd[1]: Started libpod-conmon-4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d.scope.
Oct 02 08:25:19 compute-0 podman[300446]: 2025-10-02 08:25:19.528579648 +0000 UTC m=+0.028158779 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:25:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b0303c5a3bf6706e7d4ed7fdb28ee9eeead9f26ad1ee83702d2c74668a46e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:25:19 compute-0 podman[300446]: 2025-10-02 08:25:19.661591696 +0000 UTC m=+0.161170827 container init 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:25:19 compute-0 podman[300446]: 2025-10-02 08:25:19.670932468 +0000 UTC m=+0.170511589 container start 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:25:19 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [NOTICE]   (300465) : New worker (300467) forked
Oct 02 08:25:19 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [NOTICE]   (300465) : Loading success.
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.837 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393519.836332, c28cb03c-6207-4ec5-9156-03252350561c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.837 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] VM Started (Lifecycle Event)
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.865 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.869 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393519.8365743, c28cb03c-6207-4ec5-9156-03252350561c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.869 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] VM Paused (Lifecycle Event)
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.889 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.895 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:19 compute-0 nova_compute[260603]: 2025-10-02 08:25:19.916 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:20 compute-0 ceph-mon[74477]: pgmap v1307: 305 pgs: 305 active+clean; 399 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 25 MiB/s wr, 692 op/s
Oct 02 08:25:20 compute-0 ceph-mon[74477]: osdmap e182: 3 total, 3 up, 3 in
Oct 02 08:25:20 compute-0 nova_compute[260603]: 2025-10-02 08:25:20.558 2 INFO nova.virt.libvirt.driver [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Snapshot image upload complete
Oct 02 08:25:20 compute-0 nova_compute[260603]: 2025-10-02 08:25:20.559 2 INFO nova.compute.manager [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 4.20 seconds to snapshot the instance on the hypervisor.
Oct 02 08:25:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 399 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 21 MiB/s wr, 589 op/s
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.303 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.304 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.334 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.393 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.394 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.401 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.401 2 INFO nova.compute.claims [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.595 2 DEBUG nova.compute.manager [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.652 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.703 2 INFO nova.compute.manager [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] instance snapshotting
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:21 compute-0 nova_compute[260603]: 2025-10-02 08:25:21.960 2 INFO nova.virt.libvirt.driver [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Beginning live snapshot process
Oct 02 08:25:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 02 08:25:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 02 08:25:22 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 02 08:25:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:25:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3792760242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:25:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:25:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3792760242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:25:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/361007538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.166 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.172 2 DEBUG nova.virt.libvirt.imagebackend [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.176 2 DEBUG nova.compute.provider_tree [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.189 2 DEBUG nova.scheduler.client.report [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.208 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.209 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:25:22 compute-0 ceph-mon[74477]: pgmap v1309: 305 pgs: 305 active+clean; 399 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 21 MiB/s wr, 589 op/s
Oct 02 08:25:22 compute-0 ceph-mon[74477]: osdmap e183: 3 total, 3 up, 3 in
Oct 02 08:25:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3792760242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:25:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3792760242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:25:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/361007538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.258 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.259 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.274 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.289 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.371 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.373 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.373 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Creating image(s)
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.396 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.431 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.460 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.472 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c5fbcb8fd7373957e056e74fa5c5e80912273aa7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.474 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c5fbcb8fd7373957e056e74fa5c5e80912273aa7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.483 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(add832e29cb746969f4674b2a15264f6) on rbd image(9ea31984-a45e-4154-9df9-3c4e8ce69309_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.544 2 DEBUG nova.compute.manager [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.544 2 DEBUG oslo_concurrency.lockutils [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.545 2 DEBUG oslo_concurrency.lockutils [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.545 2 DEBUG oslo_concurrency.lockutils [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.546 2 DEBUG nova.compute.manager [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Processing event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.548 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.552 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393522.5522614, c28cb03c-6207-4ec5-9156-03252350561c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.553 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] VM Resumed (Lifecycle Event)
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.558 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.564 2 INFO nova.virt.libvirt.driver [-] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance spawned successfully.
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.566 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.573 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.579 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.584 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.585 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.587 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.587 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.589 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.589 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.595 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.638 2 INFO nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Took 10.57 seconds to spawn the instance on the hypervisor.
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.639 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.710 2 INFO nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Took 11.71 seconds to build instance.
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.727 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.758 2 DEBUG nova.policy [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.808 2 DEBUG nova.virt.libvirt.imagebackend [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/82efe28e-fddc-4d67-b244-5e7b89266ac0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/82efe28e-fddc-4d67-b244-5e7b89266ac0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 08:25:22 compute-0 ovn_controller[152344]: 2025-10-02T08:25:22Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:5d:05 10.100.0.11
Oct 02 08:25:22 compute-0 ovn_controller[152344]: 2025-10-02T08:25:22Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:5d:05 10.100.0.11
Oct 02 08:25:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 468 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 22 MiB/s wr, 684 op/s
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.883 2 DEBUG nova.virt.libvirt.imagebackend [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/82efe28e-fddc-4d67-b244-5e7b89266ac0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 08:25:22 compute-0 nova_compute[260603]: 2025-10-02 08:25:22.884 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning images/82efe28e-fddc-4d67-b244-5e7b89266ac0@snap to None/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.008 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c5fbcb8fd7373957e056e74fa5c5e80912273aa7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.149 2 DEBUG nova.objects.instance [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.162 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.162 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Ensure instance console log exists: /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.163 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.163 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.164 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 02 08:25:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 02 08:25:23 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.304 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] cloning vms/9ea31984-a45e-4154-9df9-3c4e8ce69309_disk@add832e29cb746969f4674b2a15264f6 to images/6250c07d-a0dc-41fe-922a-619617d749c1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.420 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] flattening images/6250c07d-a0dc-41fe-922a-619617d749c1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.696 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Successfully created port: 74b4490f-0cc9-4b51-b6a0-246fd701e628 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:25:23 compute-0 nova_compute[260603]: 2025-10-02 08:25:23.782 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] removing snapshot(add832e29cb746969f4674b2a15264f6) on rbd image(9ea31984-a45e-4154-9df9-3c4e8ce69309_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:25:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 02 08:25:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 02 08:25:24 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 02 08:25:24 compute-0 ceph-mon[74477]: pgmap v1311: 305 pgs: 305 active+clean; 468 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 22 MiB/s wr, 684 op/s
Oct 02 08:25:24 compute-0 ceph-mon[74477]: osdmap e184: 3 total, 3 up, 3 in
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.289 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(snap) on rbd image(6250c07d-a0dc-41fe-922a-619617d749c1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:24 compute-0 ovn_controller[152344]: 2025-10-02T08:25:24Z|00218|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:25:24 compute-0 ovn_controller[152344]: 2025-10-02T08:25:24Z|00219|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct 02 08:25:24 compute-0 NetworkManager[45129]: <info>  [1759393524.4444] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:24 compute-0 NetworkManager[45129]: <info>  [1759393524.4459] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:24 compute-0 ovn_controller[152344]: 2025-10-02T08:25:24Z|00220|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:25:24 compute-0 ovn_controller[152344]: 2025-10-02T08:25:24Z|00221|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.625 2 DEBUG nova.compute.manager [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.625 2 DEBUG oslo_concurrency.lockutils [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.625 2 DEBUG oslo_concurrency.lockutils [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.626 2 DEBUG oslo_concurrency.lockutils [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.626 2 DEBUG nova.compute.manager [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] No waiting events found dispatching network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.626 2 WARNING nova.compute.manager [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received unexpected event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 for instance with vm_state active and task_state None.
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.767 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Successfully updated port: 74b4490f-0cc9-4b51-b6a0-246fd701e628 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.777 2 DEBUG nova.compute.manager [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-changed-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.777 2 DEBUG nova.compute.manager [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing instance network info cache due to event network-changed-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.778 2 DEBUG oslo_concurrency.lockutils [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.778 2 DEBUG oslo_concurrency.lockutils [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.778 2 DEBUG nova.network.neutron [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing network info cache for port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.796 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.797 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.797 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 487 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 11 MiB/s wr, 351 op/s
Oct 02 08:25:24 compute-0 nova_compute[260603]: 2025-10-02 08:25:24.987 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 02 08:25:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 02 08:25:25 compute-0 ceph-mon[74477]: osdmap e185: 3 total, 3 up, 3 in
Oct 02 08:25:25 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 02 08:25:25 compute-0 nova_compute[260603]: 2025-10-02 08:25:25.972 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Updating instance_info_cache with network_info: [{"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.002 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.002 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance network_info: |[{"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.007 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start _get_guest_xml network_info=[{"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:25:12Z,direct_url=<?>,disk_format='raw',id=82efe28e-fddc-4d67-b244-5e7b89266ac0,min_disk=1,min_ram=0,name='tempest-test-snap-481920133',owner='56b1e1170f2e4a73aaf396476bc82261',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:25:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '82efe28e-fddc-4d67-b244-5e7b89266ac0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.012 2 WARNING nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.024 2 DEBUG nova.virt.libvirt.host [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.025 2 DEBUG nova.virt.libvirt.host [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.030 2 DEBUG nova.virt.libvirt.host [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.031 2 DEBUG nova.virt.libvirt.host [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.032 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.032 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:25:12Z,direct_url=<?>,disk_format='raw',id=82efe28e-fddc-4d67-b244-5e7b89266ac0,min_disk=1,min_ram=0,name='tempest-test-snap-481920133',owner='56b1e1170f2e4a73aaf396476bc82261',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:25:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.033 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.034 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.034 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.035 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.035 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.036 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.036 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.037 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.038 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.038 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.043 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.176 2 DEBUG nova.network.neutron [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updated VIF entry in instance network info cache for port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.178 2 DEBUG nova.network.neutron [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.206 2 DEBUG oslo_concurrency.lockutils [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:26 compute-0 ceph-mon[74477]: pgmap v1314: 305 pgs: 305 active+clean; 487 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 11 MiB/s wr, 351 op/s
Oct 02 08:25:26 compute-0 ceph-mon[74477]: osdmap e186: 3 total, 3 up, 3 in
Oct 02 08:25:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351288113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.543 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.573 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.579 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 487 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 13 MiB/s wr, 408 op/s
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.886 2 DEBUG nova.compute.manager [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-changed-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.887 2 DEBUG nova.compute.manager [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Refreshing instance network info cache due to event network-changed-74b4490f-0cc9-4b51-b6a0-246fd701e628. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.887 2 DEBUG oslo_concurrency.lockutils [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.888 2 DEBUG oslo_concurrency.lockutils [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.891 2 DEBUG nova.network.neutron [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Refreshing network info cache for port 74b4490f-0cc9-4b51-b6a0-246fd701e628 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.944 2 INFO nova.virt.libvirt.driver [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Snapshot image upload complete
Oct 02 08:25:26 compute-0 nova_compute[260603]: 2025-10-02 08:25:26.945 2 INFO nova.compute.manager [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 5.24 seconds to snapshot the instance on the hypervisor.
Oct 02 08:25:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 02 08:25:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 02 08:25:27 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 02 08:25:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494463941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.068 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.069 2 DEBUG nova.virt.libvirt.vif [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-889597672',display_name='tempest-ImagesTestJSON-server-889597672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-889597672',id=35,image_ref='82efe28e-fddc-4d67-b244-5e7b89266ac0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-02761tfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c00d9d50-5c81-4dc9-8316-c654d4802b4f',image_min_disk='1',image_min_ram='0',image_owner_id='56b1e1170f2e4a73aaf396476bc82261',image_owner_project_name='tempest-ImagesTestJSON-1188243509',image_owner_user_name='tempest-ImagesTestJSON-1188243509-project-member',image_user_id='6747651cfdcc4f868c43b9d78f5846c2',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:22Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=1c57c283-5fac-4b60-b9dc-3dbf3bfd5828,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.069 2 DEBUG nova.network.os_vif_util [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.071 2 DEBUG nova.network.os_vif_util [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.072 2 DEBUG nova.objects.instance [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.090 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <uuid>1c57c283-5fac-4b60-b9dc-3dbf3bfd5828</uuid>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <name>instance-00000023</name>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <nova:name>tempest-ImagesTestJSON-server-889597672</nova:name>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:25:26</nova:creationTime>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="82efe28e-fddc-4d67-b244-5e7b89266ac0"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <nova:port uuid="74b4490f-0cc9-4b51-b6a0-246fd701e628">
Oct 02 08:25:27 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <entry name="serial">1c57c283-5fac-4b60-b9dc-3dbf3bfd5828</entry>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <entry name="uuid">1c57c283-5fac-4b60-b9dc-3dbf3bfd5828</entry>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk">
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config">
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:27 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:a0:18:6b"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <target dev="tap74b4490f-0c"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/console.log" append="off"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:25:27 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:25:27 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:27 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:27 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:27 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.091 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Preparing to wait for external event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.091 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.091 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.091 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.092 2 DEBUG nova.virt.libvirt.vif [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-889597672',display_name='tempest-ImagesTestJSON-server-889597672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-889597672',id=35,image_ref='82efe28e-fddc-4d67-b244-5e7b89266ac0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-02761tfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c00d9d50-5c81-4dc9-8316-c654d4802b4f',image_min_disk='1',image_min_ram='0',image_owner_id='56b1e1170f2e4a73aaf396476bc82261',image_owner_project_name='tempest-ImagesTestJSON-1188243509',image_owner_user_name='tempest-ImagesTestJSON-1188243509-project-member',image_user_id='6747651cfdcc4f868c43b9d78f5846c2',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:22Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=1c57c283-5fac-4b60-b9dc-3dbf3bfd5828,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.092 2 DEBUG nova.network.os_vif_util [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.093 2 DEBUG nova.network.os_vif_util [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.093 2 DEBUG os_vif [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74b4490f-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap74b4490f-0c, col_values=(('external_ids', {'iface-id': '74b4490f-0cc9-4b51-b6a0-246fd701e628', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:18:6b', 'vm-uuid': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:27 compute-0 NetworkManager[45129]: <info>  [1759393527.1002] manager: (tap74b4490f-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.105 2 INFO os_vif [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c')
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.153 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.154 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.154 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:a0:18:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.155 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Using config drive
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.177 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3351288113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:27 compute-0 ceph-mon[74477]: osdmap e187: 3 total, 3 up, 3 in
Oct 02 08:25:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/494463941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.837 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Creating config drive at /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.849 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp363gwooq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:25:27
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'vms']
Oct 02 08:25:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:25:27 compute-0 nova_compute[260603]: 2025-10-02 08:25:27.998 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp363gwooq" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.039 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.044 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.236 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.238 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Deleting local config drive /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config because it was imported into RBD.
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.273 2 DEBUG nova.network.neutron [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Updated VIF entry in instance network info cache for port 74b4490f-0cc9-4b51-b6a0-246fd701e628. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.274 2 DEBUG nova.network.neutron [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Updating instance_info_cache with network_info: [{"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:28 compute-0 ceph-mon[74477]: pgmap v1316: 305 pgs: 305 active+clean; 487 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 13 MiB/s wr, 408 op/s
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.299 2 DEBUG oslo_concurrency.lockutils [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:28 compute-0 kernel: tap74b4490f-0c: entered promiscuous mode
Oct 02 08:25:28 compute-0 NetworkManager[45129]: <info>  [1759393528.3083] manager: (tap74b4490f-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct 02 08:25:28 compute-0 ovn_controller[152344]: 2025-10-02T08:25:28Z|00222|binding|INFO|Claiming lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 for this chassis.
Oct 02 08:25:28 compute-0 ovn_controller[152344]: 2025-10-02T08:25:28Z|00223|binding|INFO|74b4490f-0cc9-4b51-b6a0-246fd701e628: Claiming fa:16:3e:a0:18:6b 10.100.0.4
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.361 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:18:6b 10.100.0.4'], port_security=['fa:16:3e:a0:18:6b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=74b4490f-0cc9-4b51-b6a0-246fd701e628) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.363 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 74b4490f-0cc9-4b51-b6a0-246fd701e628 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.364 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:28 compute-0 systemd-udevd[300952]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:25:28 compute-0 ovn_controller[152344]: 2025-10-02T08:25:28Z|00224|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 ovn-installed in OVS
Oct 02 08:25:28 compute-0 ovn_controller[152344]: 2025-10-02T08:25:28Z|00225|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 up in Southbound
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5771d2c7-7c22-4565-afc2-3b672054ecc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:28 compute-0 systemd-machined[214636]: New machine qemu-39-instance-00000023.
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:28 compute-0 NetworkManager[45129]: <info>  [1759393528.3984] device (tap74b4490f-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:25:28 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Oct 02 08:25:28 compute-0 NetworkManager[45129]: <info>  [1759393528.4000] device (tap74b4490f-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.426 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b319de57-0758-4f30-a2b6-fc2f8b619db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.430 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[08372561-8b78-405f-b874-4867f4bdbdda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.464 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[96562fd5-d38e-4a1e-bd41-288558693a4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.486 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f00a52-76fc-4882-b677-073d5fa9c492]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300966, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.512 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d29c57b5-7f22-41ac-a154-d4a1a2bac6eb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438841, 'tstamp': 438841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300968, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438844, 'tstamp': 438844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300968, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.518 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.518 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.519 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.519 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 8.5 MiB/s wr, 374 op/s
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.966 2 DEBUG nova.compute.manager [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.966 2 DEBUG oslo_concurrency.lockutils [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.966 2 DEBUG oslo_concurrency.lockutils [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.967 2 DEBUG oslo_concurrency.lockutils [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:28 compute-0 nova_compute[260603]: 2025-10-02 08:25:28.967 2 DEBUG nova.compute.manager [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Processing event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.314 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393529.3143692, 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.315 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] VM Started (Lifecycle Event)
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.317 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.321 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.324 2 INFO nova.virt.libvirt.driver [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance spawned successfully.
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.325 2 INFO nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Took 6.95 seconds to spawn the instance on the hypervisor.
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.325 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.376 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.379 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.409 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.410 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393529.3155656, 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.410 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] VM Paused (Lifecycle Event)
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.412 2 INFO nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Took 8.04 seconds to build instance.
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.486 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.489 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.492 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393529.3195827, 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.493 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] VM Resumed (Lifecycle Event)
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.518 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:29 compute-0 nova_compute[260603]: 2025-10-02 08:25:29.522 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:30 compute-0 ceph-mon[74477]: pgmap v1318: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 8.5 MiB/s wr, 374 op/s
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.741 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.742 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.742 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.743 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.743 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.744 2 INFO nova.compute.manager [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Terminating instance
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.745 2 DEBUG nova.compute.manager [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:25:30 compute-0 kernel: tap74b4490f-0c (unregistering): left promiscuous mode
Oct 02 08:25:30 compute-0 NetworkManager[45129]: <info>  [1759393530.7816] device (tap74b4490f-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:25:30 compute-0 ovn_controller[152344]: 2025-10-02T08:25:30Z|00226|binding|INFO|Releasing lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 from this chassis (sb_readonly=0)
Oct 02 08:25:30 compute-0 ovn_controller[152344]: 2025-10-02T08:25:30Z|00227|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 down in Southbound
Oct 02 08:25:30 compute-0 ovn_controller[152344]: 2025-10-02T08:25:30Z|00228|binding|INFO|Removing iface tap74b4490f-0c ovn-installed in OVS
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.797 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:18:6b 10.100.0.4'], port_security=['fa:16:3e:a0:18:6b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=74b4490f-0cc9-4b51-b6a0-246fd701e628) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.798 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 74b4490f-0cc9-4b51-b6a0-246fd701e628 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.799 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.820 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[877ec460-5910-4957-ab76-c9663319fb95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:30 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct 02 08:25:30 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 2.410s CPU time.
Oct 02 08:25:30 compute-0 systemd-machined[214636]: Machine qemu-39-instance-00000023 terminated.
Oct 02 08:25:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 7.2 MiB/s wr, 317 op/s
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.859 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfad62d-d95f-4d8f-bdc2-e38dcc180414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.862 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[91e64708-c0cf-44cf-94c7-d2a91501b9c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.889 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[378257ac-0514-4df6-9d3a-7b2abd898eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.909 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9673b83-5cba-4cd7-82d3-6ae30c006448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301020, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.927 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51f3a762-2717-4391-858a-8b6566ccc93b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438841, 'tstamp': 438841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301021, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438844, 'tstamp': 438844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301021, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.931 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:30 compute-0 kernel: tap74b4490f-0c: entered promiscuous mode
Oct 02 08:25:30 compute-0 NetworkManager[45129]: <info>  [1759393530.9623] manager: (tap74b4490f-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Oct 02 08:25:30 compute-0 nova_compute[260603]: 2025-10-02 08:25:30.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:30 compute-0 ovn_controller[152344]: 2025-10-02T08:25:30Z|00229|binding|INFO|Claiming lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 for this chassis.
Oct 02 08:25:30 compute-0 ovn_controller[152344]: 2025-10-02T08:25:30Z|00230|binding|INFO|74b4490f-0cc9-4b51-b6a0-246fd701e628: Claiming fa:16:3e:a0:18:6b 10.100.0.4
Oct 02 08:25:30 compute-0 kernel: tap74b4490f-0c (unregistering): left promiscuous mode
Oct 02 08:25:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.993 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:18:6b 10.100.0.4'], port_security=['fa:16:3e:a0:18:6b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=74b4490f-0cc9-4b51-b6a0-246fd701e628) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.015 2 INFO nova.virt.libvirt.driver [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance destroyed successfully.
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.016 2 DEBUG nova.objects.instance [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:31 compute-0 ovn_controller[152344]: 2025-10-02T08:25:31Z|00231|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 ovn-installed in OVS
Oct 02 08:25:31 compute-0 ovn_controller[152344]: 2025-10-02T08:25:31Z|00232|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 up in Southbound
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.044 2 DEBUG nova.virt.libvirt.vif [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-889597672',display_name='tempest-ImagesTestJSON-server-889597672',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-889597672',id=35,image_ref='82efe28e-fddc-4d67-b244-5e7b89266ac0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-02761tfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_i
mage_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c00d9d50-5c81-4dc9-8316-c654d4802b4f',image_min_disk='1',image_min_ram='0',image_owner_id='56b1e1170f2e4a73aaf396476bc82261',image_owner_project_name='tempest-ImagesTestJSON-1188243509',image_owner_user_name='tempest-ImagesTestJSON-1188243509-project-member',image_user_id='6747651cfdcc4f868c43b9d78f5846c2',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:29Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=1c57c283-5fac-4b60-b9dc-3dbf3bfd5828,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.045 2 DEBUG nova.network.os_vif_util [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.046 2 DEBUG nova.network.os_vif_util [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.048 2 DEBUG os_vif [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.051 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.052 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:31 compute-0 ovn_controller[152344]: 2025-10-02T08:25:31Z|00233|binding|INFO|Releasing lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 from this chassis (sb_readonly=0)
Oct 02 08:25:31 compute-0 ovn_controller[152344]: 2025-10-02T08:25:31Z|00234|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 down in Southbound
Oct 02 08:25:31 compute-0 ovn_controller[152344]: 2025-10-02T08:25:31Z|00235|binding|INFO|Removing iface tap74b4490f-0c ovn-installed in OVS
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.052 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.052 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.054 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 74b4490f-0cc9-4b51-b6a0-246fd701e628 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.055 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74b4490f-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.062 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:18:6b 10.100.0.4'], port_security=['fa:16:3e:a0:18:6b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=74b4490f-0cc9-4b51-b6a0-246fd701e628) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.077 2 INFO os_vif [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c')
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.079 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a9efe2-a815-4911-971b-3f9b8c03f331]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.109 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6d3721-e725-4e43-8e71-671395a8f90b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.116 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b3ca5f-b864-4428-bc93-466b853713eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.147 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[82b9d22a-6496-46f5-bfd5-1502d72db887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8242dc0-52eb-4bad-ad8a-e1d6ea97edee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301056, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.182 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[23234c07-0cc7-4ec9-bfdf-1ac20095a247]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438841, 'tstamp': 438841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301057, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438844, 'tstamp': 438844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301057, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.184 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.188 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 74b4490f-0cc9-4b51-b6a0-246fd701e628 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.191 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17226e45-b9a6-41d8-a9d1-1ddb61bcd6ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.225 2 DEBUG nova.compute.manager [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.226 2 DEBUG oslo_concurrency.lockutils [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.226 2 DEBUG oslo_concurrency.lockutils [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.227 2 DEBUG oslo_concurrency.lockutils [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.227 2 DEBUG nova.compute.manager [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.227 2 WARNING nova.compute.manager [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state active and task_state deleting.
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.245 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[98940eb9-d18e-46c9-9dff-d8be204195b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.247 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[97c99d25-d5e0-4a9c-903b-9c868fe250d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.286 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[92a36ac2-da30-4d50-87cf-add1aad34acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.304 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83368c95-ba16-4b5e-a87c-8bc4e55f6c98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301064, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.323 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[384d1e30-886d-4b96-8998-b33a4e7ebfaa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438841, 'tstamp': 438841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301065, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438844, 'tstamp': 438844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301065, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.325 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.329 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.329 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.402 2 INFO nova.virt.libvirt.driver [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Deleting instance files /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_del
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.403 2 INFO nova.virt.libvirt.driver [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Deletion of /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_del complete
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.450 2 INFO nova.compute.manager [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.451 2 DEBUG oslo.service.loopingcall [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.451 2 DEBUG nova.compute.manager [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.451 2 DEBUG nova.network.neutron [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:25:31 compute-0 nova_compute[260603]: 2025-10-02 08:25:31.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 02 08:25:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 02 08:25:32 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 02 08:25:32 compute-0 ceph-mon[74477]: pgmap v1319: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 7.2 MiB/s wr, 317 op/s
Oct 02 08:25:32 compute-0 ceph-mon[74477]: osdmap e188: 3 total, 3 up, 3 in
Oct 02 08:25:32 compute-0 nova_compute[260603]: 2025-10-02 08:25:32.817 2 DEBUG nova.network.neutron [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:32 compute-0 nova_compute[260603]: 2025-10-02 08:25:32.840 2 INFO nova.compute.manager [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Took 1.39 seconds to deallocate network for instance.
Oct 02 08:25:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 6.3 MiB/s wr, 365 op/s
Oct 02 08:25:32 compute-0 nova_compute[260603]: 2025-10-02 08:25:32.887 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:32 compute-0 nova_compute[260603]: 2025-10-02 08:25:32.887 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.011 2 DEBUG oslo_concurrency.processutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.307 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.308 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.309 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.310 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.311 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.311 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.312 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.313 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.313 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.314 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.315 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.315 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.316 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.316 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.317 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.318 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.318 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.319 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.319 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.320 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.321 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.321 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.322 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.322 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.323 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.324 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.324 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.325 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.326 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.327 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.328 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.329 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.329 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.330 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.331 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.331 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.332 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-deleted-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235987546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.542 2 DEBUG oslo_concurrency.processutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.552 2 DEBUG nova.compute.provider_tree [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.573 2 DEBUG nova.scheduler.client.report [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.602 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.635 2 INFO nova.scheduler.client.report [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828
Oct 02 08:25:33 compute-0 nova_compute[260603]: 2025-10-02 08:25:33.696 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 02 08:25:34 compute-0 ceph-mon[74477]: pgmap v1321: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 6.3 MiB/s wr, 365 op/s
Oct 02 08:25:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1235987546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 02 08:25:34 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 02 08:25:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:34.812 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:34.812 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:34.813 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 574 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.6 MiB/s wr, 422 op/s
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.913 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.914 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.914 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.914 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.915 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.917 2 INFO nova.compute.manager [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Terminating instance
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.920 2 DEBUG nova.compute.manager [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:25:34 compute-0 kernel: tapc606d1e1-b2 (unregistering): left promiscuous mode
Oct 02 08:25:34 compute-0 NetworkManager[45129]: <info>  [1759393534.9787] device (tapc606d1e1-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:34 compute-0 ovn_controller[152344]: 2025-10-02T08:25:34Z|00236|binding|INFO|Releasing lport c606d1e1-b27f-498f-989e-2cce97a7589d from this chassis (sb_readonly=0)
Oct 02 08:25:34 compute-0 ovn_controller[152344]: 2025-10-02T08:25:34Z|00237|binding|INFO|Setting lport c606d1e1-b27f-498f-989e-2cce97a7589d down in Southbound
Oct 02 08:25:34 compute-0 ovn_controller[152344]: 2025-10-02T08:25:34Z|00238|binding|INFO|Removing iface tapc606d1e1-b2 ovn-installed in OVS
Oct 02 08:25:34 compute-0 nova_compute[260603]: 2025-10-02 08:25:34.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.002 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:5d:05 10.100.0.11'], port_security=['fa:16:3e:48:5d:05 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c00d9d50-5c81-4dc9-8316-c654d4802b4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c606d1e1-b27f-498f-989e-2cce97a7589d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.004 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c606d1e1-b27f-498f-989e-2cce97a7589d in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.007 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.008 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad4f4ce-5f6d-48bd-9bc0-983a0494ffd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.009 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:35 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 02 08:25:35 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 12.987s CPU time.
Oct 02 08:25:35 compute-0 systemd-machined[214636]: Machine qemu-37-instance-00000020 terminated.
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.168 2 INFO nova.virt.libvirt.driver [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance destroyed successfully.
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.169 2 DEBUG nova.objects.instance [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid c00d9d50-5c81-4dc9-8316-c654d4802b4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.181 2 DEBUG nova.virt.libvirt.vif [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1903679195',display_name='tempest-ImagesTestJSON-server-1903679195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1903679195',id=32,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-lm5g3lue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:17Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=c00d9d50-5c81-4dc9-8316-c654d4802b4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.181 2 DEBUG nova.network.os_vif_util [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.182 2 DEBUG nova.network.os_vif_util [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.182 2 DEBUG os_vif [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc606d1e1-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.189 2 INFO os_vif [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2')
Oct 02 08:25:35 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [NOTICE]   (299548) : haproxy version is 2.8.14-c23fe91
Oct 02 08:25:35 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [NOTICE]   (299548) : path to executable is /usr/sbin/haproxy
Oct 02 08:25:35 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [WARNING]  (299548) : Exiting Master process...
Oct 02 08:25:35 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [WARNING]  (299548) : Exiting Master process...
Oct 02 08:25:35 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [ALERT]    (299548) : Current worker (299550) exited with code 143 (Terminated)
Oct 02 08:25:35 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [WARNING]  (299548) : All workers exited. Exiting... (0)
Oct 02 08:25:35 compute-0 systemd[1]: libpod-c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598.scope: Deactivated successfully.
Oct 02 08:25:35 compute-0 podman[301111]: 2025-10-02 08:25:35.220895917 +0000 UTC m=+0.072291573 container died c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598-userdata-shm.mount: Deactivated successfully.
Oct 02 08:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dec502266d477323c4c3cfd6e09d38867b92c6da0907359bbdddd0555e1b6f0-merged.mount: Deactivated successfully.
Oct 02 08:25:35 compute-0 podman[301111]: 2025-10-02 08:25:35.267255721 +0000 UTC m=+0.118651367 container cleanup c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 08:25:35 compute-0 systemd[1]: libpod-conmon-c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598.scope: Deactivated successfully.
Oct 02 08:25:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 02 08:25:35 compute-0 ceph-mon[74477]: osdmap e189: 3 total, 3 up, 3 in
Oct 02 08:25:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 02 08:25:35 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 02 08:25:35 compute-0 podman[301166]: 2025-10-02 08:25:35.361982596 +0000 UTC m=+0.059330765 container remove c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.368 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4b3b13-9e04-4728-83b6-ebf0a5a9be78]: (4, ('Thu Oct  2 08:25:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598)\nc4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598\nThu Oct  2 08:25:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598)\nc4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.370 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59d5b35f-3180-41fd-b711-65c9aa96be92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.372 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:35 compute-0 kernel: tap897d7abf-90: left promiscuous mode
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af4f7633-985d-4fc5-8965-bf84c04233a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.420 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91dc25aa-b804-48e0-ae4c-13af1772b29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[022e6a94-2784-4868-bdf7-ae587e9913f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.448 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50b2d982-c297-4228-80da-d642eec06b2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438816, 'reachable_time': 41488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301182, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.453 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:25:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.453 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[94ad7053-bb14-4cd5-94da-81693d529386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.660 2 INFO nova.virt.libvirt.driver [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Deleting instance files /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f_del
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.662 2 INFO nova.virt.libvirt.driver [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Deletion of /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f_del complete
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.717 2 INFO nova.compute.manager [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.718 2 DEBUG oslo.service.loopingcall [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.719 2 DEBUG nova.compute.manager [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.719 2 DEBUG nova.network.neutron [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:25:35 compute-0 ovn_controller[152344]: 2025-10-02T08:25:35Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:cf:d8 10.100.0.3
Oct 02 08:25:35 compute-0 ovn_controller[152344]: 2025-10-02T08:25:35Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:cf:d8 10.100.0.3
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.882 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-unplugged-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.883 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.883 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.883 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.883 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] No waiting events found dispatching network-vif-unplugged-c606d1e1-b27f-498f-989e-2cce97a7589d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-unplugged-c606d1e1-b27f-498f-989e-2cce97a7589d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.885 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] No waiting events found dispatching network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:35 compute-0 nova_compute[260603]: 2025-10-02 08:25:35.885 2 WARNING nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received unexpected event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d for instance with vm_state active and task_state deleting.
Oct 02 08:25:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 02 08:25:36 compute-0 ceph-mon[74477]: pgmap v1323: 305 pgs: 305 active+clean; 574 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.6 MiB/s wr, 422 op/s
Oct 02 08:25:36 compute-0 ceph-mon[74477]: osdmap e190: 3 total, 3 up, 3 in
Oct 02 08:25:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 02 08:25:36 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 02 08:25:36 compute-0 nova_compute[260603]: 2025-10-02 08:25:36.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 574 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 972 KiB/s wr, 250 op/s
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.004 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.005 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.005 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "aebef537-a40c-45aa-98b5-ebdd7c27028b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.006 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.006 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.008 2 INFO nova.compute.manager [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Terminating instance
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.011 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "refresh_cache-aebef537-a40c-45aa-98b5-ebdd7c27028b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.012 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquired lock "refresh_cache-aebef537-a40c-45aa-98b5-ebdd7c27028b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.012 2 DEBUG nova.network.neutron [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.043 2 DEBUG nova.network.neutron [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.070 2 INFO nova.compute.manager [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 1.35 seconds to deallocate network for instance.
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.128 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.129 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.175 2 DEBUG nova.network.neutron [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.252 2 DEBUG oslo_concurrency.processutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:37 compute-0 ceph-mon[74477]: osdmap e191: 3 total, 3 up, 3 in
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.437 2 DEBUG nova.network.neutron [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.460 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.460 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.461 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Releasing lock "refresh_cache-aebef537-a40c-45aa-98b5-ebdd7c27028b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.462 2 DEBUG nova.compute.manager [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.478 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.517 2 DEBUG nova.compute.manager [req-1ea3a9c8-fa01-4058-a0dc-c6414b88c80f req-83f169a8-14a4-45f2-ad1f-3d6a5010b080 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-deleted-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.535 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:37 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct 02 08:25:37 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000021.scope: Consumed 12.987s CPU time.
Oct 02 08:25:37 compute-0 systemd-machined[214636]: Machine qemu-36-instance-00000021 terminated.
Oct 02 08:25:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419347802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.711 2 INFO nova.virt.libvirt.driver [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance destroyed successfully.
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.712 2 DEBUG nova.objects.instance [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'resources' on Instance uuid aebef537-a40c-45aa-98b5-ebdd7c27028b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:37 compute-0 podman[301205]: 2025-10-02 08:25:37.717774532 +0000 UTC m=+0.114037837 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.721 2 DEBUG oslo_concurrency.processutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:37 compute-0 podman[301204]: 2025-10-02 08:25:37.727974941 +0000 UTC m=+0.127245564 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.751 2 DEBUG nova.compute.provider_tree [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.769 2 DEBUG nova.scheduler.client.report [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.791 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.794 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.803 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.803 2 INFO nova.compute.claims [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.825 2 INFO nova.scheduler.client.report [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance c00d9d50-5c81-4dc9-8316-c654d4802b4f
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.911 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:37 compute-0 nova_compute[260603]: 2025-10-02 08:25:37.980 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.188 2 INFO nova.virt.libvirt.driver [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Deleting instance files /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b_del
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.189 2 INFO nova.virt.libvirt.driver [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Deletion of /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b_del complete
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.236 2 INFO nova.compute.manager [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.236 2 DEBUG oslo.service.loopingcall [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.237 2 DEBUG nova.compute.manager [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.237 2 DEBUG nova.network.neutron [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:25:38 compute-0 ceph-mon[74477]: pgmap v1326: 305 pgs: 305 active+clean; 574 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 972 KiB/s wr, 250 op/s
Oct 02 08:25:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2419347802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026947198346031127 of space, bias 1.0, pg target 0.8084159503809338 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002856621776714202 of space, bias 1.0, pg target 0.8569865330142605 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:25:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687789325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.484 2 DEBUG nova.network.neutron [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.494 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.498 2 DEBUG nova.network.neutron [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.504 2 DEBUG nova.compute.provider_tree [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.517 2 INFO nova.compute.manager [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 0.28 seconds to deallocate network for instance.
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.526 2 DEBUG nova.scheduler.client.report [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.562 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.563 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.573 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.573 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.612 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.612 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.626 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.641 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.659 2 DEBUG oslo_concurrency.processutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.734 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.735 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.736 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Creating image(s)
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.760 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.787 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.822 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.827 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.862 2 DEBUG nova.policy [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:25:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 241 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 464 op/s
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.870 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.870 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.887 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.913 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.913 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.914 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.914 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.932 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.935 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:38 compute-0 nova_compute[260603]: 2025-10-02 08:25:38.989 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071681277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.140 2 DEBUG oslo_concurrency.processutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.146 2 DEBUG nova.compute.provider_tree [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.161 2 DEBUG nova.scheduler.client.report [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.181 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.183 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.189 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.190 2 INFO nova.compute.claims [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.224 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.248 2 INFO nova.scheduler.client.report [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Deleted allocations for instance aebef537-a40c-45aa-98b5-ebdd7c27028b
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.300 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.338 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.351 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Successfully created port: 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:25:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2687789325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4071681277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.415 2 DEBUG nova.objects.instance [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.426 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.427 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Ensure instance console log exists: /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.427 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.427 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.427 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.446 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2058094305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.867 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.872 2 DEBUG nova.compute.provider_tree [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.897 2 DEBUG nova.scheduler.client.report [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.922 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.923 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.959 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Successfully updated port: 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.968 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.969 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.974 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.974 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.975 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:39 compute-0 nova_compute[260603]: 2025-10-02 08:25:39.997 2 INFO nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.028 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.033 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.033 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.034 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "9ea31984-a45e-4154-9df9-3c4e8ce69309-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.034 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.035 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.037 2 INFO nova.compute.manager [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Terminating instance
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.039 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "refresh_cache-9ea31984-a45e-4154-9df9-3c4e8ce69309" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.040 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquired lock "refresh_cache-9ea31984-a45e-4154-9df9-3c4e8ce69309" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.040 2 DEBUG nova.network.neutron [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.048 2 DEBUG nova.compute.manager [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received event network-changed-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.048 2 DEBUG nova.compute.manager [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Refreshing instance network info cache due to event network-changed-9bdc55f5-2bf2-467e-9e1e-33215451c0c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.049 2 DEBUG oslo_concurrency.lockutils [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.143 2 DEBUG nova.policy [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f5cf08c876094c4d847b57fd4506bbff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93b2c1583a83423288661225e3f86391', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.150 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.152 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.153 2 INFO nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Creating image(s)
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.186 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.224 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.259 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.263 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.303 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.358 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.359 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.360 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.360 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.384 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.388 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:40 compute-0 ceph-mon[74477]: pgmap v1327: 305 pgs: 305 active+clean; 241 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 464 op/s
Oct 02 08:25:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2058094305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.494 2 DEBUG nova.network.neutron [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.630 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.683 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] resizing rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.765 2 DEBUG nova.network.neutron [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.771 2 DEBUG nova.objects.instance [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lazy-loading 'migration_context' on Instance uuid 1bd45455-6745-4310-a5a6-f86dd4dcb4ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.785 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.786 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Ensure instance console log exists: /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.786 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.787 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.787 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.792 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Releasing lock "refresh_cache-9ea31984-a45e-4154-9df9-3c4e8ce69309" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:40 compute-0 nova_compute[260603]: 2025-10-02 08:25:40.792 2 DEBUG nova.compute.manager [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:25:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 241 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 736 KiB/s rd, 3.3 MiB/s wr, 345 op/s
Oct 02 08:25:40 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct 02 08:25:40 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 13.874s CPU time.
Oct 02 08:25:40 compute-0 systemd-machined[214636]: Machine qemu-35-instance-0000001f terminated.
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.019 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Successfully created port: a75b1cfa-e509-4676-bd92-b36be82f1e83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.027 2 INFO nova.virt.libvirt.driver [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance destroyed successfully.
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.028 2 DEBUG nova.objects.instance [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'resources' on Instance uuid 9ea31984-a45e-4154-9df9-3c4e8ce69309 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.140 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Updating instance_info_cache with network_info: [{"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.168 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.169 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Instance network_info: |[{"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.170 2 DEBUG oslo_concurrency.lockutils [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.171 2 DEBUG nova.network.neutron [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Refreshing network info cache for port 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.177 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start _get_guest_xml network_info=[{"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.183 2 WARNING nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.189 2 DEBUG nova.virt.libvirt.host [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.190 2 DEBUG nova.virt.libvirt.host [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.206 2 DEBUG nova.virt.libvirt.host [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.207 2 DEBUG nova.virt.libvirt.host [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.208 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.208 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.209 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.210 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.210 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.210 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.211 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.211 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.212 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.212 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.213 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.214 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.221 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.486 2 INFO nova.virt.libvirt.driver [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Deleting instance files /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309_del
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.488 2 INFO nova.virt.libvirt.driver [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Deletion of /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309_del complete
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.578 2 INFO nova.compute.manager [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.578 2 DEBUG oslo.service.loopingcall [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.578 2 DEBUG nova.compute.manager [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.579 2 DEBUG nova.network.neutron [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.664 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Successfully updated port: a75b1cfa-e509-4676-bd92-b36be82f1e83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.679 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.679 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.679 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1118430878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.713 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.747 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.754 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.798 2 DEBUG nova.network.neutron [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.813 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.819 2 DEBUG nova.network.neutron [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.835 2 INFO nova.compute.manager [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 0.26 seconds to deallocate network for instance.
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.891 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.892 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.952 2 DEBUG oslo_concurrency.lockutils [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-c28cb03c-6207-4ec5-9156-03252350561c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.952 2 DEBUG oslo_concurrency.lockutils [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-c28cb03c-6207-4ec5-9156-03252350561c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:41 compute-0 nova_compute[260603]: 2025-10-02 08:25:41.952 2 DEBUG nova.objects.instance [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.020 2 DEBUG oslo_concurrency.processutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 02 08:25:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 02 08:25:42 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.122 2 DEBUG nova.compute.manager [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.122 2 DEBUG nova.compute.manager [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing instance network info cache due to event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.123 2 DEBUG oslo_concurrency.lockutils [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133314659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.263 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.265 2 DEBUG nova.virt.libvirt.vif [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1285601584',display_name='tempest-ImagesTestJSON-server-1285601584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1285601584',id=36,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-ac1xe9vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:38Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.265 2 DEBUG nova.network.os_vif_util [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.267 2 DEBUG nova.network.os_vif_util [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.269 2 DEBUG nova.objects.instance [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.285 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <uuid>9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a</uuid>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <name>instance-00000024</name>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <nova:name>tempest-ImagesTestJSON-server-1285601584</nova:name>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:25:41</nova:creationTime>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <nova:port uuid="9bdc55f5-2bf2-467e-9e1e-33215451c0c4">
Oct 02 08:25:42 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <entry name="serial">9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a</entry>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <entry name="uuid">9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a</entry>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk">
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config">
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:42 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:88:3f:d3"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <target dev="tap9bdc55f5-2b"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/console.log" append="off"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:25:42 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:25:42 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:42 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:42 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:42 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.286 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Preparing to wait for external event network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.286 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.286 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.287 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.288 2 DEBUG nova.virt.libvirt.vif [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1285601584',display_name='tempest-ImagesTestJSON-server-1285601584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1285601584',id=36,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-ac1xe9vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:38Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.289 2 DEBUG nova.network.os_vif_util [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.290 2 DEBUG nova.network.os_vif_util [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.291 2 DEBUG os_vif [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bdc55f5-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9bdc55f5-2b, col_values=(('external_ids', {'iface-id': '9bdc55f5-2bf2-467e-9e1e-33215451c0c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:3f:d3', 'vm-uuid': '9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:42 compute-0 NetworkManager[45129]: <info>  [1759393542.3046] manager: (tap9bdc55f5-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.310 2 INFO os_vif [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b')
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.390 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.391 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.391 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:88:3f:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.392 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Using config drive
Oct 02 08:25:42 compute-0 ceph-mon[74477]: pgmap v1328: 305 pgs: 305 active+clean; 241 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 736 KiB/s rd, 3.3 MiB/s wr, 345 op/s
Oct 02 08:25:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1118430878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:42 compute-0 ceph-mon[74477]: osdmap e192: 3 total, 3 up, 3 in
Oct 02 08:25:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/133314659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.424 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994778032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.457 2 DEBUG oslo_concurrency.processutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.466 2 DEBUG nova.compute.provider_tree [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.486 2 DEBUG nova.scheduler.client.report [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.513 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.518 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.528 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.528 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Instance network_info: |[{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.529 2 DEBUG oslo_concurrency.lockutils [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.529 2 DEBUG nova.network.neutron [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.532 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start _get_guest_xml network_info=[{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.537 2 WARNING nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.541 2 DEBUG nova.virt.libvirt.host [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.542 2 DEBUG nova.virt.libvirt.host [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.551 2 DEBUG nova.virt.libvirt.host [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.551 2 DEBUG nova.virt.libvirt.host [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.552 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.552 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.552 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.552 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.553 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.553 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.553 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.553 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.554 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.554 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.554 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.554 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.557 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.597 2 INFO nova.scheduler.client.report [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Deleted allocations for instance 9ea31984-a45e-4154-9df9-3c4e8ce69309
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.679 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.807 2 DEBUG nova.objects.instance [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.824 2 DEBUG nova.network.neutron [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:25:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 199 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 6.1 MiB/s wr, 382 op/s
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.896 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Creating config drive at /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/disk.config
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.905 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpns3ssrxa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.954 2 DEBUG nova.network.neutron [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Updated VIF entry in instance network info cache for port 9bdc55f5-2bf2-467e-9e1e-33215451c0c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.956 2 DEBUG nova.network.neutron [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Updating instance_info_cache with network_info: [{"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:42 compute-0 nova_compute[260603]: 2025-10-02 08:25:42.978 2 DEBUG oslo_concurrency.lockutils [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/588820475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.007 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.042 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.048 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.098 2 DEBUG nova.policy [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:25:43 compute-0 rsyslogd[1004]: imjournal from <np0005465604:nova_compute>: begin to drop messages due to rate-limiting
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.102 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpns3ssrxa" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.137 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.142 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/disk.config 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.356 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/disk.config 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.357 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Deleting local config drive /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/disk.config because it was imported into RBD.
Oct 02 08:25:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3994778032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/588820475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:43 compute-0 kernel: tap9bdc55f5-2b: entered promiscuous mode
Oct 02 08:25:43 compute-0 NetworkManager[45129]: <info>  [1759393543.4437] manager: (tap9bdc55f5-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Oct 02 08:25:43 compute-0 systemd-udevd[301669]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:25:43 compute-0 NetworkManager[45129]: <info>  [1759393543.4675] device (tap9bdc55f5-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:25:43 compute-0 NetworkManager[45129]: <info>  [1759393543.4693] device (tap9bdc55f5-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:43 compute-0 ovn_controller[152344]: 2025-10-02T08:25:43Z|00239|binding|INFO|Claiming lport 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 for this chassis.
Oct 02 08:25:43 compute-0 ovn_controller[152344]: 2025-10-02T08:25:43Z|00240|binding|INFO|9bdc55f5-2bf2-467e-9e1e-33215451c0c4: Claiming fa:16:3e:88:3f:d3 10.100.0.11
Oct 02 08:25:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:25:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545449055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.494 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:3f:d3 10.100.0.11'], port_security=['fa:16:3e:88:3f:d3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9bdc55f5-2bf2-467e-9e1e-33215451c0c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.497 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.500 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a4b0c0-5b3c-4476-af03-3512fca402da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.525 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap897d7abf-91 in ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.527 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap897d7abf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.527 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f860a65-7f67-4e72-82fa-483268b1bcd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.529 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[039ee4a1-0101-420f-a8eb-3de5bbe29a40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.529 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:43 compute-0 systemd-machined[214636]: New machine qemu-40-instance-00000024.
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.531 2 DEBUG nova.virt.libvirt.vif [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1639846623',display_name='tempest-AttachInterfacesUnderV243Test-server-1639846623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1639846623',id=37,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCKG1mgw8jr3d6I/rPaxpI/xykaK5mkiXQdfytljVDMuER4fKp12xeA534SFvLBxY6bTyRlvKcYu+E0zUTX3LJrHhpFk2NKvsPRSQ931HNOhMc2t1VukoDA/V98LWTem1w==',key_name='tempest-keypair-2083756465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93b2c1583a83423288661225e3f86391',ramdisk_id='',reservation_id='r-wb0hbrf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1865175876',owner_user_name='tempest-AttachInterfacesUnderV243Test-1865175876-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5cf08c876094c4d847b57fd4506bbff',uuid=1bd45455-6745-4310-a5a6-f86dd4dcb4ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:25:43 compute-0 ovn_controller[152344]: 2025-10-02T08:25:43Z|00241|binding|INFO|Setting lport 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 ovn-installed in OVS
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.532 2 DEBUG nova.network.os_vif_util [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Converting VIF {"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:43 compute-0 ovn_controller[152344]: 2025-10-02T08:25:43Z|00242|binding|INFO|Setting lport 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 up in Southbound
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.533 2 DEBUG nova.network.os_vif_util [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:9e:33,bridge_name='br-int',has_traffic_filtering=True,id=a75b1cfa-e509-4676-bd92-b36be82f1e83,network=Network(e086d5b9-24e9-42fc-adba-1a3993e8f3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75b1cfa-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.535 2 DEBUG nova.objects.instance [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bd45455-6745-4310-a5a6-f86dd4dcb4ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:43 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.545 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[31a90fb5-d6fb-4668-a9e5-e502461db723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.554 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <uuid>1bd45455-6745-4310-a5a6-f86dd4dcb4ca</uuid>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <name>instance-00000025</name>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1639846623</nova:name>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:25:42</nova:creationTime>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <nova:user uuid="f5cf08c876094c4d847b57fd4506bbff">tempest-AttachInterfacesUnderV243Test-1865175876-project-member</nova:user>
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <nova:project uuid="93b2c1583a83423288661225e3f86391">tempest-AttachInterfacesUnderV243Test-1865175876</nova:project>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <nova:port uuid="a75b1cfa-e509-4676-bd92-b36be82f1e83">
Oct 02 08:25:43 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <entry name="serial">1bd45455-6745-4310-a5a6-f86dd4dcb4ca</entry>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <entry name="uuid">1bd45455-6745-4310-a5a6-f86dd4dcb4ca</entry>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk">
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk.config">
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:25:43 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:55:9e:33"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <target dev="tapa75b1cfa-e5"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/console.log" append="off"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:25:43 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:25:43 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:43 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:43 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:43 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.555 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Preparing to wait for external event network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.556 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.556 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.557 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.558 2 DEBUG nova.virt.libvirt.vif [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1639846623',display_name='tempest-AttachInterfacesUnderV243Test-server-1639846623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1639846623',id=37,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCKG1mgw8jr3d6I/rPaxpI/xykaK5mkiXQdfytljVDMuER4fKp12xeA534SFvLBxY6bTyRlvKcYu+E0zUTX3LJrHhpFk2NKvsPRSQ931HNOhMc2t1VukoDA/V98LWTem1w==',key_name='tempest-keypair-2083756465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93b2c1583a83423288661225e3f86391',ramdisk_id='',reservation_id='r-wb0hbrf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1865175876',owner_user_name='tempest-AttachInterfacesUnderV243Test-1865175876-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5cf08c876094c4d847b57fd4506bbff',uuid=1bd45455-6745-4310-a5a6-f86dd4dcb4ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.558 2 DEBUG nova.network.os_vif_util [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Converting VIF {"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.559 2 DEBUG nova.network.os_vif_util [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:9e:33,bridge_name='br-int',has_traffic_filtering=True,id=a75b1cfa-e509-4676-bd92-b36be82f1e83,network=Network(e086d5b9-24e9-42fc-adba-1a3993e8f3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75b1cfa-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.560 2 DEBUG os_vif [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:9e:33,bridge_name='br-int',has_traffic_filtering=True,id=a75b1cfa-e509-4676-bd92-b36be82f1e83,network=Network(e086d5b9-24e9-42fc-adba-1a3993e8f3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75b1cfa-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.561 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa75b1cfa-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.567 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa75b1cfa-e5, col_values=(('external_ids', {'iface-id': 'a75b1cfa-e509-4676-bd92-b36be82f1e83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:9e:33', 'vm-uuid': '1bd45455-6745-4310-a5a6-f86dd4dcb4ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:43 compute-0 NetworkManager[45129]: <info>  [1759393543.5701] manager: (tapa75b1cfa-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.578 2 INFO os_vif [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:9e:33,bridge_name='br-int',has_traffic_filtering=True,id=a75b1cfa-e509-4676-bd92-b36be82f1e83,network=Network(e086d5b9-24e9-42fc-adba-1a3993e8f3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75b1cfa-e5')
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.578 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b96a211-0644-4d6f-b00b-2b2c05bbc691]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.631 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1da247ec-f7ed-4c22-aa28-8a4ebc710065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.642 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f6142a-f588-4f36-9a4a-f662abbe95df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 NetworkManager[45129]: <info>  [1759393543.6443] manager: (tap897d7abf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.651 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.652 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.652 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] No VIF found with MAC fa:16:3e:55:9e:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.652 2 INFO nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Using config drive
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.687 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.700 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab43e341-92a9-434b-b7ec-b5cfbe0a700d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.705 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[756c26d3-a543-4ea5-90ac-43b321f307f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.707 2 DEBUG nova.network.neutron [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updated VIF entry in instance network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.707 2 DEBUG nova.network.neutron [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.731 2 DEBUG oslo_concurrency.lockutils [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:43 compute-0 NetworkManager[45129]: <info>  [1759393543.7448] device (tap897d7abf-90): carrier: link connected
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.755 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee788253-7bfc-4915-9be1-567812e8df65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.785 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ec88de-d4e0-4970-9a31-d89efc2c2db4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442234, 'reachable_time': 26126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301965, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.808 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4811147-3b43-4f3f-9cbb-d9f8c6ad0ce6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:18ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442234, 'tstamp': 442234}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301966, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.836 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[81050ba9-a31c-4e25-ad53-0f0041b080c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442234, 'reachable_time': 26126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301967, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5cce2b-00d7-46f9-b196-ba6e1689f516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.954 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbe2a21-061f-43b5-9138-540197599435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.955 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.956 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.956 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:43 compute-0 NetworkManager[45129]: <info>  [1759393543.9590] manager: (tap897d7abf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:43 compute-0 kernel: tap897d7abf-90: entered promiscuous mode
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.967 2 INFO nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Creating config drive at /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/disk.config
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.968 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:43 compute-0 ovn_controller[152344]: 2025-10-02T08:25:43Z|00243|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct 02 08:25:43 compute-0 nova_compute[260603]: 2025-10-02 08:25:43.981 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps117oh0v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.996 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.997 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b77ba2cf-8d33-41c7-918f-347b83c68102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.998 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 897d7abf-9e23-43cd-8f60-7156792a4360
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:25:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:43.999 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'env', 'PROCESS_TAG=haproxy-897d7abf-9e23-43cd-8f60-7156792a4360', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/897d7abf-9e23-43cd-8f60-7156792a4360.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.073 2 DEBUG nova.network.neutron [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Successfully created port: 4cab2b24-035a-4dfc-b153-277b0512d9f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.133 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps117oh0v" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.163 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.168 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/disk.config 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.369 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/disk.config 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.370 2 INFO nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Deleting local config drive /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/disk.config because it was imported into RBD.
Oct 02 08:25:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 02 08:25:44 compute-0 ceph-mon[74477]: pgmap v1330: 305 pgs: 305 active+clean; 199 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 6.1 MiB/s wr, 382 op/s
Oct 02 08:25:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3545449055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:25:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 02 08:25:44 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 02 08:25:44 compute-0 podman[302042]: 2025-10-02 08:25:44.436981487 +0000 UTC m=+0.064722928 container create 2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 08:25:44 compute-0 kernel: tapa75b1cfa-e5: entered promiscuous mode
Oct 02 08:25:44 compute-0 NetworkManager[45129]: <info>  [1759393544.4531] manager: (tapa75b1cfa-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:44 compute-0 ovn_controller[152344]: 2025-10-02T08:25:44Z|00244|binding|INFO|Claiming lport a75b1cfa-e509-4676-bd92-b36be82f1e83 for this chassis.
Oct 02 08:25:44 compute-0 ovn_controller[152344]: 2025-10-02T08:25:44Z|00245|binding|INFO|a75b1cfa-e509-4676-bd92-b36be82f1e83: Claiming fa:16:3e:55:9e:33 10.100.0.13
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.472 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:9e:33 10.100.0.13'], port_security=['fa:16:3e:55:9e:33 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1bd45455-6745-4310-a5a6-f86dd4dcb4ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e086d5b9-24e9-42fc-adba-1a3993e8f3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93b2c1583a83423288661225e3f86391', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f33ffaac-75d9-45a6-a506-467fbdc687ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc33fda3-a64f-4f9d-b5b2-d2bb922fc37f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a75b1cfa-e509-4676-bd92-b36be82f1e83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:44 compute-0 ovn_controller[152344]: 2025-10-02T08:25:44Z|00246|binding|INFO|Setting lport a75b1cfa-e509-4676-bd92-b36be82f1e83 ovn-installed in OVS
Oct 02 08:25:44 compute-0 ovn_controller[152344]: 2025-10-02T08:25:44Z|00247|binding|INFO|Setting lport a75b1cfa-e509-4676-bd92-b36be82f1e83 up in Southbound
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:44 compute-0 podman[302042]: 2025-10-02 08:25:44.401917086 +0000 UTC m=+0.029658557 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:25:44 compute-0 systemd[1]: Started libpod-conmon-2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1.scope.
Oct 02 08:25:44 compute-0 systemd-udevd[302083]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:25:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:25:44 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Oct 02 08:25:44 compute-0 systemd-machined[214636]: New machine qemu-41-instance-00000025.
Oct 02 08:25:44 compute-0 NetworkManager[45129]: <info>  [1759393544.5632] device (tapa75b1cfa-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:25:44 compute-0 NetworkManager[45129]: <info>  [1759393544.5648] device (tapa75b1cfa-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36154f0e228e572add5335235d684e5fd88c9ffe8e89720ad40ace7d42c7189d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:25:44 compute-0 podman[302042]: 2025-10-02 08:25:44.583252373 +0000 UTC m=+0.210993814 container init 2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:25:44 compute-0 podman[302063]: 2025-10-02 08:25:44.588516143 +0000 UTC m=+0.101503354 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:25:44 compute-0 podman[302042]: 2025-10-02 08:25:44.593960448 +0000 UTC m=+0.221701889 container start 2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 08:25:44 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[302076]: [NOTICE]   (302095) : New worker (302099) forked
Oct 02 08:25:44 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[302076]: [NOTICE]   (302095) : Loading success.
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.666 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a75b1cfa-e509-4676-bd92-b36be82f1e83 in datapath e086d5b9-24e9-42fc-adba-1a3993e8f3d1 unbound from our chassis
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.669 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e086d5b9-24e9-42fc-adba-1a3993e8f3d1
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.681 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[feb29f07-b324-4cd3-aa37-97d1a0fb5cfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.682 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape086d5b9-21 in ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.685 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape086d5b9-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.686 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0abd7c26-2ac2-4ea4-b296-ceca20bf5d39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.688 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2f87ca10-0e7f-417d-8e2f-12105d0f80a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.708 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3ed811-a708-4871-98d2-f6d6b25ec313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f754c12c-e003-4e5f-9447-3ed10b3c4e04]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.765 2 DEBUG nova.network.neutron [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Successfully updated port: 4cab2b24-035a-4dfc-b153-277b0512d9f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.775 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[37d0e1d5-2e5e-4b1a-8c38-ec8998783aaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.781 2 DEBUG oslo_concurrency.lockutils [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.781 2 DEBUG oslo_concurrency.lockutils [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.781 2 DEBUG nova.network.neutron [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:44 compute-0 NetworkManager[45129]: <info>  [1759393544.7876] manager: (tape086d5b9-20): new Veth device (/org/freedesktop/NetworkManager/Devices/116)
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.792 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c64d3a00-f4e9-4dfc-beac-28b890197408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.838 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b937280f-786a-4a71-b78a-7e28381f814a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.841 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b2489855-1e76-4bd8-9449-1c61b94614d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 NetworkManager[45129]: <info>  [1759393544.8638] device (tape086d5b9-20): carrier: link connected
Oct 02 08:25:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 213 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 695 KiB/s rd, 8.0 MiB/s wr, 426 op/s
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.871 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b637414f-0b8c-4a9a-8e2a-82d14e0b000b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.887 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7fff03db-e6cd-45fa-9982-89d6e5b12e26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape086d5b9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:fa:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442346, 'reachable_time': 36748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302156, 'error': None, 'target': 'ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.901 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce0bbd0-710b-436e-9df4-316b33f1f276]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:fa32'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442346, 'tstamp': 442346}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302164, 'error': None, 'target': 'ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.918 2 WARNING nova.network.neutron [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.918 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d28a7799-c049-4f60-92be-bb06c0fb2674]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape086d5b9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:fa:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442346, 'reachable_time': 36748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302176, 'error': None, 'target': 'ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.935 2 DEBUG nova.compute.manager [req-2353c5c6-f873-4723-9bdc-b479d5f97904 req-85e349ab-16ca-4955-b2ec-7d7065d490f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received event network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.936 2 DEBUG oslo_concurrency.lockutils [req-2353c5c6-f873-4723-9bdc-b479d5f97904 req-85e349ab-16ca-4955-b2ec-7d7065d490f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.936 2 DEBUG oslo_concurrency.lockutils [req-2353c5c6-f873-4723-9bdc-b479d5f97904 req-85e349ab-16ca-4955-b2ec-7d7065d490f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.936 2 DEBUG oslo_concurrency.lockutils [req-2353c5c6-f873-4723-9bdc-b479d5f97904 req-85e349ab-16ca-4955-b2ec-7d7065d490f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:44 compute-0 nova_compute[260603]: 2025-10-02 08:25:44.936 2 DEBUG nova.compute.manager [req-2353c5c6-f873-4723-9bdc-b479d5f97904 req-85e349ab-16ca-4955-b2ec-7d7065d490f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Processing event network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:25:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:44.959 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fd8aab-2c42-4887-89cb-0ec8d39936b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.052 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9d35dccc-dd00-41eb-b7ee-e74763b05428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.054 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape086d5b9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.055 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape086d5b9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:45 compute-0 NetworkManager[45129]: <info>  [1759393545.0587] manager: (tape086d5b9-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:45 compute-0 kernel: tape086d5b9-20: entered promiscuous mode
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.063 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape086d5b9-20, col_values=(('external_ids', {'iface-id': '7a7b217b-e72f-4a21-a2ea-891722d7d6fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:45 compute-0 ovn_controller[152344]: 2025-10-02T08:25:45Z|00248|binding|INFO|Releasing lport 7a7b217b-e72f-4a21-a2ea-891722d7d6fa from this chassis (sb_readonly=0)
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.110 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e086d5b9-24e9-42fc-adba-1a3993e8f3d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e086d5b9-24e9-42fc-adba-1a3993e8f3d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.111 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1348c8d9-965d-4b14-b272-426a78814292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.112 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-e086d5b9-24e9-42fc-adba-1a3993e8f3d1
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/e086d5b9-24e9-42fc-adba-1a3993e8f3d1.pid.haproxy
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID e086d5b9-24e9-42fc-adba-1a3993e8f3d1
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:25:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:45.113 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1', 'env', 'PROCESS_TAG=haproxy-e086d5b9-24e9-42fc-adba-1a3993e8f3d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e086d5b9-24e9-42fc-adba-1a3993e8f3d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:25:45 compute-0 ceph-mon[74477]: osdmap e193: 3 total, 3 up, 3 in
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.478 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393545.4784243, 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.479 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] VM Started (Lifecycle Event)
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.480 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.483 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.487 2 INFO nova.virt.libvirt.driver [-] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Instance spawned successfully.
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.487 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.500 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.505 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.511 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.511 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.511 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.512 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.513 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.513 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:45 compute-0 podman[302214]: 2025-10-02 08:25:45.523819719 +0000 UTC m=+0.072133866 container create e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.534 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.534 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393545.4784873, 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.534 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] VM Paused (Lifecycle Event)
Oct 02 08:25:45 compute-0 systemd[1]: Started libpod-conmon-e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2.scope.
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.563 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:45 compute-0 podman[302214]: 2025-10-02 08:25:45.477102582 +0000 UTC m=+0.025416779 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.572 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393545.4821913, 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.572 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] VM Resumed (Lifecycle Event)
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.577 2 INFO nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Took 6.84 seconds to spawn the instance on the hypervisor.
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.577 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.597 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83a235cb30ab31648a638f80b384a1df75c51ddd966395ebf8f9a7e9f71d807/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.607 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:45 compute-0 podman[302214]: 2025-10-02 08:25:45.618512082 +0000 UTC m=+0.166826259 container init e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:25:45 compute-0 podman[302214]: 2025-10-02 08:25:45.624047811 +0000 UTC m=+0.172361948 container start e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.643 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.644 2 INFO nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Took 8.12 seconds to build instance.
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.658 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:45 compute-0 neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1[302270]: [NOTICE]   (302275) : New worker (302277) forked
Oct 02 08:25:45 compute-0 neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1[302270]: [NOTICE]   (302275) : Loading success.
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.866 2 DEBUG nova.compute.manager [req-ad66db1c-a86c-424b-8ed8-b76a278f352c req-68af7302-3165-4a2a-8180-344c5ef61066 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.867 2 DEBUG oslo_concurrency.lockutils [req-ad66db1c-a86c-424b-8ed8-b76a278f352c req-68af7302-3165-4a2a-8180-344c5ef61066 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.867 2 DEBUG oslo_concurrency.lockutils [req-ad66db1c-a86c-424b-8ed8-b76a278f352c req-68af7302-3165-4a2a-8180-344c5ef61066 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.867 2 DEBUG oslo_concurrency.lockutils [req-ad66db1c-a86c-424b-8ed8-b76a278f352c req-68af7302-3165-4a2a-8180-344c5ef61066 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:45 compute-0 nova_compute[260603]: 2025-10-02 08:25:45.867 2 DEBUG nova.compute.manager [req-ad66db1c-a86c-424b-8ed8-b76a278f352c req-68af7302-3165-4a2a-8180-344c5ef61066 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Processing event network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.010 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393531.00869, 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.011 2 INFO nova.compute.manager [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] VM Stopped (Lifecycle Event)
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.043 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393546.04273, 1bd45455-6745-4310-a5a6-f86dd4dcb4ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.043 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] VM Started (Lifecycle Event)
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.045 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.061 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.062 2 DEBUG nova.compute.manager [None req-f44d59de-fbde-4556-be98-5aa6865c23da - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.068 2 INFO nova.virt.libvirt.driver [-] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Instance spawned successfully.
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.069 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.083 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.091 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.098 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.099 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.100 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.100 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.101 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.102 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.114 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.115 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393546.0428393, 1bd45455-6745-4310-a5a6-f86dd4dcb4ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.115 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] VM Paused (Lifecycle Event)
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.150 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.169 2 INFO nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Took 6.02 seconds to spawn the instance on the hypervisor.
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.171 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.175 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393546.0577447, 1bd45455-6745-4310-a5a6-f86dd4dcb4ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.176 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] VM Resumed (Lifecycle Event)
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.204 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.209 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.232 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.244 2 INFO nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Took 7.27 seconds to build instance.
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.260 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.427 2 DEBUG nova.network.neutron [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 02 08:25:46 compute-0 ceph-mon[74477]: pgmap v1332: 305 pgs: 305 active+clean; 213 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 695 KiB/s rd, 8.0 MiB/s wr, 426 op/s
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.444 2 DEBUG oslo_concurrency.lockutils [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.448 2 DEBUG nova.virt.libvirt.vif [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.449 2 DEBUG nova.network.os_vif_util [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.450 2 DEBUG nova.network.os_vif_util [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:46 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.450 2 DEBUG os_vif [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4cab2b24-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4cab2b24-03, col_values=(('external_ids', {'iface-id': '4cab2b24-035a-4dfc-b153-277b0512d9f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:5f:92', 'vm-uuid': 'c28cb03c-6207-4ec5-9156-03252350561c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:46 compute-0 NetworkManager[45129]: <info>  [1759393546.4601] manager: (tap4cab2b24-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.468 2 INFO os_vif [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03')
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.471 2 DEBUG nova.virt.libvirt.vif [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.472 2 DEBUG nova.network.os_vif_util [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.473 2 DEBUG nova.network.os_vif_util [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.476 2 DEBUG nova.virt.libvirt.guest [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:37:5f:92"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <target dev="tap4cab2b24-03"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]: </interface>
Oct 02 08:25:46 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:25:46 compute-0 kernel: tap4cab2b24-03: entered promiscuous mode
Oct 02 08:25:46 compute-0 NetworkManager[45129]: <info>  [1759393546.4903] manager: (tap4cab2b24-03): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Oct 02 08:25:46 compute-0 ovn_controller[152344]: 2025-10-02T08:25:46Z|00249|binding|INFO|Claiming lport 4cab2b24-035a-4dfc-b153-277b0512d9f6 for this chassis.
Oct 02 08:25:46 compute-0 ovn_controller[152344]: 2025-10-02T08:25:46Z|00250|binding|INFO|4cab2b24-035a-4dfc-b153-277b0512d9f6: Claiming fa:16:3e:37:5f:92 10.100.0.14
Oct 02 08:25:46 compute-0 systemd-udevd[302125]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.503 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:5f:92 10.100.0.14'], port_security=['fa:16:3e:37:5f:92 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c28cb03c-6207-4ec5-9156-03252350561c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4cab2b24-035a-4dfc-b153-277b0512d9f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:46 compute-0 NetworkManager[45129]: <info>  [1759393546.5068] device (tap4cab2b24-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.506 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4cab2b24-035a-4dfc-b153-277b0512d9f6 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct 02 08:25:46 compute-0 NetworkManager[45129]: <info>  [1759393546.5076] device (tap4cab2b24-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.509 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:25:46 compute-0 ovn_controller[152344]: 2025-10-02T08:25:46Z|00251|binding|INFO|Setting lport 4cab2b24-035a-4dfc-b153-277b0512d9f6 ovn-installed in OVS
Oct 02 08:25:46 compute-0 ovn_controller[152344]: 2025-10-02T08:25:46Z|00252|binding|INFO|Setting lport 4cab2b24-035a-4dfc-b153-277b0512d9f6 up in Southbound
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.532 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9794b43d-f51b-4c34-8c1e-d2d6bb056bc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.576 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2336b2-f892-4dd7-845c-3f80dfbe8eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.579 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[79673fa9-76c5-43ac-9f61-05ad711b08fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.589 2 DEBUG nova.virt.libvirt.driver [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.589 2 DEBUG nova.virt.libvirt.driver [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.589 2 DEBUG nova.virt.libvirt.driver [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:02:cf:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.590 2 DEBUG nova.virt.libvirt.driver [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:37:5f:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.613 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a59546e9-d0d1-4f80-b465-4c9dee17da7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.614 2 DEBUG nova.virt.libvirt.guest [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-458247204</nova:name>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:25:46</nova:creationTime>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:port uuid="8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627">
Oct 02 08:25:46 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     <nova:port uuid="4cab2b24-035a-4dfc-b153-277b0512d9f6">
Oct 02 08:25:46 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:25:46 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:25:46 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:25:46 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:25:46 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.634 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[561e19bd-4040-4374-9785-48e001143c44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439746, 'reachable_time': 37526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302300, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.639 2 DEBUG oslo_concurrency.lockutils [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-c28cb03c-6207-4ec5-9156-03252350561c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.656 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2b8f32-4320-449a-aaac-b505229f5398]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439758, 'tstamp': 439758}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302301, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439762, 'tstamp': 439762}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302301, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.659 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.662 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.663 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.663 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:46.664 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 213 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 7.1 MiB/s wr, 193 op/s
Oct 02 08:25:46 compute-0 nova_compute[260603]: 2025-10-02 08:25:46.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 02 08:25:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 02 08:25:47 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 02 08:25:47 compute-0 ceph-mon[74477]: osdmap e194: 3 total, 3 up, 3 in
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.912 2 DEBUG nova.compute.manager [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received event network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.912 2 DEBUG oslo_concurrency.lockutils [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.913 2 DEBUG oslo_concurrency.lockutils [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.913 2 DEBUG oslo_concurrency.lockutils [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.913 2 DEBUG nova.compute.manager [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] No waiting events found dispatching network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.913 2 WARNING nova.compute.manager [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received unexpected event network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 for instance with vm_state active and task_state None.
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.913 2 DEBUG nova.compute.manager [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-changed-4cab2b24-035a-4dfc-b153-277b0512d9f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.913 2 DEBUG nova.compute.manager [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing instance network info cache due to event network-changed-4cab2b24-035a-4dfc-b153-277b0512d9f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.914 2 DEBUG oslo_concurrency.lockutils [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.914 2 DEBUG oslo_concurrency.lockutils [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:47 compute-0 nova_compute[260603]: 2025-10-02 08:25:47.914 2 DEBUG nova.network.neutron [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing network info cache for port 4cab2b24-035a-4dfc-b153-277b0512d9f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.418 2 DEBUG nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.418 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.418 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.419 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.419 2 DEBUG nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] No waiting events found dispatching network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.419 2 WARNING nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received unexpected event network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 for instance with vm_state active and task_state None.
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.419 2 DEBUG nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.419 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.419 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.419 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.420 2 DEBUG nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] No waiting events found dispatching network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.420 2 WARNING nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received unexpected event network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 for instance with vm_state active and task_state None.
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.420 2 DEBUG nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.420 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.420 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.420 2 DEBUG oslo_concurrency.lockutils [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.420 2 DEBUG nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] No waiting events found dispatching network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.421 2 WARNING nova.compute.manager [req-e96fd6b3-ffb6-46b4-8f9e-1bbe0c592202 req-c0a39cd2-09bc-4612-941d-c8eeed2c757d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received unexpected event network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 for instance with vm_state active and task_state None.
Oct 02 08:25:48 compute-0 ceph-mon[74477]: pgmap v1334: 305 pgs: 305 active+clean; 213 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 7.1 MiB/s wr, 193 op/s
Oct 02 08:25:48 compute-0 ceph-mon[74477]: osdmap e195: 3 total, 3 up, 3 in
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.822 2 DEBUG nova.compute.manager [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:48 compute-0 ovn_controller[152344]: 2025-10-02T08:25:48Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:5f:92 10.100.0.14
Oct 02 08:25:48 compute-0 ovn_controller[152344]: 2025-10-02T08:25:48Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:5f:92 10.100.0.14
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.867 2 INFO nova.compute.manager [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] instance snapshotting
Oct 02 08:25:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 214 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.1 MiB/s wr, 489 op/s
Oct 02 08:25:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:48.914 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:48.915 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:25:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:48.915 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:48 compute-0 nova_compute[260603]: 2025-10-02 08:25:48.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:49 compute-0 podman[302302]: 2025-10-02 08:25:49.023997094 +0000 UTC m=+0.086615603 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.170 2 INFO nova.virt.libvirt.driver [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Beginning live snapshot process
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.340 2 DEBUG nova.virt.libvirt.imagebackend [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.532 2 DEBUG oslo_concurrency.lockutils [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-c28cb03c-6207-4ec5-9156-03252350561c-4cab2b24-035a-4dfc-b153-277b0512d9f6" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.532 2 DEBUG oslo_concurrency.lockutils [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-c28cb03c-6207-4ec5-9156-03252350561c-4cab2b24-035a-4dfc-b153-277b0512d9f6" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.547 2 DEBUG nova.objects.instance [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.567 2 DEBUG nova.storage.rbd_utils [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(2756003189b044988f96d332563f3d39) on rbd image(9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.600 2 DEBUG nova.virt.libvirt.vif [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.600 2 DEBUG nova.network.os_vif_util [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.601 2 DEBUG nova.network.os_vif_util [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.605 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:5f:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4cab2b24-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.607 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:5f:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4cab2b24-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.610 2 DEBUG nova.virt.libvirt.driver [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Attempting to detach device tap4cab2b24-03 from instance c28cb03c-6207-4ec5-9156-03252350561c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.610 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:37:5f:92"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <target dev="tap4cab2b24-03"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]: </interface>
Oct 02 08:25:49 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.616 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:5f:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4cab2b24-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.628 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:37:5f:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4cab2b24-03"/></interface>not found in domain: <domain type='kvm' id='38'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <name>instance-00000022</name>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <uuid>c28cb03c-6207-4ec5-9156-03252350561c</uuid>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-458247204</nova:name>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:25:46</nova:creationTime>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:port uuid="8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627">
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:port uuid="4cab2b24-035a-4dfc-b153-277b0512d9f6">
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:25:49 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='serial'>c28cb03c-6207-4ec5-9156-03252350561c</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='uuid'>c28cb03c-6207-4ec5-9156-03252350561c</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/c28cb03c-6207-4ec5-9156-03252350561c_disk' index='2'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/c28cb03c-6207-4ec5-9156-03252350561c_disk.config' index='1'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:02:cf:d8'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target dev='tap8b4ce2c0-9e'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:37:5f:92'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target dev='tap4cab2b24-03'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='net1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <source path='/dev/pts/3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/console.log' append='off'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </target>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/3'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <source path='/dev/pts/3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/console.log' append='off'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </console>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </input>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </input>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </input>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c536,c945</label>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c945</imagelabel>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:25:49 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:49 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.628 2 INFO nova.virt.libvirt.driver [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap4cab2b24-03 from instance c28cb03c-6207-4ec5-9156-03252350561c from the persistent domain config.
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.629 2 DEBUG nova.virt.libvirt.driver [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] (1/8): Attempting to detach device tap4cab2b24-03 with device alias net1 from instance c28cb03c-6207-4ec5-9156-03252350561c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.629 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:37:5f:92"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <target dev="tap4cab2b24-03"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]: </interface>
Oct 02 08:25:49 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:25:49 compute-0 kernel: tap4cab2b24-03 (unregistering): left promiscuous mode
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.748 2 DEBUG nova.network.neutron [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updated VIF entry in instance network info cache for port 4cab2b24-035a-4dfc-b153-277b0512d9f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.749 2 DEBUG nova.network.neutron [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:49 compute-0 NetworkManager[45129]: <info>  [1759393549.7612] device (tap4cab2b24-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:49 compute-0 ovn_controller[152344]: 2025-10-02T08:25:49Z|00253|binding|INFO|Releasing lport 4cab2b24-035a-4dfc-b153-277b0512d9f6 from this chassis (sb_readonly=0)
Oct 02 08:25:49 compute-0 ovn_controller[152344]: 2025-10-02T08:25:49Z|00254|binding|INFO|Setting lport 4cab2b24-035a-4dfc-b153-277b0512d9f6 down in Southbound
Oct 02 08:25:49 compute-0 ovn_controller[152344]: 2025-10-02T08:25:49Z|00255|binding|INFO|Removing iface tap4cab2b24-03 ovn-installed in OVS
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.776 2 DEBUG oslo_concurrency.lockutils [req-c588b1bb-7d73-4f85-91e0-8d22af90d0b4 req-bcaa0a7e-f414-42e1-a282-24a478f6c181 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.780 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759393549.7793086, c28cb03c-6207-4ec5-9156-03252350561c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.784 2 DEBUG nova.virt.libvirt.driver [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Start waiting for the detach event from libvirt for device tap4cab2b24-03 with device alias net1 for instance c28cb03c-6207-4ec5-9156-03252350561c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.784 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:5f:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4cab2b24-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.785 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:5f:92 10.100.0.14'], port_security=['fa:16:3e:37:5f:92 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c28cb03c-6207-4ec5-9156-03252350561c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4cab2b24-035a-4dfc-b153-277b0512d9f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.786 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4cab2b24-035a-4dfc-b153-277b0512d9f6 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.788 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.798 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:37:5f:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4cab2b24-03"/></interface>not found in domain: <domain type='kvm' id='38'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <name>instance-00000022</name>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <uuid>c28cb03c-6207-4ec5-9156-03252350561c</uuid>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-458247204</nova:name>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:25:46</nova:creationTime>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:port uuid="8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627">
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:port uuid="4cab2b24-035a-4dfc-b153-277b0512d9f6">
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:25:49 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <system>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='serial'>c28cb03c-6207-4ec5-9156-03252350561c</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='uuid'>c28cb03c-6207-4ec5-9156-03252350561c</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </system>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <os>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </os>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <features>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </features>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/c28cb03c-6207-4ec5-9156-03252350561c_disk' index='2'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/c28cb03c-6207-4ec5-9156-03252350561c_disk.config' index='1'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </source>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:02:cf:d8'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target dev='tap8b4ce2c0-9e'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <source path='/dev/pts/3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/console.log' append='off'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       </target>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/3'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <source path='/dev/pts/3'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/console.log' append='off'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </console>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </input>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </input>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </input>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <video>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </video>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c536,c945</label>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c945</imagelabel>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:25:49 compute-0 nova_compute[260603]: </domain>
Oct 02 08:25:49 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.799 2 INFO nova.virt.libvirt.driver [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap4cab2b24-03 from instance c28cb03c-6207-4ec5-9156-03252350561c from the live domain config.
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.800 2 DEBUG nova.virt.libvirt.vif [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.800 2 DEBUG nova.network.os_vif_util [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.801 2 DEBUG nova.network.os_vif_util [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.801 2 DEBUG os_vif [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cab2b24-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c78a49-eaa9-48bb-bef2-5bc9985db951]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.832 2 INFO os_vif [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03')
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.833 2 DEBUG nova.virt.libvirt.guest [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-458247204</nova:name>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:25:49</nova:creationTime>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     <nova:port uuid="8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627">
Oct 02 08:25:49 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:25:49 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:25:49 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:25:49 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:25:49 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.865 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ec097462-1c67-410d-ba3b-f72c8c35cc1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.869 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7d0df2-baeb-4b10-a106-36d3d5eb74a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.912 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[51abd0de-a70e-4b2f-98fd-445f0d2507bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.933 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcb476a-ea44-4d17-b8df-76386b6d62d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439746, 'reachable_time': 37526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302384, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.955 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b77e724-9988-4235-8c75-08aa1a6f1beb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439758, 'tstamp': 439758}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302385, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439762, 'tstamp': 439762}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302385, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.956 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:49 compute-0 nova_compute[260603]: 2025-10-02 08:25:49.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.960 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.960 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.961 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:49.961 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.162 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393535.1618264, c00d9d50-5c81-4dc9-8316-c654d4802b4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.163 2 INFO nova.compute.manager [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] VM Stopped (Lifecycle Event)
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.187 2 DEBUG nova.compute.manager [None req-656471b5-e90d-473f-a8c2-667a42231e10 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.303 2 DEBUG nova.compute.manager [req-68f8e9e8-2564-469c-9893-7d944a4f5144 req-7815d8e4-7e66-4b98-be02-8b1675bc0d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.304 2 DEBUG nova.compute.manager [req-68f8e9e8-2564-469c-9893-7d944a4f5144 req-7815d8e4-7e66-4b98-be02-8b1675bc0d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing instance network info cache due to event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.304 2 DEBUG oslo_concurrency.lockutils [req-68f8e9e8-2564-469c-9893-7d944a4f5144 req-7815d8e4-7e66-4b98-be02-8b1675bc0d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.304 2 DEBUG oslo_concurrency.lockutils [req-68f8e9e8-2564-469c-9893-7d944a4f5144 req-7815d8e4-7e66-4b98-be02-8b1675bc0d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.305 2 DEBUG nova.network.neutron [req-68f8e9e8-2564-469c-9893-7d944a4f5144 req-7815d8e4-7e66-4b98-be02-8b1675bc0d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:25:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 02 08:25:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 02 08:25:50 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 02 08:25:50 compute-0 ceph-mon[74477]: pgmap v1336: 305 pgs: 305 active+clean; 214 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.1 MiB/s wr, 489 op/s
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.505 2 DEBUG nova.compute.manager [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-unplugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.506 2 DEBUG oslo_concurrency.lockutils [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.506 2 DEBUG oslo_concurrency.lockutils [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.506 2 DEBUG oslo_concurrency.lockutils [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.506 2 DEBUG nova.compute.manager [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] No waiting events found dispatching network-vif-unplugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.506 2 WARNING nova.compute.manager [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received unexpected event network-vif-unplugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 for instance with vm_state active and task_state None.
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.506 2 DEBUG nova.compute.manager [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.507 2 DEBUG oslo_concurrency.lockutils [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.507 2 DEBUG oslo_concurrency.lockutils [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.507 2 DEBUG oslo_concurrency.lockutils [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.507 2 DEBUG nova.compute.manager [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] No waiting events found dispatching network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.507 2 WARNING nova.compute.manager [req-6acc2dca-17c3-4dbb-a570-f246d73eb5ea req-b0967a54-c689-4799-a2b9-d86b02b54b5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received unexpected event network-vif-plugged-4cab2b24-035a-4dfc-b153-277b0512d9f6 for instance with vm_state active and task_state None.
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.546 2 DEBUG nova.storage.rbd_utils [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning vms/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk@2756003189b044988f96d332563f3d39 to images/ac88df5a-95be-45fb-8821-71f463df7a31 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.629 2 DEBUG nova.storage.rbd_utils [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] flattening images/ac88df5a-95be-45fb-8821-71f463df7a31 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.815 2 DEBUG nova.storage.rbd_utils [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(2756003189b044988f96d332563f3d39) on rbd image(9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.835 2 DEBUG oslo_concurrency.lockutils [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.836 2 DEBUG oslo_concurrency.lockutils [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:25:50 compute-0 nova_compute[260603]: 2025-10-02 08:25:50.836 2 DEBUG nova.network.neutron [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:25:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 214 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 63 KiB/s wr, 399 op/s
Oct 02 08:25:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 02 08:25:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 02 08:25:51 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 02 08:25:51 compute-0 ceph-mon[74477]: osdmap e196: 3 total, 3 up, 3 in
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.540 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.541 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.541 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.541 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.542 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.543 2 INFO nova.compute.manager [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Terminating instance
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.544 2 DEBUG nova.compute.manager [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.557 2 DEBUG nova.storage.rbd_utils [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(snap) on rbd image(ac88df5a-95be-45fb-8821-71f463df7a31) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:25:51 compute-0 kernel: tap8b4ce2c0-9e (unregistering): left promiscuous mode
Oct 02 08:25:51 compute-0 NetworkManager[45129]: <info>  [1759393551.6021] device (tap8b4ce2c0-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:25:51 compute-0 ovn_controller[152344]: 2025-10-02T08:25:51Z|00256|binding|INFO|Releasing lport 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 from this chassis (sb_readonly=0)
Oct 02 08:25:51 compute-0 ovn_controller[152344]: 2025-10-02T08:25:51Z|00257|binding|INFO|Setting lport 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 down in Southbound
Oct 02 08:25:51 compute-0 ovn_controller[152344]: 2025-10-02T08:25:51Z|00258|binding|INFO|Removing iface tap8b4ce2c0-9e ovn-installed in OVS
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.650 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:cf:d8 10.100.0.3'], port_security=['fa:16:3e:02:cf:d8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c28cb03c-6207-4ec5-9156-03252350561c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c4b338b0-15ff-4ccb-801c-e865bb41224d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.651 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.652 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa1bff6d-19fb-4792-a261-4da1165d95a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.653 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a11e0363-5335-411b-bdab-6fb7c45a057c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.654 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace which is not needed anymore
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:51 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 02 08:25:51 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 13.905s CPU time.
Oct 02 08:25:51 compute-0 systemd-machined[214636]: Machine qemu-38-instance-00000022 terminated.
Oct 02 08:25:51 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [NOTICE]   (300465) : haproxy version is 2.8.14-c23fe91
Oct 02 08:25:51 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [NOTICE]   (300465) : path to executable is /usr/sbin/haproxy
Oct 02 08:25:51 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [WARNING]  (300465) : Exiting Master process...
Oct 02 08:25:51 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [WARNING]  (300465) : Exiting Master process...
Oct 02 08:25:51 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [ALERT]    (300465) : Current worker (300467) exited with code 143 (Terminated)
Oct 02 08:25:51 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [WARNING]  (300465) : All workers exited. Exiting... (0)
Oct 02 08:25:51 compute-0 systemd[1]: libpod-4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d.scope: Deactivated successfully.
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.777 2 INFO nova.virt.libvirt.driver [-] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance destroyed successfully.
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.777 2 DEBUG nova.objects.instance [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'resources' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:51 compute-0 podman[302497]: 2025-10-02 08:25:51.783931821 +0000 UTC m=+0.052416771 container died 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.794 2 DEBUG nova.virt.libvirt.vif [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.795 2 DEBUG nova.network.os_vif_util [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.796 2 DEBUG nova.network.os_vif_util [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.796 2 DEBUG os_vif [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b4ce2c0-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.804 2 INFO os_vif [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e')
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.805 2 DEBUG nova.virt.libvirt.vif [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.806 2 DEBUG nova.network.os_vif_util [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "address": "fa:16:3e:37:5f:92", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cab2b24-03", "ovs_interfaceid": "4cab2b24-035a-4dfc-b153-277b0512d9f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.806 2 DEBUG nova.network.os_vif_util [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.807 2 DEBUG os_vif [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cab2b24-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d-userdata-shm.mount: Deactivated successfully.
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.810 2 INFO os_vif [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:5f:92,bridge_name='br-int',has_traffic_filtering=True,id=4cab2b24-035a-4dfc-b153-277b0512d9f6,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cab2b24-03')
Oct 02 08:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9b0303c5a3bf6706e7d4ed7fdb28ee9eeead9f26ad1ee83702d2c74668a46e0-merged.mount: Deactivated successfully.
Oct 02 08:25:51 compute-0 podman[302497]: 2025-10-02 08:25:51.821973668 +0000 UTC m=+0.090458608 container cleanup 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:25:51 compute-0 systemd[1]: libpod-conmon-4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d.scope: Deactivated successfully.
Oct 02 08:25:51 compute-0 podman[302550]: 2025-10-02 08:25:51.87538217 +0000 UTC m=+0.033717978 container remove 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.881 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[db301193-d6f5-49e9-9bd3-c26c866beb80]: (4, ('Thu Oct  2 08:25:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d)\n4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d\nThu Oct  2 08:25:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d)\n4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.883 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b41c2e-028a-40c5-8f3d-d8ee197b3ec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.883 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:51 compute-0 kernel: tapfa1bff6d-10: left promiscuous mode
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.892 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[040aacb0-bd65-4634-809c-892e2a53731d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:51 compute-0 nova_compute[260603]: 2025-10-02 08:25:51.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.916 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5218cd41-85fb-48e8-a853-a6aefa72a5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.917 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55cc0738-9c35-48b5-91de-ad2c4b0331e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.935 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1916a916-5464-4a86-86f3-fd6bae99eb65]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439735, 'reachable_time': 24931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302567, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.938 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:25:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:51.938 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[22b51b3a-6fcb-4de8-b65c-71c6374cb5ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dfa1bff6d\x2d19fb\x2d4792\x2da261\x2d4da1165d95a1.mount: Deactivated successfully.
Oct 02 08:25:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 02 08:25:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 02 08:25:52 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.068 2 DEBUG nova.network.neutron [req-68f8e9e8-2564-469c-9893-7d944a4f5144 req-7815d8e4-7e66-4b98-be02-8b1675bc0d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updated VIF entry in instance network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.070 2 DEBUG nova.network.neutron [req-68f8e9e8-2564-469c-9893-7d944a4f5144 req-7815d8e4-7e66-4b98-be02-8b1675bc0d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.099 2 DEBUG oslo_concurrency.lockutils [req-68f8e9e8-2564-469c-9893-7d944a4f5144 req-7815d8e4-7e66-4b98-be02-8b1675bc0d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.169 2 INFO nova.virt.libvirt.driver [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Deleting instance files /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c_del
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.170 2 INFO nova.virt.libvirt.driver [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Deletion of /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c_del complete
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.246 2 INFO nova.compute.manager [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.247 2 DEBUG oslo.service.loopingcall [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.247 2 DEBUG nova.compute.manager [-] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.247 2 DEBUG nova.network.neutron [-] [instance: c28cb03c-6207-4ec5-9156-03252350561c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image ac88df5a-95be-45fb-8821-71f463df7a31 could not be found.
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID ac88df5a-95be-45fb-8821-71f463df7a31
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver 
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver 
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image ac88df5a-95be-45fb-8821-71f463df7a31 could not be found.
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.370 2 ERROR nova.virt.libvirt.driver 
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.433 2 DEBUG nova.storage.rbd_utils [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(snap) on rbd image(ac88df5a-95be-45fb-8821-71f463df7a31) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:25:52 compute-0 ceph-mon[74477]: pgmap v1338: 305 pgs: 305 active+clean; 214 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 63 KiB/s wr, 399 op/s
Oct 02 08:25:52 compute-0 ceph-mon[74477]: osdmap e197: 3 total, 3 up, 3 in
Oct 02 08:25:52 compute-0 ceph-mon[74477]: osdmap e198: 3 total, 3 up, 3 in
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.563 2 DEBUG nova.compute.manager [req-de73eced-0ec8-4692-9535-8d6b7f0a9a97 req-5fd92a43-7920-491d-8387-e597389f8872 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-deleted-4cab2b24-035a-4dfc-b153-277b0512d9f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.564 2 INFO nova.compute.manager [req-de73eced-0ec8-4692-9535-8d6b7f0a9a97 req-5fd92a43-7920-491d-8387-e597389f8872 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Neutron deleted interface 4cab2b24-035a-4dfc-b153-277b0512d9f6; detaching it from the instance and deleting it from the info cache
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.564 2 DEBUG nova.network.neutron [req-de73eced-0ec8-4692-9535-8d6b7f0a9a97 req-5fd92a43-7920-491d-8387-e597389f8872 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.585 2 DEBUG nova.compute.manager [req-de73eced-0ec8-4692-9535-8d6b7f0a9a97 req-5fd92a43-7920-491d-8387-e597389f8872 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Detach interface failed, port_id=4cab2b24-035a-4dfc-b153-277b0512d9f6, reason: Instance c28cb03c-6207-4ec5-9156-03252350561c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.639 2 DEBUG nova.compute.manager [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-unplugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.640 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.640 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.640 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.640 2 DEBUG nova.compute.manager [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] No waiting events found dispatching network-vif-unplugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.641 2 DEBUG nova.compute.manager [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-unplugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.641 2 DEBUG nova.compute.manager [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.641 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.642 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.642 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.642 2 DEBUG nova.compute.manager [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] No waiting events found dispatching network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.642 2 WARNING nova.compute.manager [req-6b3e061b-962f-499e-b404-b17e173cecf8 req-d44b3b4b-df6f-42d0-b050-6c0e8b5607c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received unexpected event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 for instance with vm_state active and task_state deleting.
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.697 2 INFO nova.network.neutron [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Port 4cab2b24-035a-4dfc-b153-277b0512d9f6 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.697 2 DEBUG nova.network.neutron [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.709 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393537.7079403, aebef537-a40c-45aa-98b5-ebdd7c27028b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.709 2 INFO nova.compute.manager [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] VM Stopped (Lifecycle Event)
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.714 2 DEBUG oslo_concurrency.lockutils [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.733 2 DEBUG nova.compute.manager [None req-7bd7ef8e-6171-4c14-8913-5a6b1c5e661b - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:52 compute-0 nova_compute[260603]: 2025-10-02 08:25:52.735 2 DEBUG oslo_concurrency.lockutils [None req-e6dca448-aef6-4dd0-a4a9-1ed2024eeaaf 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-c28cb03c-6207-4ec5-9156-03252350561c-4cab2b24-035a-4dfc-b153-277b0512d9f6" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 195 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 342 op/s
Oct 02 08:25:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 02 08:25:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 02 08:25:53 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 02 08:25:53 compute-0 nova_compute[260603]: 2025-10-02 08:25:53.508 2 WARNING nova.compute.manager [None req-5d27a626-9ec0-425f-af7e-589c5d002886 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Image not found during snapshot: nova.exception.ImageNotFound: Image ac88df5a-95be-45fb-8821-71f463df7a31 could not be found.
Oct 02 08:25:53 compute-0 nova_compute[260603]: 2025-10-02 08:25:53.903 2 DEBUG nova.network.neutron [-] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:53 compute-0 nova_compute[260603]: 2025-10-02 08:25:53.926 2 INFO nova.compute.manager [-] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Took 1.68 seconds to deallocate network for instance.
Oct 02 08:25:53 compute-0 nova_compute[260603]: 2025-10-02 08:25:53.977 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:53 compute-0 nova_compute[260603]: 2025-10-02 08:25:53.978 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:54 compute-0 ceph-mon[74477]: pgmap v1341: 305 pgs: 305 active+clean; 195 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 342 op/s
Oct 02 08:25:54 compute-0 ceph-mon[74477]: osdmap e199: 3 total, 3 up, 3 in
Oct 02 08:25:54 compute-0 nova_compute[260603]: 2025-10-02 08:25:54.064 2 DEBUG oslo_concurrency.processutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/237985515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:54 compute-0 nova_compute[260603]: 2025-10-02 08:25:54.591 2 DEBUG oslo_concurrency.processutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:54 compute-0 nova_compute[260603]: 2025-10-02 08:25:54.598 2 DEBUG nova.compute.provider_tree [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:54 compute-0 nova_compute[260603]: 2025-10-02 08:25:54.612 2 DEBUG nova.scheduler.client.report [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:54 compute-0 nova_compute[260603]: 2025-10-02 08:25:54.643 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:54 compute-0 nova_compute[260603]: 2025-10-02 08:25:54.691 2 INFO nova.scheduler.client.report [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Deleted allocations for instance c28cb03c-6207-4ec5-9156-03252350561c
Oct 02 08:25:54 compute-0 nova_compute[260603]: 2025-10-02 08:25:54.783 2 DEBUG oslo_concurrency.lockutils [None req-5b65074a-0a0d-4354-95e9-7373c4a56491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:54 compute-0 nova_compute[260603]: 2025-10-02 08:25:54.812 2 DEBUG nova.compute.manager [req-bb76168e-49da-41d5-904b-da6d8e3c8681 req-f9d35ce0-935b-4b77-beb9-41f9f34af0f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-deleted-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 171 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 259 op/s
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.054702) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393555054774, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2414, "num_deletes": 274, "total_data_size": 3400980, "memory_usage": 3453344, "flush_reason": "Manual Compaction"}
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 02 08:25:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/237985515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393555072565, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3342720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25664, "largest_seqno": 28077, "table_properties": {"data_size": 3331525, "index_size": 7305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23822, "raw_average_key_size": 21, "raw_value_size": 3309054, "raw_average_value_size": 2991, "num_data_blocks": 316, "num_entries": 1106, "num_filter_entries": 1106, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393394, "oldest_key_time": 1759393394, "file_creation_time": 1759393555, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 17922 microseconds, and 12337 cpu microseconds.
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.072622) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3342720 bytes OK
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.072648) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.074221) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.074240) EVENT_LOG_v1 {"time_micros": 1759393555074234, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.074263) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3390540, prev total WAL file size 3390540, number of live WAL files 2.
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.075522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3264KB)], [59(6849KB)]
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393555075551, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10356356, "oldest_snapshot_seqno": -1}
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5296 keys, 8650098 bytes, temperature: kUnknown
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393555114890, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8650098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8612906, "index_size": 22874, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 131663, "raw_average_key_size": 24, "raw_value_size": 8515726, "raw_average_value_size": 1607, "num_data_blocks": 938, "num_entries": 5296, "num_filter_entries": 5296, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393555, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.115058) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8650098 bytes
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.116138) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.9 rd, 219.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.7 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 5839, records dropped: 543 output_compression: NoCompression
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.116151) EVENT_LOG_v1 {"time_micros": 1759393555116144, "job": 32, "event": "compaction_finished", "compaction_time_micros": 39393, "compaction_time_cpu_micros": 19101, "output_level": 6, "num_output_files": 1, "total_output_size": 8650098, "num_input_records": 5839, "num_output_records": 5296, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393555116683, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393555117633, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.075475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.117657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.117661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.117662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.117664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:25:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:25:55.117665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.486 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.487 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.488 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.488 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.488 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.490 2 INFO nova.compute.manager [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Terminating instance
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.491 2 DEBUG nova.compute.manager [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:25:55 compute-0 kernel: tap9bdc55f5-2b (unregistering): left promiscuous mode
Oct 02 08:25:55 compute-0 NetworkManager[45129]: <info>  [1759393555.5630] device (tap9bdc55f5-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:25:55 compute-0 ovn_controller[152344]: 2025-10-02T08:25:55Z|00259|binding|INFO|Releasing lport 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 from this chassis (sb_readonly=0)
Oct 02 08:25:55 compute-0 ovn_controller[152344]: 2025-10-02T08:25:55Z|00260|binding|INFO|Setting lport 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 down in Southbound
Oct 02 08:25:55 compute-0 ovn_controller[152344]: 2025-10-02T08:25:55Z|00261|binding|INFO|Removing iface tap9bdc55f5-2b ovn-installed in OVS
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.581 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:3f:d3 10.100.0.11'], port_security=['fa:16:3e:88:3f:d3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9bdc55f5-2bf2-467e-9e1e-33215451c0c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.583 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.585 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.587 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff3f32a-7b2c-4422-ad24-ba98f6b2f4bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.590 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:55 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Oct 02 08:25:55 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 11.932s CPU time.
Oct 02 08:25:55 compute-0 systemd-machined[214636]: Machine qemu-40-instance-00000024 terminated.
Oct 02 08:25:55 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[302076]: [NOTICE]   (302095) : haproxy version is 2.8.14-c23fe91
Oct 02 08:25:55 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[302076]: [NOTICE]   (302095) : path to executable is /usr/sbin/haproxy
Oct 02 08:25:55 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[302076]: [WARNING]  (302095) : Exiting Master process...
Oct 02 08:25:55 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[302076]: [ALERT]    (302095) : Current worker (302099) exited with code 143 (Terminated)
Oct 02 08:25:55 compute-0 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[302076]: [WARNING]  (302095) : All workers exited. Exiting... (0)
Oct 02 08:25:55 compute-0 systemd[1]: libpod-2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1.scope: Deactivated successfully.
Oct 02 08:25:55 compute-0 conmon[302076]: conmon 2b59eb035a6f038d0fed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1.scope/container/memory.events
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.737 2 INFO nova.virt.libvirt.driver [-] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Instance destroyed successfully.
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.738 2 DEBUG nova.objects.instance [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:25:55 compute-0 podman[302652]: 2025-10-02 08:25:55.740347956 +0000 UTC m=+0.057953049 container died 2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.755 2 DEBUG nova.virt.libvirt.vif [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1285601584',display_name='tempest-ImagesTestJSON-server-1285601584',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1285601584',id=36,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-ac1xe9vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:53Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.755 2 DEBUG nova.network.os_vif_util [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.756 2 DEBUG nova.network.os_vif_util [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.756 2 DEBUG os_vif [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.758 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bdc55f5-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.810 2 INFO os_vif [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b')
Oct 02 08:25:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1-userdata-shm.mount: Deactivated successfully.
Oct 02 08:25:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-36154f0e228e572add5335235d684e5fd88c9ffe8e89720ad40ace7d42c7189d-merged.mount: Deactivated successfully.
Oct 02 08:25:55 compute-0 podman[302652]: 2025-10-02 08:25:55.829675016 +0000 UTC m=+0.147280109 container cleanup 2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:25:55 compute-0 systemd[1]: libpod-conmon-2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1.scope: Deactivated successfully.
Oct 02 08:25:55 compute-0 podman[302707]: 2025-10-02 08:25:55.896705938 +0000 UTC m=+0.048021880 container remove 2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.904 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa35cd4-99e9-4fe9-b80c-4b3cc1732fb0]: (4, ('Thu Oct  2 08:25:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1)\n2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1\nThu Oct  2 08:25:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1)\n2b59eb035a6f038d0fed61bdff4c197b5cdbbb86fce74b906f7368d6079bdeb1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.906 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[525df47f-a479-4203-b5f7-e05067a70d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.908 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:55 compute-0 kernel: tap897d7abf-90: left promiscuous mode
Oct 02 08:25:55 compute-0 nova_compute[260603]: 2025-10-02 08:25:55.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.941 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4ee666-3d43-4728-81f4-9c96c6f6763c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.972 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85bffaae-89f3-4460-a207-bea95a941596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.973 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3501cea-0c9c-4a6b-8889-b20860f3f22d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.993 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[681d583e-d154-4257-a259-f3fd3d47f73a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442222, 'reachable_time': 37161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302725, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.996 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:25:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:25:55.996 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cbfa3e-e88b-4277-a96a-bd82df8a4f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:25:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.019 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393541.017595, 9ea31984-a45e-4154-9df9-3c4e8ce69309 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.019 2 INFO nova.compute.manager [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] VM Stopped (Lifecycle Event)
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.049 2 DEBUG nova.compute.manager [None req-8ce44a66-4353-4244-8783-f5f621893a96 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:25:56 compute-0 ceph-mon[74477]: pgmap v1343: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 171 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 259 op/s
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.177 2 INFO nova.virt.libvirt.driver [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Deleting instance files /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_del
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.178 2 INFO nova.virt.libvirt.driver [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Deletion of /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_del complete
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.228 2 INFO nova.compute.manager [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.229 2 DEBUG oslo.service.loopingcall [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.230 2 DEBUG nova.compute.manager [-] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.230 2 DEBUG nova.network.neutron [-] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.861 2 DEBUG nova.network.neutron [-] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:25:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 171 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.882 2 INFO nova.compute.manager [-] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Took 0.65 seconds to deallocate network for instance.
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.946 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.947 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:56 compute-0 nova_compute[260603]: 2025-10-02 08:25:56.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.021 2 DEBUG nova.compute.manager [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received event network-vif-unplugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.022 2 DEBUG oslo_concurrency.lockutils [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.023 2 DEBUG oslo_concurrency.lockutils [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.023 2 DEBUG oslo_concurrency.lockutils [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.024 2 DEBUG nova.compute.manager [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] No waiting events found dispatching network-vif-unplugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.025 2 WARNING nova.compute.manager [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received unexpected event network-vif-unplugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 for instance with vm_state deleted and task_state None.
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.025 2 DEBUG nova.compute.manager [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received event network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.026 2 DEBUG oslo_concurrency.lockutils [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.027 2 DEBUG oslo_concurrency.lockutils [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.027 2 DEBUG oslo_concurrency.lockutils [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.028 2 DEBUG nova.compute.manager [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] No waiting events found dispatching network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.029 2 WARNING nova.compute.manager [req-56edca81-30b4-488c-b4f8-10d8753d9cf8 req-66e5d7d8-a001-4633-8918-4bfc8a2c7040 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received unexpected event network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 for instance with vm_state deleted and task_state None.
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.031 2 DEBUG oslo_concurrency.processutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:25:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:25:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 02 08:25:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 02 08:25:57 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 02 08:25:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:25:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562411001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.509 2 DEBUG oslo_concurrency.processutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.518 2 DEBUG nova.compute.provider_tree [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.538 2 DEBUG nova.scheduler.client.report [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.567 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.599 2 INFO nova.scheduler.client.report [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a
Oct 02 08:25:57 compute-0 nova_compute[260603]: 2025-10-02 08:25:57.685 2 DEBUG oslo_concurrency.lockutils [None req-037ef81b-37d9-400c-a8fa-2d01369e59ec 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:25:58 compute-0 ceph-mon[74477]: pgmap v1344: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 171 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Oct 02 08:25:58 compute-0 ceph-mon[74477]: osdmap e200: 3 total, 3 up, 3 in
Oct 02 08:25:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/562411001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:25:58 compute-0 ovn_controller[152344]: 2025-10-02T08:25:58Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:9e:33 10.100.0.13
Oct 02 08:25:58 compute-0 ovn_controller[152344]: 2025-10-02T08:25:58Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:9e:33 10.100.0.13
Oct 02 08:25:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 113 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.6 MiB/s wr, 301 op/s
Oct 02 08:25:59 compute-0 nova_compute[260603]: 2025-10-02 08:25:59.170 2 DEBUG nova.compute.manager [req-8a545321-9548-4af5-ade3-72a267700469 req-55008cf9-4ea2-49b3-8805-f11717bd0387 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received event network-vif-deleted-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:00 compute-0 ceph-mon[74477]: pgmap v1346: 305 pgs: 305 active+clean; 113 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.6 MiB/s wr, 301 op/s
Oct 02 08:26:00 compute-0 nova_compute[260603]: 2025-10-02 08:26:00.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 113 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.4 MiB/s wr, 140 op/s
Oct 02 08:26:01 compute-0 sudo[302749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:01 compute-0 sudo[302749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:01 compute-0 sudo[302749]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:01 compute-0 sudo[302774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:26:01 compute-0 sudo[302774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:01 compute-0 sudo[302774]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:01 compute-0 sudo[302799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:01 compute-0 sudo[302799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:01 compute-0 sudo[302799]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:01 compute-0 sudo[302824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:26:01 compute-0 sudo[302824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:01 compute-0 sudo[302824]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:26:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:26:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:26:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:26:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:26:01 compute-0 nova_compute[260603]: 2025-10-02 08:26:01.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:26:01 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 678eaea1-6d68-40f5-bd66-f58ca0cfdaaf does not exist
Oct 02 08:26:01 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 74d921e6-9d90-45a8-a775-4aada8d470b1 does not exist
Oct 02 08:26:01 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3d7a2857-8951-4165-87fd-75962cc72642 does not exist
Oct 02 08:26:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:26:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:26:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:26:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:26:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:26:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:26:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 02 08:26:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 02 08:26:02 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 02 08:26:02 compute-0 ceph-mon[74477]: pgmap v1347: 305 pgs: 305 active+clean; 113 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.4 MiB/s wr, 140 op/s
Oct 02 08:26:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:26:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:26:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:26:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:26:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:26:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:26:02 compute-0 ceph-mon[74477]: osdmap e201: 3 total, 3 up, 3 in
Oct 02 08:26:02 compute-0 sudo[302880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:02 compute-0 sudo[302880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:02 compute-0 sudo[302880]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:02 compute-0 sudo[302905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:26:02 compute-0 sudo[302905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:02 compute-0 sudo[302905]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:02 compute-0 sudo[302930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:02 compute-0 sudo[302930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:02 compute-0 sudo[302930]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:02 compute-0 sudo[302955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:26:02 compute-0 sudo[302955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:02 compute-0 nova_compute[260603]: 2025-10-02 08:26:02.527 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:02 compute-0 nova_compute[260603]: 2025-10-02 08:26:02.527 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:02 compute-0 nova_compute[260603]: 2025-10-02 08:26:02.544 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:26:02 compute-0 nova_compute[260603]: 2025-10-02 08:26:02.595 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:02 compute-0 nova_compute[260603]: 2025-10-02 08:26:02.595 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:02 compute-0 nova_compute[260603]: 2025-10-02 08:26:02.602 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:26:02 compute-0 nova_compute[260603]: 2025-10-02 08:26:02.602 2 INFO nova.compute.claims [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:26:02 compute-0 nova_compute[260603]: 2025-10-02 08:26:02.708 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:02 compute-0 podman[303019]: 2025-10-02 08:26:02.737145421 +0000 UTC m=+0.040722454 container create 66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:26:02 compute-0 systemd[1]: Started libpod-conmon-66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7.scope.
Oct 02 08:26:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:02 compute-0 podman[303019]: 2025-10-02 08:26:02.718997996 +0000 UTC m=+0.022575049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:26:02 compute-0 podman[303019]: 2025-10-02 08:26:02.843361786 +0000 UTC m=+0.146938839 container init 66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:26:02 compute-0 podman[303019]: 2025-10-02 08:26:02.851008932 +0000 UTC m=+0.154585966 container start 66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 08:26:02 compute-0 beautiful_moore[303036]: 167 167
Oct 02 08:26:02 compute-0 systemd[1]: libpod-66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7.scope: Deactivated successfully.
Oct 02 08:26:02 compute-0 conmon[303036]: conmon 66f8665c04d1ee32cfcf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7.scope/container/memory.events
Oct 02 08:26:02 compute-0 podman[303019]: 2025-10-02 08:26:02.865578722 +0000 UTC m=+0.169155775 container attach 66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 08:26:02 compute-0 podman[303019]: 2025-10-02 08:26:02.867729261 +0000 UTC m=+0.171306294 container died 66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:26:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 120 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 539 KiB/s rd, 3.1 MiB/s wr, 160 op/s
Oct 02 08:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e48f5d05946c0d64e08aeaf346bb2ecaac26ae5d07628a9176fafb04a2079f13-merged.mount: Deactivated successfully.
Oct 02 08:26:02 compute-0 podman[303019]: 2025-10-02 08:26:02.931210858 +0000 UTC m=+0.234787921 container remove 66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:26:02 compute-0 systemd[1]: libpod-conmon-66f8665c04d1ee32cfcf367579a914866e5f705b7677f535fcf848323c3826f7.scope: Deactivated successfully.
Oct 02 08:26:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:26:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561353246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.144 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.152 2 DEBUG nova.compute.provider_tree [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:26:03 compute-0 podman[303081]: 2025-10-02 08:26:03.161017398 +0000 UTC m=+0.064749079 container create a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_clarke, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.166 2 DEBUG nova.scheduler.client.report [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.184 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.185 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:26:03 compute-0 systemd[1]: Started libpod-conmon-a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1.scope.
Oct 02 08:26:03 compute-0 podman[303081]: 2025-10-02 08:26:03.135496315 +0000 UTC m=+0.039228046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.230 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.232 2 DEBUG nova.network.neutron [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.254 2 INFO nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:26:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.276 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b049c1a8644f2cfd4c2999dd46a5cc435077ff7bde211625fc51056f9b1de9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b049c1a8644f2cfd4c2999dd46a5cc435077ff7bde211625fc51056f9b1de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b049c1a8644f2cfd4c2999dd46a5cc435077ff7bde211625fc51056f9b1de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b049c1a8644f2cfd4c2999dd46a5cc435077ff7bde211625fc51056f9b1de9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b049c1a8644f2cfd4c2999dd46a5cc435077ff7bde211625fc51056f9b1de9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:03 compute-0 podman[303081]: 2025-10-02 08:26:03.300203636 +0000 UTC m=+0.203935327 container init a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:26:03 compute-0 podman[303081]: 2025-10-02 08:26:03.317619268 +0000 UTC m=+0.221350909 container start a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:26:03 compute-0 podman[303081]: 2025-10-02 08:26:03.321037048 +0000 UTC m=+0.224768739 container attach a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.395 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.397 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.397 2 INFO nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Creating image(s)
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.430 2 DEBUG nova.storage.rbd_utils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 5d595e00-2287-4a6f-b347-bc277006a626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.458 2 DEBUG nova.storage.rbd_utils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 5d595e00-2287-4a6f-b347-bc277006a626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.483 2 DEBUG nova.storage.rbd_utils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 5d595e00-2287-4a6f-b347-bc277006a626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.487 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.578 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.579 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.580 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.581 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.608 2 DEBUG nova.storage.rbd_utils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 5d595e00-2287-4a6f-b347-bc277006a626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.612 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d595e00-2287-4a6f-b347-bc277006a626_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.694 2 DEBUG nova.policy [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.885 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d595e00-2287-4a6f-b347-bc277006a626_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:03 compute-0 nova_compute[260603]: 2025-10-02 08:26:03.968 2 DEBUG nova.storage.rbd_utils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] resizing rbd image 5d595e00-2287-4a6f-b347-bc277006a626_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:26:04 compute-0 ceph-mon[74477]: pgmap v1349: 305 pgs: 305 active+clean; 120 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 539 KiB/s rd, 3.1 MiB/s wr, 160 op/s
Oct 02 08:26:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2561353246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:04 compute-0 nova_compute[260603]: 2025-10-02 08:26:04.081 2 DEBUG nova.objects.instance [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:04 compute-0 nova_compute[260603]: 2025-10-02 08:26:04.099 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:26:04 compute-0 nova_compute[260603]: 2025-10-02 08:26:04.100 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Ensure instance console log exists: /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:26:04 compute-0 nova_compute[260603]: 2025-10-02 08:26:04.100 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:04 compute-0 nova_compute[260603]: 2025-10-02 08:26:04.101 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:04 compute-0 nova_compute[260603]: 2025-10-02 08:26:04.101 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:04 compute-0 distracted_clarke[303100]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:26:04 compute-0 distracted_clarke[303100]: --> relative data size: 1.0
Oct 02 08:26:04 compute-0 distracted_clarke[303100]: --> All data devices are unavailable
Oct 02 08:26:04 compute-0 systemd[1]: libpod-a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1.scope: Deactivated successfully.
Oct 02 08:26:04 compute-0 systemd[1]: libpod-a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1.scope: Consumed 1.254s CPU time.
Oct 02 08:26:04 compute-0 podman[303081]: 2025-10-02 08:26:04.656826326 +0000 UTC m=+1.560558007 container died a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-65b049c1a8644f2cfd4c2999dd46a5cc435077ff7bde211625fc51056f9b1de9-merged.mount: Deactivated successfully.
Oct 02 08:26:04 compute-0 podman[303081]: 2025-10-02 08:26:04.729255382 +0000 UTC m=+1.632987053 container remove a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 08:26:04 compute-0 systemd[1]: libpod-conmon-a1a5c6338667886a2616b6503c7e5b117c1d17885647427a998eef68ca6523f1.scope: Deactivated successfully.
Oct 02 08:26:04 compute-0 sudo[302955]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:04 compute-0 sudo[303305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:04 compute-0 sudo[303305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:04 compute-0 sudo[303305]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 121 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 547 KiB/s rd, 3.2 MiB/s wr, 170 op/s
Oct 02 08:26:04 compute-0 sudo[303330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:26:04 compute-0 sudo[303330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:04 compute-0 sudo[303330]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:05 compute-0 sudo[303355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:05 compute-0 sudo[303355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:05 compute-0 sudo[303355]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:05 compute-0 sudo[303380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:26:05 compute-0 sudo[303380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:05 compute-0 nova_compute[260603]: 2025-10-02 08:26:05.312 2 DEBUG nova.network.neutron [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully created port: 0d888b1c-d237-4db9-9ca5-4796f8c1349d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:26:05 compute-0 ovn_controller[152344]: 2025-10-02T08:26:05Z|00262|binding|INFO|Releasing lport 7a7b217b-e72f-4a21-a2ea-891722d7d6fa from this chassis (sb_readonly=0)
Oct 02 08:26:05 compute-0 nova_compute[260603]: 2025-10-02 08:26:05.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:05 compute-0 podman[303445]: 2025-10-02 08:26:05.544555559 +0000 UTC m=+0.047228644 container create 959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 08:26:05 compute-0 podman[303445]: 2025-10-02 08:26:05.52533004 +0000 UTC m=+0.028003165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:26:05 compute-0 systemd[1]: Started libpod-conmon-959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc.scope.
Oct 02 08:26:05 compute-0 nova_compute[260603]: 2025-10-02 08:26:05.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:05 compute-0 podman[303445]: 2025-10-02 08:26:05.681685651 +0000 UTC m=+0.184358746 container init 959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:26:05 compute-0 podman[303445]: 2025-10-02 08:26:05.68879474 +0000 UTC m=+0.191467835 container start 959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 08:26:05 compute-0 podman[303445]: 2025-10-02 08:26:05.692125597 +0000 UTC m=+0.194798722 container attach 959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 08:26:05 compute-0 nostalgic_tharp[303461]: 167 167
Oct 02 08:26:05 compute-0 systemd[1]: libpod-959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc.scope: Deactivated successfully.
Oct 02 08:26:05 compute-0 podman[303445]: 2025-10-02 08:26:05.699297018 +0000 UTC m=+0.201970133 container died 959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea0b8a9429b48881fb208a016e56ff2637456986dada221995b9ae3cbc5c146d-merged.mount: Deactivated successfully.
Oct 02 08:26:05 compute-0 podman[303445]: 2025-10-02 08:26:05.748779194 +0000 UTC m=+0.251452289 container remove 959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:26:05 compute-0 systemd[1]: libpod-conmon-959d14295a6ddd46b0ff1fac7f54b5ec455cb9e15e1e6007a953ca92231b15fc.scope: Deactivated successfully.
Oct 02 08:26:05 compute-0 nova_compute[260603]: 2025-10-02 08:26:05.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:05 compute-0 podman[303488]: 2025-10-02 08:26:05.981590101 +0000 UTC m=+0.061640639 container create fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:26:06 compute-0 systemd[1]: Started libpod-conmon-fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e.scope.
Oct 02 08:26:06 compute-0 podman[303488]: 2025-10-02 08:26:05.96014625 +0000 UTC m=+0.040196798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:26:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11d3e1c87110a028bd35aa20ebf01a19b16a05c709a681c48eeddce8c7e09a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11d3e1c87110a028bd35aa20ebf01a19b16a05c709a681c48eeddce8c7e09a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11d3e1c87110a028bd35aa20ebf01a19b16a05c709a681c48eeddce8c7e09a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11d3e1c87110a028bd35aa20ebf01a19b16a05c709a681c48eeddce8c7e09a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:06 compute-0 ceph-mon[74477]: pgmap v1350: 305 pgs: 305 active+clean; 121 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 547 KiB/s rd, 3.2 MiB/s wr, 170 op/s
Oct 02 08:26:06 compute-0 podman[303488]: 2025-10-02 08:26:06.108858274 +0000 UTC m=+0.188908832 container init fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_blackburn, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 08:26:06 compute-0 podman[303488]: 2025-10-02 08:26:06.121913135 +0000 UTC m=+0.201963673 container start fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_blackburn, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 08:26:06 compute-0 podman[303488]: 2025-10-02 08:26:06.133493178 +0000 UTC m=+0.213543726 container attach fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.701 2 DEBUG nova.network.neutron [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully updated port: 0d888b1c-d237-4db9-9ca5-4796f8c1349d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.725 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.726 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.726 2 DEBUG nova.network.neutron [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.775 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393551.7749197, c28cb03c-6207-4ec5-9156-03252350561c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.775 2 INFO nova.compute.manager [-] [instance: c28cb03c-6207-4ec5-9156-03252350561c] VM Stopped (Lifecycle Event)
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.801 2 DEBUG nova.compute.manager [None req-8b3f47db-c2e0-4e79-980d-5c37fcc4b55c - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.841 2 DEBUG nova.compute.manager [req-bdb44835-1716-4cfe-b3b3-33be4c03493b req-2bc1cf66-43de-4511-9f60-a17d7d65cedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-changed-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.841 2 DEBUG nova.compute.manager [req-bdb44835-1716-4cfe-b3b3-33be4c03493b req-2bc1cf66-43de-4511-9f60-a17d7d65cedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing instance network info cache due to event network-changed-0d888b1c-d237-4db9-9ca5-4796f8c1349d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:06 compute-0 nova_compute[260603]: 2025-10-02 08:26:06.841 2 DEBUG oslo_concurrency.lockutils [req-bdb44835-1716-4cfe-b3b3-33be4c03493b req-2bc1cf66-43de-4511-9f60-a17d7d65cedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 121 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.6 MiB/s wr, 139 op/s
Oct 02 08:26:06 compute-0 competent_blackburn[303504]: {
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:     "0": [
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:         {
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "devices": [
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "/dev/loop3"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             ],
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_name": "ceph_lv0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_size": "21470642176",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "name": "ceph_lv0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "tags": {
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cluster_name": "ceph",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.crush_device_class": "",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.encrypted": "0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osd_id": "0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.type": "block",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.vdo": "0"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             },
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "type": "block",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "vg_name": "ceph_vg0"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:         }
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:     ],
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:     "1": [
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:         {
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "devices": [
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "/dev/loop4"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             ],
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_name": "ceph_lv1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_size": "21470642176",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "name": "ceph_lv1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "tags": {
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cluster_name": "ceph",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.crush_device_class": "",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.encrypted": "0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osd_id": "1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.type": "block",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.vdo": "0"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             },
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "type": "block",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "vg_name": "ceph_vg1"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:         }
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:     ],
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:     "2": [
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:         {
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "devices": [
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "/dev/loop5"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             ],
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_name": "ceph_lv2",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_size": "21470642176",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "name": "ceph_lv2",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "tags": {
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.cluster_name": "ceph",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.crush_device_class": "",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.encrypted": "0",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osd_id": "2",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.type": "block",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:                 "ceph.vdo": "0"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             },
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "type": "block",
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:             "vg_name": "ceph_vg2"
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:         }
Oct 02 08:26:06 compute-0 competent_blackburn[303504]:     ]
Oct 02 08:26:06 compute-0 competent_blackburn[303504]: }
Oct 02 08:26:06 compute-0 systemd[1]: libpod-fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e.scope: Deactivated successfully.
Oct 02 08:26:06 compute-0 conmon[303504]: conmon fbf46c78964685f280b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e.scope/container/memory.events
Oct 02 08:26:07 compute-0 nova_compute[260603]: 2025-10-02 08:26:07.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:07 compute-0 podman[303513]: 2025-10-02 08:26:07.041488084 +0000 UTC m=+0.060480300 container died fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_blackburn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:26:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11d3e1c87110a028bd35aa20ebf01a19b16a05c709a681c48eeddce8c7e09a4-merged.mount: Deactivated successfully.
Oct 02 08:26:07 compute-0 podman[303513]: 2025-10-02 08:26:07.121526185 +0000 UTC m=+0.140518331 container remove fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_blackburn, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:26:07 compute-0 systemd[1]: libpod-conmon-fbf46c78964685f280b4d816b26c8a1969f6917881a1f6b0e26858905dd3850e.scope: Deactivated successfully.
Oct 02 08:26:07 compute-0 sudo[303380]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:07 compute-0 nova_compute[260603]: 2025-10-02 08:26:07.217 2 DEBUG nova.objects.instance [None req-d1af60ab-d8a9-4b4f-8310-738fff366064 f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lazy-loading 'flavor' on Instance uuid 1bd45455-6745-4310-a5a6-f86dd4dcb4ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:07 compute-0 nova_compute[260603]: 2025-10-02 08:26:07.238 2 DEBUG oslo_concurrency.lockutils [None req-d1af60ab-d8a9-4b4f-8310-738fff366064 f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:07 compute-0 nova_compute[260603]: 2025-10-02 08:26:07.238 2 DEBUG oslo_concurrency.lockutils [None req-d1af60ab-d8a9-4b4f-8310-738fff366064 f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:07 compute-0 sudo[303528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:07 compute-0 sudo[303528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:07 compute-0 sudo[303528]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:07 compute-0 sudo[303553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:26:07 compute-0 sudo[303553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:07 compute-0 sudo[303553]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:07 compute-0 sudo[303578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:07 compute-0 sudo[303578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:07 compute-0 sudo[303578]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:07 compute-0 sudo[303603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:26:07 compute-0 sudo[303603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:07 compute-0 nova_compute[260603]: 2025-10-02 08:26:07.547 2 DEBUG nova.network.neutron [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:26:08 compute-0 podman[303669]: 2025-10-02 08:26:08.032958071 +0000 UTC m=+0.081051624 container create 50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 08:26:08 compute-0 podman[303671]: 2025-10-02 08:26:08.050171197 +0000 UTC m=+0.096559625 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 08:26:08 compute-0 podman[303669]: 2025-10-02 08:26:07.995302968 +0000 UTC m=+0.043396571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:26:08 compute-0 ceph-mon[74477]: pgmap v1351: 305 pgs: 305 active+clean; 121 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.6 MiB/s wr, 139 op/s
Oct 02 08:26:08 compute-0 systemd[1]: Started libpod-conmon-50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c.scope.
Oct 02 08:26:08 compute-0 podman[303668]: 2025-10-02 08:26:08.141972037 +0000 UTC m=+0.186591737 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:26:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:08 compute-0 podman[303669]: 2025-10-02 08:26:08.191718231 +0000 UTC m=+0.239811864 container init 50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 08:26:08 compute-0 podman[303669]: 2025-10-02 08:26:08.203926284 +0000 UTC m=+0.252019877 container start 50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 08:26:08 compute-0 podman[303669]: 2025-10-02 08:26:08.208611385 +0000 UTC m=+0.256705018 container attach 50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brattain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 08:26:08 compute-0 ecstatic_brattain[303729]: 167 167
Oct 02 08:26:08 compute-0 systemd[1]: libpod-50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c.scope: Deactivated successfully.
Oct 02 08:26:08 compute-0 conmon[303729]: conmon 50e873556128dd6a0385 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c.scope/container/memory.events
Oct 02 08:26:08 compute-0 podman[303669]: 2025-10-02 08:26:08.214538396 +0000 UTC m=+0.262631979 container died 50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7d4fda7ef22e8f90d1aaac653aef8266ae044d697034c5b00f687c006359244-merged.mount: Deactivated successfully.
Oct 02 08:26:08 compute-0 podman[303669]: 2025-10-02 08:26:08.269177169 +0000 UTC m=+0.317270752 container remove 50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:26:08 compute-0 systemd[1]: libpod-conmon-50e873556128dd6a0385eebe521f76252a1977c355a0cacaa19aefb50fe9318c.scope: Deactivated successfully.
Oct 02 08:26:08 compute-0 podman[303752]: 2025-10-02 08:26:08.515389117 +0000 UTC m=+0.060355158 container create 713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galois, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:26:08 compute-0 systemd[1]: Started libpod-conmon-713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972.scope.
Oct 02 08:26:08 compute-0 podman[303752]: 2025-10-02 08:26:08.486169055 +0000 UTC m=+0.031135106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:26:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc12a936600f22c654335a2a167bfc9196526926b2d51e4cfb95017f6ce5e53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc12a936600f22c654335a2a167bfc9196526926b2d51e4cfb95017f6ce5e53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc12a936600f22c654335a2a167bfc9196526926b2d51e4cfb95017f6ce5e53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc12a936600f22c654335a2a167bfc9196526926b2d51e4cfb95017f6ce5e53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:08 compute-0 podman[303752]: 2025-10-02 08:26:08.644236261 +0000 UTC m=+0.189202382 container init 713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:26:08 compute-0 podman[303752]: 2025-10-02 08:26:08.660051481 +0000 UTC m=+0.205017502 container start 713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:26:08 compute-0 podman[303752]: 2025-10-02 08:26:08.66406711 +0000 UTC m=+0.209033151 container attach 713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.766 2 DEBUG nova.network.neutron [None req-d1af60ab-d8a9-4b4f-8310-738fff366064 f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.876 2 DEBUG nova.network.neutron [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 167 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.4 MiB/s wr, 78 op/s
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.895 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.895 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Instance network_info: |[{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.895 2 DEBUG oslo_concurrency.lockutils [req-bdb44835-1716-4cfe-b3b3-33be4c03493b req-2bc1cf66-43de-4511-9f60-a17d7d65cedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.896 2 DEBUG nova.network.neutron [req-bdb44835-1716-4cfe-b3b3-33be4c03493b req-2bc1cf66-43de-4511-9f60-a17d7d65cedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing network info cache for port 0d888b1c-d237-4db9-9ca5-4796f8c1349d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.898 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Start _get_guest_xml network_info=[{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.902 2 WARNING nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.907 2 DEBUG nova.virt.libvirt.host [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.908 2 DEBUG nova.virt.libvirt.host [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.912 2 DEBUG nova.virt.libvirt.host [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.913 2 DEBUG nova.virt.libvirt.host [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.913 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.913 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.914 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.914 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.914 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.914 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.914 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.915 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.915 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.915 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.915 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.916 2 DEBUG nova.virt.hardware [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.918 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.963 2 DEBUG nova.compute.manager [req-9bbb4b7c-2522-4ca4-8437-d390002957cd req-7d14897f-575f-4ec6-9d10-49097b3b1534 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.964 2 DEBUG nova.compute.manager [req-9bbb4b7c-2522-4ca4-8437-d390002957cd req-7d14897f-575f-4ec6-9d10-49097b3b1534 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing instance network info cache due to event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:08 compute-0 nova_compute[260603]: 2025-10-02 08:26:08.964 2 DEBUG oslo_concurrency.lockutils [req-9bbb4b7c-2522-4ca4-8437-d390002957cd req-7d14897f-575f-4ec6-9d10-49097b3b1534 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:26:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926517357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.428 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.469 2 DEBUG nova.storage.rbd_utils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 5d595e00-2287-4a6f-b347-bc277006a626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.475 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:09 compute-0 eloquent_galois[303769]: {
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "osd_id": 2,
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "type": "bluestore"
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:     },
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "osd_id": 1,
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "type": "bluestore"
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:     },
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "osd_id": 0,
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:         "type": "bluestore"
Oct 02 08:26:09 compute-0 eloquent_galois[303769]:     }
Oct 02 08:26:09 compute-0 eloquent_galois[303769]: }
Oct 02 08:26:09 compute-0 systemd[1]: libpod-713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972.scope: Deactivated successfully.
Oct 02 08:26:09 compute-0 systemd[1]: libpod-713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972.scope: Consumed 1.167s CPU time.
Oct 02 08:26:09 compute-0 podman[303752]: 2025-10-02 08:26:09.832627558 +0000 UTC m=+1.377593569 container died 713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galois, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:26:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc12a936600f22c654335a2a167bfc9196526926b2d51e4cfb95017f6ce5e53-merged.mount: Deactivated successfully.
Oct 02 08:26:09 compute-0 podman[303752]: 2025-10-02 08:26:09.898023666 +0000 UTC m=+1.442989687 container remove 713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:26:09 compute-0 systemd[1]: libpod-conmon-713a9b87256de28eb042b572be4620a85893d0806d4e9e41c2900c6407285972.scope: Deactivated successfully.
Oct 02 08:26:09 compute-0 sudo[303603]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:26:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:26:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:26:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:26:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 326e4925-7829-42d5-bcc2-6e2c5ce7d4e7 does not exist
Oct 02 08:26:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:26:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3846885391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6b52b1d9-7611-48e7-9e8b-81277e8cc28e does not exist
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.966 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.967 2 DEBUG nova.virt.libvirt.vif [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:26:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.968 2 DEBUG nova.network.os_vif_util [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.969 2 DEBUG nova.network.os_vif_util [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.970 2 DEBUG nova.objects.instance [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.985 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <name>instance-00000026</name>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:26:08</nova:creationTime>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:26:09 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <system>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <entry name="serial">5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <entry name="uuid">5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </system>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <os>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   </os>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <features>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   </features>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5d595e00-2287-4a6f-b347-bc277006a626_disk">
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       </source>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config">
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       </source>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:26:09 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:cf:e3:68"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <target dev="tap0d888b1c-d2"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log" append="off"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <video>
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </video>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:26:09 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:26:09 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:26:09 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:26:09 compute-0 nova_compute[260603]: </domain>
Oct 02 08:26:09 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.986 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Preparing to wait for external event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.986 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.986 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.987 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.988 2 DEBUG nova.virt.libvirt.vif [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:26:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.988 2 DEBUG nova.network.os_vif_util [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.989 2 DEBUG nova.network.os_vif_util [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.989 2 DEBUG os_vif [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d888b1c-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:09 compute-0 nova_compute[260603]: 2025-10-02 08:26:09.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d888b1c-d2, col_values=(('external_ids', {'iface-id': '0d888b1c-d237-4db9-9ca5-4796f8c1349d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:e3:68', 'vm-uuid': '5d595e00-2287-4a6f-b347-bc277006a626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:10 compute-0 NetworkManager[45129]: <info>  [1759393570.0020] manager: (tap0d888b1c-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.012 2 INFO os_vif [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2')
Oct 02 08:26:10 compute-0 sudo[303875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:26:10 compute-0 sudo[303875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:10 compute-0 sudo[303875]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.076 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.077 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.077 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:cf:e3:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.078 2 INFO nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Using config drive
Oct 02 08:26:10 compute-0 sudo[303903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:26:10 compute-0 sudo[303903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:26:10 compute-0 sudo[303903]: pam_unix(sudo:session): session closed for user root
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.108 2 DEBUG nova.storage.rbd_utils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 5d595e00-2287-4a6f-b347-bc277006a626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:10 compute-0 ceph-mon[74477]: pgmap v1352: 305 pgs: 305 active+clean; 167 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.4 MiB/s wr, 78 op/s
Oct 02 08:26:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/926517357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:26:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:26:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3846885391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.578 2 DEBUG nova.network.neutron [None req-d1af60ab-d8a9-4b4f-8310-738fff366064 f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.581 2 DEBUG nova.network.neutron [req-bdb44835-1716-4cfe-b3b3-33be4c03493b req-2bc1cf66-43de-4511-9f60-a17d7d65cedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updated VIF entry in instance network info cache for port 0d888b1c-d237-4db9-9ca5-4796f8c1349d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.582 2 DEBUG nova.network.neutron [req-bdb44835-1716-4cfe-b3b3-33be4c03493b req-2bc1cf66-43de-4511-9f60-a17d7d65cedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.603 2 DEBUG oslo_concurrency.lockutils [req-bdb44835-1716-4cfe-b3b3-33be4c03493b req-2bc1cf66-43de-4511-9f60-a17d7d65cedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.608 2 DEBUG oslo_concurrency.lockutils [None req-d1af60ab-d8a9-4b4f-8310-738fff366064 f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.609 2 DEBUG nova.compute.manager [None req-d1af60ab-d8a9-4b4f-8310-738fff366064 f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.609 2 DEBUG nova.compute.manager [None req-d1af60ab-d8a9-4b4f-8310-738fff366064 f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] network_info to inject: |[{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.613 2 DEBUG oslo_concurrency.lockutils [req-9bbb4b7c-2522-4ca4-8437-d390002957cd req-7d14897f-575f-4ec6-9d10-49097b3b1534 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.614 2 DEBUG nova.network.neutron [req-9bbb4b7c-2522-4ca4-8437-d390002957cd req-7d14897f-575f-4ec6-9d10-49097b3b1534 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.686 2 INFO nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Creating config drive at /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/disk.config
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.698 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxlum5rd2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.742 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393555.7299862, 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.743 2 INFO nova.compute.manager [-] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] VM Stopped (Lifecycle Event)
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.773 2 DEBUG nova.compute.manager [None req-e8a668a9-e9b0-444d-b5fb-338248114d96 - - - - - -] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.858 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxlum5rd2" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 167 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.4 MiB/s wr, 78 op/s
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.900 2 DEBUG nova.storage.rbd_utils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 5d595e00-2287-4a6f-b347-bc277006a626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:10 compute-0 nova_compute[260603]: 2025-10-02 08:26:10.905 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/disk.config 5d595e00-2287-4a6f-b347-bc277006a626_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.111 2 DEBUG oslo_concurrency.processutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/disk.config 5d595e00-2287-4a6f-b347-bc277006a626_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.113 2 INFO nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Deleting local config drive /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/disk.config because it was imported into RBD.
Oct 02 08:26:11 compute-0 kernel: tap0d888b1c-d2: entered promiscuous mode
Oct 02 08:26:11 compute-0 NetworkManager[45129]: <info>  [1759393571.1751] manager: (tap0d888b1c-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:11 compute-0 ovn_controller[152344]: 2025-10-02T08:26:11Z|00263|binding|INFO|Claiming lport 0d888b1c-d237-4db9-9ca5-4796f8c1349d for this chassis.
Oct 02 08:26:11 compute-0 ovn_controller[152344]: 2025-10-02T08:26:11Z|00264|binding|INFO|0d888b1c-d237-4db9-9ca5-4796f8c1349d: Claiming fa:16:3e:cf:e3:68 10.100.0.10
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.188 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:e3:68 10.100.0.10'], port_security=['fa:16:3e:cf:e3:68 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5197f4ad-c335-4607-928c-2b7946565ac7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d888b1c-d237-4db9-9ca5-4796f8c1349d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.190 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d888b1c-d237-4db9-9ca5-4796f8c1349d in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.192 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.213 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[849c970f-e9bb-4212-9f85-4630af7c3b6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.215 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa1bff6d-11 in ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.217 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa1bff6d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.217 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c37317-624d-42bb-a896-ec43be3fb7f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_controller[152344]: 2025-10-02T08:26:11Z|00265|binding|INFO|Setting lport 0d888b1c-d237-4db9-9ca5-4796f8c1349d ovn-installed in OVS
Oct 02 08:26:11 compute-0 ovn_controller[152344]: 2025-10-02T08:26:11Z|00266|binding|INFO|Setting lport 0d888b1c-d237-4db9-9ca5-4796f8c1349d up in Southbound
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.218 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3455ce5a-22eb-465f-be0f-518bb028756f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:11 compute-0 systemd-machined[214636]: New machine qemu-42-instance-00000026.
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.236 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[523bfea5-72b5-46b7-b19f-50798bcab71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000026.
Oct 02 08:26:11 compute-0 systemd-udevd[304002]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.267 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e19aeb-c767-4532-b3de-bec1f739309f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 NetworkManager[45129]: <info>  [1759393571.2805] device (tap0d888b1c-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:26:11 compute-0 NetworkManager[45129]: <info>  [1759393571.2813] device (tap0d888b1c-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.310 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a586227d-b7eb-435b-9530-328016bf52c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 NetworkManager[45129]: <info>  [1759393571.3199] manager: (tapfa1bff6d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.319 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[03be3945-c512-49b4-bbf7-988f80cf1a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.388 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[485ade90-9203-41fe-9c50-573dab25bc65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.392 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fba7d21d-5f93-49df-98a9-0bdaf56b08e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 NetworkManager[45129]: <info>  [1759393571.4226] device (tapfa1bff6d-10): carrier: link connected
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.428 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad9702a-73c8-4a81-8352-572418168d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.449 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7ac969-a4c0-43bf-9731-e245f8455246]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304032, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.467 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5ee8db-7328-4f48-8f47-26245e8795b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:c92f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445002, 'tstamp': 445002}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304033, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.488 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af038712-b5c9-4c14-8a6f-f3d3e92bd3f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304034, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[919c2830-84a0-4082-9faf-31e0322c4007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.604 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fb3631-f516-4ae5-b144-bde53183f5d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.605 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.606 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.606 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:11 compute-0 kernel: tapfa1bff6d-10: entered promiscuous mode
Oct 02 08:26:11 compute-0 NetworkManager[45129]: <info>  [1759393571.6460] manager: (tapfa1bff6d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.649 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:11 compute-0 ovn_controller[152344]: 2025-10-02T08:26:11Z|00267|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.678 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.679 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ddea2a7-72a4-4edf-b7e0-283eff88f8ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.680 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:26:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:11.681 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'env', 'PROCESS_TAG=haproxy-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa1bff6d-19fb-4792-a261-4da1165d95a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.866 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.866 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.885 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.974 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.975 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.983 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:26:11 compute-0 nova_compute[260603]: 2025-10-02 08:26:11.983 2 INFO nova.compute.claims [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:12 compute-0 podman[304109]: 2025-10-02 08:26:12.120917498 +0000 UTC m=+0.084308669 container create 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.121 2 DEBUG nova.objects.instance [None req-28752833-47d8-4d99-b37c-e7e8ed62875f f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lazy-loading 'flavor' on Instance uuid 1bd45455-6745-4310-a5a6-f86dd4dcb4ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:12 compute-0 ceph-mon[74477]: pgmap v1353: 305 pgs: 305 active+clean; 167 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.4 MiB/s wr, 78 op/s
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.141 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:12 compute-0 podman[304109]: 2025-10-02 08:26:12.075794003 +0000 UTC m=+0.039185234 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:26:12 compute-0 systemd[1]: Started libpod-conmon-1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971.scope.
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.193 2 DEBUG oslo_concurrency.lockutils [None req-28752833-47d8-4d99-b37c-e7e8ed62875f f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9055c4e28162e79657260aa471df2a75d9f8b592688bd242ac2c8aac34526e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.236 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393572.2339416, 5d595e00-2287-4a6f-b347-bc277006a626 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.237 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] VM Started (Lifecycle Event)
Oct 02 08:26:12 compute-0 podman[304109]: 2025-10-02 08:26:12.243867952 +0000 UTC m=+0.207259113 container init 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:26:12 compute-0 podman[304109]: 2025-10-02 08:26:12.248817762 +0000 UTC m=+0.212208893 container start 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.261 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.267 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393572.235772, 5d595e00-2287-4a6f-b347-bc277006a626 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.267 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] VM Paused (Lifecycle Event)
Oct 02 08:26:12 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [NOTICE]   (304129) : New worker (304131) forked
Oct 02 08:26:12 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [NOTICE]   (304129) : Loading success.
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.293 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.297 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.319 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.569 2 DEBUG nova.network.neutron [req-9bbb4b7c-2522-4ca4-8437-d390002957cd req-7d14897f-575f-4ec6-9d10-49097b3b1534 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updated VIF entry in instance network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.570 2 DEBUG nova.network.neutron [req-9bbb4b7c-2522-4ca4-8437-d390002957cd req-7d14897f-575f-4ec6-9d10-49097b3b1534 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.586 2 DEBUG oslo_concurrency.lockutils [req-9bbb4b7c-2522-4ca4-8437-d390002957cd req-7d14897f-575f-4ec6-9d10-49097b3b1534 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.586 2 DEBUG oslo_concurrency.lockutils [None req-28752833-47d8-4d99-b37c-e7e8ed62875f f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:26:12 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087818771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.622 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.629 2 DEBUG nova.compute.provider_tree [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.645 2 DEBUG nova.scheduler.client.report [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.670 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.671 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.712 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.713 2 DEBUG nova.network.neutron [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.727 2 INFO nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.743 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.836 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.837 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.837 2 INFO nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Creating image(s)
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.856 2 DEBUG nova.storage.rbd_utils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] rbd image 6de0ab38-2086-43ab-a32f-827aebf2432d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.879 2 DEBUG nova.storage.rbd_utils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] rbd image 6de0ab38-2086-43ab-a32f-827aebf2432d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 167 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.901 2 DEBUG nova.storage.rbd_utils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] rbd image 6de0ab38-2086-43ab-a32f-827aebf2432d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.904 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.973 2 DEBUG nova.policy [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '758b34678d69489d8841d33743bd238a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.997 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.998 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.998 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:12 compute-0 nova_compute[260603]: 2025-10-02 08:26:12.999 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.025 2 DEBUG nova.storage.rbd_utils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] rbd image 6de0ab38-2086-43ab-a32f-827aebf2432d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.029 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6de0ab38-2086-43ab-a32f-827aebf2432d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3087818771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.347 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6de0ab38-2086-43ab-a32f-827aebf2432d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.426 2 DEBUG nova.storage.rbd_utils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] resizing rbd image 6de0ab38-2086-43ab-a32f-827aebf2432d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.542 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.550 2 DEBUG nova.objects.instance [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lazy-loading 'migration_context' on Instance uuid 6de0ab38-2086-43ab-a32f-827aebf2432d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.566 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.567 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Ensure instance console log exists: /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.568 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.568 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.569 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.920 2 DEBUG nova.network.neutron [None req-28752833-47d8-4d99-b37c-e7e8ed62875f f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:26:13 compute-0 nova_compute[260603]: 2025-10-02 08:26:13.994 2 DEBUG nova.network.neutron [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Successfully created port: 47449477-dd00-4329-9053-67f80b8caafb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:26:14 compute-0 nova_compute[260603]: 2025-10-02 08:26:14.063 2 DEBUG nova.compute.manager [req-bf905040-c0e9-4363-9630-e25bed39b4f1 req-25c2df09-071f-40fb-978b-fae40ddf9ce6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:14 compute-0 nova_compute[260603]: 2025-10-02 08:26:14.064 2 DEBUG nova.compute.manager [req-bf905040-c0e9-4363-9630-e25bed39b4f1 req-25c2df09-071f-40fb-978b-fae40ddf9ce6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing instance network info cache due to event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:14 compute-0 nova_compute[260603]: 2025-10-02 08:26:14.064 2 DEBUG oslo_concurrency.lockutils [req-bf905040-c0e9-4363-9630-e25bed39b4f1 req-25c2df09-071f-40fb-978b-fae40ddf9ce6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:14 compute-0 ceph-mon[74477]: pgmap v1354: 305 pgs: 305 active+clean; 167 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Oct 02 08:26:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 176 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:15 compute-0 podman[304327]: 2025-10-02 08:26:15.041029749 +0000 UTC m=+0.092078700 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.113 2 DEBUG nova.network.neutron [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Successfully updated port: 47449477-dd00-4329-9053-67f80b8caafb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.132 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.132 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquired lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.132 2 DEBUG nova.network.neutron [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.291 2 DEBUG nova.network.neutron [None req-28752833-47d8-4d99-b37c-e7e8ed62875f f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.309 2 DEBUG oslo_concurrency.lockutils [None req-28752833-47d8-4d99-b37c-e7e8ed62875f f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.309 2 DEBUG nova.compute.manager [None req-28752833-47d8-4d99-b37c-e7e8ed62875f f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.310 2 DEBUG nova.compute.manager [None req-28752833-47d8-4d99-b37c-e7e8ed62875f f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] network_info to inject: |[{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.314 2 DEBUG oslo_concurrency.lockutils [req-bf905040-c0e9-4363-9630-e25bed39b4f1 req-25c2df09-071f-40fb-978b-fae40ddf9ce6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.314 2 DEBUG nova.network.neutron [req-bf905040-c0e9-4363-9630-e25bed39b4f1 req-25c2df09-071f-40fb-978b-fae40ddf9ce6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.319 2 DEBUG nova.network.neutron [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.547 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.547 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.720 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.774 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.775 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.775 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.776 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.776 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.778 2 INFO nova.compute.manager [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Terminating instance
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.779 2 DEBUG nova.compute.manager [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:26:15 compute-0 kernel: tapa75b1cfa-e5 (unregistering): left promiscuous mode
Oct 02 08:26:15 compute-0 NetworkManager[45129]: <info>  [1759393575.8386] device (tapa75b1cfa-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:15 compute-0 ovn_controller[152344]: 2025-10-02T08:26:15Z|00268|binding|INFO|Releasing lport a75b1cfa-e509-4676-bd92-b36be82f1e83 from this chassis (sb_readonly=0)
Oct 02 08:26:15 compute-0 ovn_controller[152344]: 2025-10-02T08:26:15Z|00269|binding|INFO|Setting lport a75b1cfa-e509-4676-bd92-b36be82f1e83 down in Southbound
Oct 02 08:26:15 compute-0 ovn_controller[152344]: 2025-10-02T08:26:15Z|00270|binding|INFO|Removing iface tapa75b1cfa-e5 ovn-installed in OVS
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:15.892 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:9e:33 10.100.0.13'], port_security=['fa:16:3e:55:9e:33 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1bd45455-6745-4310-a5a6-f86dd4dcb4ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e086d5b9-24e9-42fc-adba-1a3993e8f3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93b2c1583a83423288661225e3f86391', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f33ffaac-75d9-45a6-a506-467fbdc687ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc33fda3-a64f-4f9d-b5b2-d2bb922fc37f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a75b1cfa-e509-4676-bd92-b36be82f1e83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:15.895 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a75b1cfa-e509-4676-bd92-b36be82f1e83 in datapath e086d5b9-24e9-42fc-adba-1a3993e8f3d1 unbound from our chassis
Oct 02 08:26:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:15.897 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e086d5b9-24e9-42fc-adba-1a3993e8f3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:26:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:15.898 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42212419-7694-4a37-a3be-691c22cbe88b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:15.899 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1 namespace which is not needed anymore
Oct 02 08:26:15 compute-0 nova_compute[260603]: 2025-10-02 08:26:15.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:15 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Deactivated successfully.
Oct 02 08:26:15 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Consumed 14.045s CPU time.
Oct 02 08:26:15 compute-0 systemd-machined[214636]: Machine qemu-41-instance-00000025 terminated.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.029 2 INFO nova.virt.libvirt.driver [-] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Instance destroyed successfully.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.030 2 DEBUG nova.objects.instance [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lazy-loading 'resources' on Instance uuid 1bd45455-6745-4310-a5a6-f86dd4dcb4ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:16 compute-0 neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1[302270]: [NOTICE]   (302275) : haproxy version is 2.8.14-c23fe91
Oct 02 08:26:16 compute-0 neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1[302270]: [NOTICE]   (302275) : path to executable is /usr/sbin/haproxy
Oct 02 08:26:16 compute-0 neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1[302270]: [WARNING]  (302275) : Exiting Master process...
Oct 02 08:26:16 compute-0 neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1[302270]: [WARNING]  (302275) : Exiting Master process...
Oct 02 08:26:16 compute-0 neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1[302270]: [ALERT]    (302275) : Current worker (302277) exited with code 143 (Terminated)
Oct 02 08:26:16 compute-0 neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1[302270]: [WARNING]  (302275) : All workers exited. Exiting... (0)
Oct 02 08:26:16 compute-0 systemd[1]: libpod-e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2.scope: Deactivated successfully.
Oct 02 08:26:16 compute-0 podman[304371]: 2025-10-02 08:26:16.048847554 +0000 UTC m=+0.057762513 container died e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.054 2 DEBUG nova.virt.libvirt.vif [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1639846623',display_name='tempest-AttachInterfacesUnderV243Test-server-1639846623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1639846623',id=37,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCKG1mgw8jr3d6I/rPaxpI/xykaK5mkiXQdfytljVDMuER4fKp12xeA534SFvLBxY6bTyRlvKcYu+E0zUTX3LJrHhpFk2NKvsPRSQ931HNOhMc2t1VukoDA/V98LWTem1w==',key_name='tempest-keypair-2083756465',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93b2c1583a83423288661225e3f86391',ramdisk_id='',reservation_id='r-wb0hbrf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1865175876',owner_user_name='tempest-AttachInterfacesUnderV243Test-1865175876-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f5cf08c876094c4d847b57fd4506bbff',uuid=1bd45455-6745-4310-a5a6-f86dd4dcb4ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.054 2 DEBUG nova.network.os_vif_util [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Converting VIF {"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.059 2 DEBUG nova.network.os_vif_util [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:9e:33,bridge_name='br-int',has_traffic_filtering=True,id=a75b1cfa-e509-4676-bd92-b36be82f1e83,network=Network(e086d5b9-24e9-42fc-adba-1a3993e8f3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75b1cfa-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.060 2 DEBUG os_vif [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:9e:33,bridge_name='br-int',has_traffic_filtering=True,id=a75b1cfa-e509-4676-bd92-b36be82f1e83,network=Network(e086d5b9-24e9-42fc-adba-1a3993e8f3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75b1cfa-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa75b1cfa-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.078 2 INFO os_vif [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:9e:33,bridge_name='br-int',has_traffic_filtering=True,id=a75b1cfa-e509-4676-bd92-b36be82f1e83,network=Network(e086d5b9-24e9-42fc-adba-1a3993e8f3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75b1cfa-e5')
Oct 02 08:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2-userdata-shm.mount: Deactivated successfully.
Oct 02 08:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d83a235cb30ab31648a638f80b384a1df75c51ddd966395ebf8f9a7e9f71d807-merged.mount: Deactivated successfully.
Oct 02 08:26:16 compute-0 podman[304371]: 2025-10-02 08:26:16.10486696 +0000 UTC m=+0.113781919 container cleanup e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 08:26:16 compute-0 systemd[1]: libpod-conmon-e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2.scope: Deactivated successfully.
Oct 02 08:26:16 compute-0 ceph-mon[74477]: pgmap v1355: 305 pgs: 305 active+clean; 176 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.187 2 DEBUG nova.compute.manager [req-230309fc-8328-4ead-9c6a-f770a2630540 req-d59bce66-3d83-447c-8590-bf50d4295e16 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-vif-unplugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.187 2 DEBUG oslo_concurrency.lockutils [req-230309fc-8328-4ead-9c6a-f770a2630540 req-d59bce66-3d83-447c-8590-bf50d4295e16 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.188 2 DEBUG oslo_concurrency.lockutils [req-230309fc-8328-4ead-9c6a-f770a2630540 req-d59bce66-3d83-447c-8590-bf50d4295e16 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.188 2 DEBUG oslo_concurrency.lockutils [req-230309fc-8328-4ead-9c6a-f770a2630540 req-d59bce66-3d83-447c-8590-bf50d4295e16 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.188 2 DEBUG nova.compute.manager [req-230309fc-8328-4ead-9c6a-f770a2630540 req-d59bce66-3d83-447c-8590-bf50d4295e16 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] No waiting events found dispatching network-vif-unplugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.188 2 DEBUG nova.compute.manager [req-230309fc-8328-4ead-9c6a-f770a2630540 req-d59bce66-3d83-447c-8590-bf50d4295e16 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-vif-unplugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:26:16 compute-0 podman[304424]: 2025-10-02 08:26:16.195864994 +0000 UTC m=+0.053672832 container remove e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.202 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47d9c5d0-73c6-4034-abc5-84970ee30ca5]: (4, ('Thu Oct  2 08:26:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1 (e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2)\ne105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2\nThu Oct  2 08:26:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1 (e105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2)\ne105e2371c4a0afb280da843615a3b98c6869c8bce4a1a9e37d2d47bf57b47f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.205 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0689b40d-d0e8-4df0-8279-7cec234cf06e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.206 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape086d5b9-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:16 compute-0 kernel: tape086d5b9-20: left promiscuous mode
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.215 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba4b8a6-3221-491f-87e5-99551b28ea0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.252 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd00261d-7b0d-4482-b137-463b04f038fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.253 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4f4a77-9d12-493c-996b-377284b98067]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.279 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b5bb20-d52b-406e-bcb8-c280fd42668c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442337, 'reachable_time': 34002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304443, 'error': None, 'target': 'ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:16 compute-0 systemd[1]: run-netns-ovnmeta\x2de086d5b9\x2d24e9\x2d42fc\x2dadba\x2d1a3993e8f3d1.mount: Deactivated successfully.
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.283 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e086d5b9-24e9-42fc-adba-1a3993e8f3d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:26:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:16.284 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[141c9079-3d91-49a2-8ede-5c03a242080d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.299 2 DEBUG nova.network.neutron [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.315 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Releasing lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.316 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Instance network_info: |[{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.318 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Start _get_guest_xml network_info=[{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.326 2 WARNING nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.334 2 DEBUG nova.virt.libvirt.host [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.335 2 DEBUG nova.virt.libvirt.host [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.338 2 DEBUG nova.compute.manager [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.338 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.338 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.339 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.339 2 DEBUG nova.compute.manager [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Processing event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.340 2 DEBUG nova.compute.manager [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.340 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.340 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.341 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.341 2 DEBUG nova.compute.manager [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.341 2 WARNING nova.compute.manager [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d for instance with vm_state building and task_state spawning.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.342 2 DEBUG nova.compute.manager [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-changed-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.342 2 DEBUG nova.compute.manager [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Refreshing instance network info cache due to event network-changed-47449477-dd00-4329-9053-67f80b8caafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.342 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.343 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.343 2 DEBUG nova.network.neutron [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Refreshing network info cache for port 47449477-dd00-4329-9053-67f80b8caafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.347 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.352 2 DEBUG nova.virt.libvirt.host [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.353 2 DEBUG nova.virt.libvirt.host [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.354 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.354 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.355 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.355 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.356 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.356 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.356 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.357 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.357 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.358 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.358 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.359 2 DEBUG nova.virt.hardware [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.364 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.415 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393576.3528583, 5d595e00-2287-4a6f-b347-bc277006a626 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.417 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] VM Resumed (Lifecycle Event)
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.432 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.443 2 INFO nova.virt.libvirt.driver [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Instance spawned successfully.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.444 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.447 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.453 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.464 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.464 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.465 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.465 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.466 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.466 2 DEBUG nova.virt.libvirt.driver [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.492 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.516 2 INFO nova.virt.libvirt.driver [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Deleting instance files /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca_del
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.517 2 INFO nova.virt.libvirt.driver [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Deletion of /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca_del complete
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.528 2 INFO nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Took 13.13 seconds to spawn the instance on the hypervisor.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.529 2 DEBUG nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.597 2 INFO nova.compute.manager [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.597 2 DEBUG oslo.service.loopingcall [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.606 2 INFO nova.compute.manager [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Took 14.03 seconds to build instance.
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.608 2 DEBUG nova.compute.manager [-] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.608 2 DEBUG nova.network.neutron [-] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.636 2 DEBUG oslo_concurrency.lockutils [None req-f5f3db75-4597-4c51-b527-8d912ec81491 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:26:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092411264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 176 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 MiB/s wr, 45 op/s
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.887 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.919 2 DEBUG nova.storage.rbd_utils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] rbd image 6de0ab38-2086-43ab-a32f-827aebf2432d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.926 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.978 2 DEBUG nova.network.neutron [req-bf905040-c0e9-4363-9630-e25bed39b4f1 req-25c2df09-071f-40fb-978b-fae40ddf9ce6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updated VIF entry in instance network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:26:16 compute-0 nova_compute[260603]: 2025-10-02 08:26:16.980 2 DEBUG nova.network.neutron [req-bf905040-c0e9-4363-9630-e25bed39b4f1 req-25c2df09-071f-40fb-978b-fae40ddf9ce6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.053 2 DEBUG oslo_concurrency.lockutils [req-bf905040-c0e9-4363-9630-e25bed39b4f1 req-25c2df09-071f-40fb-978b-fae40ddf9ce6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.055 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.055 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.055 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1bd45455-6745-4310-a5a6-f86dd4dcb4ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4092411264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:26:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2886050793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.408 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.411 2 DEBUG nova.virt.libvirt.vif [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=TagList,t
ask_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:26:12Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.411 2 DEBUG nova.network.os_vif_util [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.413 2 DEBUG nova.network.os_vif_util [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.416 2 DEBUG nova.objects.instance [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lazy-loading 'pci_devices' on Instance uuid 6de0ab38-2086-43ab-a32f-827aebf2432d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.434 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <uuid>6de0ab38-2086-43ab-a32f-827aebf2432d</uuid>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <name>instance-00000027</name>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <nova:name>tempest-AttachInterfacesV270Test-server-489114303</nova:name>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:26:16</nova:creationTime>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <nova:user uuid="758b34678d69489d8841d33743bd238a">tempest-AttachInterfacesV270Test-655113368-project-member</nova:user>
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <nova:project uuid="e6fa690d01af4f20ba341e59a2be26bb">tempest-AttachInterfacesV270Test-655113368</nova:project>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <nova:port uuid="47449477-dd00-4329-9053-67f80b8caafb">
Oct 02 08:26:17 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <system>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <entry name="serial">6de0ab38-2086-43ab-a32f-827aebf2432d</entry>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <entry name="uuid">6de0ab38-2086-43ab-a32f-827aebf2432d</entry>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </system>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <os>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   </os>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <features>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   </features>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6de0ab38-2086-43ab-a32f-827aebf2432d_disk">
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6de0ab38-2086-43ab-a32f-827aebf2432d_disk.config">
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:26:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:0f:b1:c0"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <target dev="tap47449477-dd"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d/console.log" append="off"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <video>
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </video>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:26:17 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:26:17 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:26:17 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:26:17 compute-0 nova_compute[260603]: </domain>
Oct 02 08:26:17 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.436 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Preparing to wait for external event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.437 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.437 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.437 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.438 2 DEBUG nova.virt.libvirt.vif [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:26:12Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.439 2 DEBUG nova.network.os_vif_util [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.439 2 DEBUG nova.network.os_vif_util [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.440 2 DEBUG os_vif [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47449477-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47449477-dd, col_values=(('external_ids', {'iface-id': '47449477-dd00-4329-9053-67f80b8caafb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:b1:c0', 'vm-uuid': '6de0ab38-2086-43ab-a32f-827aebf2432d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:17 compute-0 NetworkManager[45129]: <info>  [1759393577.4468] manager: (tap47449477-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.452 2 INFO os_vif [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd')
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.504 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.505 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.505 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No VIF found with MAC fa:16:3e:0f:b1:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.506 2 INFO nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Using config drive
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.526 2 DEBUG nova.storage.rbd_utils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] rbd image 6de0ab38-2086-43ab-a32f-827aebf2432d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.737 2 DEBUG nova.network.neutron [-] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.755 2 INFO nova.compute.manager [-] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Took 1.15 seconds to deallocate network for instance.
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.829 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.832 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.903 2 INFO nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Creating config drive at /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d/disk.config
Oct 02 08:26:17 compute-0 nova_compute[260603]: 2025-10-02 08:26:17.914 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7vofsgca execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.004 2 DEBUG oslo_concurrency.processutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.063 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7vofsgca" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.107 2 DEBUG nova.storage.rbd_utils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] rbd image 6de0ab38-2086-43ab-a32f-827aebf2432d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.113 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d/disk.config 6de0ab38-2086-43ab-a32f-827aebf2432d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:18 compute-0 ceph-mon[74477]: pgmap v1356: 305 pgs: 305 active+clean; 176 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 MiB/s wr, 45 op/s
Oct 02 08:26:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2886050793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.276 2 DEBUG oslo_concurrency.processutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d/disk.config 6de0ab38-2086-43ab-a32f-827aebf2432d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.277 2 INFO nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Deleting local config drive /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d/disk.config because it was imported into RBD.
Oct 02 08:26:18 compute-0 kernel: tap47449477-dd: entered promiscuous mode
Oct 02 08:26:18 compute-0 systemd-udevd[304350]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:26:18 compute-0 NetworkManager[45129]: <info>  [1759393578.3371] manager: (tap47449477-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Oct 02 08:26:18 compute-0 NetworkManager[45129]: <info>  [1759393578.3478] device (tap47449477-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:26:18 compute-0 NetworkManager[45129]: <info>  [1759393578.3492] device (tap47449477-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.367 2 DEBUG nova.compute.manager [req-5985e8f5-7c85-4f92-a7f6-070eefe6d27b req-4815cb45-ab36-4c69-aaaa-069826ec2a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.369 2 DEBUG oslo_concurrency.lockutils [req-5985e8f5-7c85-4f92-a7f6-070eefe6d27b req-4815cb45-ab36-4c69-aaaa-069826ec2a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.370 2 DEBUG oslo_concurrency.lockutils [req-5985e8f5-7c85-4f92-a7f6-070eefe6d27b req-4815cb45-ab36-4c69-aaaa-069826ec2a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.370 2 DEBUG oslo_concurrency.lockutils [req-5985e8f5-7c85-4f92-a7f6-070eefe6d27b req-4815cb45-ab36-4c69-aaaa-069826ec2a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.371 2 DEBUG nova.compute.manager [req-5985e8f5-7c85-4f92-a7f6-070eefe6d27b req-4815cb45-ab36-4c69-aaaa-069826ec2a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] No waiting events found dispatching network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.371 2 WARNING nova.compute.manager [req-5985e8f5-7c85-4f92-a7f6-070eefe6d27b req-4815cb45-ab36-4c69-aaaa-069826ec2a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received unexpected event network-vif-plugged-a75b1cfa-e509-4676-bd92-b36be82f1e83 for instance with vm_state deleted and task_state None.
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.372 2 DEBUG nova.compute.manager [req-5985e8f5-7c85-4f92-a7f6-070eefe6d27b req-4815cb45-ab36-4c69-aaaa-069826ec2a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-vif-deleted-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:18 compute-0 ovn_controller[152344]: 2025-10-02T08:26:18Z|00271|binding|INFO|Claiming lport 47449477-dd00-4329-9053-67f80b8caafb for this chassis.
Oct 02 08:26:18 compute-0 ovn_controller[152344]: 2025-10-02T08:26:18Z|00272|binding|INFO|47449477-dd00-4329-9053-67f80b8caafb: Claiming fa:16:3e:0f:b1:c0 10.100.0.5
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.392 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:b1:c0 10.100.0.5'], port_security=['fa:16:3e:0f:b1:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6de0ab38-2086-43ab-a32f-827aebf2432d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2652a07f-2d55-4460-ab66-db7b9bf18992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c3e938-ad8f-46df-8997-cca3dab53ce8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608c728a-2464-4481-93df-0a324345398f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=47449477-dd00-4329-9053-67f80b8caafb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.394 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 47449477-dd00-4329-9053-67f80b8caafb in datapath 2652a07f-2d55-4460-ab66-db7b9bf18992 bound to our chassis
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.400 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2652a07f-2d55-4460-ab66-db7b9bf18992
Oct 02 08:26:18 compute-0 systemd-machined[214636]: New machine qemu-43-instance-00000027.
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:18 compute-0 ovn_controller[152344]: 2025-10-02T08:26:18Z|00273|binding|INFO|Setting lport 47449477-dd00-4329-9053-67f80b8caafb ovn-installed in OVS
Oct 02 08:26:18 compute-0 ovn_controller[152344]: 2025-10-02T08:26:18Z|00274|binding|INFO|Setting lport 47449477-dd00-4329-9053-67f80b8caafb up in Southbound
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbf765b-b211-4338-be88-b1d3f4cd1cc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.417 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2652a07f-21 in ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.419 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2652a07f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.419 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14b8cb11-60dc-4eac-bbe7-656f06ff72c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.420 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3d5835-5ba0-43fd-a299-fcb00df3ec60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000027.
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.439 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[47922cfd-b159-49f1-9e26-145278aafed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.462 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1605d8de-c691-467c-9e90-9f9327b9f388]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:26:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1297206575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.497 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4c10f031-5aa1-40a4-aa7a-831c23821682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 NetworkManager[45129]: <info>  [1759393578.5084] manager: (tap2652a07f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.507 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3388a03-370a-49ce-81f8-b1b728aa4479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.513 2 DEBUG oslo_concurrency.processutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.535 2 DEBUG nova.compute.provider_tree [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.550 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[173d0b32-430d-49b6-8cc4-ea220615b7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.552 2 DEBUG nova.scheduler.client.report [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.556 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[15d33f5c-7321-4cc7-a12a-8d29627a662f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.568 2 DEBUG nova.network.neutron [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updated VIF entry in instance network info cache for port 47449477-dd00-4329-9053-67f80b8caafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.569 2 DEBUG nova.network.neutron [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.581 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:18 compute-0 NetworkManager[45129]: <info>  [1759393578.5830] device (tap2652a07f-20): carrier: link connected
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.588 2 DEBUG oslo_concurrency.lockutils [req-8206855f-b3ce-43de-8b36-18c30290bd21 req-7013a494-e525-40ea-b23b-4a274fcaa891 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.591 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[98ab3637-6488-4472-b658-63d11f773b44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.600 2 INFO nova.scheduler.client.report [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Deleted allocations for instance 1bd45455-6745-4310-a5a6-f86dd4dcb4ca
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.615 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95b9f5c7-7a60-4616-bfd1-d6aa2af6b2bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2652a07f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:48:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445718, 'reachable_time': 31419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304633, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.635 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[26c700e9-4986-4727-b566-4d11359fd22f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:480e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445718, 'tstamp': 445718}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304634, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.655 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d354a75-ab42-48fd-ad26-60fa2fe93d38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2652a07f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:48:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445718, 'reachable_time': 31419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304635, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.686 2 DEBUG oslo_concurrency.lockutils [None req-8ff6463a-40ff-4633-b8a9-ca078b89492a f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.693 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[290f5507-0ee7-4d3a-881d-7169332e8e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.766 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[225e86c5-e5c7-418c-b3ad-f36032cefb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2652a07f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2652a07f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:18 compute-0 NetworkManager[45129]: <info>  [1759393578.7713] manager: (tap2652a07f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct 02 08:26:18 compute-0 kernel: tap2652a07f-20: entered promiscuous mode
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.787 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2652a07f-20, col_values=(('external_ids', {'iface-id': 'c6bfc1fe-b805-4828-bbbe-2ced2b46c90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:18 compute-0 ovn_controller[152344]: 2025-10-02T08:26:18Z|00275|binding|INFO|Releasing lport c6bfc1fe-b805-4828-bbbe-2ced2b46c90a from this chassis (sb_readonly=0)
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.823 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2652a07f-2d55-4460-ab66-db7b9bf18992.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2652a07f-2d55-4460-ab66-db7b9bf18992.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:26:18 compute-0 nova_compute[260603]: 2025-10-02 08:26:18.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.824 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[897b49e6-9644-4fcc-9dac-d4db081a37b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.825 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-2652a07f-2d55-4460-ab66-db7b9bf18992
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/2652a07f-2d55-4460-ab66-db7b9bf18992.pid.haproxy
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 2652a07f-2d55-4460-ab66-db7b9bf18992
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:26:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:18.826 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'env', 'PROCESS_TAG=haproxy-2652a07f-2d55-4460-ab66-db7b9bf18992', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2652a07f-2d55-4460-ab66-db7b9bf18992.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:26:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct 02 08:26:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1297206575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:19 compute-0 podman[304710]: 2025-10-02 08:26:19.207852669 +0000 UTC m=+0.055508281 container create 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:26:19 compute-0 systemd[1]: Started libpod-conmon-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c.scope.
Oct 02 08:26:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:19 compute-0 podman[304710]: 2025-10-02 08:26:19.176202858 +0000 UTC m=+0.023858490 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75440ca422d118eb1b1497a7f95015ff02c6e17a2b7309fec1ecfd9cabce63eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:19 compute-0 podman[304710]: 2025-10-02 08:26:19.29319872 +0000 UTC m=+0.140854382 container init 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:26:19 compute-0 podman[304710]: 2025-10-02 08:26:19.299917317 +0000 UTC m=+0.147572939 container start 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:26:19 compute-0 podman[304723]: 2025-10-02 08:26:19.30343957 +0000 UTC m=+0.063913541 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:26:19 compute-0 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [NOTICE]   (304749) : New worker (304751) forked
Oct 02 08:26:19 compute-0 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [NOTICE]   (304749) : Loading success.
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.400 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393579.3996978, 6de0ab38-2086-43ab-a32f-827aebf2432d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.400 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] VM Started (Lifecycle Event)
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.420 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.425 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393579.3998556, 6de0ab38-2086-43ab-a32f-827aebf2432d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.426 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] VM Paused (Lifecycle Event)
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.444 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.447 2 DEBUG nova.compute.manager [req-556a4463-d851-409d-8975-1300bec45c33 req-e2e455db-37fa-41af-bde5-c4adca02b587 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-changed-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.447 2 DEBUG nova.compute.manager [req-556a4463-d851-409d-8975-1300bec45c33 req-e2e455db-37fa-41af-bde5-c4adca02b587 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing instance network info cache due to event network-changed-0d888b1c-d237-4db9-9ca5-4796f8c1349d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.448 2 DEBUG oslo_concurrency.lockutils [req-556a4463-d851-409d-8975-1300bec45c33 req-e2e455db-37fa-41af-bde5-c4adca02b587 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.448 2 DEBUG oslo_concurrency.lockutils [req-556a4463-d851-409d-8975-1300bec45c33 req-e2e455db-37fa-41af-bde5-c4adca02b587 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.448 2 DEBUG nova.network.neutron [req-556a4463-d851-409d-8975-1300bec45c33 req-e2e455db-37fa-41af-bde5-c4adca02b587 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing network info cache for port 0d888b1c-d237-4db9-9ca5-4796f8c1349d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.452 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.468 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.528 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.563 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.563 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.564 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.564 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.564 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.587 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.588 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:26:19 compute-0 nova_compute[260603]: 2025-10-02 08:26:19.588 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:26:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137320427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.089 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.178 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.179 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.186 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.187 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:26:20 compute-0 ceph-mon[74477]: pgmap v1357: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct 02 08:26:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/137320427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.439 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.441 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4074MB free_disk=59.94660186767578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.441 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.441 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.508 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5d595e00-2287-4a6f-b347-bc277006a626 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.509 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 6de0ab38-2086-43ab-a32f-827aebf2432d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.509 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.509 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.563 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.733 2 DEBUG nova.compute.manager [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.735 2 DEBUG oslo_concurrency.lockutils [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.735 2 DEBUG oslo_concurrency.lockutils [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.735 2 DEBUG oslo_concurrency.lockutils [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.736 2 DEBUG nova.compute.manager [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Processing event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.736 2 DEBUG nova.compute.manager [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.736 2 DEBUG oslo_concurrency.lockutils [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.736 2 DEBUG oslo_concurrency.lockutils [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.737 2 DEBUG oslo_concurrency.lockutils [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.737 2 DEBUG nova.compute.manager [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.737 2 WARNING nova.compute.manager [req-b5ef0d29-3b97-4baf-8736-1c28dba7f7fc req-184af97b-fede-428c-b8fa-0c971d1dedc4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb for instance with vm_state building and task_state spawning.
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.738 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.743 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.747 2 INFO nova.virt.libvirt.driver [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Instance spawned successfully.
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.747 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.750 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393580.7500327, 6de0ab38-2086-43ab-a32f-827aebf2432d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.750 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] VM Resumed (Lifecycle Event)
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.771 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.777 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.777 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.778 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.778 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.779 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.779 2 DEBUG nova.virt.libvirt.driver [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.784 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.826 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.850 2 INFO nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Took 8.01 seconds to spawn the instance on the hypervisor.
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.851 2 DEBUG nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.927 2 INFO nova.compute.manager [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Took 8.98 seconds to build instance.
Oct 02 08:26:20 compute-0 nova_compute[260603]: 2025-10-02 08:26:20.950 2 DEBUG oslo_concurrency.lockutils [None req-49378a55-e6c9-4714-9ceb-c0dd369697d2 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:26:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657053617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:21 compute-0 nova_compute[260603]: 2025-10-02 08:26:21.091 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:21 compute-0 nova_compute[260603]: 2025-10-02 08:26:21.098 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:26:21 compute-0 nova_compute[260603]: 2025-10-02 08:26:21.104 2 DEBUG nova.network.neutron [req-556a4463-d851-409d-8975-1300bec45c33 req-e2e455db-37fa-41af-bde5-c4adca02b587 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updated VIF entry in instance network info cache for port 0d888b1c-d237-4db9-9ca5-4796f8c1349d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:26:21 compute-0 nova_compute[260603]: 2025-10-02 08:26:21.105 2 DEBUG nova.network.neutron [req-556a4463-d851-409d-8975-1300bec45c33 req-e2e455db-37fa-41af-bde5-c4adca02b587 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:21 compute-0 nova_compute[260603]: 2025-10-02 08:26:21.120 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:26:21 compute-0 nova_compute[260603]: 2025-10-02 08:26:21.129 2 DEBUG oslo_concurrency.lockutils [req-556a4463-d851-409d-8975-1300bec45c33 req-e2e455db-37fa-41af-bde5-c4adca02b587 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:21 compute-0 nova_compute[260603]: 2025-10-02 08:26:21.149 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:26:21 compute-0 nova_compute[260603]: 2025-10-02 08:26:21.149 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2657053617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:26:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3805349113' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:26:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:26:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3805349113' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:26:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:22 compute-0 nova_compute[260603]: 2025-10-02 08:26:22.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:22 compute-0 ceph-mon[74477]: pgmap v1358: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 02 08:26:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3805349113' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:26:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3805349113' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:26:22 compute-0 nova_compute[260603]: 2025-10-02 08:26:22.267 2 DEBUG oslo_concurrency.lockutils [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "interface-6de0ab38-2086-43ab-a32f-827aebf2432d-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:22 compute-0 nova_compute[260603]: 2025-10-02 08:26:22.268 2 DEBUG oslo_concurrency.lockutils [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "interface-6de0ab38-2086-43ab-a32f-827aebf2432d-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:22 compute-0 nova_compute[260603]: 2025-10-02 08:26:22.269 2 DEBUG nova.objects.instance [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lazy-loading 'flavor' on Instance uuid 6de0ab38-2086-43ab-a32f-827aebf2432d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:22 compute-0 nova_compute[260603]: 2025-10-02 08:26:22.292 2 DEBUG nova.objects.instance [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lazy-loading 'pci_requests' on Instance uuid 6de0ab38-2086-43ab-a32f-827aebf2432d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:22 compute-0 nova_compute[260603]: 2025-10-02 08:26:22.306 2 DEBUG nova.network.neutron [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:26:22 compute-0 nova_compute[260603]: 2025-10-02 08:26:22.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Oct 02 08:26:23 compute-0 nova_compute[260603]: 2025-10-02 08:26:23.496 2 DEBUG nova.policy [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '758b34678d69489d8841d33743bd238a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:26:24 compute-0 nova_compute[260603]: 2025-10-02 08:26:24.144 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:24 compute-0 nova_compute[260603]: 2025-10-02 08:26:24.146 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:26:24 compute-0 ovn_controller[152344]: 2025-10-02T08:26:24Z|00276|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:26:24 compute-0 ovn_controller[152344]: 2025-10-02T08:26:24Z|00277|binding|INFO|Releasing lport c6bfc1fe-b805-4828-bbbe-2ced2b46c90a from this chassis (sb_readonly=0)
Oct 02 08:26:24 compute-0 ceph-mon[74477]: pgmap v1359: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Oct 02 08:26:24 compute-0 nova_compute[260603]: 2025-10-02 08:26:24.287 2 DEBUG nova.network.neutron [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Successfully created port: 92e4e44e-f231-45c1-8284-1059a5c03a05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:26:24 compute-0 nova_compute[260603]: 2025-10-02 08:26:24.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 194 op/s
Oct 02 08:26:25 compute-0 nova_compute[260603]: 2025-10-02 08:26:25.150 2 DEBUG nova.network.neutron [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Successfully updated port: 92e4e44e-f231-45c1-8284-1059a5c03a05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:26:25 compute-0 nova_compute[260603]: 2025-10-02 08:26:25.168 2 DEBUG oslo_concurrency.lockutils [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:25 compute-0 nova_compute[260603]: 2025-10-02 08:26:25.171 2 DEBUG oslo_concurrency.lockutils [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquired lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:25 compute-0 nova_compute[260603]: 2025-10-02 08:26:25.173 2 DEBUG nova.network.neutron [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:26:25 compute-0 nova_compute[260603]: 2025-10-02 08:26:25.410 2 DEBUG nova.compute.manager [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-changed-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:25 compute-0 nova_compute[260603]: 2025-10-02 08:26:25.411 2 DEBUG nova.compute.manager [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Refreshing instance network info cache due to event network-changed-92e4e44e-f231-45c1-8284-1059a5c03a05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:25 compute-0 nova_compute[260603]: 2025-10-02 08:26:25.411 2 DEBUG oslo_concurrency.lockutils [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:25 compute-0 nova_compute[260603]: 2025-10-02 08:26:25.516 2 WARNING nova.network.neutron [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] 2652a07f-2d55-4460-ab66-db7b9bf18992 already exists in list: networks containing: ['2652a07f-2d55-4460-ab66-db7b9bf18992']. ignoring it
Oct 02 08:26:26 compute-0 ceph-mon[74477]: pgmap v1360: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 194 op/s
Oct 02 08:26:26 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 02 08:26:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.7 MiB/s wr, 184 op/s
Oct 02 08:26:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:27 compute-0 nova_compute[260603]: 2025-10-02 08:26:27.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:27 compute-0 nova_compute[260603]: 2025-10-02 08:26:27.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:26:27
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.log']
Oct 02 08:26:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:26:28 compute-0 rsyslogd[1004]: imjournal: 3093 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:26:28 compute-0 ceph-mon[74477]: pgmap v1361: 305 pgs: 305 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.7 MiB/s wr, 184 op/s
Oct 02 08:26:28 compute-0 ovn_controller[152344]: 2025-10-02T08:26:28Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:e3:68 10.100.0.10
Oct 02 08:26:28 compute-0 ovn_controller[152344]: 2025-10-02T08:26:28Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:e3:68 10.100.0.10
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.380 2 DEBUG nova.network.neutron [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.412 2 DEBUG oslo_concurrency.lockutils [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Releasing lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.414 2 DEBUG oslo_concurrency.lockutils [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.414 2 DEBUG nova.network.neutron [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Refreshing network info cache for port 92e4e44e-f231-45c1-8284-1059a5c03a05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.419 2 DEBUG nova.virt.libvirt.vif [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:20Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.424 2 DEBUG nova.network.os_vif_util [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.427 2 DEBUG nova.network.os_vif_util [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.428 2 DEBUG os_vif [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e4e44e-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92e4e44e-f2, col_values=(('external_ids', {'iface-id': '92e4e44e-f231-45c1-8284-1059a5c03a05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:1b:56', 'vm-uuid': '6de0ab38-2086-43ab-a32f-827aebf2432d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:26:28 compute-0 NetworkManager[45129]: <info>  [1759393588.4823] manager: (tap92e4e44e-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.492 2 INFO os_vif [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2')
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.492 2 DEBUG nova.virt.libvirt.vif [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:20Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.493 2 DEBUG nova.network.os_vif_util [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.493 2 DEBUG nova.network.os_vif_util [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.495 2 DEBUG nova.virt.libvirt.guest [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:fd:1b:56"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <target dev="tap92e4e44e-f2"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]: </interface>
Oct 02 08:26:28 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:26:28 compute-0 kernel: tap92e4e44e-f2: entered promiscuous mode
Oct 02 08:26:28 compute-0 NetworkManager[45129]: <info>  [1759393588.5078] manager: (tap92e4e44e-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct 02 08:26:28 compute-0 ovn_controller[152344]: 2025-10-02T08:26:28Z|00278|binding|INFO|Claiming lport 92e4e44e-f231-45c1-8284-1059a5c03a05 for this chassis.
Oct 02 08:26:28 compute-0 ovn_controller[152344]: 2025-10-02T08:26:28Z|00279|binding|INFO|92e4e44e-f231-45c1-8284-1059a5c03a05: Claiming fa:16:3e:fd:1b:56 10.100.0.3
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.526 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:1b:56 10.100.0.3'], port_security=['fa:16:3e:fd:1b:56 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6de0ab38-2086-43ab-a32f-827aebf2432d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2652a07f-2d55-4460-ab66-db7b9bf18992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c3e938-ad8f-46df-8997-cca3dab53ce8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608c728a-2464-4481-93df-0a324345398f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=92e4e44e-f231-45c1-8284-1059a5c03a05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.529 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 92e4e44e-f231-45c1-8284-1059a5c03a05 in datapath 2652a07f-2d55-4460-ab66-db7b9bf18992 bound to our chassis
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.531 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2652a07f-2d55-4460-ab66-db7b9bf18992
Oct 02 08:26:28 compute-0 ovn_controller[152344]: 2025-10-02T08:26:28Z|00280|binding|INFO|Setting lport 92e4e44e-f231-45c1-8284-1059a5c03a05 ovn-installed in OVS
Oct 02 08:26:28 compute-0 ovn_controller[152344]: 2025-10-02T08:26:28Z|00281|binding|INFO|Setting lport 92e4e44e-f231-45c1-8284-1059a5c03a05 up in Southbound
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.555 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[739a2a78-ac65-4c24-a6cb-ab04fdfa163d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:28 compute-0 systemd-udevd[304813]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:26:28 compute-0 NetworkManager[45129]: <info>  [1759393588.5918] device (tap92e4e44e-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:26:28 compute-0 NetworkManager[45129]: <info>  [1759393588.5986] device (tap92e4e44e-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.605 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2306f12b-be3c-4ca9-8c3d-c6150fa0b3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.612 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0db832-e15a-4a42-baa4-2849b8472217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.619 2 DEBUG nova.virt.libvirt.driver [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.620 2 DEBUG nova.virt.libvirt.driver [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.621 2 DEBUG nova.virt.libvirt.driver [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No VIF found with MAC fa:16:3e:0f:b1:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.621 2 DEBUG nova.virt.libvirt.driver [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No VIF found with MAC fa:16:3e:fd:1b:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.652 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbfe5e6-291b-43ac-a753-ae8719ac5c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.655 2 DEBUG nova.virt.libvirt.guest [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesV270Test-server-489114303</nova:name>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:26:28</nova:creationTime>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:user uuid="758b34678d69489d8841d33743bd238a">tempest-AttachInterfacesV270Test-655113368-project-member</nova:user>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:project uuid="e6fa690d01af4f20ba341e59a2be26bb">tempest-AttachInterfacesV270Test-655113368</nova:project>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:port uuid="47449477-dd00-4329-9053-67f80b8caafb">
Oct 02 08:26:28 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     <nova:port uuid="92e4e44e-f231-45c1-8284-1059a5c03a05">
Oct 02 08:26:28 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:26:28 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:26:28 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:26:28 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:26:28 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.676 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[123a317f-cbe3-4b76-8e66-89bb0296f0e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2652a07f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:48:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445718, 'reachable_time': 31419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304820, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.685 2 DEBUG oslo_concurrency.lockutils [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "interface-6de0ab38-2086-43ab-a32f-827aebf2432d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.702 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[034e6017-7996-4c23-8bd1-d8d74de8792d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2652a07f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445732, 'tstamp': 445732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304821, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2652a07f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445736, 'tstamp': 445736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304821, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.704 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2652a07f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.708 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2652a07f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.708 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.709 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2652a07f-20, col_values=(('external_ids', {'iface-id': 'c6bfc1fe-b805-4828-bbbe-2ced2b46c90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.796 2 DEBUG nova.compute.manager [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.797 2 DEBUG oslo_concurrency.lockutils [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.797 2 DEBUG oslo_concurrency.lockutils [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.797 2 DEBUG oslo_concurrency.lockutils [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.797 2 DEBUG nova.compute.manager [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:28 compute-0 nova_compute[260603]: 2025-10-02 08:26:28.798 2 WARNING nova.compute.manager [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 for instance with vm_state active and task_state None.
Oct 02 08:26:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 158 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.071 2 DEBUG nova.network.neutron [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updated VIF entry in instance network info cache for port 92e4e44e-f231-45c1-8284-1059a5c03a05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.073 2 DEBUG nova.network.neutron [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.111 2 DEBUG oslo_concurrency.lockutils [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.119 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.119 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.120 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.121 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.121 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.123 2 INFO nova.compute.manager [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Terminating instance
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.125 2 DEBUG nova.compute.manager [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:26:30 compute-0 kernel: tap47449477-dd (unregistering): left promiscuous mode
Oct 02 08:26:30 compute-0 NetworkManager[45129]: <info>  [1759393590.1707] device (tap47449477-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 ovn_controller[152344]: 2025-10-02T08:26:30Z|00282|binding|INFO|Releasing lport 47449477-dd00-4329-9053-67f80b8caafb from this chassis (sb_readonly=0)
Oct 02 08:26:30 compute-0 ovn_controller[152344]: 2025-10-02T08:26:30Z|00283|binding|INFO|Setting lport 47449477-dd00-4329-9053-67f80b8caafb down in Southbound
Oct 02 08:26:30 compute-0 ovn_controller[152344]: 2025-10-02T08:26:30Z|00284|binding|INFO|Removing iface tap47449477-dd ovn-installed in OVS
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.193 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:b1:c0 10.100.0.5'], port_security=['fa:16:3e:0f:b1:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6de0ab38-2086-43ab-a32f-827aebf2432d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2652a07f-2d55-4460-ab66-db7b9bf18992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c3e938-ad8f-46df-8997-cca3dab53ce8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608c728a-2464-4481-93df-0a324345398f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=47449477-dd00-4329-9053-67f80b8caafb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.194 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 47449477-dd00-4329-9053-67f80b8caafb in datapath 2652a07f-2d55-4460-ab66-db7b9bf18992 unbound from our chassis
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.195 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2652a07f-2d55-4460-ab66-db7b9bf18992
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 kernel: tap92e4e44e-f2 (unregistering): left promiscuous mode
Oct 02 08:26:30 compute-0 NetworkManager[45129]: <info>  [1759393590.2219] device (tap92e4e44e-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.225 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fdabcb-14f9-44ae-b0bd-ce411e1cc413]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 ovn_controller[152344]: 2025-10-02T08:26:30Z|00285|binding|INFO|Releasing lport 92e4e44e-f231-45c1-8284-1059a5c03a05 from this chassis (sb_readonly=0)
Oct 02 08:26:30 compute-0 ovn_controller[152344]: 2025-10-02T08:26:30Z|00286|binding|INFO|Setting lport 92e4e44e-f231-45c1-8284-1059a5c03a05 down in Southbound
Oct 02 08:26:30 compute-0 ovn_controller[152344]: 2025-10-02T08:26:30Z|00287|binding|INFO|Removing iface tap92e4e44e-f2 ovn-installed in OVS
Oct 02 08:26:30 compute-0 ceph-mon[74477]: pgmap v1362: 305 pgs: 305 active+clean; 158 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.245 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:1b:56 10.100.0.3'], port_security=['fa:16:3e:fd:1b:56 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6de0ab38-2086-43ab-a32f-827aebf2432d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2652a07f-2d55-4460-ab66-db7b9bf18992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c3e938-ad8f-46df-8997-cca3dab53ce8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608c728a-2464-4481-93df-0a324345398f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=92e4e44e-f231-45c1-8284-1059a5c03a05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.266 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[746c6932-0d29-47d7-91f4-c3e8cd97a420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.272 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[97c0a760-aa3e-4544-829b-4a440ee7394d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct 02 08:26:30 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 10.333s CPU time.
Oct 02 08:26:30 compute-0 systemd-machined[214636]: Machine qemu-43-instance-00000027 terminated.
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.316 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb104e8f-c84f-4a34-8189-6eabbfc9778b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 NetworkManager[45129]: <info>  [1759393590.3523] manager: (tap47449477-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.351 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b9280b-b1c4-49be-9bd1-e247d95b1375]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2652a07f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:48:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445718, 'reachable_time': 31419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304836, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 NetworkManager[45129]: <info>  [1759393590.3615] manager: (tap92e4e44e-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.384 2 INFO nova.virt.libvirt.driver [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Instance destroyed successfully.
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.385 2 DEBUG nova.objects.instance [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lazy-loading 'resources' on Instance uuid 6de0ab38-2086-43ab-a32f-827aebf2432d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79d5e930-f3a9-49d5-b6b7-32bc51698a9b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2652a07f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445732, 'tstamp': 445732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304843, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2652a07f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445736, 'tstamp': 445736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304843, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.387 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2652a07f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.399 2 DEBUG nova.virt.libvirt.vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:20Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.399 2 DEBUG nova.network.os_vif_util [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.400 2 DEBUG nova.network.os_vif_util [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.401 2 DEBUG os_vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.402 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47449477-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.403 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2652a07f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.404 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.404 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2652a07f-20, col_values=(('external_ids', {'iface-id': 'c6bfc1fe-b805-4828-bbbe-2ced2b46c90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.405 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.407 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 92e4e44e-f231-45c1-8284-1059a5c03a05 in datapath 2652a07f-2d55-4460-ab66-db7b9bf18992 unbound from our chassis
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.410 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2652a07f-2d55-4460-ab66-db7b9bf18992, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.411 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[33ccd363-43bb-4922-8e24-45973f194ccb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.413 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 namespace which is not needed anymore
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.419 2 INFO os_vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd')
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.420 2 DEBUG nova.virt.libvirt.vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:20Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.420 2 DEBUG nova.network.os_vif_util [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.421 2 DEBUG nova.network.os_vif_util [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.421 2 DEBUG os_vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e4e44e-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.429 2 INFO os_vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2')
Oct 02 08:26:30 compute-0 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [NOTICE]   (304749) : haproxy version is 2.8.14-c23fe91
Oct 02 08:26:30 compute-0 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [NOTICE]   (304749) : path to executable is /usr/sbin/haproxy
Oct 02 08:26:30 compute-0 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [WARNING]  (304749) : Exiting Master process...
Oct 02 08:26:30 compute-0 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [WARNING]  (304749) : Exiting Master process...
Oct 02 08:26:30 compute-0 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [ALERT]    (304749) : Current worker (304751) exited with code 143 (Terminated)
Oct 02 08:26:30 compute-0 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [WARNING]  (304749) : All workers exited. Exiting... (0)
Oct 02 08:26:30 compute-0 systemd[1]: libpod-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c.scope: Deactivated successfully.
Oct 02 08:26:30 compute-0 conmon[304726]: conmon 9ccb146bf210acff7f98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c.scope/container/memory.events
Oct 02 08:26:30 compute-0 podman[304892]: 2025-10-02 08:26:30.605141096 +0000 UTC m=+0.055414497 container died 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c-userdata-shm.mount: Deactivated successfully.
Oct 02 08:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-75440ca422d118eb1b1497a7f95015ff02c6e17a2b7309fec1ecfd9cabce63eb-merged.mount: Deactivated successfully.
Oct 02 08:26:30 compute-0 podman[304892]: 2025-10-02 08:26:30.659881591 +0000 UTC m=+0.110154972 container cleanup 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:26:30 compute-0 systemd[1]: libpod-conmon-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c.scope: Deactivated successfully.
Oct 02 08:26:30 compute-0 podman[304924]: 2025-10-02 08:26:30.740587424 +0000 UTC m=+0.050444237 container remove 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.748 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99b4e7b2-e775-4bf8-9f1a-69d1335bcb75]: (4, ('Thu Oct  2 08:26:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 (9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c)\n9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c\nThu Oct  2 08:26:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 (9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c)\n9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29bb3cca-d03a-4dd7-8ffe-988257876c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.753 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2652a07f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 kernel: tap2652a07f-20: left promiscuous mode
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[855d13be-0863-4db7-844a-d2c13d5cabdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.793 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37620cb5-e1f3-42f1-ae17-bef39df8c997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.794 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d979548e-4bd9-4c0d-84f3-8d4abc46b517]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eb865244-4c7d-4920-877b-3bf98142d747]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445709, 'reachable_time': 26346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304939, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d2652a07f\x2d2d55\x2d4460\x2dab66\x2ddb7b9bf18992.mount: Deactivated successfully.
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.816 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:26:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.816 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f52df912-e0fe-4d7d-bb07-87edccb842ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.826 2 INFO nova.virt.libvirt.driver [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Deleting instance files /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d_del
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.827 2 INFO nova.virt.libvirt.driver [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Deletion of /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d_del complete
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.871 2 INFO nova.compute.manager [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.872 2 DEBUG oslo.service.loopingcall [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.872 2 DEBUG nova.compute.manager [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.872 2 DEBUG nova.network.neutron [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:26:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 158 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.900 2 WARNING nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 for instance with vm_state active and task_state deleting.
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.900 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-unplugged-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.900 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.900 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.900 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-unplugged-47449477-dd00-4329-9053-67f80b8caafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-unplugged-47449477-dd00-4329-9053-67f80b8caafb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.902 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.902 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.902 2 WARNING nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb for instance with vm_state active and task_state deleting.
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.902 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-unplugged-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.902 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-unplugged-92e4e44e-f231-45c1-8284-1059a5c03a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-unplugged-92e4e44e-f231-45c1-8284-1059a5c03a05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.904 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.904 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.904 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.904 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:30 compute-0 nova_compute[260603]: 2025-10-02 08:26:30.904 2 WARNING nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 for instance with vm_state active and task_state deleting.
Oct 02 08:26:31 compute-0 nova_compute[260603]: 2025-10-02 08:26:31.023 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393576.0226574, 1bd45455-6745-4310-a5a6-f86dd4dcb4ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:31 compute-0 nova_compute[260603]: 2025-10-02 08:26:31.024 2 INFO nova.compute.manager [-] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] VM Stopped (Lifecycle Event)
Oct 02 08:26:31 compute-0 nova_compute[260603]: 2025-10-02 08:26:31.060 2 DEBUG nova.compute.manager [None req-f6c07aca-55d4-4174-addc-14c344234497 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:31 compute-0 nova_compute[260603]: 2025-10-02 08:26:31.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:32 compute-0 ceph-mon[74477]: pgmap v1363: 305 pgs: 305 active+clean; 158 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.275 2 DEBUG nova.compute.manager [req-26ec0f05-f652-439b-a921-9e219d1a7081 req-f7d31239-ccfd-48a5-97bb-4f8fc4f45c05 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-deleted-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.275 2 INFO nova.compute.manager [req-26ec0f05-f652-439b-a921-9e219d1a7081 req-f7d31239-ccfd-48a5-97bb-4f8fc4f45c05 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Neutron deleted interface 92e4e44e-f231-45c1-8284-1059a5c03a05; detaching it from the instance and deleting it from the info cache
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.276 2 DEBUG nova.network.neutron [req-26ec0f05-f652-439b-a921-9e219d1a7081 req-f7d31239-ccfd-48a5-97bb-4f8fc4f45c05 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.300 2 DEBUG nova.compute.manager [req-26ec0f05-f652-439b-a921-9e219d1a7081 req-f7d31239-ccfd-48a5-97bb-4f8fc4f45c05 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Detach interface failed, port_id=92e4e44e-f231-45c1-8284-1059a5c03a05, reason: Instance 6de0ab38-2086-43ab-a32f-827aebf2432d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.755 2 DEBUG nova.network.neutron [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.771 2 INFO nova.compute.manager [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Took 1.90 seconds to deallocate network for instance.
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.808 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.809 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 151 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.915 2 DEBUG oslo_concurrency.processutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:32 compute-0 nova_compute[260603]: 2025-10-02 08:26:32.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:26:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151088308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:33 compute-0 nova_compute[260603]: 2025-10-02 08:26:33.354 2 DEBUG oslo_concurrency.processutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:33 compute-0 nova_compute[260603]: 2025-10-02 08:26:33.362 2 DEBUG nova.compute.provider_tree [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:26:33 compute-0 nova_compute[260603]: 2025-10-02 08:26:33.388 2 DEBUG nova.scheduler.client.report [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:26:33 compute-0 nova_compute[260603]: 2025-10-02 08:26:33.424 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:33 compute-0 nova_compute[260603]: 2025-10-02 08:26:33.459 2 INFO nova.scheduler.client.report [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Deleted allocations for instance 6de0ab38-2086-43ab-a32f-827aebf2432d
Oct 02 08:26:33 compute-0 nova_compute[260603]: 2025-10-02 08:26:33.539 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:34 compute-0 ceph-mon[74477]: pgmap v1364: 305 pgs: 305 active+clean; 151 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 02 08:26:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3151088308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:34 compute-0 nova_compute[260603]: 2025-10-02 08:26:34.369 2 DEBUG nova.compute.manager [req-21912fc6-3441-464e-9a74-6c10d308d94d req-9503ea3f-ea6f-4905-8f3c-c638a2b9385c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-deleted-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:34.813 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:34.813 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:34.814 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 08:26:35 compute-0 nova_compute[260603]: 2025-10-02 08:26:35.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:36 compute-0 ceph-mon[74477]: pgmap v1365: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 08:26:36 compute-0 nova_compute[260603]: 2025-10-02 08:26:36.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 08:26:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.326 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.328 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.329 2 DEBUG nova.objects.instance [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.354 2 DEBUG nova.objects.instance [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.377 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:26:37 compute-0 ovn_controller[152344]: 2025-10-02T08:26:37Z|00288|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:37 compute-0 nova_compute[260603]: 2025-10-02 08:26:37.821 2 DEBUG nova.policy [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:26:38 compute-0 ceph-mon[74477]: pgmap v1366: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007571745580191443 of space, bias 1.0, pg target 0.2271523674057433 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:26:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 08:26:38 compute-0 nova_compute[260603]: 2025-10-02 08:26:38.912 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully created port: 8e47820a-f777-4d29-8bce-45c6eb3b7b5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:26:38 compute-0 podman[304964]: 2025-10-02 08:26:38.992699863 +0000 UTC m=+0.059262522 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 02 08:26:39 compute-0 podman[304963]: 2025-10-02 08:26:39.025568302 +0000 UTC m=+0.091014305 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.239 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully updated port: 8e47820a-f777-4d29-8bce-45c6eb3b7b5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.287 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.288 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:40 compute-0 ceph-mon[74477]: pgmap v1367: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.289 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.530 2 WARNING nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.804 2 DEBUG nova.compute.manager [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-changed-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.805 2 DEBUG nova.compute.manager [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing instance network info cache due to event network-changed-8e47820a-f777-4d29-8bce-45c6eb3b7b5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:40 compute-0 nova_compute[260603]: 2025-10-02 08:26:40.806 2 DEBUG oslo_concurrency.lockutils [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 105 KiB/s wr, 44 op/s
Oct 02 08:26:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:26:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6322 writes, 28K keys, 6322 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6322 writes, 6322 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1677 writes, 7489 keys, 1677 commit groups, 1.0 writes per commit group, ingest: 9.95 MB, 0.02 MB/s
                                           Interval WAL: 1677 writes, 1677 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    146.3      0.23              0.13        16    0.014       0      0       0.0       0.0
                                             L6      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    194.3    158.1      0.70              0.39        15    0.047     70K   8382       0.0       0.0
                                            Sum      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    146.3    155.2      0.93              0.51        31    0.030     70K   8382       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    206.1    210.7      0.20              0.11         8    0.025     22K   2604       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    194.3    158.1      0.70              0.39        15    0.047     70K   8382       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    149.6      0.23              0.13        15    0.015       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 0.9 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 15.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000171 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(995,15.02 MB,4.93995%) FilterBlock(32,201.36 KB,0.0646842%) IndexBlock(32,378.80 KB,0.121684%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 08:26:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 ceph-mon[74477]: pgmap v1368: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 105 KiB/s wr, 44 op/s
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.595 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.629 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.631 2 DEBUG oslo_concurrency.lockutils [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.632 2 DEBUG nova.network.neutron [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing network info cache for port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.635 2 DEBUG nova.virt.libvirt.vif [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.636 2 DEBUG nova.network.os_vif_util [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.636 2 DEBUG nova.network.os_vif_util [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.638 2 DEBUG os_vif [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e47820a-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e47820a-f7, col_values=(('external_ids', {'iface-id': '8e47820a-f777-4d29-8bce-45c6eb3b7b5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:86:1a', 'vm-uuid': '5d595e00-2287-4a6f-b347-bc277006a626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 NetworkManager[45129]: <info>  [1759393602.6461] manager: (tap8e47820a-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.656 2 INFO os_vif [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7')
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.657 2 DEBUG nova.virt.libvirt.vif [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.657 2 DEBUG nova.network.os_vif_util [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.658 2 DEBUG nova.network.os_vif_util [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.661 2 DEBUG nova.virt.libvirt.guest [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:65:86:1a"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <target dev="tap8e47820a-f7"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]: </interface>
Oct 02 08:26:42 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:26:42 compute-0 kernel: tap8e47820a-f7: entered promiscuous mode
Oct 02 08:26:42 compute-0 NetworkManager[45129]: <info>  [1759393602.6783] manager: (tap8e47820a-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 ovn_controller[152344]: 2025-10-02T08:26:42Z|00289|binding|INFO|Claiming lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c for this chassis.
Oct 02 08:26:42 compute-0 ovn_controller[152344]: 2025-10-02T08:26:42Z|00290|binding|INFO|8e47820a-f777-4d29-8bce-45c6eb3b7b5c: Claiming fa:16:3e:65:86:1a 10.100.0.12
Oct 02 08:26:42 compute-0 systemd-udevd[305013]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.723 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:86:1a 10.100.0.12'], port_security=['fa:16:3e:65:86:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8e47820a-f777-4d29-8bce-45c6eb3b7b5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.724 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.726 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:26:42 compute-0 NetworkManager[45129]: <info>  [1759393602.7340] device (tap8e47820a-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:26:42 compute-0 NetworkManager[45129]: <info>  [1759393602.7353] device (tap8e47820a-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79f941bc-731d-4f5b-a566-e01649724424]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:42 compute-0 ovn_controller[152344]: 2025-10-02T08:26:42Z|00291|binding|INFO|Setting lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c ovn-installed in OVS
Oct 02 08:26:42 compute-0 ovn_controller[152344]: 2025-10-02T08:26:42Z|00292|binding|INFO|Setting lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c up in Southbound
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.776 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4aefece-4846-4873-b6ac-c410c05a676a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.780 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5004a8-0ead-414e-9e57-4724c2db15d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.800 2 DEBUG nova.virt.libvirt.driver [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.801 2 DEBUG nova.virt.libvirt.driver [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.801 2 DEBUG nova.virt.libvirt.driver [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:cf:e3:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.802 2 DEBUG nova.virt.libvirt.driver [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:65:86:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.821 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2deae077-1992-487b-9397-7367b49f46fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.830 2 DEBUG nova.virt.libvirt.guest [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:26:42</nova:creationTime>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:26:42 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct 02 08:26:42 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:26:42 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:26:42 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:26:42 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:26:42 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.846 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3fcc46-fbac-467d-a7c4-bccb74a4308f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305022, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.862 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b48f9a-5fc5-4a84-8469-eb951f045853]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305023, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305023, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.863 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.865 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 nova_compute[260603]: 2025-10-02 08:26:42.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.869 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.870 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.871 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.871 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 110 KiB/s wr, 44 op/s
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.187 2 DEBUG nova.compute.manager [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.187 2 DEBUG oslo_concurrency.lockutils [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.188 2 DEBUG oslo_concurrency.lockutils [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.188 2 DEBUG oslo_concurrency.lockutils [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.188 2 DEBUG nova.compute.manager [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.188 2 WARNING nova.compute.manager [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c for instance with vm_state active and task_state None.
Oct 02 08:26:44 compute-0 ceph-mon[74477]: pgmap v1369: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 110 KiB/s wr, 44 op/s
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.506 2 DEBUG nova.network.neutron [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updated VIF entry in instance network info cache for port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.506 2 DEBUG nova.network.neutron [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.547 2 DEBUG oslo_concurrency.lockutils [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.794 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.794 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:44 compute-0 ovn_controller[152344]: 2025-10-02T08:26:44Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:86:1a 10.100.0.12
Oct 02 08:26:44 compute-0 nova_compute[260603]: 2025-10-02 08:26:44.795 2 DEBUG nova.objects.instance [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:44 compute-0 ovn_controller[152344]: 2025-10-02T08:26:44Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:86:1a 10.100.0.12
Oct 02 08:26:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 17 KiB/s wr, 8 op/s
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.076 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.077 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.099 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.187 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.187 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.203 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.203 2 INFO nova.compute.claims [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.334 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.381 2 DEBUG nova.objects.instance [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.384 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393590.382308, 6de0ab38-2086-43ab-a32f-827aebf2432d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.384 2 INFO nova.compute.manager [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] VM Stopped (Lifecycle Event)
Oct 02 08:26:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:26:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2871463019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.816 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.823 2 DEBUG nova.compute.provider_tree [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.848 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.856 2 DEBUG nova.compute.manager [None req-762a05dc-ef03-451b-86f0-e163b484d2ea - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.860 2 DEBUG nova.scheduler.client.report [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.888 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.889 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.964 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:26:45 compute-0 nova_compute[260603]: 2025-10-02 08:26:45.965 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:26:46 compute-0 podman[305046]: 2025-10-02 08:26:46.004549482 +0000 UTC m=+0.068639024 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.014 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.060 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.194 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.195 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.196 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Creating image(s)
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.215 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.236 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.254 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.257 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.291 2 DEBUG nova.policy [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.294 2 DEBUG nova.policy [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e785f94f63c44d0f842750666ed49360', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de07bf9b5bef4254bdcb4d7b856304f3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:26:46 compute-0 ceph-mon[74477]: pgmap v1370: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 17 KiB/s wr, 8 op/s
Oct 02 08:26:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2871463019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.349 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.350 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.351 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.351 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.374 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.378 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f0bfef78-36cf-4c57-9205-ad81a216a221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.629 2 DEBUG nova.compute.manager [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.631 2 DEBUG oslo_concurrency.lockutils [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.631 2 DEBUG oslo_concurrency.lockutils [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.632 2 DEBUG oslo_concurrency.lockutils [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.633 2 DEBUG nova.compute.manager [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.633 2 WARNING nova.compute.manager [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c for instance with vm_state active and task_state None.
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.656 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f0bfef78-36cf-4c57-9205-ad81a216a221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.752 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] resizing rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.884 2 DEBUG nova.objects.instance [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lazy-loading 'migration_context' on Instance uuid f0bfef78-36cf-4c57-9205-ad81a216a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.905 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.906 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Ensure instance console log exists: /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.907 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.907 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:46 compute-0 nova_compute[260603]: 2025-10-02 08:26:46.908 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:47 compute-0 nova_compute[260603]: 2025-10-02 08:26:47.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:47 compute-0 nova_compute[260603]: 2025-10-02 08:26:47.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:47 compute-0 nova_compute[260603]: 2025-10-02 08:26:47.212 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Successfully created port: 686b6e3b-80e4-43c6-a917-3751dddecd76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:26:47 compute-0 nova_compute[260603]: 2025-10-02 08:26:47.578 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully created port: 4f39890f-a968-41d4-9cae-0b6948551923 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:26:47 compute-0 nova_compute[260603]: 2025-10-02 08:26:47.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.117 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Successfully updated port: 686b6e3b-80e4-43c6-a917-3751dddecd76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.135 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.136 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquired lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.136 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:26:48 compute-0 ceph-mon[74477]: pgmap v1371: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.364 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.769 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully updated port: 4f39890f-a968-41d4-9cae-0b6948551923 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.794 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.794 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.794 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.851 2 DEBUG nova.compute.manager [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-changed-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.852 2 DEBUG nova.compute.manager [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Refreshing instance network info cache due to event network-changed-686b6e3b-80e4-43c6-a917-3751dddecd76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:48 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.852 2 DEBUG oslo_concurrency.lockutils [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:48.999 2 WARNING nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.000 2 WARNING nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:26:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:49.097 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:49.100 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.176 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.198 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Releasing lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.199 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance network_info: |[{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.200 2 DEBUG oslo_concurrency.lockutils [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.201 2 DEBUG nova.network.neutron [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Refreshing network info cache for port 686b6e3b-80e4-43c6-a917-3751dddecd76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.207 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start _get_guest_xml network_info=[{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.213 2 WARNING nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.220 2 DEBUG nova.virt.libvirt.host [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.221 2 DEBUG nova.virt.libvirt.host [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.236 2 DEBUG nova.virt.libvirt.host [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.237 2 DEBUG nova.virt.libvirt.host [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.238 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.239 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.240 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.240 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.241 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.241 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.242 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.242 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.243 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.243 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.244 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.244 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.249 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.562 2 DEBUG nova.compute.manager [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-changed-4f39890f-a968-41d4-9cae-0b6948551923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.563 2 DEBUG nova.compute.manager [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing instance network info cache due to event network-changed-4f39890f-a968-41d4-9cae-0b6948551923. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.564 2 DEBUG oslo_concurrency.lockutils [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:26:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:26:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251950945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.740 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.765 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:49 compute-0 nova_compute[260603]: 2025-10-02 08:26:49.770 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:50 compute-0 podman[305292]: 2025-10-02 08:26:50.032439031 +0000 UTC m=+0.093253857 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:26:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:26:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484372325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.202 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.206 2 DEBUG nova.virt.libvirt.vif [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1088051137',display_name='tempest-ServersTestJSON-server-1088051137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1088051137',id=40,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL26GjJ8yDVULcjBGSyQgq1XOXe3L4joW7EO0dcbypSf2PTplyBxh+0WCuN1+fy7bCfJLP+B7xaBUKjJV0y0oM9upBKq48cBxt+Uq1aMn9LSPatVD3E+4qRWUEz85mTqNQ==',key_name='tempest-keypair-680018376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de07bf9b5bef4254bdcb4d7b856304f3',ramdisk_id='',reservation_id='r-bs2htycx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-279922609',owner_user_name='tempest-ServersTestJSON-279922609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:26:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e785f94f63c44d0f842750666ed49360',uuid=f0bfef78-36cf-4c57-9205-ad81a216a221,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.207 2 DEBUG nova.network.os_vif_util [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converting VIF {"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.208 2 DEBUG nova.network.os_vif_util [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.210 2 DEBUG nova.objects.instance [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0bfef78-36cf-4c57-9205-ad81a216a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.231 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <uuid>f0bfef78-36cf-4c57-9205-ad81a216a221</uuid>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <name>instance-00000028</name>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestJSON-server-1088051137</nova:name>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:26:49</nova:creationTime>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <nova:user uuid="e785f94f63c44d0f842750666ed49360">tempest-ServersTestJSON-279922609-project-member</nova:user>
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <nova:project uuid="de07bf9b5bef4254bdcb4d7b856304f3">tempest-ServersTestJSON-279922609</nova:project>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <nova:port uuid="686b6e3b-80e4-43c6-a917-3751dddecd76">
Oct 02 08:26:50 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <system>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <entry name="serial">f0bfef78-36cf-4c57-9205-ad81a216a221</entry>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <entry name="uuid">f0bfef78-36cf-4c57-9205-ad81a216a221</entry>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </system>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <os>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   </os>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <features>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   </features>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f0bfef78-36cf-4c57-9205-ad81a216a221_disk">
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       </source>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config">
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       </source>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:26:50 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:0b:4a:c6"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <target dev="tap686b6e3b-80"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/console.log" append="off"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <video>
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </video>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:26:50 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:26:50 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:26:50 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:26:50 compute-0 nova_compute[260603]: </domain>
Oct 02 08:26:50 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.232 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Preparing to wait for external event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.232 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.233 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.233 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.234 2 DEBUG nova.virt.libvirt.vif [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1088051137',display_name='tempest-ServersTestJSON-server-1088051137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1088051137',id=40,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL26GjJ8yDVULcjBGSyQgq1XOXe3L4joW7EO0dcbypSf2PTplyBxh+0WCuN1+fy7bCfJLP+B7xaBUKjJV0y0oM9upBKq48cBxt+Uq1aMn9LSPatVD3E+4qRWUEz85mTqNQ==',key_name='tempest-keypair-680018376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de07bf9b5bef4254bdcb4d7b856304f3',ramdisk_id='',reservation_id='r-bs2htycx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-279922609',owner_user_name='tempest-ServersTestJSON-279922609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:26:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e785f94f63c44d0f842750666ed49360',uuid=f0bfef78-36cf-4c57-9205-ad81a216a221,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.235 2 DEBUG nova.network.os_vif_util [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converting VIF {"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.236 2 DEBUG nova.network.os_vif_util [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.236 2 DEBUG os_vif [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.244 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap686b6e3b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap686b6e3b-80, col_values=(('external_ids', {'iface-id': '686b6e3b-80e4-43c6-a917-3751dddecd76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:4a:c6', 'vm-uuid': 'f0bfef78-36cf-4c57-9205-ad81a216a221'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:50 compute-0 NetworkManager[45129]: <info>  [1759393610.2486] manager: (tap686b6e3b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.257 2 INFO os_vif [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80')
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.322 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.322 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.322 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] No VIF found with MAC fa:16:3e:0b:4a:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.323 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Using config drive
Oct 02 08:26:50 compute-0 ceph-mon[74477]: pgmap v1372: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:26:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2251950945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/484372325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:26:50 compute-0 nova_compute[260603]: 2025-10-02 08:26:50.355 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:26:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.102 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:52 compute-0 ceph-mon[74477]: pgmap v1373: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.400 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Creating config drive at /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.408 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2m4op_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.565 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2m4op_1" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.601 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.605 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.795 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.796 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Deleting local config drive /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config because it was imported into RBD.
Oct 02 08:26:52 compute-0 kernel: tap686b6e3b-80: entered promiscuous mode
Oct 02 08:26:52 compute-0 NetworkManager[45129]: <info>  [1759393612.8722] manager: (tap686b6e3b-80): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Oct 02 08:26:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:26:52 compute-0 ovn_controller[152344]: 2025-10-02T08:26:52Z|00293|binding|INFO|Claiming lport 686b6e3b-80e4-43c6-a917-3751dddecd76 for this chassis.
Oct 02 08:26:52 compute-0 ovn_controller[152344]: 2025-10-02T08:26:52Z|00294|binding|INFO|686b6e3b-80e4-43c6-a917-3751dddecd76: Claiming fa:16:3e:0b:4a:c6 10.100.0.13
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.916 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:4a:c6 10.100.0.13'], port_security=['fa:16:3e:0b:4a:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0bfef78-36cf-4c57-9205-ad81a216a221', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7afa539b-e5b5-443e-ad15-53058c3f7566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de07bf9b5bef4254bdcb4d7b856304f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1fbf4c4c-2cd7-49c6-ae31-7e9f79f25939', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b61ad9cd-415d-4ce0-9d5a-fc0dd5afdd65, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=686b6e3b-80e4-43c6-a917-3751dddecd76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:52 compute-0 systemd-udevd[305387]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.919 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 686b6e3b-80e4-43c6-a917-3751dddecd76 in datapath 7afa539b-e5b5-443e-ad15-53058c3f7566 bound to our chassis
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.922 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7afa539b-e5b5-443e-ad15-53058c3f7566
Oct 02 08:26:52 compute-0 NetworkManager[45129]: <info>  [1759393612.9374] device (tap686b6e3b-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:26:52 compute-0 NetworkManager[45129]: <info>  [1759393612.9397] device (tap686b6e3b-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.941 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d774cf44-12e9-4389-aaf0-8c36f5312026]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.942 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7afa539b-e1 in ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.945 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7afa539b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.945 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f61705d5-d687-4fec-8cab-cf92de237f3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a2d3d4-e13b-48de-b0a5-120b69538a28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:52 compute-0 systemd-machined[214636]: New machine qemu-44-instance-00000028.
Oct 02 08:26:52 compute-0 ovn_controller[152344]: 2025-10-02T08:26:52Z|00295|binding|INFO|Setting lport 686b6e3b-80e4-43c6-a917-3751dddecd76 ovn-installed in OVS
Oct 02 08:26:52 compute-0 ovn_controller[152344]: 2025-10-02T08:26:52Z|00296|binding|INFO|Setting lport 686b6e3b-80e4-43c6-a917-3751dddecd76 up in Southbound
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:52 compute-0 nova_compute[260603]: 2025-10-02 08:26:52.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:52 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000028.
Oct 02 08:26:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.969 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8d144427-e8d3-4213-b7c6-cc2060931b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.003 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa337b9-6755-4ad5-9817-d9f15a9efc11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.050 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[82e2ae3c-8fa5-4a66-89f9-f6b11893ebd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.056 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[329fdfce-7168-4044-95fe-6256d7a60ec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 NetworkManager[45129]: <info>  [1759393613.0594] manager: (tap7afa539b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.104 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f6992821-705b-4aa3-8cff-95b18baaee96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.108 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3e36bfd0-91d5-41f2-8c83-e1c1fa23e1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 NetworkManager[45129]: <info>  [1759393613.1405] device (tap7afa539b-e0): carrier: link connected
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.154 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3958cc5-1f07-4821-9204-7a4e42922df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[54c32e82-582a-4121-9c9d-000c0d438140]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7afa539b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449174, 'reachable_time': 42494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305422, 'error': None, 'target': 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65c43c65-ec21-4af0-8c05-9c01f4d9a1c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefb:3090'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449174, 'tstamp': 449174}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305423, 'error': None, 'target': 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.214 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed49ff86-213d-4f84-a271-4dfba7ee87e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7afa539b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449174, 'reachable_time': 42494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305424, 'error': None, 'target': 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.249 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbc8745-640f-4174-8a15-1ac74fd59cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3e2ad9-565a-4500-a7d2-cfe82b1d540c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7afa539b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.314 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7afa539b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:53 compute-0 NetworkManager[45129]: <info>  [1759393613.3166] manager: (tap7afa539b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 02 08:26:53 compute-0 nova_compute[260603]: 2025-10-02 08:26:53.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:53 compute-0 kernel: tap7afa539b-e0: entered promiscuous mode
Oct 02 08:26:53 compute-0 nova_compute[260603]: 2025-10-02 08:26:53.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.320 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7afa539b-e0, col_values=(('external_ids', {'iface-id': '2fa8f568-a3b6-4390-a456-29e1aa7b3761'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:53 compute-0 nova_compute[260603]: 2025-10-02 08:26:53.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:53 compute-0 ovn_controller[152344]: 2025-10-02T08:26:53Z|00297|binding|INFO|Releasing lport 2fa8f568-a3b6-4390-a456-29e1aa7b3761 from this chassis (sb_readonly=0)
Oct 02 08:26:53 compute-0 nova_compute[260603]: 2025-10-02 08:26:53.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:53 compute-0 nova_compute[260603]: 2025-10-02 08:26:53.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.357 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7afa539b-e5b5-443e-ad15-53058c3f7566.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7afa539b-e5b5-443e-ad15-53058c3f7566.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c53341d8-8f96-4984-960d-f44c5b60c34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.359 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-7afa539b-e5b5-443e-ad15-53058c3f7566
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/7afa539b-e5b5-443e-ad15-53058c3f7566.pid.haproxy
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 7afa539b-e5b5-443e-ad15-53058c3f7566
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:26:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.360 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'env', 'PROCESS_TAG=haproxy-7afa539b-e5b5-443e-ad15-53058c3f7566', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7afa539b-e5b5-443e-ad15-53058c3f7566.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:26:53 compute-0 podman[305499]: 2025-10-02 08:26:53.82551856 +0000 UTC m=+0.072923822 container create 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 08:26:53 compute-0 systemd[1]: Started libpod-conmon-06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733.scope.
Oct 02 08:26:53 compute-0 podman[305499]: 2025-10-02 08:26:53.78920452 +0000 UTC m=+0.036609872 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:26:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd2358f3274ee85f1cdb0a8441439d8ff6fe7b64e90052b15f897c73d3c0f4d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:26:53 compute-0 podman[305499]: 2025-10-02 08:26:53.929247785 +0000 UTC m=+0.176653057 container init 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:26:53 compute-0 podman[305499]: 2025-10-02 08:26:53.939208767 +0000 UTC m=+0.186614029 container start 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:26:53 compute-0 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [NOTICE]   (305518) : New worker (305520) forked
Oct 02 08:26:53 compute-0 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [NOTICE]   (305518) : Loading success.
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.009 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393614.0086, f0bfef78-36cf-4c57-9205-ad81a216a221 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.010 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] VM Started (Lifecycle Event)
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.035 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.040 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393614.009917, f0bfef78-36cf-4c57-9205-ad81a216a221 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.040 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] VM Paused (Lifecycle Event)
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.060 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.063 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.091 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.179 2 DEBUG nova.network.neutron [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updated VIF entry in instance network info cache for port 686b6e3b-80e4-43c6-a917-3751dddecd76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.180 2 DEBUG nova.network.neutron [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:54 compute-0 nova_compute[260603]: 2025-10-02 08:26:54.197 2 DEBUG oslo_concurrency.lockutils [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:54 compute-0 ceph-mon[74477]: pgmap v1374: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:26:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:26:55 compute-0 nova_compute[260603]: 2025-10-02 08:26:55.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:55 compute-0 nova_compute[260603]: 2025-10-02 08:26:55.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:56 compute-0 ceph-mon[74477]: pgmap v1375: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:26:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:26:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:26:57 compute-0 nova_compute[260603]: 2025-10-02 08:26:57.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:26:58 compute-0 ceph-mon[74477]: pgmap v1376: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.696 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.716 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.718 2 DEBUG oslo_concurrency.lockutils [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.719 2 DEBUG nova.network.neutron [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing network info cache for port 4f39890f-a968-41d4-9cae-0b6948551923 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.724 2 DEBUG nova.virt.libvirt.vif [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.724 2 DEBUG nova.network.os_vif_util [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.726 2 DEBUG nova.network.os_vif_util [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.727 2 DEBUG os_vif [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.729 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f39890f-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f39890f-a9, col_values=(('external_ids', {'iface-id': '4f39890f-a968-41d4-9cae-0b6948551923', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:40:80', 'vm-uuid': '5d595e00-2287-4a6f-b347-bc277006a626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:58 compute-0 NetworkManager[45129]: <info>  [1759393618.7389] manager: (tap4f39890f-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.752 2 INFO os_vif [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9')
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.754 2 DEBUG nova.virt.libvirt.vif [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.755 2 DEBUG nova.network.os_vif_util [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.756 2 DEBUG nova.network.os_vif_util [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.761 2 DEBUG nova.virt.libvirt.guest [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:01:40:80"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <target dev="tap4f39890f-a9"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]: </interface>
Oct 02 08:26:58 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:26:58 compute-0 kernel: tap4f39890f-a9: entered promiscuous mode
Oct 02 08:26:58 compute-0 NetworkManager[45129]: <info>  [1759393618.7798] manager: (tap4f39890f-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Oct 02 08:26:58 compute-0 ovn_controller[152344]: 2025-10-02T08:26:58Z|00298|binding|INFO|Claiming lport 4f39890f-a968-41d4-9cae-0b6948551923 for this chassis.
Oct 02 08:26:58 compute-0 ovn_controller[152344]: 2025-10-02T08:26:58Z|00299|binding|INFO|4f39890f-a968-41d4-9cae-0b6948551923: Claiming fa:16:3e:01:40:80 10.100.0.4
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.809 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:40:80 10.100.0.4'], port_security=['fa:16:3e:01:40:80 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4f39890f-a968-41d4-9cae-0b6948551923) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.811 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4f39890f-a968-41d4-9cae-0b6948551923 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.814 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:26:58 compute-0 ovn_controller[152344]: 2025-10-02T08:26:58Z|00300|binding|INFO|Setting lport 4f39890f-a968-41d4-9cae-0b6948551923 ovn-installed in OVS
Oct 02 08:26:58 compute-0 ovn_controller[152344]: 2025-10-02T08:26:58Z|00301|binding|INFO|Setting lport 4f39890f-a968-41d4-9cae-0b6948551923 up in Southbound
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:58 compute-0 systemd-udevd[305536]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.831 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f741c83b-6ead-40bf-8496-4c4b93f4e727]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:58 compute-0 NetworkManager[45129]: <info>  [1759393618.8485] device (tap4f39890f-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:26:58 compute-0 NetworkManager[45129]: <info>  [1759393618.8498] device (tap4f39890f-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.881 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d980aeba-f870-4547-bec4-95dfad6382d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.884 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.884 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.884 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:cf:e3:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.885 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:65:86:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.885 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:01:40:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.885 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d81bb1a-8b69-4760-bd03-6dd91f2585a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.917 2 DEBUG nova.virt.libvirt.guest [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:26:58</nova:creationTime>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:26:58 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct 02 08:26:58 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct 02 08:26:58 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:26:58 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:26:58 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:26:58 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:26:58 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.932 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c395b64e-b9b9-413f-a044-1c2690be40ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:58 compute-0 nova_compute[260603]: 2025-10-02 08:26:58.941 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 14.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9833755-4146-4ff4-a35a-8e28bfbe1001]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305543, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.984 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b64138fc-3c40-41f6-b73f-9848c64dac2b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305544, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305544, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:26:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.986 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:26:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:59.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:59.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:59.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:26:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:26:59.027 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.471 2 DEBUG nova.compute.manager [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.472 2 DEBUG oslo_concurrency.lockutils [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.472 2 DEBUG oslo_concurrency.lockutils [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.472 2 DEBUG oslo_concurrency.lockutils [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.473 2 DEBUG nova.compute.manager [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Processing event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.474 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.479 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393619.478812, f0bfef78-36cf-4c57-9205-ad81a216a221 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.479 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] VM Resumed (Lifecycle Event)
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.484 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.490 2 INFO nova.virt.libvirt.driver [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance spawned successfully.
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.490 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.521 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.532 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.541 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.541 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.543 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.543 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.544 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.545 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.599 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.649 2 INFO nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Took 13.45 seconds to spawn the instance on the hypervisor.
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.650 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.743 2 INFO nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Took 14.58 seconds to build instance.
Oct 02 08:26:59 compute-0 nova_compute[260603]: 2025-10-02 08:26:59.766 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:00 compute-0 ovn_controller[152344]: 2025-10-02T08:27:00Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:40:80 10.100.0.4
Oct 02 08:27:00 compute-0 ovn_controller[152344]: 2025-10-02T08:27:00Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:40:80 10.100.0.4
Oct 02 08:27:00 compute-0 ceph-mon[74477]: pgmap v1377: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 08:27:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 18 KiB/s wr, 10 op/s
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.781 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.781 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.782 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.782 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.783 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] No waiting events found dispatching network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.783 2 WARNING nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received unexpected event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 for instance with vm_state active and task_state None.
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.784 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.784 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.785 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.785 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.785 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.786 2 WARNING nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 for instance with vm_state active and task_state None.
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.786 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.787 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.787 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.788 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.788 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.788 2 WARNING nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 for instance with vm_state active and task_state None.
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.980 2 DEBUG nova.network.neutron [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updated VIF entry in instance network info cache for port 4f39890f-a968-41d4-9cae-0b6948551923. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:01 compute-0 nova_compute[260603]: 2025-10-02 08:27:01.981 2 DEBUG nova.network.neutron [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:02 compute-0 nova_compute[260603]: 2025-10-02 08:27:02.011 2 DEBUG oslo_concurrency.lockutils [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:02 compute-0 nova_compute[260603]: 2025-10-02 08:27:02.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:02 compute-0 ceph-mon[74477]: pgmap v1378: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 18 KiB/s wr, 10 op/s
Oct 02 08:27:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 21 KiB/s wr, 63 op/s
Oct 02 08:27:02 compute-0 nova_compute[260603]: 2025-10-02 08:27:02.986 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:02 compute-0 nova_compute[260603]: 2025-10-02 08:27:02.987 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:02 compute-0 nova_compute[260603]: 2025-10-02 08:27:02.988 2 DEBUG nova.objects.instance [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.103 2 DEBUG nova.compute.manager [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-changed-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.104 2 DEBUG nova.compute.manager [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Refreshing instance network info cache due to event network-changed-686b6e3b-80e4-43c6-a917-3751dddecd76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.105 2 DEBUG oslo_concurrency.lockutils [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.105 2 DEBUG oslo_concurrency.lockutils [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.106 2 DEBUG nova.network.neutron [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Refreshing network info cache for port 686b6e3b-80e4-43c6-a917-3751dddecd76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:03 compute-0 ovn_controller[152344]: 2025-10-02T08:27:03Z|00302|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:27:03 compute-0 ovn_controller[152344]: 2025-10-02T08:27:03Z|00303|binding|INFO|Releasing lport 2fa8f568-a3b6-4390-a456-29e1aa7b3761 from this chassis (sb_readonly=0)
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.561 2 DEBUG nova.objects.instance [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.577 2 DEBUG nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:03 compute-0 nova_compute[260603]: 2025-10-02 08:27:03.987 2 DEBUG nova.policy [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:27:04 compute-0 ceph-mon[74477]: pgmap v1379: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 21 KiB/s wr, 63 op/s
Oct 02 08:27:04 compute-0 nova_compute[260603]: 2025-10-02 08:27:04.455 2 DEBUG nova.network.neutron [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updated VIF entry in instance network info cache for port 686b6e3b-80e4-43c6-a917-3751dddecd76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:04 compute-0 nova_compute[260603]: 2025-10-02 08:27:04.456 2 DEBUG nova.network.neutron [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:04 compute-0 nova_compute[260603]: 2025-10-02 08:27:04.474 2 DEBUG oslo_concurrency.lockutils [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:04 compute-0 nova_compute[260603]: 2025-10-02 08:27:04.769 2 DEBUG nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully updated port: 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:27:04 compute-0 nova_compute[260603]: 2025-10-02 08:27:04.788 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:04 compute-0 nova_compute[260603]: 2025-10-02 08:27:04.789 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:04 compute-0 nova_compute[260603]: 2025-10-02 08:27:04.789 2 DEBUG nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:27:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 02 08:27:05 compute-0 nova_compute[260603]: 2025-10-02 08:27:05.376 2 WARNING nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:27:05 compute-0 nova_compute[260603]: 2025-10-02 08:27:05.377 2 WARNING nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:27:05 compute-0 nova_compute[260603]: 2025-10-02 08:27:05.377 2 WARNING nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:27:05 compute-0 nova_compute[260603]: 2025-10-02 08:27:05.774 2 DEBUG nova.compute.manager [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-changed-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:05 compute-0 nova_compute[260603]: 2025-10-02 08:27:05.775 2 DEBUG nova.compute.manager [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing instance network info cache due to event network-changed-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:05 compute-0 nova_compute[260603]: 2025-10-02 08:27:05.776 2 DEBUG oslo_concurrency.lockutils [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:06 compute-0 ceph-mon[74477]: pgmap v1380: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 02 08:27:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.7 KiB/s wr, 73 op/s
Oct 02 08:27:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:07 compute-0 nova_compute[260603]: 2025-10-02 08:27:07.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:08 compute-0 ceph-mon[74477]: pgmap v1381: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.7 KiB/s wr, 73 op/s
Oct 02 08:27:08 compute-0 nova_compute[260603]: 2025-10-02 08:27:08.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 73 op/s
Oct 02 08:27:10 compute-0 podman[305546]: 2025-10-02 08:27:10.054325113 +0000 UTC m=+0.107610195 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Oct 02 08:27:10 compute-0 podman[305545]: 2025-10-02 08:27:10.090716073 +0000 UTC m=+0.141747185 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:27:10 compute-0 sudo[305589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:10 compute-0 sudo[305589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:10 compute-0 sudo[305589]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:10 compute-0 sudo[305617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:27:10 compute-0 sudo[305617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:10 compute-0 sudo[305617]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.293 2 DEBUG nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.319 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.320 2 DEBUG oslo_concurrency.lockutils [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.320 2 DEBUG nova.network.neutron [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing network info cache for port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.324 2 DEBUG nova.virt.libvirt.vif [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.324 2 DEBUG nova.network.os_vif_util [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.325 2 DEBUG nova.network.os_vif_util [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.326 2 DEBUG os_vif [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19a09bbc-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19a09bbc-9b, col_values=(('external_ids', {'iface-id': '19a09bbc-9b50-4f99-8dd4-0f7f9ab15851', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:04:1b', 'vm-uuid': '5d595e00-2287-4a6f-b347-bc277006a626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:10 compute-0 NetworkManager[45129]: <info>  [1759393630.3348] manager: (tap19a09bbc-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.342 2 INFO os_vif [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b')
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.343 2 DEBUG nova.virt.libvirt.vif [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.344 2 DEBUG nova.network.os_vif_util [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.345 2 DEBUG nova.network.os_vif_util [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.348 2 DEBUG nova.virt.libvirt.guest [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:17:04:1b"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <target dev="tap19a09bbc-9b"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]: </interface>
Oct 02 08:27:10 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:27:10 compute-0 sudo[305642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:10 compute-0 sudo[305642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:10 compute-0 sudo[305642]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:10 compute-0 NetworkManager[45129]: <info>  [1759393630.3711] manager: (tap19a09bbc-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Oct 02 08:27:10 compute-0 kernel: tap19a09bbc-9b: entered promiscuous mode
Oct 02 08:27:10 compute-0 ovn_controller[152344]: 2025-10-02T08:27:10Z|00304|binding|INFO|Claiming lport 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 for this chassis.
Oct 02 08:27:10 compute-0 ovn_controller[152344]: 2025-10-02T08:27:10Z|00305|binding|INFO|19a09bbc-9b50-4f99-8dd4-0f7f9ab15851: Claiming fa:16:3e:17:04:1b 10.100.0.7
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.382 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:04:1b 10.100.0.7'], port_security=['fa:16:3e:17:04:1b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1146870412', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1146870412', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.383 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.384 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee8d0b9-1ac1-49fb-ad68-cf9bd939471c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:10 compute-0 ovn_controller[152344]: 2025-10-02T08:27:10Z|00306|binding|INFO|Setting lport 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 ovn-installed in OVS
Oct 02 08:27:10 compute-0 ovn_controller[152344]: 2025-10-02T08:27:10Z|00307|binding|INFO|Setting lport 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 up in Southbound
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 ceph-mon[74477]: pgmap v1382: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 73 op/s
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 sudo[305671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:27:10 compute-0 sudo[305671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:10 compute-0 systemd-udevd[305701]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:cf:e3:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:65:86:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:01:40:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:17:04:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.453 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b917c888-b1dc-4080-97e8-6178d9bf7516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:10 compute-0 NetworkManager[45129]: <info>  [1759393630.4574] device (tap19a09bbc-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:27:10 compute-0 NetworkManager[45129]: <info>  [1759393630.4593] device (tap19a09bbc-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.459 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa2f1b5-a76d-4f2f-9bc8-d44043012025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.474 2 DEBUG nova.virt.libvirt.guest [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:27:10</nova:creationTime>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:27:10 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct 02 08:27:10 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct 02 08:27:10 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct 02 08:27:10 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:27:10 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:10 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:27:10 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:27:10 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.485 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0646fd50-bed9-433f-89f6-5b6f3298b54e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.501 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.504 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1770a3-7f81-4545-a8f6-7b1636d0161d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 10, 'rx_bytes': 1084, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 10, 'rx_bytes': 1084, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305706, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd5abb0-636a-482d-a79a-1e7d27341879]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305707, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305707, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.525 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.532 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.533 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.533 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.534 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:10 compute-0 ovn_controller[152344]: 2025-10-02T08:27:10Z|00308|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:27:10 compute-0 ovn_controller[152344]: 2025-10-02T08:27:10Z|00309|binding|INFO|Releasing lport 2fa8f568-a3b6-4390-a456-29e1aa7b3761 from this chassis (sb_readonly=0)
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.674 2 DEBUG nova.compute.manager [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.675 2 DEBUG oslo_concurrency.lockutils [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.676 2 DEBUG oslo_concurrency.lockutils [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.676 2 DEBUG oslo_concurrency.lockutils [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.676 2 DEBUG nova.compute.manager [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:10 compute-0 nova_compute[260603]: 2025-10-02 08:27:10.676 2 WARNING nova.compute.manager [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 for instance with vm_state active and task_state None.
Oct 02 08:27:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 64 op/s
Oct 02 08:27:10 compute-0 sudo[305671]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:27:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:27:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:27:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:27:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:27:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:27:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 68f127d3-9607-46b2-aca0-ee345f8979b4 does not exist
Oct 02 08:27:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1d5de993-b81e-4ae6-bfb1-533e0e0c3448 does not exist
Oct 02 08:27:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5c2b2ef3-2119-48a9-85a6-42d7d785ec57 does not exist
Oct 02 08:27:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:27:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:27:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:27:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:27:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:27:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:27:11 compute-0 sudo[305740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:11 compute-0 sudo[305740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:11 compute-0 sudo[305740]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:11 compute-0 sudo[305765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:27:11 compute-0 sudo[305765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:11 compute-0 sudo[305765]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:11 compute-0 sudo[305790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:11 compute-0 ovn_controller[152344]: 2025-10-02T08:27:11Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:4a:c6 10.100.0.13
Oct 02 08:27:11 compute-0 ovn_controller[152344]: 2025-10-02T08:27:11Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:4a:c6 10.100.0.13
Oct 02 08:27:11 compute-0 sudo[305790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:11 compute-0 sudo[305790]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:11 compute-0 sudo[305815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:27:11 compute-0 sudo[305815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:27:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:27:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:27:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:27:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:27:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:27:11 compute-0 podman[305880]: 2025-10-02 08:27:11.750499504 +0000 UTC m=+0.040083507 container create 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 08:27:11 compute-0 systemd[1]: Started libpod-conmon-2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730.scope.
Oct 02 08:27:11 compute-0 podman[305880]: 2025-10-02 08:27:11.731259016 +0000 UTC m=+0.020842999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:27:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:11 compute-0 podman[305880]: 2025-10-02 08:27:11.84560919 +0000 UTC m=+0.135193163 container init 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:27:11 compute-0 podman[305880]: 2025-10-02 08:27:11.854430604 +0000 UTC m=+0.144014607 container start 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 08:27:11 compute-0 podman[305880]: 2025-10-02 08:27:11.857785478 +0000 UTC m=+0.147369481 container attach 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:27:11 compute-0 laughing_feynman[305898]: 167 167
Oct 02 08:27:11 compute-0 systemd[1]: libpod-2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730.scope: Deactivated successfully.
Oct 02 08:27:11 compute-0 podman[305880]: 2025-10-02 08:27:11.860527303 +0000 UTC m=+0.150111336 container died 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 08:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb2bf3a266e98d39668a965787ea65b0e99dea4053e9641ee0677f4ecc0a21f0-merged.mount: Deactivated successfully.
Oct 02 08:27:11 compute-0 podman[305880]: 2025-10-02 08:27:11.901726614 +0000 UTC m=+0.191310577 container remove 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:27:11 compute-0 systemd[1]: libpod-conmon-2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730.scope: Deactivated successfully.
Oct 02 08:27:12 compute-0 ovn_controller[152344]: 2025-10-02T08:27:12Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:04:1b 10.100.0.7
Oct 02 08:27:12 compute-0 ovn_controller[152344]: 2025-10-02T08:27:12Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:04:1b 10.100.0.7
Oct 02 08:27:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:12 compute-0 podman[305920]: 2025-10-02 08:27:12.162229999 +0000 UTC m=+0.057098235 container create 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 08:27:12 compute-0 systemd[1]: Started libpod-conmon-4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab.scope.
Oct 02 08:27:12 compute-0 podman[305920]: 2025-10-02 08:27:12.143382873 +0000 UTC m=+0.038251099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:27:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:12 compute-0 podman[305920]: 2025-10-02 08:27:12.268233413 +0000 UTC m=+0.163101659 container init 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 podman[305920]: 2025-10-02 08:27:12.282637051 +0000 UTC m=+0.177505257 container start 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:27:12 compute-0 podman[305920]: 2025-10-02 08:27:12.286824161 +0000 UTC m=+0.181692407 container attach 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.383 2 DEBUG nova.network.neutron [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updated VIF entry in instance network info cache for port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.384 2 DEBUG nova.network.neutron [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.403 2 DEBUG oslo_concurrency.lockutils [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:12 compute-0 ceph-mon[74477]: pgmap v1383: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 64 op/s
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.507 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-8e47820a-f777-4d29-8bce-45c6eb3b7b5c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.508 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-8e47820a-f777-4d29-8bce-45c6eb3b7b5c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.527 2 DEBUG nova.objects.instance [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.548 2 DEBUG nova.virt.libvirt.vif [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.548 2 DEBUG nova.network.os_vif_util [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.549 2 DEBUG nova.network.os_vif_util [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.554 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.557 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.560 2 DEBUG nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Attempting to detach device tap8e47820a-f7 from instance 5d595e00-2287-4a6f-b347-bc277006a626 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.561 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:65:86:1a"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <target dev="tap8e47820a-f7"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]: </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.568 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.572 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface>not found in domain: <domain type='kvm' id='42'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <name>instance-00000026</name>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:27:10</nova:creationTime>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:27:12 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <system>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='serial'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='uuid'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </system>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <os>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </os>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <features>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </features>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk' index='2'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config' index='1'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:cf:e3:68'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='tap0d888b1c-d2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:65:86:1a'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='tap8e47820a-f7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='net1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:01:40:80'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='tap4f39890f-a9'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='net2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:17:04:1b'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='tap19a09bbc-9b'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='net3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </target>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </console>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <video>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </video>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c497,c676</label>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c497,c676</imagelabel>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:27:12 compute-0 nova_compute[260603]: </domain>
Oct 02 08:27:12 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.575 2 INFO nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap8e47820a-f7 from instance 5d595e00-2287-4a6f-b347-bc277006a626 from the persistent domain config.
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.576 2 DEBUG nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] (1/8): Attempting to detach device tap8e47820a-f7 with device alias net1 from instance 5d595e00-2287-4a6f-b347-bc277006a626 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.576 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:65:86:1a"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <target dev="tap8e47820a-f7"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]: </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:27:12 compute-0 kernel: tap8e47820a-f7 (unregistering): left promiscuous mode
Oct 02 08:27:12 compute-0 NetworkManager[45129]: <info>  [1759393632.6878] device (tap8e47820a-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 ovn_controller[152344]: 2025-10-02T08:27:12Z|00310|binding|INFO|Releasing lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c from this chassis (sb_readonly=0)
Oct 02 08:27:12 compute-0 ovn_controller[152344]: 2025-10-02T08:27:12Z|00311|binding|INFO|Setting lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c down in Southbound
Oct 02 08:27:12 compute-0 ovn_controller[152344]: 2025-10-02T08:27:12Z|00312|binding|INFO|Removing iface tap8e47820a-f7 ovn-installed in OVS
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.703 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:86:1a 10.100.0.12'], port_security=['fa:16:3e:65:86:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8e47820a-f777-4d29-8bce-45c6eb3b7b5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.705 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.707 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.719 2 DEBUG nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Start waiting for the detach event from libvirt for device tap8e47820a-f7 with device alias net1 for instance 5d595e00-2287-4a6f-b347-bc277006a626 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.720 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759393632.7178273, 5d595e00-2287-4a6f-b347-bc277006a626 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.721 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.723 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9337eb93-8cd4-461b-9c93-04a33242d87b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.726 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface>not found in domain: <domain type='kvm' id='42'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <name>instance-00000026</name>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:27:10</nova:creationTime>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:27:12 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <system>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='serial'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='uuid'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </system>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <os>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </os>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <features>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </features>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk' index='2'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config' index='1'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:cf:e3:68'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='tap0d888b1c-d2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:01:40:80'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='tap4f39890f-a9'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='net2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:17:04:1b'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target dev='tap19a09bbc-9b'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='net3'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       </target>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </console>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <video>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </video>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c497,c676</label>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c497,c676</imagelabel>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:27:12 compute-0 nova_compute[260603]: </domain>
Oct 02 08:27:12 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.727 2 INFO nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap8e47820a-f7 from instance 5d595e00-2287-4a6f-b347-bc277006a626 from the live domain config.
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.728 2 DEBUG nova.virt.libvirt.vif [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.728 2 DEBUG nova.network.os_vif_util [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.729 2 DEBUG nova.network.os_vif_util [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.730 2 DEBUG os_vif [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e47820a-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.745 2 INFO os_vif [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7')
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.746 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:27:12</nova:creationTime>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct 02 08:27:12 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:27:12 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:12 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:27:12 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:27:12 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.753 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e13c0503-c026-44b2-ab2c-07909b64ad27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.764 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[38f7472f-46d3-498d-9e18-4148bf2656dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.787 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[35e3b900-d3ff-4029-a17f-4f3b2ab74f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.808 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0a692b3f-e5f2-4070-94b4-81c03ff624e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305950, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.825 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cbd76e-11e3-4981-a48b-c5ac4f0f63a0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305951, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305951, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.833 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.834 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.862 2 DEBUG nova.compute.manager [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.862 2 DEBUG oslo_concurrency.lockutils [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.863 2 DEBUG oslo_concurrency.lockutils [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.863 2 DEBUG oslo_concurrency.lockutils [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.863 2 DEBUG nova.compute.manager [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:12 compute-0 nova_compute[260603]: 2025-10-02 08:27:12.863 2 WARNING nova.compute.manager [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 for instance with vm_state active and task_state None.
Oct 02 08:27:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 189 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 115 op/s
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.072 2 DEBUG nova.compute.manager [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-unplugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.073 2 DEBUG oslo_concurrency.lockutils [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.074 2 DEBUG oslo_concurrency.lockutils [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.074 2 DEBUG oslo_concurrency.lockutils [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.074 2 DEBUG nova.compute.manager [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-unplugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.075 2 WARNING nova.compute.manager [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-unplugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c for instance with vm_state active and task_state None.
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.398 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.400 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.401 2 DEBUG nova.network.neutron [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:27:13 compute-0 wonderful_sutherland[305936]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:27:13 compute-0 wonderful_sutherland[305936]: --> relative data size: 1.0
Oct 02 08:27:13 compute-0 wonderful_sutherland[305936]: --> All data devices are unavailable
Oct 02 08:27:13 compute-0 systemd[1]: libpod-4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab.scope: Deactivated successfully.
Oct 02 08:27:13 compute-0 systemd[1]: libpod-4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab.scope: Consumed 1.126s CPU time.
Oct 02 08:27:13 compute-0 podman[305920]: 2025-10-02 08:27:13.499803027 +0000 UTC m=+1.394671243 container died 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:13 compute-0 nova_compute[260603]: 2025-10-02 08:27:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640-merged.mount: Deactivated successfully.
Oct 02 08:27:13 compute-0 podman[305920]: 2025-10-02 08:27:13.564871469 +0000 UTC m=+1.459739675 container remove 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:27:13 compute-0 systemd[1]: libpod-conmon-4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab.scope: Deactivated successfully.
Oct 02 08:27:13 compute-0 sudo[305815]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:13 compute-0 sudo[305990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:13 compute-0 sudo[305990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:13 compute-0 sudo[305990]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:13 compute-0 sudo[306015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:27:13 compute-0 sudo[306015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:13 compute-0 sudo[306015]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:13 compute-0 sudo[306040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:13 compute-0 sudo[306040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:13 compute-0 sudo[306040]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:13 compute-0 sudo[306065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:27:13 compute-0 sudo[306065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:14 compute-0 podman[306132]: 2025-10-02 08:27:14.281182509 +0000 UTC m=+0.055748704 container create 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:27:14 compute-0 systemd[1]: Started libpod-conmon-6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3.scope.
Oct 02 08:27:14 compute-0 podman[306132]: 2025-10-02 08:27:14.254779808 +0000 UTC m=+0.029346073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:27:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:14 compute-0 podman[306132]: 2025-10-02 08:27:14.371213936 +0000 UTC m=+0.145780181 container init 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:27:14 compute-0 podman[306132]: 2025-10-02 08:27:14.383623482 +0000 UTC m=+0.158189657 container start 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:27:14 compute-0 podman[306132]: 2025-10-02 08:27:14.387256075 +0000 UTC m=+0.161822300 container attach 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:27:14 compute-0 stupefied_tu[306148]: 167 167
Oct 02 08:27:14 compute-0 systemd[1]: libpod-6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3.scope: Deactivated successfully.
Oct 02 08:27:14 compute-0 podman[306132]: 2025-10-02 08:27:14.390557818 +0000 UTC m=+0.165124023 container died 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 08:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2f550e90c7659208f2d276d5dba70a20a150386bdfb131b24a227fda3fffde-merged.mount: Deactivated successfully.
Oct 02 08:27:14 compute-0 ceph-mon[74477]: pgmap v1384: 305 pgs: 305 active+clean; 189 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 115 op/s
Oct 02 08:27:14 compute-0 podman[306132]: 2025-10-02 08:27:14.441283854 +0000 UTC m=+0.215850059 container remove 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:27:14 compute-0 systemd[1]: libpod-conmon-6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3.scope: Deactivated successfully.
Oct 02 08:27:14 compute-0 podman[306171]: 2025-10-02 08:27:14.679864109 +0000 UTC m=+0.052369549 container create 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:27:14 compute-0 nova_compute[260603]: 2025-10-02 08:27:14.701 2 INFO nova.network.neutron [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 08:27:14 compute-0 systemd[1]: Started libpod-conmon-5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05.scope.
Oct 02 08:27:14 compute-0 podman[306171]: 2025-10-02 08:27:14.653480739 +0000 UTC m=+0.025986219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:27:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:14 compute-0 podman[306171]: 2025-10-02 08:27:14.789482094 +0000 UTC m=+0.161987574 container init 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 02 08:27:14 compute-0 podman[306171]: 2025-10-02 08:27:14.800995043 +0000 UTC m=+0.173500483 container start 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:27:14 compute-0 podman[306171]: 2025-10-02 08:27:14.804356737 +0000 UTC m=+0.176862207 container attach 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:27:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 200 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 684 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.007 2 DEBUG nova.compute.manager [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-deleted-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.007 2 INFO nova.compute.manager [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Neutron deleted interface 8e47820a-f777-4d29-8bce-45c6eb3b7b5c; detaching it from the instance and deleting it from the info cache
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.008 2 DEBUG nova.network.neutron [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.046 2 DEBUG nova.objects.instance [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'system_metadata' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.094 2 DEBUG nova.objects.instance [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.150 2 DEBUG nova.virt.libvirt.vif [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.151 2 DEBUG nova.network.os_vif_util [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.152 2 DEBUG nova.network.os_vif_util [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.158 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.314 162357 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port f2354639-f248-4aa2-abea-ec2e3637feb1 with type ""
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.315 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:04:1b 10.100.0.7'], port_security=['fa:16:3e:17:04:1b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1146870412', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1146870412', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.316 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.318 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:27:15 compute-0 ovn_controller[152344]: 2025-10-02T08:27:15Z|00313|binding|INFO|Removing iface tap19a09bbc-9b ovn-installed in OVS
Oct 02 08:27:15 compute-0 ovn_controller[152344]: 2025-10-02T08:27:15Z|00314|binding|INFO|Removing lport 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 ovn-installed in OVS
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.332 2 DEBUG nova.compute.manager [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.333 2 DEBUG oslo_concurrency.lockutils [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.333 2 DEBUG oslo_concurrency.lockutils [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.334 2 DEBUG oslo_concurrency.lockutils [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.334 2 DEBUG nova.compute.manager [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.335 2 WARNING nova.compute.manager [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c for instance with vm_state active and task_state None.
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.335 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface>not found in domain: <domain type='kvm' id='42'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <name>instance-00000026</name>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:27:12</nova:creationTime>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:27:15 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <system>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='serial'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='uuid'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </system>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <os>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </os>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <features>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </features>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk' index='2'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config' index='1'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:cf:e3:68'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='tap0d888b1c-d2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:01:40:80'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='tap4f39890f-a9'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='net2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:17:04:1b'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='tap19a09bbc-9b'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='net3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </target>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </console>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <video>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </video>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c497,c676</label>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c497,c676</imagelabel>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:27:15 compute-0 nova_compute[260603]: </domain>
Oct 02 08:27:15 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.336 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94dd0103-ba4b-4856-9384-e457be19715f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.351 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface>not found in domain: <domain type='kvm' id='42'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <name>instance-00000026</name>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:27:12</nova:creationTime>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:27:15 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <system>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='serial'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='uuid'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </system>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <os>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </os>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <features>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </features>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk' index='2'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config' index='1'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:cf:e3:68'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='tap0d888b1c-d2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:01:40:80'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='tap4f39890f-a9'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='net2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:17:04:1b'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target dev='tap19a09bbc-9b'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='net3'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       </target>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </console>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </input>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <video>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </video>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c497,c676</label>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c497,c676</imagelabel>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:27:15 compute-0 nova_compute[260603]: </domain>
Oct 02 08:27:15 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.351 2 WARNING nova.virt.libvirt.driver [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Detaching interface fa:16:3e:65:86:1a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap8e47820a-f7' not found.
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.352 2 DEBUG nova.virt.libvirt.vif [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.353 2 DEBUG nova.network.os_vif_util [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.354 2 DEBUG nova.network.os_vif_util [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.354 2 DEBUG os_vif [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e47820a-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.360 2 INFO os_vif [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7')
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.361 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:27:15</nova:creationTime>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct 02 08:27:15 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:27:15 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:27:15 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:27:15 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:27:15 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.388 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e1931045-b415-4d93-86f5-33b58046e9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.391 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[559fb3c4-ad8f-4ee4-9b8c-c213369c07fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.429 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4e10f005-df76-4b35-b3d0-bd0f5a2ebef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.450 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0728ef6b-39e8-42c7-b6c5-99d8bc45bb58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306197, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[906043d6-beba-4f18-96c3-c1f536dde78e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306199, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306199, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.475 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.480 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.480 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.481 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.482 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.543 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.544 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.544 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.545 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.545 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.547 2 INFO nova.compute.manager [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Terminating instance
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.549 2 DEBUG nova.compute.manager [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]: {
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:     "0": [
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:         {
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "devices": [
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "/dev/loop3"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             ],
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_name": "ceph_lv0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_size": "21470642176",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "name": "ceph_lv0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "tags": {
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cluster_name": "ceph",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.crush_device_class": "",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.encrypted": "0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osd_id": "0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.type": "block",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.vdo": "0"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             },
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "type": "block",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "vg_name": "ceph_vg0"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:         }
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:     ],
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:     "1": [
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:         {
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "devices": [
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "/dev/loop4"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             ],
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_name": "ceph_lv1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_size": "21470642176",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "name": "ceph_lv1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "tags": {
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cluster_name": "ceph",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.crush_device_class": "",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.encrypted": "0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osd_id": "1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.type": "block",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.vdo": "0"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             },
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "type": "block",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "vg_name": "ceph_vg1"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:         }
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:     ],
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:     "2": [
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:         {
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "devices": [
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "/dev/loop5"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             ],
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_name": "ceph_lv2",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_size": "21470642176",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "name": "ceph_lv2",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "tags": {
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.cluster_name": "ceph",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.crush_device_class": "",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.encrypted": "0",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osd_id": "2",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.type": "block",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:                 "ceph.vdo": "0"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             },
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "type": "block",
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:             "vg_name": "ceph_vg2"
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:         }
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]:     ]
Oct 02 08:27:15 compute-0 awesome_goldberg[306187]: }
Oct 02 08:27:15 compute-0 kernel: tap0d888b1c-d2 (unregistering): left promiscuous mode
Oct 02 08:27:15 compute-0 NetworkManager[45129]: <info>  [1759393635.6178] device (tap0d888b1c-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 ovn_controller[152344]: 2025-10-02T08:27:15Z|00315|binding|INFO|Releasing lport 0d888b1c-d237-4db9-9ca5-4796f8c1349d from this chassis (sb_readonly=0)
Oct 02 08:27:15 compute-0 ovn_controller[152344]: 2025-10-02T08:27:15Z|00316|binding|INFO|Setting lport 0d888b1c-d237-4db9-9ca5-4796f8c1349d down in Southbound
Oct 02 08:27:15 compute-0 ovn_controller[152344]: 2025-10-02T08:27:15Z|00317|binding|INFO|Removing iface tap0d888b1c-d2 ovn-installed in OVS
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.641 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:e3:68 10.100.0.10'], port_security=['fa:16:3e:cf:e3:68 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5197f4ad-c335-4607-928c-2b7946565ac7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d888b1c-d237-4db9-9ca5-4796f8c1349d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.642 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d888b1c-d237-4db9-9ca5-4796f8c1349d in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.644 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:27:15 compute-0 systemd[1]: libpod-5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05.scope: Deactivated successfully.
Oct 02 08:27:15 compute-0 podman[306171]: 2025-10-02 08:27:15.651512304 +0000 UTC m=+1.024017744 container died 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 08:27:15 compute-0 kernel: tap4f39890f-a9 (unregistering): left promiscuous mode
Oct 02 08:27:15 compute-0 NetworkManager[45129]: <info>  [1759393635.6732] device (tap4f39890f-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.689 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7590862-c59a-4bf8-b4e7-3eb54e73fd3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 kernel: tap19a09bbc-9b (unregistering): left promiscuous mode
Oct 02 08:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25-merged.mount: Deactivated successfully.
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 ovn_controller[152344]: 2025-10-02T08:27:15Z|00318|binding|INFO|Releasing lport 4f39890f-a968-41d4-9cae-0b6948551923 from this chassis (sb_readonly=0)
Oct 02 08:27:15 compute-0 ovn_controller[152344]: 2025-10-02T08:27:15Z|00319|binding|INFO|Setting lport 4f39890f-a968-41d4-9cae-0b6948551923 down in Southbound
Oct 02 08:27:15 compute-0 ovn_controller[152344]: 2025-10-02T08:27:15Z|00320|binding|INFO|Removing iface tap4f39890f-a9 ovn-installed in OVS
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 NetworkManager[45129]: <info>  [1759393635.7066] device (tap19a09bbc-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.711 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:40:80 10.100.0.4'], port_security=['fa:16:3e:01:40:80 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4f39890f-a968-41d4-9cae-0b6948551923) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 podman[306171]: 2025-10-02 08:27:15.737736203 +0000 UTC m=+1.110241643 container remove 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.741 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8fd6c0-e227-4fe6-9541-8c3dd718045c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.744 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[25c969b4-40e9-41d8-978a-f99fee4a63c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 systemd[1]: libpod-conmon-5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05.scope: Deactivated successfully.
Oct 02 08:27:15 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct 02 08:27:15 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Consumed 14.789s CPU time.
Oct 02 08:27:15 compute-0 systemd-machined[214636]: Machine qemu-42-instance-00000026 terminated.
Oct 02 08:27:15 compute-0 sudo[306065]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.779 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ef81f1-7d54-4f13-8c95-f491f81f14fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10024f05-89ef-4e77-b5d1-504d5421346f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 1084, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 1084, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306235, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.820 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[102b78a2-b7c4-4919-b63e-4e0b86853fbe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306241, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306241, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.821 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.838 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.838 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.839 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.840 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.842 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4f39890f-a968-41d4-9cae-0b6948551923 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.844 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa1bff6d-19fb-4792-a261-4da1165d95a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.845 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a97a4d-1f63-4ec7-b00a-4cf39a881064]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.845 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace which is not needed anymore
Oct 02 08:27:15 compute-0 sudo[306236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:15 compute-0 sudo[306236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:15 compute-0 sudo[306236]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 sudo[306268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:27:15 compute-0 sudo[306268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:15 compute-0 sudo[306268]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:15 compute-0 nova_compute[260603]: 2025-10-02 08:27:15.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:15 compute-0 sudo[306307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:15 compute-0 sudo[306307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:15 compute-0 sudo[306307]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.041 2 INFO nova.virt.libvirt.driver [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Instance destroyed successfully.
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.042 2 DEBUG nova.objects.instance [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'resources' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:16 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [NOTICE]   (304129) : haproxy version is 2.8.14-c23fe91
Oct 02 08:27:16 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [NOTICE]   (304129) : path to executable is /usr/sbin/haproxy
Oct 02 08:27:16 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [WARNING]  (304129) : Exiting Master process...
Oct 02 08:27:16 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [WARNING]  (304129) : Exiting Master process...
Oct 02 08:27:16 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [ALERT]    (304129) : Current worker (304131) exited with code 143 (Terminated)
Oct 02 08:27:16 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [WARNING]  (304129) : All workers exited. Exiting... (0)
Oct 02 08:27:16 compute-0 systemd[1]: libpod-1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971.scope: Deactivated successfully.
Oct 02 08:27:16 compute-0 podman[306313]: 2025-10-02 08:27:16.058215544 +0000 UTC m=+0.098406920 container died 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.065 2 DEBUG nova.virt.libvirt.vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.066 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.067 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.067 2 DEBUG os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.069 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d888b1c-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.081 2 INFO os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2')
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.081 2 DEBUG nova.virt.libvirt.vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.082 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.082 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.083 2 DEBUG os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971-userdata-shm.mount: Deactivated successfully.
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f39890f-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9055c4e28162e79657260aa471df2a75d9f8b592688bd242ac2c8aac34526e3-merged.mount: Deactivated successfully.
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.098 2 INFO os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9')
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.099 2 DEBUG nova.virt.libvirt.vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.099 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.100 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.100 2 DEBUG os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19a09bbc-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:16 compute-0 sudo[306374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:16 compute-0 sudo[306374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:16 compute-0 podman[306313]: 2025-10-02 08:27:16.105225764 +0000 UTC m=+0.145417140 container cleanup 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.111 2 INFO os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b')
Oct 02 08:27:16 compute-0 systemd[1]: libpod-conmon-1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971.scope: Deactivated successfully.
Oct 02 08:27:16 compute-0 podman[306371]: 2025-10-02 08:27:16.121731776 +0000 UTC m=+0.067113645 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:27:16 compute-0 podman[306457]: 2025-10-02 08:27:16.203987423 +0000 UTC m=+0.047727784 container remove 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.212 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac23990d-637e-432e-a911-f78d11e4d572]: (4, ('Thu Oct  2 08:27:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971)\n1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971\nThu Oct  2 08:27:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971)\n1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.213 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[43986a93-6590-497a-a26a-12f2c288e8fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.214 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:16 compute-0 kernel: tapfa1bff6d-10: left promiscuous mode
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.223 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fbad1866-8956-4a82-802c-6e24a58bdc53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.250 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c611fa85-0d22-4d04-926a-8ef700a4f855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.251 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc2f16c-fca8-475d-b80a-ca775c3842bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.266 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a30f2bb5-f23d-41bc-b5f8-3fb38c314b08]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444990, 'reachable_time': 25341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306481, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:16 compute-0 systemd[1]: run-netns-ovnmeta\x2dfa1bff6d\x2d19fb\x2d4792\x2da261\x2d4da1165d95a1.mount: Deactivated successfully.
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.272 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:27:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.272 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[511a6b4e-baf7-4f95-ac2f-094d971da4e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:16 compute-0 ceph-mon[74477]: pgmap v1385: 305 pgs: 305 active+clean; 200 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 684 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.470 2 INFO nova.virt.libvirt.driver [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Deleting instance files /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626_del
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.471 2 INFO nova.virt.libvirt.driver [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Deletion of /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626_del complete
Oct 02 08:27:16 compute-0 podman[306515]: 2025-10-02 08:27:16.507827765 +0000 UTC m=+0.051848952 container create 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.525 2 INFO nova.compute.manager [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Took 0.98 seconds to destroy the instance on the hypervisor.
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.526 2 DEBUG oslo.service.loopingcall [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.526 2 DEBUG nova.compute.manager [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.526 2 DEBUG nova.network.neutron [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 08:27:16 compute-0 systemd[1]: Started libpod-conmon-52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f.scope.
Oct 02 08:27:16 compute-0 podman[306515]: 2025-10-02 08:27:16.482045804 +0000 UTC m=+0.026067041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:27:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:16 compute-0 podman[306515]: 2025-10-02 08:27:16.596733078 +0000 UTC m=+0.140754285 container init 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 08:27:16 compute-0 podman[306515]: 2025-10-02 08:27:16.602690053 +0000 UTC m=+0.146711240 container start 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:27:16 compute-0 podman[306515]: 2025-10-02 08:27:16.605954634 +0000 UTC m=+0.149975881 container attach 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:27:16 compute-0 busy_vaughan[306531]: 167 167
Oct 02 08:27:16 compute-0 systemd[1]: libpod-52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f.scope: Deactivated successfully.
Oct 02 08:27:16 compute-0 conmon[306531]: conmon 52627bd1a88639912cf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f.scope/container/memory.events
Oct 02 08:27:16 compute-0 podman[306515]: 2025-10-02 08:27:16.611293411 +0000 UTC m=+0.155314608 container died 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 08:27:16 compute-0 podman[306515]: 2025-10-02 08:27:16.661016595 +0000 UTC m=+0.205037822 container remove 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 08:27:16 compute-0 systemd[1]: libpod-conmon-52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f.scope: Deactivated successfully.
Oct 02 08:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f28f875501abe61ab9763870febbf011fe23c69cf08a422245d15c0a7efb7bde-merged.mount: Deactivated successfully.
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.865 2 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.867 2 DEBUG nova.network.neutron [-] Unable to show port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Oct 02 08:27:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 200 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:27:16 compute-0 podman[306556]: 2025-10-02 08:27:16.913590005 +0000 UTC m=+0.051789600 container create d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.919 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.920 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.920 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:27:16 compute-0 nova_compute[260603]: 2025-10-02 08:27:16.921 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f0bfef78-36cf-4c57-9205-ad81a216a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:16 compute-0 systemd[1]: Started libpod-conmon-d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5.scope.
Oct 02 08:27:16 compute-0 podman[306556]: 2025-10-02 08:27:16.892155799 +0000 UTC m=+0.030355464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:27:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:17 compute-0 podman[306556]: 2025-10-02 08:27:17.028040762 +0000 UTC m=+0.166240347 container init d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:27:17 compute-0 podman[306556]: 2025-10-02 08:27:17.037729683 +0000 UTC m=+0.175929288 container start d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 08:27:17 compute-0 podman[306556]: 2025-10-02 08:27:17.04118743 +0000 UTC m=+0.179387035 container attach d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 08:27:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.123 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-unplugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.123 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.123 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.124 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.124 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-unplugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.125 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-unplugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.125 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.125 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.126 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.126 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.126 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.126 2 WARNING nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d for instance with vm_state active and task_state deleting.
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.255 2 DEBUG nova.compute.manager [req-74860bba-8480-45ce-bf0e-eb95e5c747ae req-6bbde909-fc16-4251-83ba-04c55a6c9c3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-deleted-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.255 2 INFO nova.compute.manager [req-74860bba-8480-45ce-bf0e-eb95e5c747ae req-6bbde909-fc16-4251-83ba-04c55a6c9c3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Neutron deleted interface 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851; detaching it from the instance and deleting it from the info cache
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.255 2 DEBUG nova.network.neutron [req-74860bba-8480-45ce-bf0e-eb95e5c747ae req-6bbde909-fc16-4251-83ba-04c55a6c9c3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.257 2 DEBUG nova.network.neutron [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.293 2 DEBUG nova.compute.manager [req-74860bba-8480-45ce-bf0e-eb95e5c747ae req-6bbde909-fc16-4251-83ba-04c55a6c9c3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Detach interface failed, port_id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851, reason: Instance 5d595e00-2287-4a6f-b347-bc277006a626 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.299 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.460 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-8e47820a-f777-4d29-8bce-45c6eb3b7b5c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.961 2 DEBUG nova.network.neutron [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:17 compute-0 nova_compute[260603]: 2025-10-02 08:27:17.981 2 INFO nova.compute.manager [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Took 1.45 seconds to deallocate network for instance.
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]: {
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "osd_id": 2,
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "type": "bluestore"
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:     },
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "osd_id": 1,
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "type": "bluestore"
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:     },
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "osd_id": 0,
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:         "type": "bluestore"
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]:     }
Oct 02 08:27:17 compute-0 peaceful_montalcini[306573]: }
Oct 02 08:27:18 compute-0 systemd[1]: libpod-d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5.scope: Deactivated successfully.
Oct 02 08:27:18 compute-0 podman[306556]: 2025-10-02 08:27:18.020740581 +0000 UTC m=+1.158940166 container died d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.024 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.025 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138-merged.mount: Deactivated successfully.
Oct 02 08:27:18 compute-0 podman[306556]: 2025-10-02 08:27:18.071647803 +0000 UTC m=+1.209847388 container remove d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:27:18 compute-0 systemd[1]: libpod-conmon-d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5.scope: Deactivated successfully.
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.103 2 DEBUG oslo_concurrency.processutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:18 compute-0 sudo[306374]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:27:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:27:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:27:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:27:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8ac6d70c-6dd6-4945-8768-98e50c2fd03f does not exist
Oct 02 08:27:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 72c7dabd-c0a9-4d23-b78e-4460a51bbe99 does not exist
Oct 02 08:27:18 compute-0 sudo[306619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:27:18 compute-0 sudo[306619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:18 compute-0 sudo[306619]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:18 compute-0 sudo[306644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:27:18 compute-0 sudo[306644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:27:18 compute-0 sudo[306644]: pam_unix(sudo:session): session closed for user root
Oct 02 08:27:18 compute-0 ceph-mon[74477]: pgmap v1386: 305 pgs: 305 active+clean; 200 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:27:18 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:27:18 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:27:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736913234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.514 2 DEBUG oslo_concurrency.processutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.523 2 DEBUG nova.compute.provider_tree [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.537 2 DEBUG nova.scheduler.client.report [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.555 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.599 2 INFO nova.scheduler.client.report [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Deleted allocations for instance 5d595e00-2287-4a6f-b347-bc277006a626
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.658 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.702 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.702 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.703 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.728 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:18 compute-0 nova_compute[260603]: 2025-10-02 08:27:18.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 121 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.365 2 DEBUG nova.compute.manager [req-d3075fd0-ba89-4bc8-9ceb-cf7fd875f9e1 req-23fee858-c0ca-4f1c-96ed-75604435ffbb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-deleted-4f39890f-a968-41d4-9cae-0b6948551923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.366 2 DEBUG nova.compute.manager [req-d3075fd0-ba89-4bc8-9ceb-cf7fd875f9e1 req-23fee858-c0ca-4f1c-96ed-75604435ffbb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-deleted-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1736913234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.551 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.552 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.600 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.601 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.624 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.689 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.689 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.700 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.700 2 INFO nova.compute.claims [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:27:19 compute-0 nova_compute[260603]: 2025-10-02 08:27:19.838 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499404778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.038 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.134 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.135 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:27:20 compute-0 podman[306732]: 2025-10-02 08:27:20.140822305 +0000 UTC m=+0.061417000 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 08:27:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265351867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.269 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.275 2 DEBUG nova.compute.provider_tree [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.291 2 DEBUG nova.scheduler.client.report [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.325 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.326 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.341 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.342 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4094MB free_disk=59.94271469116211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.343 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.343 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.422 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.423 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.443 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.449 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance f0bfef78-36cf-4c57-9205-ad81a216a221 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.462 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:27:20 compute-0 ceph-mon[74477]: pgmap v1387: 305 pgs: 305 active+clean; 121 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 02 08:27:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1499404778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/265351867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.512 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.552 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.555 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.555 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Creating image(s)
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.581 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.606 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.631 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.635 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.673 2 DEBUG nova.policy [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.741 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.742 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.743 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.744 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.777 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:20 compute-0 nova_compute[260603]: 2025-10-02 08:27:20.782 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 121 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 02 08:27:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219052854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.022 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.031 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.052 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.088 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.179 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.180 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.184 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.287 2 DEBUG nova.objects.instance [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.301 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.302 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Ensure instance console log exists: /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.302 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.303 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.303 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.304 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.305 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.305 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.306 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.307 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.308 2 INFO nova.compute.manager [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Terminating instance
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.309 2 DEBUG nova.compute.manager [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:27:21 compute-0 kernel: tap686b6e3b-80 (unregistering): left promiscuous mode
Oct 02 08:27:21 compute-0 NetworkManager[45129]: <info>  [1759393641.3700] device (tap686b6e3b-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:27:21 compute-0 ovn_controller[152344]: 2025-10-02T08:27:21Z|00321|binding|INFO|Releasing lport 686b6e3b-80e4-43c6-a917-3751dddecd76 from this chassis (sb_readonly=0)
Oct 02 08:27:21 compute-0 ovn_controller[152344]: 2025-10-02T08:27:21Z|00322|binding|INFO|Setting lport 686b6e3b-80e4-43c6-a917-3751dddecd76 down in Southbound
Oct 02 08:27:21 compute-0 ovn_controller[152344]: 2025-10-02T08:27:21Z|00323|binding|INFO|Removing iface tap686b6e3b-80 ovn-installed in OVS
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.388 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:4a:c6 10.100.0.13'], port_security=['fa:16:3e:0b:4a:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0bfef78-36cf-4c57-9205-ad81a216a221', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7afa539b-e5b5-443e-ad15-53058c3f7566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de07bf9b5bef4254bdcb4d7b856304f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1fbf4c4c-2cd7-49c6-ae31-7e9f79f25939', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b61ad9cd-415d-4ce0-9d5a-fc0dd5afdd65, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=686b6e3b-80e4-43c6-a917-3751dddecd76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.390 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 686b6e3b-80e4-43c6-a917-3751dddecd76 in datapath 7afa539b-e5b5-443e-ad15-53058c3f7566 unbound from our chassis
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.392 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7afa539b-e5b5-443e-ad15-53058c3f7566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.394 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e7732e-ad85-4fb7-86c1-565938d56f29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.395 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 namespace which is not needed anymore
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.412 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Successfully created port: dcccfe82-bc7c-4036-bbb1-5a2f90418794 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:27:21 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 02 08:27:21 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Consumed 12.945s CPU time.
Oct 02 08:27:21 compute-0 systemd-machined[214636]: Machine qemu-44-instance-00000028 terminated.
Oct 02 08:27:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1219052854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:21 compute-0 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [NOTICE]   (305518) : haproxy version is 2.8.14-c23fe91
Oct 02 08:27:21 compute-0 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [NOTICE]   (305518) : path to executable is /usr/sbin/haproxy
Oct 02 08:27:21 compute-0 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [WARNING]  (305518) : Exiting Master process...
Oct 02 08:27:21 compute-0 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [ALERT]    (305518) : Current worker (305520) exited with code 143 (Terminated)
Oct 02 08:27:21 compute-0 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [WARNING]  (305518) : All workers exited. Exiting... (0)
Oct 02 08:27:21 compute-0 systemd[1]: libpod-06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733.scope: Deactivated successfully.
Oct 02 08:27:21 compute-0 podman[306966]: 2025-10-02 08:27:21.535211377 +0000 UTC m=+0.044326957 container died 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.550 2 INFO nova.virt.libvirt.driver [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance destroyed successfully.
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.551 2 DEBUG nova.objects.instance [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lazy-loading 'resources' on Instance uuid f0bfef78-36cf-4c57-9205-ad81a216a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.564 2 DEBUG nova.compute.manager [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-unplugged-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.564 2 DEBUG oslo_concurrency.lockutils [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.568 2 DEBUG oslo_concurrency.lockutils [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.568 2 DEBUG oslo_concurrency.lockutils [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.568 2 DEBUG nova.compute.manager [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] No waiting events found dispatching network-vif-unplugged-686b6e3b-80e4-43c6-a917-3751dddecd76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.569 2 DEBUG nova.compute.manager [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-unplugged-686b6e3b-80e4-43c6-a917-3751dddecd76 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733-userdata-shm.mount: Deactivated successfully.
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.571 2 DEBUG nova.virt.libvirt.vif [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1088051137',display_name='tempest-ServersTestJSON-server-1088051137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1088051137',id=40,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL26GjJ8yDVULcjBGSyQgq1XOXe3L4joW7EO0dcbypSf2PTplyBxh+0WCuN1+fy7bCfJLP+B7xaBUKjJV0y0oM9upBKq48cBxt+Uq1aMn9LSPatVD3E+4qRWUEz85mTqNQ==',key_name='tempest-keypair-680018376',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de07bf9b5bef4254bdcb4d7b856304f3',ramdisk_id='',reservation_id='r-bs2htycx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-279922609',owner_user_name='tempest-ServersTestJSON-279922609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e785f94f63c44d0f842750666ed49360',uuid=f0bfef78-36cf-4c57-9205-ad81a216a221,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.572 2 DEBUG nova.network.os_vif_util [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converting VIF {"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.573 2 DEBUG nova.network.os_vif_util [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.573 2 DEBUG os_vif [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap686b6e3b-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dd2358f3274ee85f1cdb0a8441439d8ff6fe7b64e90052b15f897c73d3c0f4d-merged.mount: Deactivated successfully.
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:21 compute-0 podman[306966]: 2025-10-02 08:27:21.581496556 +0000 UTC m=+0.090612136 container cleanup 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.583 2 INFO os_vif [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80')
Oct 02 08:27:21 compute-0 systemd[1]: libpod-conmon-06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733.scope: Deactivated successfully.
Oct 02 08:27:21 compute-0 podman[307012]: 2025-10-02 08:27:21.658582261 +0000 UTC m=+0.052508632 container remove 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.669 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f99591c1-f8fb-415c-9352-c87a5a512c61]: (4, ('Thu Oct  2 08:27:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 (06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733)\n06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733\nThu Oct  2 08:27:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 (06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733)\n06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.671 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[daec8eac-e1aa-4133-aa6b-9fe45d8a6a44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.672 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7afa539b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:21 compute-0 kernel: tap7afa539b-e0: left promiscuous mode
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.678 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be3eb1b7-365b-4aa3-afd1-ef461308b87f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.707 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c28ef03-581b-46a8-bfd3-9e453ed418cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.708 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9de1529c-e06f-40b0-9b53-264413d32885]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.722 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[050c41a9-ff36-4586-90e5-c7a8dfe21cee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449164, 'reachable_time': 33543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307043, 'error': None, 'target': 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d7afa539b\x2de5b5\x2d443e\x2dad15\x2d53058c3f7566.mount: Deactivated successfully.
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.726 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:27:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.727 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1053a985-09ed-44aa-bb83-52590429e448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.877 2 INFO nova.virt.libvirt.driver [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Deleting instance files /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221_del
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.878 2 INFO nova.virt.libvirt.driver [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Deletion of /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221_del complete
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.935 2 INFO nova.compute.manager [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Took 0.63 seconds to destroy the instance on the hypervisor.
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.936 2 DEBUG oslo.service.loopingcall [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.937 2 DEBUG nova.compute.manager [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:27:21 compute-0 nova_compute[260603]: 2025-10-02 08:27:21.937 2 DEBUG nova.network.neutron [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:27:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:27:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1907413456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:27:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:27:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1907413456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:27:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:22 compute-0 nova_compute[260603]: 2025-10-02 08:27:22.181 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:27:22 compute-0 nova_compute[260603]: 2025-10-02 08:27:22.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:22 compute-0 ceph-mon[74477]: pgmap v1388: 305 pgs: 305 active+clean; 121 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 02 08:27:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1907413456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:27:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1907413456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:27:22 compute-0 nova_compute[260603]: 2025-10-02 08:27:22.804 2 DEBUG nova.network.neutron [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:22 compute-0 nova_compute[260603]: 2025-10-02 08:27:22.830 2 INFO nova.compute.manager [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Took 0.89 seconds to deallocate network for instance.
Oct 02 08:27:22 compute-0 nova_compute[260603]: 2025-10-02 08:27:22.883 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:22 compute-0 nova_compute[260603]: 2025-10-02 08:27:22.883 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 101 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 381 KiB/s rd, 3.5 MiB/s wr, 146 op/s
Oct 02 08:27:22 compute-0 nova_compute[260603]: 2025-10-02 08:27:22.973 2 DEBUG oslo_concurrency.processutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.364 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Successfully updated port: dcccfe82-bc7c-4036-bbb1-5a2f90418794 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.389 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.390 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.390 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:27:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867720042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.411 2 DEBUG oslo_concurrency.processutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.421 2 DEBUG nova.compute.provider_tree [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.437 2 DEBUG nova.scheduler.client.report [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.459 2 DEBUG nova.compute.manager [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-changed-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.460 2 DEBUG nova.compute.manager [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Refreshing instance network info cache due to event network-changed-dcccfe82-bc7c-4036-bbb1-5a2f90418794. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.461 2 DEBUG oslo_concurrency.lockutils [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.463 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.487 2 INFO nova.scheduler.client.report [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Deleted allocations for instance f0bfef78-36cf-4c57-9205-ad81a216a221
Oct 02 08:27:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1867720042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.547 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.622 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.652 2 DEBUG nova.compute.manager [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.652 2 DEBUG oslo_concurrency.lockutils [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.652 2 DEBUG oslo_concurrency.lockutils [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.653 2 DEBUG oslo_concurrency.lockutils [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.653 2 DEBUG nova.compute.manager [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] No waiting events found dispatching network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.653 2 WARNING nova.compute.manager [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received unexpected event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 for instance with vm_state deleted and task_state None.
Oct 02 08:27:23 compute-0 nova_compute[260603]: 2025-10-02 08:27:23.653 2 DEBUG nova.compute.manager [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-deleted-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:24 compute-0 ceph-mon[74477]: pgmap v1389: 305 pgs: 305 active+clean; 101 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 381 KiB/s rd, 3.5 MiB/s wr, 146 op/s
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.526 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Updating instance_info_cache with network_info: [{"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.555 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.556 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance network_info: |[{"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.556 2 DEBUG oslo_concurrency.lockutils [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.557 2 DEBUG nova.network.neutron [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Refreshing network info cache for port dcccfe82-bc7c-4036-bbb1-5a2f90418794 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.562 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start _get_guest_xml network_info=[{"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.567 2 WARNING nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.573 2 DEBUG nova.virt.libvirt.host [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.574 2 DEBUG nova.virt.libvirt.host [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.582 2 DEBUG nova.virt.libvirt.host [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.583 2 DEBUG nova.virt.libvirt.host [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.583 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.584 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.584 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.585 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.585 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.586 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.586 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.587 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.587 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.587 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.588 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.588 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:27:24 compute-0 nova_compute[260603]: 2025-10-02 08:27:24.593 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 2.4 MiB/s wr, 98 op/s
Oct 02 08:27:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:27:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876417165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.065 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.087 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.092 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:27:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495510105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.507 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.510 2 DEBUG nova.virt.libvirt.vif [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-277551696',display_name='tempest-DeleteServersTestJSON-server-277551696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-277551696',id=41,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-tia1r4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:20Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=3626202d-fd1e-497c-bbfd-ea0e7a7321c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.511 2 DEBUG nova.network.os_vif_util [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.513 2 DEBUG nova.network.os_vif_util [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.515 2 DEBUG nova.objects.instance [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3876417165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2495510105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.538 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <uuid>3626202d-fd1e-497c-bbfd-ea0e7a7321c8</uuid>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <name>instance-00000029</name>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <nova:name>tempest-DeleteServersTestJSON-server-277551696</nova:name>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:27:24</nova:creationTime>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <nova:port uuid="dcccfe82-bc7c-4036-bbb1-5a2f90418794">
Oct 02 08:27:25 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <system>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <entry name="serial">3626202d-fd1e-497c-bbfd-ea0e7a7321c8</entry>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <entry name="uuid">3626202d-fd1e-497c-bbfd-ea0e7a7321c8</entry>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </system>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <os>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   </os>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <features>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   </features>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk">
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config">
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:27:25 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:58:03:0e"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <target dev="tapdcccfe82-bc"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/console.log" append="off"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <video>
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </video>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:27:25 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:27:25 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:27:25 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:27:25 compute-0 nova_compute[260603]: </domain>
Oct 02 08:27:25 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.540 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Preparing to wait for external event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.540 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.541 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.541 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.542 2 DEBUG nova.virt.libvirt.vif [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-277551696',display_name='tempest-DeleteServersTestJSON-server-277551696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-277551696',id=41,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-tia1r4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON
-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:20Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=3626202d-fd1e-497c-bbfd-ea0e7a7321c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.542 2 DEBUG nova.network.os_vif_util [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.543 2 DEBUG nova.network.os_vif_util [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.543 2 DEBUG os_vif [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdcccfe82-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdcccfe82-bc, col_values=(('external_ids', {'iface-id': 'dcccfe82-bc7c-4036-bbb1-5a2f90418794', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:03:0e', 'vm-uuid': '3626202d-fd1e-497c-bbfd-ea0e7a7321c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:25 compute-0 NetworkManager[45129]: <info>  [1759393645.5505] manager: (tapdcccfe82-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.558 2 INFO os_vif [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc')
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.625 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.626 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.627 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:58:03:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.628 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Using config drive
Oct 02 08:27:25 compute-0 nova_compute[260603]: 2025-10-02 08:27:25.660 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:26 compute-0 ceph-mon[74477]: pgmap v1390: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 2.4 MiB/s wr, 98 op/s
Oct 02 08:27:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 02 08:27:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:27 compute-0 nova_compute[260603]: 2025-10-02 08:27:27.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:27 compute-0 nova_compute[260603]: 2025-10-02 08:27:27.644 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Creating config drive at /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config
Oct 02 08:27:27 compute-0 nova_compute[260603]: 2025-10-02 08:27:27.654 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyuuatoqd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:27 compute-0 nova_compute[260603]: 2025-10-02 08:27:27.809 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyuuatoqd" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:27 compute-0 nova_compute[260603]: 2025-10-02 08:27:27.847 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:27 compute-0 nova_compute[260603]: 2025-10-02 08:27:27.852 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:27:27
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms']
Oct 02 08:27:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:27:28 compute-0 nova_compute[260603]: 2025-10-02 08:27:28.074 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:28 compute-0 nova_compute[260603]: 2025-10-02 08:27:28.076 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Deleting local config drive /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config because it was imported into RBD.
Oct 02 08:27:28 compute-0 kernel: tapdcccfe82-bc: entered promiscuous mode
Oct 02 08:27:28 compute-0 NetworkManager[45129]: <info>  [1759393648.1583] manager: (tapdcccfe82-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct 02 08:27:28 compute-0 ovn_controller[152344]: 2025-10-02T08:27:28Z|00324|binding|INFO|Claiming lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 for this chassis.
Oct 02 08:27:28 compute-0 nova_compute[260603]: 2025-10-02 08:27:28.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:28 compute-0 ovn_controller[152344]: 2025-10-02T08:27:28Z|00325|binding|INFO|dcccfe82-bc7c-4036-bbb1-5a2f90418794: Claiming fa:16:3e:58:03:0e 10.100.0.5
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.168 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:03:0e 10.100.0.5'], port_security=['fa:16:3e:58:03:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3626202d-fd1e-497c-bbfd-ea0e7a7321c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dcccfe82-bc7c-4036-bbb1-5a2f90418794) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.170 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dcccfe82-bc7c-4036-bbb1-5a2f90418794 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.172 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.191 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6b529d-e3dd-48de-859f-ce7e79380c58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.193 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:27:28 compute-0 ovn_controller[152344]: 2025-10-02T08:27:28Z|00326|binding|INFO|Setting lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 ovn-installed in OVS
Oct 02 08:27:28 compute-0 ovn_controller[152344]: 2025-10-02T08:27:28Z|00327|binding|INFO|Setting lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 up in Southbound
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.195 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.196 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46160dce-14c9-491a-9c3d-3340e41091b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 nova_compute[260603]: 2025-10-02 08:27:28.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e9b5fe-0165-4218-949e-12f681658106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 systemd-udevd[307205]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.218 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[418a9d2b-aff2-4f8b-8b35-5ca8039bbd10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 systemd-machined[214636]: New machine qemu-45-instance-00000029.
Oct 02 08:27:28 compute-0 NetworkManager[45129]: <info>  [1759393648.2264] device (tapdcccfe82-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:27:28 compute-0 NetworkManager[45129]: <info>  [1759393648.2274] device (tapdcccfe82-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.236 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5340d6-f57d-4ec4-8002-dfed6dbd0e4d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.279 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[17ed816c-4600-40a5-b200-abd795e1c575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 NetworkManager[45129]: <info>  [1759393648.2886] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.287 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce08969-638c-47d6-ad54-9219c33fabdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.336 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[71bbfc4d-2fef-47e9-a6ff-64ce10bf73a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.341 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ccead4d3-783d-4b8f-8fd8-b4d56a39ae3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 NetworkManager[45129]: <info>  [1759393648.3740] device (tapa72ac8c9-10): carrier: link connected
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.382 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d61728e-8974-40ab-bd52-52cb50a6707c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86e70d12-0257-4473-bf0b-76f5d7d3c9c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452697, 'reachable_time': 17860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307236, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6b9005-14b9-4b19-8f21-d45d087a6ac8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452697, 'tstamp': 452697}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307237, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dded00d7-82ea-431f-b401-878cbbced5ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452697, 'reachable_time': 17860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307238, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.506 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e54eb24c-2a50-428a-838d-c09cc7d88896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ceph-mon[74477]: pgmap v1391: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.600 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7429dcd5-5af9-419b-a4a0-41b6c1d4a7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.604 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:28 compute-0 nova_compute[260603]: 2025-10-02 08:27:28.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:28 compute-0 NetworkManager[45129]: <info>  [1759393648.6421] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Oct 02 08:27:28 compute-0 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.646 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:28 compute-0 nova_compute[260603]: 2025-10-02 08:27:28.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:28 compute-0 ovn_controller[152344]: 2025-10-02T08:27:28Z|00328|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.682 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.687 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5970226e-85e7-4c10-9ea3-13c60e5c5a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:28 compute-0 nova_compute[260603]: 2025-10-02 08:27:28.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.688 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:27:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.688 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:27:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 02 08:27:29 compute-0 podman[307310]: 2025-10-02 08:27:29.100862551 +0000 UTC m=+0.065498106 container create e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:27:29 compute-0 systemd[1]: Started libpod-conmon-e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089.scope.
Oct 02 08:27:29 compute-0 podman[307310]: 2025-10-02 08:27:29.06123653 +0000 UTC m=+0.025872145 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:27:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da0ecc13612b3c76bc010b82dce48ca9d09de89c8effd567f89e84bab2ebf6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:29 compute-0 podman[307310]: 2025-10-02 08:27:29.202528801 +0000 UTC m=+0.167164346 container init e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.207 2 DEBUG nova.network.neutron [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Updated VIF entry in instance network info cache for port dcccfe82-bc7c-4036-bbb1-5a2f90418794. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:29 compute-0 podman[307310]: 2025-10-02 08:27:29.207689882 +0000 UTC m=+0.172325427 container start e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.207 2 DEBUG nova.network.neutron [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Updating instance_info_cache with network_info: [{"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.222 2 DEBUG oslo_concurrency.lockutils [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:29 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [NOTICE]   (307331) : New worker (307333) forked
Oct 02 08:27:29 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [NOTICE]   (307331) : Loading success.
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.553 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393649.5523596, 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.554 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] VM Started (Lifecycle Event)
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.579 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.584 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393649.552828, 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.584 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] VM Paused (Lifecycle Event)
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.602 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.605 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.624 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.913 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.915 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:29 compute-0 nova_compute[260603]: 2025-10-02 08:27:29.936 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.049 2 DEBUG nova.compute.manager [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.050 2 DEBUG oslo_concurrency.lockutils [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.051 2 DEBUG oslo_concurrency.lockutils [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.051 2 DEBUG oslo_concurrency.lockutils [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.052 2 DEBUG nova.compute.manager [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Processing event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.055 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.061 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393650.0600991, 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.062 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] VM Resumed (Lifecycle Event)
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.065 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.070 2 INFO nova.virt.libvirt.driver [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance spawned successfully.
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.070 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.085 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.086 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.089 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.097 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.099 2 INFO nova.compute.claims [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.108 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.113 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.113 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.114 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.115 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.116 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.117 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.150 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.194 2 INFO nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Took 9.64 seconds to spawn the instance on the hypervisor.
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.195 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.273 2 INFO nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Took 10.61 seconds to build instance.
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.287 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.302 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.395 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.397 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.418 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.491 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:30 compute-0 ceph-mon[74477]: pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2617038930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.803 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.810 2 DEBUG nova.compute.provider_tree [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.829 2 DEBUG nova.scheduler.client.report [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.861 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.862 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.865 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.874 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.874 2 INFO nova.compute.claims [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:27:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.997 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:27:30 compute-0 nova_compute[260603]: 2025-10-02 08:27:30.997 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.017 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.033 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.039 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393636.0377965, 5d595e00-2287-4a6f-b347-bc277006a626 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.039 2 INFO nova.compute.manager [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] VM Stopped (Lifecycle Event)
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.068 2 DEBUG nova.compute.manager [None req-0da4dc6c-88d5-4408-8a3c-0a8a4a450b8a - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.149 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.178 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.181 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.182 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Creating image(s)
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.210 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.238 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.261 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.265 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.330 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.331 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.331 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.332 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.350 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.353 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2617038930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348302952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.621 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.661 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.711 2 DEBUG nova.policy [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.721 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] resizing rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.764 2 DEBUG nova.compute.provider_tree [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.786 2 DEBUG nova.scheduler.client.report [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.825 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.826 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.833 2 DEBUG nova.objects.instance [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'migration_context' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.849 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.849 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Ensure instance console log exists: /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.850 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.850 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.850 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.868 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.869 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.888 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:27:31 compute-0 nova_compute[260603]: 2025-10-02 08:27:31.904 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.005 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.007 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.008 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Creating image(s)
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.039 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.078 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.112 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.116 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.192 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.194 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.194 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.195 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.215 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.219 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 05cc7244-c419-4c24-b995-95ca760837a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.346 2 DEBUG nova.policy [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '019cd25dce6249ce9c2cf326ec62df28', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.359 2 DEBUG nova.compute.manager [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.360 2 DEBUG oslo_concurrency.lockutils [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.360 2 DEBUG oslo_concurrency.lockutils [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.360 2 DEBUG oslo_concurrency.lockutils [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.361 2 DEBUG nova.compute.manager [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] No waiting events found dispatching network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.361 2 WARNING nova.compute.manager [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received unexpected event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 for instance with vm_state active and task_state None.
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.452 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.452 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.453 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.453 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.453 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.454 2 INFO nova.compute.manager [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Terminating instance
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.455 2 DEBUG nova.compute.manager [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.501 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 05cc7244-c419-4c24-b995-95ca760837a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:32 compute-0 kernel: tapdcccfe82-bc (unregistering): left promiscuous mode
Oct 02 08:27:32 compute-0 NetworkManager[45129]: <info>  [1759393652.5217] device (tapdcccfe82-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:27:32 compute-0 ovn_controller[152344]: 2025-10-02T08:27:32Z|00329|binding|INFO|Releasing lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 from this chassis (sb_readonly=0)
Oct 02 08:27:32 compute-0 ovn_controller[152344]: 2025-10-02T08:27:32Z|00330|binding|INFO|Setting lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 down in Southbound
Oct 02 08:27:32 compute-0 ovn_controller[152344]: 2025-10-02T08:27:32Z|00331|binding|INFO|Removing iface tapdcccfe82-bc ovn-installed in OVS
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.542 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:03:0e 10.100.0.5'], port_security=['fa:16:3e:58:03:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3626202d-fd1e-497c-bbfd-ea0e7a7321c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dcccfe82-bc7c-4036-bbb1-5a2f90418794) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.543 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dcccfe82-bc7c-4036-bbb1-5a2f90418794 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.544 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.545 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4264815-afb6-4c33-ae83-f3e94f6b63b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.545 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:32 compute-0 ceph-mon[74477]: pgmap v1393: 305 pgs: 305 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:27:32 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct 02 08:27:32 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 3.655s CPU time.
Oct 02 08:27:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1348302952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:32 compute-0 systemd-machined[214636]: Machine qemu-45-instance-00000029 terminated.
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.616 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] resizing rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:27:32 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [NOTICE]   (307331) : haproxy version is 2.8.14-c23fe91
Oct 02 08:27:32 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [NOTICE]   (307331) : path to executable is /usr/sbin/haproxy
Oct 02 08:27:32 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [ALERT]    (307331) : Current worker (307333) exited with code 143 (Terminated)
Oct 02 08:27:32 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [WARNING]  (307331) : All workers exited. Exiting... (0)
Oct 02 08:27:32 compute-0 systemd[1]: libpod-e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089.scope: Deactivated successfully.
Oct 02 08:27:32 compute-0 podman[307724]: 2025-10-02 08:27:32.73021014 +0000 UTC m=+0.063086082 container died e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.744 2 DEBUG nova.objects.instance [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'migration_context' on Instance uuid 05cc7244-c419-4c24-b995-95ca760837a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.748 2 INFO nova.virt.libvirt.driver [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance destroyed successfully.
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.749 2 DEBUG nova.objects.instance [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089-userdata-shm.mount: Deactivated successfully.
Oct 02 08:27:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1da0ecc13612b3c76bc010b82dce48ca9d09de89c8effd567f89e84bab2ebf6f-merged.mount: Deactivated successfully.
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.777 2 DEBUG nova.virt.libvirt.vif [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-277551696',display_name='tempest-DeleteServersTestJSON-server-277551696',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-277551696',id=41,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-tia1r4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:30Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=3626202d-fd1e-497c-bbfd-ea0e7a7321c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:27:32 compute-0 podman[307724]: 2025-10-02 08:27:32.777704665 +0000 UTC m=+0.110580617 container cleanup e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.777 2 DEBUG nova.network.os_vif_util [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.778 2 DEBUG nova.network.os_vif_util [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.778 2 DEBUG os_vif [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.780 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.780 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Ensure instance console log exists: /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.781 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.781 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.781 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.782 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcccfe82-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.789 2 INFO os_vif [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc')
Oct 02 08:27:32 compute-0 systemd[1]: libpod-conmon-e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089.scope: Deactivated successfully.
Oct 02 08:27:32 compute-0 podman[307778]: 2025-10-02 08:27:32.880995295 +0000 UTC m=+0.066669813 container remove e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[06c7e754-2def-4c12-9e2f-714308cb987a]: (4, ('Thu Oct  2 08:27:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089)\ne44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089\nThu Oct  2 08:27:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089)\ne44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.895 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bd9b4e-c952-47c5-9641-3894f4d46a52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.896 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:32 compute-0 kernel: tapa72ac8c9-10: left promiscuous mode
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 113 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Oct 02 08:27:32 compute-0 nova_compute[260603]: 2025-10-02 08:27:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.925 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b7ebf4-7126-47fe-8011-777e8155a79e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.957 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[04ea676a-b436-4ff7-92ea-eb43ce7c63cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63bb9a1d-3d51-4cb3-8741-46084411dbc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.975 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1aadc4-5bf5-4d0d-be8a-52551c4bd8b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452687, 'reachable_time': 17689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307811, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:32 compute-0 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.980 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:27:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.980 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e3338d-eb30-4545-a02f-c680ebd5d3e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:33 compute-0 nova_compute[260603]: 2025-10-02 08:27:33.163 2 INFO nova.virt.libvirt.driver [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Deleting instance files /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8_del
Oct 02 08:27:33 compute-0 nova_compute[260603]: 2025-10-02 08:27:33.164 2 INFO nova.virt.libvirt.driver [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Deletion of /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8_del complete
Oct 02 08:27:33 compute-0 nova_compute[260603]: 2025-10-02 08:27:33.236 2 INFO nova.compute.manager [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 08:27:33 compute-0 nova_compute[260603]: 2025-10-02 08:27:33.237 2 DEBUG oslo.service.loopingcall [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:27:33 compute-0 nova_compute[260603]: 2025-10-02 08:27:33.237 2 DEBUG nova.compute.manager [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:27:33 compute-0 nova_compute[260603]: 2025-10-02 08:27:33.237 2 DEBUG nova.network.neutron [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:27:33 compute-0 nova_compute[260603]: 2025-10-02 08:27:33.742 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Successfully created port: e978c120-9b3a-4a48-b553-c38b05073ad9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:27:33 compute-0 nova_compute[260603]: 2025-10-02 08:27:33.823 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Successfully created port: 136aeb8e-dedd-4cd8-a72d-1c4309716daf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.537 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-unplugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.538 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.538 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.539 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.539 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] No waiting events found dispatching network-vif-unplugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.540 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-unplugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.540 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.541 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.541 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.541 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.542 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] No waiting events found dispatching network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.542 2 WARNING nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received unexpected event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 for instance with vm_state active and task_state deleting.
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.554 2 DEBUG nova.network.neutron [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.588 2 INFO nova.compute.manager [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Took 1.35 seconds to deallocate network for instance.
Oct 02 08:27:34 compute-0 ceph-mon[74477]: pgmap v1394: 305 pgs: 305 active+clean; 113 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.670 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.671 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:34 compute-0 nova_compute[260603]: 2025-10-02 08:27:34.756 2 DEBUG oslo_concurrency.processutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:34.814 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 144 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 107 op/s
Oct 02 08:27:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317047195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.271 2 DEBUG oslo_concurrency.processutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.277 2 DEBUG nova.compute.provider_tree [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.328 2 DEBUG nova.scheduler.client.report [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.354 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.386 2 INFO nova.scheduler.client.report [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance 3626202d-fd1e-497c-bbfd-ea0e7a7321c8
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.457 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.546 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Successfully updated port: e978c120-9b3a-4a48-b553-c38b05073ad9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.554 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Successfully updated port: 136aeb8e-dedd-4cd8-a72d-1c4309716daf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.568 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.569 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquired lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.569 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.572 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.572 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.573 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:27:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2317047195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.777 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:27:35 compute-0 nova_compute[260603]: 2025-10-02 08:27:35.784 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:27:36 compute-0 nova_compute[260603]: 2025-10-02 08:27:36.548 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393641.5465412, f0bfef78-36cf-4c57-9205-ad81a216a221 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:36 compute-0 nova_compute[260603]: 2025-10-02 08:27:36.549 2 INFO nova.compute.manager [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] VM Stopped (Lifecycle Event)
Oct 02 08:27:36 compute-0 nova_compute[260603]: 2025-10-02 08:27:36.580 2 DEBUG nova.compute.manager [None req-a1ceab11-5d63-4bd6-bf7a-e37e7ced89c7 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:36 compute-0 ceph-mon[74477]: pgmap v1395: 305 pgs: 305 active+clean; 144 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 107 op/s
Oct 02 08:27:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 144 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 104 op/s
Oct 02 08:27:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.095 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-deleted-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.096 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.096 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing instance network info cache due to event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.097 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.853 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.872 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.887 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Releasing lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.887 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance network_info: |[{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.888 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.888 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.894 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start _get_guest_xml network_info=[{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.898 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.899 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance network_info: |[{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.903 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start _get_guest_xml network_info=[{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.912 2 WARNING nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.921 2 WARNING nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.927 2 DEBUG nova.virt.libvirt.host [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.928 2 DEBUG nova.virt.libvirt.host [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.929 2 DEBUG nova.virt.libvirt.host [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.930 2 DEBUG nova.virt.libvirt.host [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.934 2 DEBUG nova.virt.libvirt.host [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.935 2 DEBUG nova.virt.libvirt.host [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.936 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.937 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.938 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.938 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.939 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.939 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.940 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.940 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.941 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.941 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.942 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.942 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.947 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.981 2 DEBUG nova.virt.libvirt.host [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.983 2 DEBUG nova.virt.libvirt.host [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.983 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.984 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.984 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.985 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.985 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.985 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.985 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.986 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.986 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.986 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.987 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.987 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:27:37 compute-0 nova_compute[260603]: 2025-10-02 08:27:37.991 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:27:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1932988185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.396 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.430 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.436 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:27:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001243750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.478 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008102727990492934 of space, bias 1.0, pg target 0.24308183971478803 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.515 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.520 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:38 compute-0 ceph-mon[74477]: pgmap v1396: 305 pgs: 305 active+clean; 144 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 104 op/s
Oct 02 08:27:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1932988185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3001243750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:27:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2585865126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.918 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.921 2 DEBUG nova.virt.libvirt.vif [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1736454109',display_name='tempest-SecurityGroupsTestJSON-server-1736454109',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1736454109',id=43,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-hb3fhgan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:31Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=05cc7244-c419-4c24-b995-95ca760837a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.922 2 DEBUG nova.network.os_vif_util [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.923 2 DEBUG nova.network.os_vif_util [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.925 2 DEBUG nova.objects.instance [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'pci_devices' on Instance uuid 05cc7244-c419-4c24-b995-95ca760837a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.953 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <uuid>05cc7244-c419-4c24-b995-95ca760837a4</uuid>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <name>instance-0000002b</name>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1736454109</nova:name>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:27:37</nova:creationTime>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <nova:user uuid="019cd25dce6249ce9c2cf326ec62df28">tempest-SecurityGroupsTestJSON-2081142325-project-member</nova:user>
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <nova:project uuid="35a4ab7cf79e41f68a1ea888c2a3592e">tempest-SecurityGroupsTestJSON-2081142325</nova:project>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <nova:port uuid="e978c120-9b3a-4a48-b553-c38b05073ad9">
Oct 02 08:27:38 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <system>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <entry name="serial">05cc7244-c419-4c24-b995-95ca760837a4</entry>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <entry name="uuid">05cc7244-c419-4c24-b995-95ca760837a4</entry>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </system>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <os>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   </os>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <features>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   </features>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/05cc7244-c419-4c24-b995-95ca760837a4_disk">
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/05cc7244-c419-4c24-b995-95ca760837a4_disk.config">
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:27:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:86:a6:e3"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <target dev="tape978c120-9b"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/console.log" append="off"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <video>
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </video>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:27:38 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:27:38 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:27:38 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:27:38 compute-0 nova_compute[260603]: </domain>
Oct 02 08:27:38 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.956 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Preparing to wait for external event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.956 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.956 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.957 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.958 2 DEBUG nova.virt.libvirt.vif [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1736454109',display_name='tempest-SecurityGroupsTestJSON-server-1736454109',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1736454109',id=43,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-hb3fhgan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:31Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=05cc7244-c419-4c24-b995-95ca760837a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.958 2 DEBUG nova.network.os_vif_util [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.959 2 DEBUG nova.network.os_vif_util [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.960 2 DEBUG os_vif [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.966 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape978c120-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.967 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape978c120-9b, col_values=(('external_ids', {'iface-id': 'e978c120-9b3a-4a48-b553-c38b05073ad9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:a6:e3', 'vm-uuid': '05cc7244-c419-4c24-b995-95ca760837a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:38 compute-0 NetworkManager[45129]: <info>  [1759393658.9700] manager: (tape978c120-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:38 compute-0 nova_compute[260603]: 2025-10-02 08:27:38.976 2 INFO os_vif [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b')
Oct 02 08:27:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:27:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383078312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.006 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.009 2 DEBUG nova.virt.libvirt.vif [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.010 2 DEBUG nova.network.os_vif_util [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.011 2 DEBUG nova.network.os_vif_util [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.012 2 DEBUG nova.objects.instance [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_devices' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.041 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <uuid>f13ff7c1-d7d3-443e-9f06-69f8c466af30</uuid>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <name>instance-0000002a</name>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:27:37</nova:creationTime>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct 02 08:27:39 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <system>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <entry name="serial">f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <entry name="uuid">f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </system>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <os>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   </os>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <features>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   </features>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk">
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config">
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:27:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:0b:59:2a"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <target dev="tap136aeb8e-de"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log" append="off"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <video>
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </video>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:27:39 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:27:39 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:27:39 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:27:39 compute-0 nova_compute[260603]: </domain>
Oct 02 08:27:39 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.043 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Preparing to wait for external event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.043 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.044 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.044 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.046 2 DEBUG nova.virt.libvirt.vif [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.046 2 DEBUG nova.network.os_vif_util [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.048 2 DEBUG nova.network.os_vif_util [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.049 2 DEBUG os_vif [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap136aeb8e-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap136aeb8e-de, col_values=(('external_ids', {'iface-id': '136aeb8e-dedd-4cd8-a72d-1c4309716daf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:59:2a', 'vm-uuid': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:39 compute-0 NetworkManager[45129]: <info>  [1759393659.0622] manager: (tap136aeb8e-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.071 2 INFO os_vif [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de')
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.074 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.075 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.075 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No VIF found with MAC fa:16:3e:86:a6:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.076 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Using config drive
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.106 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.192 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.193 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.194 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:0b:59:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.195 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Using config drive
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.226 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2585865126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/383078312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.641 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Creating config drive at /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.647 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxgwritu0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.746 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updated VIF entry in instance network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.748 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.770 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.771 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.771 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.771 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.772 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.772 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.798 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxgwritu0" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.829 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.833 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config 05cc7244-c419-4c24-b995-95ca760837a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.877 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Creating config drive at /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config
Oct 02 08:27:39 compute-0 nova_compute[260603]: 2025-10-02 08:27:39.888 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpldivgrxx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.031 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config 05cc7244-c419-4c24-b995-95ca760837a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.032 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Deleting local config drive /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config because it was imported into RBD.
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.051 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpldivgrxx" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.098 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.105 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.1503] manager: (tape978c120-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Oct 02 08:27:40 compute-0 kernel: tape978c120-9b: entered promiscuous mode
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00332|binding|INFO|Claiming lport e978c120-9b3a-4a48-b553-c38b05073ad9 for this chassis.
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00333|binding|INFO|e978c120-9b3a-4a48-b553-c38b05073ad9: Claiming fa:16:3e:86:a6:e3 10.100.0.10
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.169 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:a6:e3 10.100.0.10'], port_security=['fa:16:3e:86:a6:e3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '05cc7244-c419-4c24-b995-95ca760837a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e978c120-9b3a-4a48-b553-c38b05073ad9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.170 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e978c120-9b3a-4a48-b553-c38b05073ad9 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 bound to our chassis
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.172 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fb1063-69a6-4581-8981-4803e75d2afc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.187 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcce7e8b6-91 in ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.189 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcce7e8b6-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.190 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8da88e2-ec19-4342-9c28-e28f4b36f5fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.194 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b142023e-a269-484c-981a-42c78bc79b20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.204 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[47d20c7b-1f71-485c-a789-3f7e924cbfdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 systemd-machined[214636]: New machine qemu-46-instance-0000002b.
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.221 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e69495-c759-4220-854c-1480bdc12ad0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-0000002b.
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.256 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f205a6d3-769a-4d62-9701-77ac05e682cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 systemd-udevd[308129]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.261 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[722205e6-92f9-43b2-94f8-ca31a4849e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 systemd-udevd[308132]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.2640] manager: (tapcce7e8b6-90): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.2802] device (tape978c120-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.2808] device (tape978c120-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00334|binding|INFO|Setting lport e978c120-9b3a-4a48-b553-c38b05073ad9 ovn-installed in OVS
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00335|binding|INFO|Setting lport e978c120-9b3a-4a48-b553-c38b05073ad9 up in Southbound
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[32967c7e-7325-45f7-8b20-fd5738a8a615]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 podman[308081]: 2025-10-02 08:27:40.303280194 +0000 UTC m=+0.107761070 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.301 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2e455377-9981-42bc-8da7-df20a6627120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.3306] device (tapcce7e8b6-90): carrier: link connected
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.336 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[83993b30-2458-47c7-a653-7e3cf9990059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 podman[308075]: 2025-10-02 08:27:40.348081746 +0000 UTC m=+0.157545067 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.358 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.359 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Deleting local config drive /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config because it was imported into RBD.
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.363 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a45faa67-4f1f-462c-b79a-28baaf327f00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308174, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.377 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f46b66c7-7a28-4d26-86d3-cca5c183388f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:5a77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453893, 'tstamp': 453893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308175, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.393 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e8fb11-d5f8-4fb1-9f13-32f411bc11d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308179, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 kernel: tap136aeb8e-de: entered promiscuous mode
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.4124] manager: (tap136aeb8e-de): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00336|binding|INFO|Claiming lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf for this chassis.
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00337|binding|INFO|136aeb8e-dedd-4cd8-a72d-1c4309716daf: Claiming fa:16:3e:0b:59:2a 10.100.0.3
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 systemd-udevd[308154]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.424 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:59:2a 10.100.0.3'], port_security=['fa:16:3e:0b:59:2a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1616ad3a-ff5f-4423-8dd9-f2ff5717f8c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=136aeb8e-dedd-4cd8-a72d-1c4309716daf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.4269] device (tap136aeb8e-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.4282] device (tap136aeb8e-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:27:40 compute-0 systemd-machined[214636]: New machine qemu-47-instance-0000002a.
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.447 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[56ff942e-0c71-4cd6-9dc5-af0fb16cf69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00338|binding|INFO|Setting lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf ovn-installed in OVS
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00339|binding|INFO|Setting lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf up in Southbound
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.534 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cf788e4c-9b34-4421-8dd6-e8caee0bdc89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.535 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.536 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.536 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 NetworkManager[45129]: <info>  [1759393660.5384] manager: (tapcce7e8b6-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct 02 08:27:40 compute-0 kernel: tapcce7e8b6-90: entered promiscuous mode
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.542 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 ovn_controller[152344]: 2025-10-02T08:27:40Z|00340|binding|INFO|Releasing lport 2a218cce-83be-4768-9f4e-7d61802765d4 from this chassis (sb_readonly=0)
Oct 02 08:27:40 compute-0 nova_compute[260603]: 2025-10-02 08:27:40.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.566 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.567 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25f901c9-100b-416a-a725-494ed883b616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.569 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9.pid.haproxy
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:27:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.571 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'env', 'PROCESS_TAG=haproxy-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:27:40 compute-0 ceph-mon[74477]: pgmap v1397: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 02 08:27:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 153 op/s
Oct 02 08:27:41 compute-0 podman[308296]: 2025-10-02 08:27:41.008336555 +0000 UTC m=+0.078090999 container create 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:27:41 compute-0 podman[308296]: 2025-10-02 08:27:40.959730574 +0000 UTC m=+0.029485028 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.057 2 DEBUG nova.compute.manager [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.058 2 DEBUG oslo_concurrency.lockutils [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.059 2 DEBUG oslo_concurrency.lockutils [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.059 2 DEBUG oslo_concurrency.lockutils [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.059 2 DEBUG nova.compute.manager [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Processing event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:27:41 compute-0 systemd[1]: Started libpod-conmon-685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96.scope.
Oct 02 08:27:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4204cebb47775c2a9fbcd07d14a4cbaafc0d93423478e2e45ec500726da6ccd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:41 compute-0 podman[308296]: 2025-10-02 08:27:41.124078381 +0000 UTC m=+0.193832855 container init 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:27:41 compute-0 podman[308296]: 2025-10-02 08:27:41.136022013 +0000 UTC m=+0.205776447 container start 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 08:27:41 compute-0 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [NOTICE]   (308332) : New worker (308334) forked
Oct 02 08:27:41 compute-0 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [NOTICE]   (308332) : Loading success.
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.204 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 136aeb8e-dedd-4cd8-a72d-1c4309716daf in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.208 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.227 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[70e88fa9-1b5a-4823-b77b-f5126edccd42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.228 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa1bff6d-11 in ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.231 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa1bff6d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[141b3c48-bb28-4081-9827-6d33d5f0aa82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.233 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa91d2fd-f9ab-44c0-b564-4dabad80eb2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.251 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[80bac23d-e577-4bcd-84f0-3f9428b98b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.271 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8723cdf2-9947-4c21-b6c6-bb3dc61760a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.289 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.2868018, 05cc7244-c419-4c24-b995-95ca760837a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.290 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] VM Started (Lifecycle Event)
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.301 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.311 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[627a2c64-f8a7-4714-ba15-735765f74408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.315 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.320 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7275ec25-69a3-4753-91ec-8dd0e2e8ec58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 NetworkManager[45129]: <info>  [1759393661.3212] manager: (tapfa1bff6d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/152)
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.325 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.328 2 INFO nova.virt.libvirt.driver [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance spawned successfully.
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.328 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.348 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.348 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.2870655, 05cc7244-c419-4c24-b995-95ca760837a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.348 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] VM Paused (Lifecycle Event)
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.352 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.352 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.353 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.353 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.353 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.354 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.362 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec72e85-0a4f-47a6-9694-9bbe71213f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.367 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[24afd838-8a25-45f8-bbb6-684dea71670b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.382 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.385 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.3144326, 05cc7244-c419-4c24-b995-95ca760837a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.385 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] VM Resumed (Lifecycle Event)
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.405 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:41 compute-0 NetworkManager[45129]: <info>  [1759393661.4083] device (tapfa1bff6d-10): carrier: link connected
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.410 2 INFO nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Took 9.40 seconds to spawn the instance on the hypervisor.
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.410 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.411 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.416 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2a23b1-cbc5-4088-86e9-412bcf21bb0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.435 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b456d8ec-a5e8-4d7d-bc7c-d9ff603e447c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308353, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ce4f96-f439-4755-ba7d-f579b2ff33bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:c92f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454001, 'tstamp': 454001}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308354, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.468 2 INFO nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Took 11.00 seconds to build instance.
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.481 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[554a5bf7-5aa6-4d61-a3ec-4aaac45df4be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308355, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.525 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc8e9b6-8ffe-457c-a9d2-d41a61b44305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.570 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.5695255, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.570 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] VM Started (Lifecycle Event)
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.590 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.595 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.569632, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.596 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] VM Paused (Lifecycle Event)
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.609 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b15c4f23-c68c-4d34-8c67-6f71d1c2b812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.611 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.612 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.613 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:41 compute-0 kernel: tapfa1bff6d-10: entered promiscuous mode
Oct 02 08:27:41 compute-0 NetworkManager[45129]: <info>  [1759393661.6166] manager: (tapfa1bff6d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.619 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.624 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:41 compute-0 ovn_controller[152344]: 2025-10-02T08:27:41Z|00341|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.626 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.633 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.634 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[646c4ce4-5ac7-4086-bc36-19cd7e18905c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.635 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:27:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.637 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'env', 'PROCESS_TAG=haproxy-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa1bff6d-19fb-4792-a261-4da1165d95a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.650 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.723 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.723 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:41 compute-0 nova_compute[260603]: 2025-10-02 08:27:41.740 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:42 compute-0 podman[308387]: 2025-10-02 08:27:42.030676796 +0000 UTC m=+0.073136805 container create 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:27:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:42 compute-0 podman[308387]: 2025-10-02 08:27:41.989399753 +0000 UTC m=+0.031859712 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:27:42 compute-0 systemd[1]: Started libpod-conmon-2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57.scope.
Oct 02 08:27:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d474468b501cd6334914f2b4a9f295c2f9e45b4d98ca3d1d88fd0ebc5aaafcb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:42 compute-0 podman[308387]: 2025-10-02 08:27:42.134607395 +0000 UTC m=+0.177067294 container init 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 08:27:42 compute-0 podman[308387]: 2025-10-02 08:27:42.142005115 +0000 UTC m=+0.184464994 container start 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:27:42 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [NOTICE]   (308407) : New worker (308409) forked
Oct 02 08:27:42 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [NOTICE]   (308407) : Loading success.
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.591 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.592 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.614 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:27:42 compute-0 ceph-mon[74477]: pgmap v1398: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 153 op/s
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.710 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.711 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.717 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.717 2 INFO nova.compute.claims [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:27:42 compute-0 nova_compute[260603]: 2025-10-02 08:27:42.864 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 168 op/s
Oct 02 08:27:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1189683259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.303 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.312 2 DEBUG nova.compute.provider_tree [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.338 2 DEBUG nova.scheduler.client.report [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.372 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.373 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.420 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.420 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.440 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.461 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.547 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.550 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.551 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Creating image(s)
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.600 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.644 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1189683259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.688 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.696 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.743 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.744 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.744 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.745 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.745 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] No waiting events found dispatching network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.745 2 WARNING nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received unexpected event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 for instance with vm_state active and task_state None.
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.745 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.746 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.746 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.746 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.746 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Processing event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.747 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.747 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.747 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.748 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.748 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.748 2 WARNING nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf for instance with vm_state building and task_state spawning.
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.749 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.763 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.764 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393663.7626524, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.764 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] VM Resumed (Lifecycle Event)
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.782 2 INFO nova.virt.libvirt.driver [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance spawned successfully.
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.783 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.788 2 DEBUG nova.policy [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.798 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.799 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.800 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.800 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.828 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.833 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.923 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.932 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.933 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.933 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.934 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.935 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.935 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.947 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.968 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.993 2 INFO nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Took 12.81 seconds to spawn the instance on the hypervisor.
Oct 02 08:27:43 compute-0 nova_compute[260603]: 2025-10-02 08:27:43.995 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.060 2 INFO nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Took 14.02 seconds to build instance.
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.074 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.086 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.121 2 DEBUG nova.compute.manager [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.121 2 DEBUG nova.compute.manager [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing instance network info cache due to event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.122 2 DEBUG oslo_concurrency.lockutils [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.122 2 DEBUG oslo_concurrency.lockutils [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.122 2 DEBUG nova.network.neutron [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.179 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.319 2 DEBUG nova.objects.instance [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid 84f8672c-7a2a-4307-a5c4-7e2968d84225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.343 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.344 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Ensure instance console log exists: /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.344 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.345 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:44 compute-0 nova_compute[260603]: 2025-10-02 08:27:44.345 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:44 compute-0 ceph-mon[74477]: pgmap v1399: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 168 op/s
Oct 02 08:27:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Oct 02 08:27:45 compute-0 nova_compute[260603]: 2025-10-02 08:27:45.292 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Successfully created port: d73ba644-fe1d-4d32-9e65-532dd96466b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.183 2 DEBUG nova.compute.manager [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.184 2 DEBUG nova.compute.manager [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing instance network info cache due to event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.184 2 DEBUG oslo_concurrency.lockutils [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:46 compute-0 NetworkManager[45129]: <info>  [1759393666.3290] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct 02 08:27:46 compute-0 NetworkManager[45129]: <info>  [1759393666.3301] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.359 2 DEBUG nova.network.neutron [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updated VIF entry in instance network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.360 2 DEBUG nova.network.neutron [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.382 2 DEBUG oslo_concurrency.lockutils [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.383 2 DEBUG oslo_concurrency.lockutils [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.383 2 DEBUG nova.network.neutron [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:46 compute-0 ovn_controller[152344]: 2025-10-02T08:27:46Z|00342|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:27:46 compute-0 ovn_controller[152344]: 2025-10-02T08:27:46Z|00343|binding|INFO|Releasing lport 2a218cce-83be-4768-9f4e-7d61802765d4 from this chassis (sb_readonly=0)
Oct 02 08:27:46 compute-0 ceph-mon[74477]: pgmap v1400: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.719 2 DEBUG nova.compute.manager [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.719 2 DEBUG nova.compute.manager [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.720 2 DEBUG oslo_concurrency.lockutils [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.720 2 DEBUG oslo_concurrency.lockutils [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:46 compute-0 nova_compute[260603]: 2025-10-02 08:27:46.720 2 DEBUG nova.network.neutron [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 824 KiB/s wr, 112 op/s
Oct 02 08:27:47 compute-0 podman[308607]: 2025-10-02 08:27:47.022155213 +0000 UTC m=+0.085889050 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:27:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:47 compute-0 nova_compute[260603]: 2025-10-02 08:27:47.440 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Successfully updated port: d73ba644-fe1d-4d32-9e65-532dd96466b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:27:47 compute-0 nova_compute[260603]: 2025-10-02 08:27:47.461 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:47 compute-0 nova_compute[260603]: 2025-10-02 08:27:47.462 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:47 compute-0 nova_compute[260603]: 2025-10-02 08:27:47.462 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:27:47 compute-0 nova_compute[260603]: 2025-10-02 08:27:47.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:47 compute-0 nova_compute[260603]: 2025-10-02 08:27:47.736 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393652.7103205, 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:47 compute-0 nova_compute[260603]: 2025-10-02 08:27:47.736 2 INFO nova.compute.manager [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] VM Stopped (Lifecycle Event)
Oct 02 08:27:47 compute-0 nova_compute[260603]: 2025-10-02 08:27:47.758 2 DEBUG nova.compute.manager [None req-448606df-39d8-49c9-ba8b-6b1bf32cfcde - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:48 compute-0 nova_compute[260603]: 2025-10-02 08:27:48.648 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:27:48 compute-0 ceph-mon[74477]: pgmap v1401: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 824 KiB/s wr, 112 op/s
Oct 02 08:27:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 181 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 224 op/s
Oct 02 08:27:49 compute-0 nova_compute[260603]: 2025-10-02 08:27:49.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:50 compute-0 ceph-mon[74477]: pgmap v1402: 305 pgs: 305 active+clean; 181 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 224 op/s
Oct 02 08:27:50 compute-0 nova_compute[260603]: 2025-10-02 08:27:50.840 2 DEBUG nova.compute.manager [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-changed-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:50 compute-0 nova_compute[260603]: 2025-10-02 08:27:50.841 2 DEBUG nova.compute.manager [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Refreshing instance network info cache due to event network-changed-d73ba644-fe1d-4d32-9e65-532dd96466b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:27:50 compute-0 nova_compute[260603]: 2025-10-02 08:27:50.841 2 DEBUG oslo_concurrency.lockutils [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 181 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 02 08:27:51 compute-0 podman[308629]: 2025-10-02 08:27:51.030610692 +0000 UTC m=+0.089717619 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 08:27:51 compute-0 nova_compute[260603]: 2025-10-02 08:27:51.780 2 DEBUG nova.network.neutron [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updated VIF entry in instance network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:51 compute-0 nova_compute[260603]: 2025-10-02 08:27:51.781 2 DEBUG nova.network.neutron [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:51 compute-0 nova_compute[260603]: 2025-10-02 08:27:51.814 2 DEBUG oslo_concurrency.lockutils [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:52 compute-0 nova_compute[260603]: 2025-10-02 08:27:52.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:52 compute-0 ceph-mon[74477]: pgmap v1403: 305 pgs: 305 active+clean; 181 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 02 08:27:52 compute-0 nova_compute[260603]: 2025-10-02 08:27:52.862 2 DEBUG nova.network.neutron [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:52 compute-0 nova_compute[260603]: 2025-10-02 08:27:52.863 2 DEBUG nova.network.neutron [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:52 compute-0 nova_compute[260603]: 2025-10-02 08:27:52.885 2 DEBUG oslo_concurrency.lockutils [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 198 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 203 op/s
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.230 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Updating instance_info_cache with network_info: [{"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.252 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.253 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance network_info: |[{"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.253 2 DEBUG oslo_concurrency.lockutils [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.253 2 DEBUG nova.network.neutron [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Refreshing network info cache for port d73ba644-fe1d-4d32-9e65-532dd96466b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.257 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start _get_guest_xml network_info=[{"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.262 2 WARNING nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.269 2 DEBUG nova.virt.libvirt.host [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.269 2 DEBUG nova.virt.libvirt.host [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.275 2 DEBUG nova.virt.libvirt.host [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.276 2 DEBUG nova.virt.libvirt.host [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.276 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.277 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.277 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.278 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.278 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.278 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.279 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.279 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.279 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.279 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.280 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.280 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.283 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:53.579 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:53.587 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:27:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:27:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150886568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.700 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/150886568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.731 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.738 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.947 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.948 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:53 compute-0 nova_compute[260603]: 2025-10-02 08:27:53.966 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.046 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.046 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.054 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.054 2 INFO nova.compute.claims [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:54 compute-0 ovn_controller[152344]: 2025-10-02T08:27:54Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:a6:e3 10.100.0.10
Oct 02 08:27:54 compute-0 ovn_controller[152344]: 2025-10-02T08:27:54Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:a6:e3 10.100.0.10
Oct 02 08:27:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:27:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461974147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.157 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.160 2 DEBUG nova.virt.libvirt.vif [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1351338014',display_name='tempest-DeleteServersTestJSON-server-1351338014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1351338014',id=44,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-dl5bh0q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:43Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=84f8672c-7a2a-4307-a5c4-7e2968d84225,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.160 2 DEBUG nova.network.os_vif_util [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.162 2 DEBUG nova.network.os_vif_util [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.164 2 DEBUG nova.objects.instance [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84f8672c-7a2a-4307-a5c4-7e2968d84225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.185 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <uuid>84f8672c-7a2a-4307-a5c4-7e2968d84225</uuid>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <name>instance-0000002c</name>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <nova:name>tempest-DeleteServersTestJSON-server-1351338014</nova:name>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:27:53</nova:creationTime>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <nova:port uuid="d73ba644-fe1d-4d32-9e65-532dd96466b4">
Oct 02 08:27:54 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <system>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <entry name="serial">84f8672c-7a2a-4307-a5c4-7e2968d84225</entry>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <entry name="uuid">84f8672c-7a2a-4307-a5c4-7e2968d84225</entry>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </system>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <os>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   </os>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <features>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   </features>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/84f8672c-7a2a-4307-a5c4-7e2968d84225_disk">
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config">
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       </source>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:27:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:e9:aa:14"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <target dev="tapd73ba644-fe"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/console.log" append="off"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <video>
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </video>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:27:54 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:27:54 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:27:54 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:27:54 compute-0 nova_compute[260603]: </domain>
Oct 02 08:27:54 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.185 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Preparing to wait for external event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.185 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.186 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.186 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.187 2 DEBUG nova.virt.libvirt.vif [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1351338014',display_name='tempest-DeleteServersTestJSON-server-1351338014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1351338014',id=44,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-dl5bh0q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:43Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=84f8672c-7a2a-4307-a5c4-7e2968d84225,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.187 2 DEBUG nova.network.os_vif_util [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.187 2 DEBUG nova.network.os_vif_util [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.188 2 DEBUG os_vif [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.189 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.189 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd73ba644-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd73ba644-fe, col_values=(('external_ids', {'iface-id': 'd73ba644-fe1d-4d32-9e65-532dd96466b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:aa:14', 'vm-uuid': '84f8672c-7a2a-4307-a5c4-7e2968d84225'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:54 compute-0 NetworkManager[45129]: <info>  [1759393674.1994] manager: (tapd73ba644-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.206 2 INFO os_vif [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe')
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.228 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:54 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.299 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.300 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.300 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:e9:aa:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.301 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Using config drive
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.325 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905763339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.674 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.678 2 DEBUG nova.compute.provider_tree [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.701 2 DEBUG nova.scheduler.client.report [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:54 compute-0 ceph-mon[74477]: pgmap v1404: 305 pgs: 305 active+clean; 198 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 203 op/s
Oct 02 08:27:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1461974147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:27:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3905763339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.717 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.718 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.734 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Creating config drive at /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.738 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8mpgimwr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:54 compute-0 ovn_controller[152344]: 2025-10-02T08:27:54Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:59:2a 10.100.0.3
Oct 02 08:27:54 compute-0 ovn_controller[152344]: 2025-10-02T08:27:54Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:59:2a 10.100.0.3
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.777 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.777 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.801 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.818 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.850 2 DEBUG nova.network.neutron [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Updated VIF entry in instance network info cache for port d73ba644-fe1d-4d32-9e65-532dd96466b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.850 2 DEBUG nova.network.neutron [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Updating instance_info_cache with network_info: [{"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.868 2 DEBUG oslo_concurrency.lockutils [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.886 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8mpgimwr" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.905 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.910 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 206 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 202 op/s
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.936 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.937 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.938 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Creating image(s)
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.958 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:54 compute-0 nova_compute[260603]: 2025-10-02 08:27:54.979 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.010 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.018 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.055 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.056 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Deleting local config drive /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config because it was imported into RBD.
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.093 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.094 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.094 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.095 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:55 compute-0 kernel: tapd73ba644-fe: entered promiscuous mode
Oct 02 08:27:55 compute-0 NetworkManager[45129]: <info>  [1759393675.1589] manager: (tapd73ba644-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/157)
Oct 02 08:27:55 compute-0 ovn_controller[152344]: 2025-10-02T08:27:55Z|00344|binding|INFO|Claiming lport d73ba644-fe1d-4d32-9e65-532dd96466b4 for this chassis.
Oct 02 08:27:55 compute-0 ovn_controller[152344]: 2025-10-02T08:27:55Z|00345|binding|INFO|d73ba644-fe1d-4d32-9e65-532dd96466b4: Claiming fa:16:3e:e9:aa:14 10.100.0.12
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.170 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:aa:14 10.100.0.12'], port_security=['fa:16:3e:e9:aa:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '84f8672c-7a2a-4307-a5c4-7e2968d84225', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d73ba644-fe1d-4d32-9e65-532dd96466b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.171 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d73ba644-fe1d-4d32-9e65-532dd96466b4 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.173 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:27:55 compute-0 ovn_controller[152344]: 2025-10-02T08:27:55Z|00346|binding|INFO|Setting lport d73ba644-fe1d-4d32-9e65-532dd96466b4 ovn-installed in OVS
Oct 02 08:27:55 compute-0 ovn_controller[152344]: 2025-10-02T08:27:55Z|00347|binding|INFO|Setting lport d73ba644-fe1d-4d32-9e65-532dd96466b4 up in Southbound
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.183 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab2d2a1-6d41-4760-914f-3197c9ff0d95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.196 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.199 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dabfaaa3-b265-4491-9a99-99f98f464c7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbd48b5-362f-45ec-8de2-a40595949b6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.204 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 247e32e5-5f07-4db4-9e6f-dcfade745228_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:55 compute-0 systemd-udevd[308881]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:27:55 compute-0 systemd-machined[214636]: New machine qemu-48-instance-0000002c.
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.219 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e2f1f5-e537-4231-bcc1-59188bc05552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 NetworkManager[45129]: <info>  [1759393675.2247] device (tapd73ba644-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:27:55 compute-0 NetworkManager[45129]: <info>  [1759393675.2254] device (tapd73ba644-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:27:55 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002c.
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.240 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e284654a-7500-47b5-8f20-d375af739629]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.257 2 DEBUG nova.policy [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.274 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1303eb55-d7ce-4e64-8aa6-a662cba192e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.282 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8f3c18-ec4a-4d32-b03e-76a9e2e6fb5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 NetworkManager[45129]: <info>  [1759393675.2836] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/158)
Oct 02 08:27:55 compute-0 systemd-udevd[308886]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.335 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6573aabd-f1e0-432a-bb4a-5cec7d2673cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.338 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5f12a2-f638-4a4c-be54-57d51a6925ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 NetworkManager[45129]: <info>  [1759393675.3700] device (tapa72ac8c9-10): carrier: link connected
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.378 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec85b9e-7d8c-4d2c-8c69-6e5215344458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0c7d63-dbb8-4631-806a-218bf95550ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455397, 'reachable_time': 23563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308933, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d142e9-d927-4269-bba6-91c94db687ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455397, 'tstamp': 455397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308934, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[629304f2-62ae-40db-a84a-69347302feb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455397, 'reachable_time': 23563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308935, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.518 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3778c708-d845-4156-a3e3-060a9720e5e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.532 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 247e32e5-5f07-4db4-9e6f-dcfade745228_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.581 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9715f7c4-8374-4c00-8ac3-15058d7c7e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.583 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.584 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.585 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:55 compute-0 NetworkManager[45129]: <info>  [1759393675.5878] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Oct 02 08:27:55 compute-0 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.593 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:27:55 compute-0 ovn_controller[152344]: 2025-10-02T08:27:55Z|00348|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.598 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.600 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a26d31-b4cd-4e73-a703-c1022eb26922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.601 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:27:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.603 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.639 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] resizing rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.734 2 DEBUG nova.objects.instance [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'migration_context' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.750 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.751 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Ensure instance console log exists: /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.752 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.752 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:55 compute-0 nova_compute[260603]: 2025-10-02 08:27:55.752 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:56 compute-0 podman[309081]: 2025-10-02 08:27:56.005504084 +0000 UTC m=+0.047243148 container create a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:27:56 compute-0 systemd[1]: Started libpod-conmon-a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf.scope.
Oct 02 08:27:56 compute-0 podman[309081]: 2025-10-02 08:27:55.981254071 +0000 UTC m=+0.022993155 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:27:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59eb34d5cb9d5ee03edb075070edb4d31fba25c937c3d835829c09d4788425a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:27:56 compute-0 podman[309081]: 2025-10-02 08:27:56.104606274 +0000 UTC m=+0.146345358 container init a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:27:56 compute-0 podman[309081]: 2025-10-02 08:27:56.109790625 +0000 UTC m=+0.151529689 container start a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 08:27:56 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [NOTICE]   (309100) : New worker (309102) forked
Oct 02 08:27:56 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [NOTICE]   (309100) : Loading success.
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.240 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Successfully created port: 262f44f5-df56-4176-96b2-4819d8b7e258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.273 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393676.273379, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.274 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Started (Lifecycle Event)
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.290 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.294 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393676.2759104, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.294 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Paused (Lifecycle Event)
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.315 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:27:56 compute-0 nova_compute[260603]: 2025-10-02 08:27:56.331 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:27:56 compute-0 ceph-mon[74477]: pgmap v1405: 305 pgs: 305 active+clean; 206 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 202 op/s
Oct 02 08:27:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 206 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 156 op/s
Oct 02 08:27:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:27:57 compute-0 nova_compute[260603]: 2025-10-02 08:27:57.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.025 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.025 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.045 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.133 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.133 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.140 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.141 2 INFO nova.compute.claims [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.205 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Successfully updated port: 262f44f5-df56-4176-96b2-4819d8b7e258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.228 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.228 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.229 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.281 2 DEBUG nova.compute.manager [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.281 2 DEBUG oslo_concurrency.lockutils [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.281 2 DEBUG oslo_concurrency.lockutils [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.282 2 DEBUG oslo_concurrency.lockutils [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.282 2 DEBUG nova.compute.manager [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Processing event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.282 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.286 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393678.2858777, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.286 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Resumed (Lifecycle Event)
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.287 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.293 2 INFO nova.virt.libvirt.driver [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance spawned successfully.
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.293 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.318 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.323 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.323 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.324 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.324 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.324 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.325 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.356 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.363 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.402 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.409 2 INFO nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Took 14.86 seconds to spawn the instance on the hypervisor.
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.409 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.469 2 INFO nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Took 15.79 seconds to build instance.
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.485 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:58 compute-0 ceph-mon[74477]: pgmap v1406: 305 pgs: 305 active+clean; 206 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 156 op/s
Oct 02 08:27:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:27:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357591501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.833 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.840 2 DEBUG nova.compute.provider_tree [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.855 2 DEBUG nova.scheduler.client.report [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.889 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.890 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:27:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 293 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.8 MiB/s wr, 277 op/s
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.948 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.949 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.974 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:27:58 compute-0 nova_compute[260603]: 2025-10-02 08:27:58.998 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.090 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.091 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.092 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Creating image(s)
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.123 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.153 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.182 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.186 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.256 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.257 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.258 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.259 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.281 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.284 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 197184a1-4270-40c9-87b5-6eca7e832812_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.507 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 197184a1-4270-40c9-87b5-6eca7e832812_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.564 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] resizing rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.647 2 DEBUG nova.policy [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3e67f389787b453faa1dfdb728caea35', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd531839bafe441a391fb9161a54c74ee', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.654 2 DEBUG nova.objects.instance [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lazy-loading 'migration_context' on Instance uuid 197184a1-4270-40c9-87b5-6eca7e832812 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.675 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.675 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Ensure instance console log exists: /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.676 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.676 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:27:59 compute-0 nova_compute[260603]: 2025-10-02 08:27:59.677 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:27:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2357591501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.652 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.682 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.682 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance network_info: |[{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.684 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start _get_guest_xml network_info=[{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.692 2 WARNING nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.697 2 DEBUG nova.virt.libvirt.host [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.698 2 DEBUG nova.virt.libvirt.host [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.702 2 DEBUG nova.virt.libvirt.host [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.702 2 DEBUG nova.virt.libvirt.host [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.702 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.703 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.703 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.703 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.703 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.704 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.704 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.704 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.704 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.705 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.705 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.705 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.707 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:00 compute-0 ceph-mon[74477]: pgmap v1407: 305 pgs: 305 active+clean; 293 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.8 MiB/s wr, 277 op/s
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.815 2 INFO nova.compute.manager [None req-02f57a2d-dbba-47af-aad6-d7d227e5059a 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Pausing
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.818 2 DEBUG nova.objects.instance [None req-02f57a2d-dbba-47af-aad6-d7d227e5059a 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'flavor' on Instance uuid 84f8672c-7a2a-4307-a5c4-7e2968d84225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.854 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393680.853726, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.855 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Paused (Lifecycle Event)
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.860 2 DEBUG nova.compute.manager [None req-02f57a2d-dbba-47af-aad6-d7d227e5059a 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.866 2 DEBUG nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.866 2 DEBUG nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.868 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.868 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.869 2 DEBUG nova.network.neutron [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.883 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.894 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 293 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.0 MiB/s wr, 165 op/s
Oct 02 08:28:00 compute-0 nova_compute[260603]: 2025-10-02 08:28:00.923 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 08:28:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471208804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.172 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.209 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.215 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:01.591 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035058678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.648 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.650 2 DEBUG nova.virt.libvirt.vif [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.650 2 DEBUG nova.network.os_vif_util [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.651 2 DEBUG nova.network.os_vif_util [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.653 2 DEBUG nova.objects.instance [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.684 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <uuid>247e32e5-5f07-4db4-9e6f-dcfade745228</uuid>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <name>instance-0000002d</name>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:28:00</nova:creationTime>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct 02 08:28:01 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <entry name="serial">247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <entry name="uuid">247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk">
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config">
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:01 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:df:f8:c0"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <target dev="tap262f44f5-df"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log" append="off"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:28:01 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:28:01 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:01 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:01 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:01 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.685 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Preparing to wait for external event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.686 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.686 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.687 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.688 2 DEBUG nova.virt.libvirt.vif [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.688 2 DEBUG nova.network.os_vif_util [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.689 2 DEBUG nova.network.os_vif_util [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.690 2 DEBUG os_vif [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap262f44f5-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.700 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap262f44f5-df, col_values=(('external_ids', {'iface-id': '262f44f5-df56-4176-96b2-4819d8b7e258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:f8:c0', 'vm-uuid': '247e32e5-5f07-4db4-9e6f-dcfade745228'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:01 compute-0 NetworkManager[45129]: <info>  [1759393681.7043] manager: (tap262f44f5-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.711 2 INFO os_vif [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df')
Oct 02 08:28:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1471208804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2035058678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.776 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.777 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.778 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:df:f8:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.778 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Using config drive
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.808 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:01 compute-0 nova_compute[260603]: 2025-10-02 08:28:01.953 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Successfully created port: b70e7dbb-3605-4af5-a977-d1c35f1ec20a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:28:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:02 compute-0 nova_compute[260603]: 2025-10-02 08:28:02.609 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Creating config drive at /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config
Oct 02 08:28:02 compute-0 nova_compute[260603]: 2025-10-02 08:28:02.621 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvlg2_ot execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:02 compute-0 nova_compute[260603]: 2025-10-02 08:28:02.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:02 compute-0 ceph-mon[74477]: pgmap v1408: 305 pgs: 305 active+clean; 293 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.0 MiB/s wr, 165 op/s
Oct 02 08:28:02 compute-0 nova_compute[260603]: 2025-10-02 08:28:02.780 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvlg2_ot" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:02 compute-0 nova_compute[260603]: 2025-10-02 08:28:02.824 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:02 compute-0 nova_compute[260603]: 2025-10-02 08:28:02.829 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 319 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.4 MiB/s wr, 235 op/s
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.027 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.029 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Deleting local config drive /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config because it was imported into RBD.
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.037 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.038 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.039 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.039 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.042 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.046 2 INFO nova.compute.manager [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Terminating instance
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.048 2 DEBUG nova.compute.manager [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:28:03 compute-0 kernel: tapd73ba644-fe (unregistering): left promiscuous mode
Oct 02 08:28:03 compute-0 NetworkManager[45129]: <info>  [1759393683.1096] device (tapd73ba644-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:03 compute-0 kernel: tap262f44f5-df: entered promiscuous mode
Oct 02 08:28:03 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:28:03 compute-0 NetworkManager[45129]: <info>  [1759393683.1146] manager: (tap262f44f5-df): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Oct 02 08:28:03 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:03 compute-0 ovn_controller[152344]: 2025-10-02T08:28:03Z|00349|binding|INFO|Releasing lport d73ba644-fe1d-4d32-9e65-532dd96466b4 from this chassis (sb_readonly=0)
Oct 02 08:28:03 compute-0 ovn_controller[152344]: 2025-10-02T08:28:03Z|00350|binding|INFO|Setting lport d73ba644-fe1d-4d32-9e65-532dd96466b4 down in Southbound
Oct 02 08:28:03 compute-0 ovn_controller[152344]: 2025-10-02T08:28:03Z|00351|binding|INFO|Claiming lport 262f44f5-df56-4176-96b2-4819d8b7e258 for this chassis.
Oct 02 08:28:03 compute-0 ovn_controller[152344]: 2025-10-02T08:28:03Z|00352|binding|INFO|262f44f5-df56-4176-96b2-4819d8b7e258: Claiming fa:16:3e:df:f8:c0 10.100.0.14
Oct 02 08:28:03 compute-0 ovn_controller[152344]: 2025-10-02T08:28:03Z|00353|binding|INFO|Removing iface tapd73ba644-fe ovn-installed in OVS
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.130 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:f8:c0 10.100.0.14'], port_security=['fa:16:3e:df:f8:c0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '247e32e5-5f07-4db4-9e6f-dcfade745228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1616ad3a-ff5f-4423-8dd9-f2ff5717f8c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=262f44f5-df56-4176-96b2-4819d8b7e258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.141 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:aa:14 10.100.0.12'], port_security=['fa:16:3e:e9:aa:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '84f8672c-7a2a-4307-a5c4-7e2968d84225', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d73ba644-fe1d-4d32-9e65-532dd96466b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.144 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 262f44f5-df56-4176-96b2-4819d8b7e258 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.148 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:28:03 compute-0 systemd-machined[214636]: New machine qemu-49-instance-0000002d.
Oct 02 08:28:03 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002d.
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cb178a-a719-4b83-93c3-c07965777ae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct 02 08:28:03 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002c.scope: Consumed 3.531s CPU time.
Oct 02 08:28:03 compute-0 systemd-machined[214636]: Machine qemu-48-instance-0000002c terminated.
Oct 02 08:28:03 compute-0 ovn_controller[152344]: 2025-10-02T08:28:03Z|00354|binding|INFO|Setting lport 262f44f5-df56-4176-96b2-4819d8b7e258 ovn-installed in OVS
Oct 02 08:28:03 compute-0 ovn_controller[152344]: 2025-10-02T08:28:03Z|00355|binding|INFO|Setting lport 262f44f5-df56-4176-96b2-4819d8b7e258 up in Southbound
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:03 compute-0 systemd-udevd[309445]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.223 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb344d82-ffe7-487b-8924-16873ccdfa26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.228 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1e16960e-85e7-42a9-9e36-c781fc470eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 NetworkManager[45129]: <info>  [1759393683.2368] device (tap262f44f5-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:28:03 compute-0 NetworkManager[45129]: <info>  [1759393683.2378] device (tap262f44f5-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.267 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ac98ac-9007-415b-bcdd-42ef05fa17a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.303 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e857eaa9-21f2-4ccb-b711-df125f6bf665]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309457, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.315 2 INFO nova.virt.libvirt.driver [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance destroyed successfully.
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.316 2 DEBUG nova.objects.instance [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid 84f8672c-7a2a-4307-a5c4-7e2968d84225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.338 2 DEBUG nova.virt.libvirt.vif [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1351338014',display_name='tempest-DeleteServersTestJSON-server-1351338014',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1351338014',id=44,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-dl5bh0q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:00Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=84f8672c-7a2a-4307-a5c4-7e2968d84225,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.338 2 DEBUG nova.network.os_vif_util [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.339 2 DEBUG nova.network.os_vif_util [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.339 2 DEBUG os_vif [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd73ba644-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.339 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2d4e4f-fbff-4b03-8486-cac276e60671]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309467, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309467, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.347 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.348 2 INFO os_vif [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe')
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.357 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.357 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.358 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.358 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.359 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d73ba644-fe1d-4d32-9e65-532dd96466b4 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.361 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.362 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2f5401-8810-4513-ba65-6bb814b62295]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.362 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:03 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [NOTICE]   (309100) : haproxy version is 2.8.14-c23fe91
Oct 02 08:28:03 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [NOTICE]   (309100) : path to executable is /usr/sbin/haproxy
Oct 02 08:28:03 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [WARNING]  (309100) : Exiting Master process...
Oct 02 08:28:03 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [WARNING]  (309100) : Exiting Master process...
Oct 02 08:28:03 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [ALERT]    (309100) : Current worker (309102) exited with code 143 (Terminated)
Oct 02 08:28:03 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [WARNING]  (309100) : All workers exited. Exiting... (0)
Oct 02 08:28:03 compute-0 systemd[1]: libpod-a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf.scope: Deactivated successfully.
Oct 02 08:28:03 compute-0 podman[309523]: 2025-10-02 08:28:03.559258399 +0000 UTC m=+0.072900126 container died a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf-userdata-shm.mount: Deactivated successfully.
Oct 02 08:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-59eb34d5cb9d5ee03edb075070edb4d31fba25c937c3d835829c09d4788425a8-merged.mount: Deactivated successfully.
Oct 02 08:28:03 compute-0 podman[309523]: 2025-10-02 08:28:03.605037511 +0000 UTC m=+0.118679228 container cleanup a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 08:28:03 compute-0 systemd[1]: libpod-conmon-a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf.scope: Deactivated successfully.
Oct 02 08:28:03 compute-0 podman[309580]: 2025-10-02 08:28:03.697489955 +0000 UTC m=+0.060525192 container remove a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.697 2 DEBUG nova.network.neutron [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.699 2 DEBUG nova.network.neutron [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.707 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42cbfb74-6577-4c4d-bc4f-8d2c93b98d89]: (4, ('Thu Oct  2 08:28:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf)\na6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf\nThu Oct  2 08:28:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf)\na6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.709 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7e7c20-45c2-4320-9ccb-fa9053e71da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.726 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.726 2 DEBUG nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.727 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.727 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.727 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.727 2 DEBUG nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] No waiting events found dispatching network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.728 2 WARNING nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received unexpected event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 for instance with vm_state active and task_state pausing.
Oct 02 08:28:03 compute-0 kernel: tapa72ac8c9-10: left promiscuous mode
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.761 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8dfc0f-929c-4704-83e3-12f18acfe0f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.773 2 INFO nova.virt.libvirt.driver [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Deleting instance files /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225_del
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.774 2 INFO nova.virt.libvirt.driver [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Deletion of /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225_del complete
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.780 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83f4182a-b3e5-48ae-aefa-bc8a84ca6f83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.782 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3314fd-bb56-40a4-8f8d-63d88c638ee0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.807 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b8c77e-cc0f-42cd-84dc-b890346bf8f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455387, 'reachable_time': 33571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309594, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.812 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:28:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.812 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f52c60ab-4367-4e28-87a3-9f0cde7ca63c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.844 2 INFO nova.compute.manager [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.845 2 DEBUG oslo.service.loopingcall [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.845 2 DEBUG nova.compute.manager [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:28:03 compute-0 nova_compute[260603]: 2025-10-02 08:28:03.845 2 DEBUG nova.network.neutron [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.017 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393684.017244, 247e32e5-5f07-4db4-9e6f-dcfade745228 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.018 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] VM Started (Lifecycle Event)
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.047 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.052 2 DEBUG nova.compute.manager [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.052 2 DEBUG oslo_concurrency.lockutils [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.052 2 DEBUG oslo_concurrency.lockutils [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.053 2 DEBUG oslo_concurrency.lockutils [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.053 2 DEBUG nova.compute.manager [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Processing event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.054 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.055 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393684.0173428, 247e32e5-5f07-4db4-9e6f-dcfade745228 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.056 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] VM Paused (Lifecycle Event)
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.059 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.062 2 INFO nova.virt.libvirt.driver [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance spawned successfully.
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.062 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.081 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.091 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393684.0592353, 247e32e5-5f07-4db4-9e6f-dcfade745228 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.092 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] VM Resumed (Lifecycle Event)
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.095 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.095 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.096 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.096 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.097 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.097 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.127 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.130 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.160 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.173 2 INFO nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Took 9.24 seconds to spawn the instance on the hypervisor.
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.174 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.257 2 INFO nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Took 10.24 seconds to build instance.
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.275 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.414 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Successfully updated port: b70e7dbb-3605-4af5-a977-d1c35f1ec20a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.428 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.429 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquired lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.429 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.649 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:28:04 compute-0 ceph-mon[74477]: pgmap v1409: 305 pgs: 305 active+clean; 319 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.4 MiB/s wr, 235 op/s
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.812 2 DEBUG nova.compute.manager [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-unplugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.813 2 DEBUG oslo_concurrency.lockutils [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.814 2 DEBUG oslo_concurrency.lockutils [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.814 2 DEBUG oslo_concurrency.lockutils [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.814 2 DEBUG nova.compute.manager [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] No waiting events found dispatching network-vif-unplugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.815 2 DEBUG nova.compute.manager [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-unplugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.838 2 DEBUG nova.network.neutron [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.855 2 INFO nova.compute.manager [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Took 1.01 seconds to deallocate network for instance.
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.894 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:04 compute-0 nova_compute[260603]: 2025-10-02 08:28:04.895 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 339 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 228 op/s
Oct 02 08:28:05 compute-0 nova_compute[260603]: 2025-10-02 08:28:05.037 2 DEBUG oslo_concurrency.processutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121913684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:05 compute-0 nova_compute[260603]: 2025-10-02 08:28:05.558 2 DEBUG oslo_concurrency.processutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:05 compute-0 nova_compute[260603]: 2025-10-02 08:28:05.563 2 DEBUG nova.compute.provider_tree [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:05 compute-0 nova_compute[260603]: 2025-10-02 08:28:05.587 2 DEBUG nova.scheduler.client.report [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:05 compute-0 nova_compute[260603]: 2025-10-02 08:28:05.643 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:05 compute-0 nova_compute[260603]: 2025-10-02 08:28:05.686 2 INFO nova.scheduler.client.report [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance 84f8672c-7a2a-4307-a5c4-7e2968d84225
Oct 02 08:28:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3121913684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:05 compute-0 nova_compute[260603]: 2025-10-02 08:28:05.786 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.365 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Updating instance_info_cache with network_info: [{"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.438 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Releasing lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.439 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance network_info: |[{"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.444 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start _get_guest_xml network_info=[{"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.452 2 WARNING nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.458 2 DEBUG nova.virt.libvirt.host [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.459 2 DEBUG nova.virt.libvirt.host [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.463 2 DEBUG nova.virt.libvirt.host [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.464 2 DEBUG nova.virt.libvirt.host [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.465 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.466 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.467 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.467 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.468 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.469 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.469 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.470 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.471 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.471 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.472 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.472 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.479 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:06 compute-0 ceph-mon[74477]: pgmap v1410: 305 pgs: 305 active+clean; 339 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 228 op/s
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.833 2 DEBUG nova.compute.manager [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.834 2 DEBUG oslo_concurrency.lockutils [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.835 2 DEBUG oslo_concurrency.lockutils [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.835 2 DEBUG oslo_concurrency.lockutils [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.836 2 DEBUG nova.compute.manager [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.836 2 WARNING nova.compute.manager [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 for instance with vm_state active and task_state None.
Oct 02 08:28:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 339 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 214 op/s
Oct 02 08:28:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2301358748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.966 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-changed-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.967 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Refreshing instance network info cache due to event network-changed-b70e7dbb-3605-4af5-a977-d1c35f1ec20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.968 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.968 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.969 2 DEBUG nova.network.neutron [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Refreshing network info cache for port b70e7dbb-3605-4af5-a977-d1c35f1ec20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:06 compute-0 nova_compute[260603]: 2025-10-02 08:28:06.983 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.018 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.024 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035342952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.499 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.502 2 DEBUG nova.virt.libvirt.vif [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1439041942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1439041942',id=46,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d531839bafe441a391fb9161a54c74ee',ramdisk_id='',reservation_id='r-3hltheaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-588685837',owner_user_name='tempest-InstanceActionsV221TestJSON-588685837-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:59Z,user_data=None,user_id='3e67f389787b453faa1dfdb728caea35',uuid=197184a1-4270-40c9-87b5-6eca7e832812,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.503 2 DEBUG nova.network.os_vif_util [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converting VIF {"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.504 2 DEBUG nova.network.os_vif_util [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.506 2 DEBUG nova.objects.instance [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lazy-loading 'pci_devices' on Instance uuid 197184a1-4270-40c9-87b5-6eca7e832812 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.527 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <uuid>197184a1-4270-40c9-87b5-6eca7e832812</uuid>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <name>instance-0000002e</name>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-1439041942</nova:name>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:28:06</nova:creationTime>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <nova:user uuid="3e67f389787b453faa1dfdb728caea35">tempest-InstanceActionsV221TestJSON-588685837-project-member</nova:user>
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <nova:project uuid="d531839bafe441a391fb9161a54c74ee">tempest-InstanceActionsV221TestJSON-588685837</nova:project>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <nova:port uuid="b70e7dbb-3605-4af5-a977-d1c35f1ec20a">
Oct 02 08:28:07 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <entry name="serial">197184a1-4270-40c9-87b5-6eca7e832812</entry>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <entry name="uuid">197184a1-4270-40c9-87b5-6eca7e832812</entry>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/197184a1-4270-40c9-87b5-6eca7e832812_disk">
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/197184a1-4270-40c9-87b5-6eca7e832812_disk.config">
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:07 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:e7:85:5d"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <target dev="tapb70e7dbb-36"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/console.log" append="off"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:28:07 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:28:07 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:07 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:07 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:07 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.529 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Preparing to wait for external event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.530 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.530 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.530 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.532 2 DEBUG nova.virt.libvirt.vif [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1439041942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1439041942',id=46,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d531839bafe441a391fb9161a54c74ee',ramdisk_id='',reservation_id='r-3hltheaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-588685837',owner_user_name='tempest-InstanceActionsV221TestJSON-588685837-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:59Z,user_data=None,user_id='3e67f389787b453faa1dfdb728caea35',uuid=197184a1-4270-40c9-87b5-6eca7e832812,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.532 2 DEBUG nova.network.os_vif_util [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converting VIF {"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.533 2 DEBUG nova.network.os_vif_util [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.534 2 DEBUG os_vif [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.536 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb70e7dbb-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb70e7dbb-36, col_values=(('external_ids', {'iface-id': 'b70e7dbb-3605-4af5-a977-d1c35f1ec20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:85:5d', 'vm-uuid': '197184a1-4270-40c9-87b5-6eca7e832812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:07 compute-0 NetworkManager[45129]: <info>  [1759393687.5451] manager: (tapb70e7dbb-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.554 2 INFO os_vif [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36')
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.633 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.634 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.634 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] No VIF found with MAC fa:16:3e:e7:85:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.635 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Using config drive
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.670 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:07 compute-0 nova_compute[260603]: 2025-10-02 08:28:07.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2301358748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3035342952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.018 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Creating config drive at /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.027 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcx8zifqk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.178 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcx8zifqk" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.216 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.221 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config 197184a1-4270-40c9-87b5-6eca7e832812_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.393 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config 197184a1-4270-40c9-87b5-6eca7e832812_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.394 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Deleting local config drive /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config because it was imported into RBD.
Oct 02 08:28:08 compute-0 kernel: tapb70e7dbb-36: entered promiscuous mode
Oct 02 08:28:08 compute-0 NetworkManager[45129]: <info>  [1759393688.4484] manager: (tapb70e7dbb-36): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct 02 08:28:08 compute-0 systemd-udevd[309749]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:08 compute-0 ovn_controller[152344]: 2025-10-02T08:28:08Z|00356|binding|INFO|Claiming lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a for this chassis.
Oct 02 08:28:08 compute-0 ovn_controller[152344]: 2025-10-02T08:28:08Z|00357|binding|INFO|b70e7dbb-3605-4af5-a977-d1c35f1ec20a: Claiming fa:16:3e:e7:85:5d 10.100.0.11
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.488 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:85:5d 10.100.0.11'], port_security=['fa:16:3e:e7:85:5d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '197184a1-4270-40c9-87b5-6eca7e832812', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd531839bafe441a391fb9161a54c74ee', 'neutron:revision_number': '2', 'neutron:security_group_ids': '968634c4-ac96-4f74-8c28-677b698bf17c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec8729b6-f94a-4108-b4ba-253312192658, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b70e7dbb-3605-4af5-a977-d1c35f1ec20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.490 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b70e7dbb-3605-4af5-a977-d1c35f1ec20a in datapath a14a44c4-2ea4-49fe-ba19-5ba96209c2bc bound to our chassis
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.491 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a14a44c4-2ea4-49fe-ba19-5ba96209c2bc
Oct 02 08:28:08 compute-0 NetworkManager[45129]: <info>  [1759393688.4970] device (tapb70e7dbb-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:28:08 compute-0 NetworkManager[45129]: <info>  [1759393688.5022] device (tapb70e7dbb-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:28:08 compute-0 ovn_controller[152344]: 2025-10-02T08:28:08Z|00358|binding|INFO|Setting lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a ovn-installed in OVS
Oct 02 08:28:08 compute-0 ovn_controller[152344]: 2025-10-02T08:28:08Z|00359|binding|INFO|Setting lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a up in Southbound
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e29b3b08-527c-4e8f-980c-5aabbdd2c79c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.509 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa14a44c4-21 in ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.512 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa14a44c4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.512 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac7fa7a-424e-406d-85fe-4f7a9e0e6f58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7007a0d-a443-40ab-9c19-7f6ebd16f966]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.524 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8578aa7b-9e82-4bb8-a399-9e6ca3cbe579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 systemd-machined[214636]: New machine qemu-50-instance-0000002e.
Oct 02 08:28:08 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002e.
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.547 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f1e6e1-432e-4033-b7a0-dd7798985f89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.579 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d69bce7b-c3d3-41db-99e1-96351e1e8bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 systemd-udevd[309753]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:08 compute-0 NetworkManager[45129]: <info>  [1759393688.5892] manager: (tapa14a44c4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[58308777-ee2b-40a5-bb88-a5d5a755d7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.633 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc79194-5335-4b3b-b4b0-c04bb06fc476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.635 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d57f1baa-08f0-4b33-8eee-0446acbe7b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.667 2 DEBUG nova.network.neutron [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Updated VIF entry in instance network info cache for port b70e7dbb-3605-4af5-a977-d1c35f1ec20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.667 2 DEBUG nova.network.neutron [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Updating instance_info_cache with network_info: [{"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:08 compute-0 NetworkManager[45129]: <info>  [1759393688.6710] device (tapa14a44c4-20): carrier: link connected
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.677 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[86c871f6-67ed-4f3d-b36c-4a8d8b528a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.687 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.687 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.688 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.688 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.688 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.688 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] No waiting events found dispatching network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.689 2 WARNING nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received unexpected event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 for instance with vm_state deleted and task_state None.
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.689 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-deleted-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.696 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ef545259-257b-48fb-be4b-0824bb76a1cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa14a44c4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:64:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456727, 'reachable_time': 36232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309785, 'error': None, 'target': 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.714 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eb73ff6d-08de-4383-997d-e24e94cafe08]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:64de'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456727, 'tstamp': 456727}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309786, 'error': None, 'target': 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.732 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c949a12a-9e34-4afe-82b6-9c1d8ea5e66a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa14a44c4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:64:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456727, 'reachable_time': 36232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309787, 'error': None, 'target': 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.767 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90175b05-2e76-4c14-af48-93e9b0e5980c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ceph-mon[74477]: pgmap v1411: 305 pgs: 305 active+clean; 339 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 214 op/s
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.865 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[771b5cdb-8ecb-4edd-a6e3-80a11ee144d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.866 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa14a44c4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.867 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.867 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa14a44c4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:08 compute-0 kernel: tapa14a44c4-20: entered promiscuous mode
Oct 02 08:28:08 compute-0 NetworkManager[45129]: <info>  [1759393688.8710] manager: (tapa14a44c4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.874 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa14a44c4-20, col_values=(('external_ids', {'iface-id': '96e9ca07-5376-4a96-befe-3386cfd0b28d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:08 compute-0 ovn_controller[152344]: 2025-10-02T08:28:08Z|00360|binding|INFO|Releasing lport 96e9ca07-5376-4a96-befe-3386cfd0b28d from this chassis (sb_readonly=0)
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:08 compute-0 nova_compute[260603]: 2025-10-02 08:28:08.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.901 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a14a44c4-2ea4-49fe-ba19-5ba96209c2bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a14a44c4-2ea4-49fe-ba19-5ba96209c2bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.902 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5631e7fe-fab0-431f-bbd6-6bcc0ab945e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.903 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/a14a44c4-2ea4-49fe-ba19-5ba96209c2bc.pid.haproxy
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID a14a44c4-2ea4-49fe-ba19-5ba96209c2bc
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:28:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.904 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'env', 'PROCESS_TAG=haproxy-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a14a44c4-2ea4-49fe-ba19-5ba96209c2bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:28:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 293 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.8 MiB/s wr, 313 op/s
Oct 02 08:28:09 compute-0 podman[309861]: 2025-10-02 08:28:09.339139598 +0000 UTC m=+0.059504280 container create 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:28:09 compute-0 systemd[1]: Started libpod-conmon-20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b.scope.
Oct 02 08:28:09 compute-0 podman[309861]: 2025-10-02 08:28:09.302788178 +0000 UTC m=+0.023152900 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:28:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a4952718d6abeada72cfb2cec3851c08f6d1026af2e4c7575087548f9cfd9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:09 compute-0 podman[309861]: 2025-10-02 08:28:09.424017295 +0000 UTC m=+0.144382007 container init 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:28:09 compute-0 podman[309861]: 2025-10-02 08:28:09.429608359 +0000 UTC m=+0.149973041 container start 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 08:28:09 compute-0 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [NOTICE]   (309881) : New worker (309883) forked
Oct 02 08:28:09 compute-0 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [NOTICE]   (309881) : Loading success.
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.472 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.473 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.473 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.474 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.474 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.607 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393689.606536, 197184a1-4270-40c9-87b5-6eca7e832812 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.608 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] VM Started (Lifecycle Event)
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.630 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.635 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393689.6068947, 197184a1-4270-40c9-87b5-6eca7e832812 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.636 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] VM Paused (Lifecycle Event)
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.656 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.661 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.685 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.697 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.698 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.714 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.846 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.847 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.856 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:28:09 compute-0 nova_compute[260603]: 2025-10-02 08:28:09.856 2 INFO nova.compute.claims [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.069 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017470258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.592 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.602 2 DEBUG nova.compute.provider_tree [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.624 2 DEBUG nova.scheduler.client.report [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.650 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.652 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.707 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.708 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.731 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.754 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:28:10 compute-0 ceph-mon[74477]: pgmap v1412: 305 pgs: 305 active+clean; 293 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.8 MiB/s wr, 313 op/s
Oct 02 08:28:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4017470258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.853 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.855 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.856 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Creating image(s)
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.888 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.912 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 293 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.980 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:10 compute-0 nova_compute[260603]: 2025-10-02 08:28:10.987 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.028 2 DEBUG nova.policy [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:28:11 compute-0 podman[309951]: 2025-10-02 08:28:11.030456918 +0000 UTC m=+0.088307425 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 08:28:11 compute-0 podman[309948]: 2025-10-02 08:28:11.07430624 +0000 UTC m=+0.136491322 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.096 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.098 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.099 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.099 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.132 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.137 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e3ae3c82-7eb4-4727-a846-92afca9a8330_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.296 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.297 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.318 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.318 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.319 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.319 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.319 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.319 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.423 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e3ae3c82-7eb4-4727-a846-92afca9a8330_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.495 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.554 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.554 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.555 2 DEBUG nova.objects.instance [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.560 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.561 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.561 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Processing event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.564 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.569 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393691.5686553, 197184a1-4270-40c9-87b5-6eca7e832812 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.569 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] VM Resumed (Lifecycle Event)
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.633 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.638 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.652 2 DEBUG nova.objects.instance [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid e3ae3c82-7eb4-4727-a846-92afca9a8330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.655 2 INFO nova.virt.libvirt.driver [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance spawned successfully.
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.655 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.664 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.672 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.672 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.674 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.675 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Ensure instance console log exists: /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.675 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.675 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.676 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.685 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.685 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.686 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.686 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.687 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.687 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.692 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.714 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.776 2 INFO nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Took 12.69 seconds to spawn the instance on the hypervisor.
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.776 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.794 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.794 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.804 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.804 2 INFO nova.compute.claims [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.854 2 INFO nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Took 13.74 seconds to build instance.
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.885 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:11 compute-0 nova_compute[260603]: 2025-10-02 08:28:11.907 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Successfully created port: 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.012 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.441 2 DEBUG nova.objects.instance [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.458 2 DEBUG nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:28:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:12 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964096948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.520 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.529 2 DEBUG nova.compute.provider_tree [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.552 2 DEBUG nova.scheduler.client.report [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.573 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.574 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.648 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.649 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.669 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.685 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.771 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.772 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.773 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Creating image(s)
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.792 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.814 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:12 compute-0 ceph-mon[74477]: pgmap v1413: 305 pgs: 305 active+clean; 293 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Oct 02 08:28:12 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3964096948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.843 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.846 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 318 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.7 MiB/s wr, 220 op/s
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.943 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.945 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.945 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.946 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.973 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:12 compute-0 nova_compute[260603]: 2025-10-02 08:28:12.978 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.034 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.035 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.045 2 DEBUG nova.policy [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '019cd25dce6249ce9c2cf326ec62df28', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.050 2 DEBUG nova.policy [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.062 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.062 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.063 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.063 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.064 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.064 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] No waiting events found dispatching network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.064 2 WARNING nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received unexpected event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a for instance with vm_state building and task_state spawning.
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.066 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.067 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.240 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.310 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] resizing rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.421 2 DEBUG nova.objects.instance [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'migration_context' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.436 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.436 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Ensure instance console log exists: /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.437 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.437 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.437 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.544 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.584 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Successfully updated port: 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.597 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.597 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.597 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.662 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.663 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.663 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.663 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.664 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.667 2 INFO nova.compute.manager [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Terminating instance
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.670 2 DEBUG nova.compute.manager [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.702 2 DEBUG nova.compute.manager [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-changed-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.702 2 DEBUG nova.compute.manager [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Refreshing instance network info cache due to event network-changed-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.703 2 DEBUG oslo_concurrency.lockutils [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:13 compute-0 kernel: tapb70e7dbb-36 (unregistering): left promiscuous mode
Oct 02 08:28:13 compute-0 NetworkManager[45129]: <info>  [1759393693.7130] device (tapb70e7dbb-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:13 compute-0 ovn_controller[152344]: 2025-10-02T08:28:13Z|00361|binding|INFO|Releasing lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a from this chassis (sb_readonly=0)
Oct 02 08:28:13 compute-0 ovn_controller[152344]: 2025-10-02T08:28:13Z|00362|binding|INFO|Setting lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a down in Southbound
Oct 02 08:28:13 compute-0 ovn_controller[152344]: 2025-10-02T08:28:13Z|00363|binding|INFO|Removing iface tapb70e7dbb-36 ovn-installed in OVS
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.746 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:85:5d 10.100.0.11'], port_security=['fa:16:3e:e7:85:5d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '197184a1-4270-40c9-87b5-6eca7e832812', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd531839bafe441a391fb9161a54c74ee', 'neutron:revision_number': '4', 'neutron:security_group_ids': '968634c4-ac96-4f74-8c28-677b698bf17c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec8729b6-f94a-4108-b4ba-253312192658, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b70e7dbb-3605-4af5-a977-d1c35f1ec20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.750 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b70e7dbb-3605-4af5-a977-d1c35f1ec20a in datapath a14a44c4-2ea4-49fe-ba19-5ba96209c2bc unbound from our chassis
Oct 02 08:28:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.752 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:28:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.754 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d20ea89-0b7e-4245-876c-638f492f27f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.754 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc namespace which is not needed anymore
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:13 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Oct 02 08:28:13 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002e.scope: Consumed 3.045s CPU time.
Oct 02 08:28:13 compute-0 systemd-machined[214636]: Machine qemu-50-instance-0000002e terminated.
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.905 2 INFO nova.virt.libvirt.driver [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance destroyed successfully.
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.905 2 DEBUG nova.objects.instance [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lazy-loading 'resources' on Instance uuid 197184a1-4270-40c9-87b5-6eca7e832812 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:13 compute-0 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [NOTICE]   (309881) : haproxy version is 2.8.14-c23fe91
Oct 02 08:28:13 compute-0 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [NOTICE]   (309881) : path to executable is /usr/sbin/haproxy
Oct 02 08:28:13 compute-0 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [WARNING]  (309881) : Exiting Master process...
Oct 02 08:28:13 compute-0 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [ALERT]    (309881) : Current worker (309883) exited with code 143 (Terminated)
Oct 02 08:28:13 compute-0 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [WARNING]  (309881) : All workers exited. Exiting... (0)
Oct 02 08:28:13 compute-0 systemd[1]: libpod-20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b.scope: Deactivated successfully.
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.939 2 DEBUG nova.virt.libvirt.vif [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1439041942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1439041942',id=46,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d531839bafe441a391fb9161a54c74ee',ramdisk_id='',reservation_id='r-3hltheaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proj
ect_name='tempest-InstanceActionsV221TestJSON-588685837',owner_user_name='tempest-InstanceActionsV221TestJSON-588685837-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:11Z,user_data=None,user_id='3e67f389787b453faa1dfdb728caea35',uuid=197184a1-4270-40c9-87b5-6eca7e832812,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.939 2 DEBUG nova.network.os_vif_util [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converting VIF {"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.940 2 DEBUG nova.network.os_vif_util [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.940 2 DEBUG os_vif [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.942 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb70e7dbb-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:13 compute-0 podman[310337]: 2025-10-02 08:28:13.94331668 +0000 UTC m=+0.065264310 container died 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.947 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:28:13 compute-0 nova_compute[260603]: 2025-10-02 08:28:13.951 2 INFO os_vif [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36')
Oct 02 08:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b-userdata-shm.mount: Deactivated successfully.
Oct 02 08:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a67a4952718d6abeada72cfb2cec3851c08f6d1026af2e4c7575087548f9cfd9-merged.mount: Deactivated successfully.
Oct 02 08:28:13 compute-0 podman[310337]: 2025-10-02 08:28:13.988417881 +0000 UTC m=+0.110365521 container cleanup 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:28:14 compute-0 systemd[1]: libpod-conmon-20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b.scope: Deactivated successfully.
Oct 02 08:28:14 compute-0 podman[310396]: 2025-10-02 08:28:14.074692482 +0000 UTC m=+0.057074015 container remove 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.084 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1dff7418-4736-4521-96ee-e35572374769]: (4, ('Thu Oct  2 08:28:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc (20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b)\n20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b\nThu Oct  2 08:28:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc (20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b)\n20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.087 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01836acb-d675-49b8-86a5-5c15fac7245f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.088 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa14a44c4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:14 compute-0 kernel: tapa14a44c4-20: left promiscuous mode
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.101 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9512bd9-f41a-461c-b2a3-f1dfae33a556]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.122 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[615195e9-0cf5-4720-a697-2e6008ad488c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.123 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0be0eb-3fa4-4c38-b7c1-98eaa0f34637]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.148 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0967b51a-e72d-4818-bbbf-978770fb6449]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456717, 'reachable_time': 22331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310411, 'error': None, 'target': 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:14 compute-0 systemd[1]: run-netns-ovnmeta\x2da14a44c4\x2d2ea4\x2d49fe\x2dba19\x2d5ba96209c2bc.mount: Deactivated successfully.
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.151 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:28:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.151 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f8cba5-575e-4833-adab-39e2f27e12cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.315 2 INFO nova.virt.libvirt.driver [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Deleting instance files /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812_del
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.316 2 INFO nova.virt.libvirt.driver [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Deletion of /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812_del complete
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.399 2 INFO nova.compute.manager [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.399 2 DEBUG oslo.service.loopingcall [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.400 2 DEBUG nova.compute.manager [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.400 2 DEBUG nova.network.neutron [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 08:28:14 compute-0 nova_compute[260603]: 2025-10-02 08:28:14.644 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Successfully created port: 922a693b-1cb4-42e8-a97d-78973183c774 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:28:14 compute-0 ceph-mon[74477]: pgmap v1414: 305 pgs: 305 active+clean; 318 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.7 MiB/s wr, 220 op/s
Oct 02 08:28:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 348 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 182 op/s
Oct 02 08:28:15 compute-0 ovn_controller[152344]: 2025-10-02T08:28:15Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:f8:c0 10.100.0.14
Oct 02 08:28:15 compute-0 ovn_controller[152344]: 2025-10-02T08:28:15Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:f8:c0 10.100.0.14
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.871 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.872 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.907 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.908 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.908 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.909 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.909 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:15 compute-0 nova_compute[260603]: 2025-10-02 08:28:15.910 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.280 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updating instance_info_cache with network_info: [{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.299 2 DEBUG nova.network.neutron [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.302 2 DEBUG nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Successfully updated port: 2f45b100-9bc2-4853-87ff-324e74ddfee5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.309 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.310 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance network_info: |[{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.311 2 DEBUG oslo_concurrency.lockutils [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.311 2 DEBUG nova.network.neutron [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Refreshing network info cache for port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.317 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Start _get_guest_xml network_info=[{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.322 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.324 2 WARNING nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.335 2 INFO nova.compute.manager [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Took 1.93 seconds to deallocate network for instance.
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.345 2 DEBUG nova.virt.libvirt.host [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.346 2 DEBUG nova.virt.libvirt.host [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.351 2 DEBUG nova.virt.libvirt.host [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.352 2 DEBUG nova.virt.libvirt.host [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.352 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.353 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.353 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.354 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.354 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.354 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.355 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.355 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.355 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.356 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.356 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.357 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.361 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.407 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.408 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.577 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.578 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.757 2 DEBUG oslo_concurrency.processutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/756598222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.815 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.831 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:16 compute-0 ceph-mon[74477]: pgmap v1415: 305 pgs: 305 active+clean; 348 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 182 op/s
Oct 02 08:28:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/756598222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.857 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:16 compute-0 nova_compute[260603]: 2025-10-02 08:28:16.861 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 348 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Oct 02 08:28:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565091287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.212 2 DEBUG oslo_concurrency.processutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.220 2 DEBUG nova.compute.provider_tree [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.241 2 DEBUG nova.scheduler.client.report [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.275 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456522547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.298 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.301 2 DEBUG nova.virt.libvirt.vif [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-379096976',display_name='tempest-DeleteServersTestJSON-server-379096976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-379096976',id=47,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-f7cj6hb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:10Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=e3ae3c82-7eb4-4727-a846-92afca9a8330,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.301 2 DEBUG nova.network.os_vif_util [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.303 2 DEBUG nova.network.os_vif_util [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.305 2 DEBUG nova.objects.instance [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3ae3c82-7eb4-4727-a846-92afca9a8330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.308 2 INFO nova.scheduler.client.report [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Deleted allocations for instance 197184a1-4270-40c9-87b5-6eca7e832812
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.335 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <uuid>e3ae3c82-7eb4-4727-a846-92afca9a8330</uuid>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <name>instance-0000002f</name>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <nova:name>tempest-DeleteServersTestJSON-server-379096976</nova:name>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:28:16</nova:creationTime>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <nova:port uuid="8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f">
Oct 02 08:28:17 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <entry name="serial">e3ae3c82-7eb4-4727-a846-92afca9a8330</entry>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <entry name="uuid">e3ae3c82-7eb4-4727-a846-92afca9a8330</entry>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/e3ae3c82-7eb4-4727-a846-92afca9a8330_disk">
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config">
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:5a:62:72"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <target dev="tap8e7deb4e-9f"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/console.log" append="off"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:28:17 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:28:17 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:17 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:17 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:17 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.336 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Preparing to wait for external event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.337 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.337 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.337 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.338 2 DEBUG nova.virt.libvirt.vif [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-379096976',display_name='tempest-DeleteServersTestJSON-server-379096976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-379096976',id=47,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-f7cj6hb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:10Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=e3ae3c82-7eb4-4727-a846-92afca9a8330,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.339 2 DEBUG nova.network.os_vif_util [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.340 2 DEBUG nova.network.os_vif_util [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.340 2 DEBUG os_vif [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e7deb4e-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e7deb4e-9f, col_values=(('external_ids', {'iface-id': '8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:62:72', 'vm-uuid': 'e3ae3c82-7eb4-4727-a846-92afca9a8330'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:17 compute-0 NetworkManager[45129]: <info>  [1759393697.3854] manager: (tap8e7deb4e-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.398 2 INFO os_vif [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f')
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.438 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Successfully updated port: 922a693b-1cb4-42e8-a97d-78973183c774 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.473 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.474 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.474 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.498 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.511 2 DEBUG nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-unplugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.511 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.512 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.512 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.512 2 DEBUG nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] No waiting events found dispatching network-vif-unplugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.513 2 WARNING nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received unexpected event network-vif-unplugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a for instance with vm_state deleted and task_state None.
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.513 2 DEBUG nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.513 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.514 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.514 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.515 2 DEBUG nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] No waiting events found dispatching network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.515 2 WARNING nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received unexpected event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a for instance with vm_state deleted and task_state None.
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.525 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.526 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.527 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:5a:62:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.528 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Using config drive
Oct 02 08:28:17 compute-0 podman[310500]: 2025-10-02 08:28:17.545879964 +0000 UTC m=+0.099837953 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.560 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:17 compute-0 nova_compute[260603]: 2025-10-02 08:28:17.660 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:28:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2565091287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1456522547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.313 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393683.310509, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.314 2 INFO nova.compute.manager [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Stopped (Lifecycle Event)
Oct 02 08:28:18 compute-0 sudo[310538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:18 compute-0 sudo[310538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:18 compute-0 sudo[310538]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.345 2 DEBUG nova.compute.manager [None req-7f681a8f-2b73-43f1-9003-d16ad385f01c - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:18 compute-0 sudo[310563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:28:18 compute-0 sudo[310563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:18 compute-0 sudo[310563]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:18 compute-0 sudo[310588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:18 compute-0 sudo[310588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:18 compute-0 sudo[310588]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:18 compute-0 sudo[310613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:28:18 compute-0 sudo[310613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.743 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Creating config drive at /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.753 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatj6_4c_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.789 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.790 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.814 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.815 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.816 2 DEBUG nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:18 compute-0 ceph-mon[74477]: pgmap v1416: 305 pgs: 305 active+clean; 348 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.895 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatj6_4c_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.925 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:18 compute-0 nova_compute[260603]: 2025-10-02 08:28:18.929 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 315 op/s
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.042 2 DEBUG nova.network.neutron [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updated VIF entry in instance network info cache for port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.044 2 DEBUG nova.network.neutron [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updating instance_info_cache with network_info: [{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.060 2 WARNING nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.066 2 DEBUG oslo_concurrency.lockutils [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.091 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.091 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Deleting local config drive /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config because it was imported into RBD.
Oct 02 08:28:19 compute-0 sudo[310613]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:19 compute-0 kernel: tap8e7deb4e-9f: entered promiscuous mode
Oct 02 08:28:19 compute-0 NetworkManager[45129]: <info>  [1759393699.1553] manager: (tap8e7deb4e-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Oct 02 08:28:19 compute-0 ovn_controller[152344]: 2025-10-02T08:28:19Z|00364|binding|INFO|Claiming lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f for this chassis.
Oct 02 08:28:19 compute-0 ovn_controller[152344]: 2025-10-02T08:28:19Z|00365|binding|INFO|8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f: Claiming fa:16:3e:5a:62:72 10.100.0.5
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.173 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:62:72 10.100.0.5'], port_security=['fa:16:3e:5a:62:72 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3ae3c82-7eb4-4727-a846-92afca9a8330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.175 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.179 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:28:19 compute-0 ovn_controller[152344]: 2025-10-02T08:28:19Z|00366|binding|INFO|Setting lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f ovn-installed in OVS
Oct 02 08:28:19 compute-0 ovn_controller[152344]: 2025-10-02T08:28:19Z|00367|binding|INFO|Setting lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f up in Southbound
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:19 compute-0 systemd-udevd[310721]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[76f21afc-e2fb-4005-a5fa-15f51e606c26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.202 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:28:19 compute-0 NetworkManager[45129]: <info>  [1759393699.2033] device (tap8e7deb4e-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:28:19 compute-0 NetworkManager[45129]: <info>  [1759393699.2039] device (tap8e7deb4e-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:28:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:28:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:28:19 compute-0 systemd-machined[214636]: New machine qemu-51-instance-0000002f.
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.207 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.207 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44ec57a1-bbe6-4a6d-9b77-e82f1628c590]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.209 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[304cb2ef-a998-4d26-ade9-8cbcadc0d015]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:28:19 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:28:19 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002f.
Oct 02 08:28:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:28:19 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:28:19 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d29818d0-05ea-4b46-b458-db19a2e8c294 does not exist
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.230 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[025993cd-f3df-4989-b22d-b85d4f14294d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9a8f5757-0566-4845-b7ac-d93164472c71 does not exist
Oct 02 08:28:19 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d8f11a62-e0ad-4550-bc06-0b833418bf95 does not exist
Oct 02 08:28:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:28:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:28:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:28:19 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:28:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:28:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[afc2eebf-8167-4af2-998a-2d51c4d3980e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.292 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c57e9a-ae96-4688-b1b4-71f2ff485cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.298 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fe1a1e-f181-4336-b528-0ee601bc3766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 NetworkManager[45129]: <info>  [1759393699.3001] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Oct 02 08:28:19 compute-0 systemd-udevd[310725]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:19 compute-0 sudo[310728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:19 compute-0 sudo[310728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:19 compute-0 sudo[310728]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.345 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c766a2b5-6698-43e7-9cef-8a6d4bdc7130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.348 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2118a4d3-f5ea-4e1d-b0f4-056c5ef08e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 NetworkManager[45129]: <info>  [1759393699.3716] device (tapa72ac8c9-10): carrier: link connected
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.380 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f980688-f7d0-40ea-9c3f-d235c491da21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 sudo[310776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:28:19 compute-0 sudo[310776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:19 compute-0 sudo[310776]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.406 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b423796-bacc-43bd-95b8-43a60816283f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457797, 'reachable_time': 42739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310804, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.428 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0508c9-a269-4909-b5be-b1b905a5cb84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 457797, 'tstamp': 457797}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310811, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.448 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a29bbe57-2079-42ef-9a36-948f51044a5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457797, 'reachable_time': 42739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310829, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 sudo[310806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:19 compute-0 sudo[310806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:19 compute-0 sudo[310806]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5a386d-5343-45b4-8672-e9cc5113caba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 sudo[310835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:28:19 compute-0 sudo[310835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[145d93e9-60ad-493c-9adb-325a17203961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.597 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.598 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.598 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:19 compute-0 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:19 compute-0 NetworkManager[45129]: <info>  [1759393699.6381] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.639 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:19 compute-0 ovn_controller[152344]: 2025-10-02T08:28:19Z|00368|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.656 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.657 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1413d423-a8cd-4ce0-98b7-a30f7888d385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.657 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:28:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.658 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.679 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.708 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.708 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance network_info: |[{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.712 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start _get_guest_xml network_info=[{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.721 2 WARNING nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.730 2 DEBUG nova.virt.libvirt.host [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.731 2 DEBUG nova.virt.libvirt.host [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.736 2 DEBUG nova.virt.libvirt.host [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.737 2 DEBUG nova.virt.libvirt.host [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.738 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.738 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.739 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.739 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.741 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.741 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.742 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.742 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.742 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.743 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.743 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.743 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:28:19 compute-0 nova_compute[260603]: 2025-10-02 08:28:19.747 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:28:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:28:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:28:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:28:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:28:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:28:20 compute-0 podman[310982]: 2025-10-02 08:28:20.01994046 +0000 UTC m=+0.060231633 container create d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 08:28:20 compute-0 podman[310998]: 2025-10-02 08:28:20.055768473 +0000 UTC m=+0.062974528 container create 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:28:20 compute-0 systemd[1]: Started libpod-conmon-d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906.scope.
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.071 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-deleted-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.072 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.072 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-2f45b100-9bc2-4853-87ff-324e74ddfee5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.073 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:20 compute-0 podman[310982]: 2025-10-02 08:28:19.989887736 +0000 UTC m=+0.030178929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:28:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:20 compute-0 podman[310998]: 2025-10-02 08:28:20.014998686 +0000 UTC m=+0.022204761 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:28:20 compute-0 systemd[1]: Started libpod-conmon-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22.scope.
Oct 02 08:28:20 compute-0 podman[310982]: 2025-10-02 08:28:20.131171757 +0000 UTC m=+0.171462980 container init d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:28:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:20 compute-0 podman[310982]: 2025-10-02 08:28:20.145146471 +0000 UTC m=+0.185437614 container start d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ebb512a42e1b9f43784c36b0857c486bcadaa4002099f22dfeb9568e9eabfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:20 compute-0 podman[310982]: 2025-10-02 08:28:20.149849146 +0000 UTC m=+0.190140319 container attach d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:28:20 compute-0 systemd[1]: libpod-d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906.scope: Deactivated successfully.
Oct 02 08:28:20 compute-0 interesting_grothendieck[311013]: 167 167
Oct 02 08:28:20 compute-0 podman[310982]: 2025-10-02 08:28:20.154521512 +0000 UTC m=+0.194812665 container died d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:28:20 compute-0 conmon[311013]: conmon d62c579e4f5e2bc7db62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906.scope/container/memory.events
Oct 02 08:28:20 compute-0 podman[310998]: 2025-10-02 08:28:20.170907361 +0000 UTC m=+0.178113446 container init 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 08:28:20 compute-0 podman[310998]: 2025-10-02 08:28:20.179428026 +0000 UTC m=+0.186634081 container start 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 08:28:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592014135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7efcecc1fe449763128402c7f4aced386a4d4fe9620b671511e9e5ccc8e3d91-merged.mount: Deactivated successfully.
Oct 02 08:28:20 compute-0 podman[310982]: 2025-10-02 08:28:20.207091405 +0000 UTC m=+0.247382548 container remove d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:28:20 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [NOTICE]   (311032) : New worker (311043) forked
Oct 02 08:28:20 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [NOTICE]   (311032) : Loading success.
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.229 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:20 compute-0 systemd[1]: libpod-conmon-d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906.scope: Deactivated successfully.
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.257 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.262 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.342 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393700.3415005, e3ae3c82-7eb4-4727-a846-92afca9a8330 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.343 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] VM Started (Lifecycle Event)
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.367 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.374 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393700.3419397, e3ae3c82-7eb4-4727-a846-92afca9a8330 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.374 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] VM Paused (Lifecycle Event)
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.394 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.406 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:20 compute-0 podman[311077]: 2025-10-02 08:28:20.430290632 +0000 UTC m=+0.042485971 container create 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.434 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:20 compute-0 systemd[1]: Started libpod-conmon-715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3.scope.
Oct 02 08:28:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:20 compute-0 podman[311077]: 2025-10-02 08:28:20.412733887 +0000 UTC m=+0.024929206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:20 compute-0 podman[311077]: 2025-10-02 08:28:20.545412819 +0000 UTC m=+0.157608188 container init 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 08:28:20 compute-0 podman[311077]: 2025-10-02 08:28:20.555279896 +0000 UTC m=+0.167475255 container start 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:28:20 compute-0 podman[311077]: 2025-10-02 08:28:20.559775536 +0000 UTC m=+0.171970945 container attach 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:28:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045614169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.698 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.702 2 DEBUG nova.virt.libvirt.vif [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:12Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.702 2 DEBUG nova.network.os_vif_util [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.705 2 DEBUG nova.network.os_vif_util [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.709 2 DEBUG nova.objects.instance [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'pci_devices' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.730 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <uuid>21802142-fcf7-4eb2-b43b-e0fa48cab4d6</uuid>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <name>instance-00000030</name>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <nova:name>tempest-SecurityGroupsTestJSON-server-2008796594</nova:name>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:28:19</nova:creationTime>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <nova:user uuid="019cd25dce6249ce9c2cf326ec62df28">tempest-SecurityGroupsTestJSON-2081142325-project-member</nova:user>
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <nova:project uuid="35a4ab7cf79e41f68a1ea888c2a3592e">tempest-SecurityGroupsTestJSON-2081142325</nova:project>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <nova:port uuid="922a693b-1cb4-42e8-a97d-78973183c774">
Oct 02 08:28:20 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <entry name="serial">21802142-fcf7-4eb2-b43b-e0fa48cab4d6</entry>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <entry name="uuid">21802142-fcf7-4eb2-b43b-e0fa48cab4d6</entry>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk">
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config">
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:20 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:bd:10:ae"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <target dev="tap922a693b-1c"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/console.log" append="off"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:28:20 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:28:20 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:20 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:20 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:20 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.731 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Preparing to wait for external event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.731 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.732 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.732 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.734 2 DEBUG nova.virt.libvirt.vif [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:12Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.734 2 DEBUG nova.network.os_vif_util [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.736 2 DEBUG nova.network.os_vif_util [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.736 2 DEBUG os_vif [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap922a693b-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.746 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap922a693b-1c, col_values=(('external_ids', {'iface-id': '922a693b-1cb4-42e8-a97d-78973183c774', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:10:ae', 'vm-uuid': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:20 compute-0 NetworkManager[45129]: <info>  [1759393700.7509] manager: (tap922a693b-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.761 2 INFO os_vif [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c')
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.839 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.840 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.840 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No VIF found with MAC fa:16:3e:bd:10:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.842 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Using config drive
Oct 02 08:28:20 compute-0 ceph-mon[74477]: pgmap v1417: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 315 op/s
Oct 02 08:28:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3592014135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4045614169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:20 compute-0 nova_compute[260603]: 2025-10-02 08:28:20.889 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 216 op/s
Oct 02 08:28:21 compute-0 eager_noether[311112]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:28:21 compute-0 eager_noether[311112]: --> relative data size: 1.0
Oct 02 08:28:21 compute-0 eager_noether[311112]: --> All data devices are unavailable
Oct 02 08:28:21 compute-0 systemd[1]: libpod-715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3.scope: Deactivated successfully.
Oct 02 08:28:21 compute-0 podman[311077]: 2025-10-02 08:28:21.603847142 +0000 UTC m=+1.216042441 container died 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.673 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Creating config drive at /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.677 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78_4p3_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2-merged.mount: Deactivated successfully.
Oct 02 08:28:21 compute-0 podman[311164]: 2025-10-02 08:28:21.732569833 +0000 UTC m=+0.098004008 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 08:28:21 compute-0 podman[311077]: 2025-10-02 08:28:21.744965578 +0000 UTC m=+1.357160887 container remove 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:28:21 compute-0 sudo[310835]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:21 compute-0 systemd[1]: libpod-conmon-715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3.scope: Deactivated successfully.
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.809 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78_4p3_s" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.833 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.837 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:21 compute-0 sudo[311200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:21 compute-0 sudo[311200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:21 compute-0 sudo[311200]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.902 2 DEBUG nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:21 compute-0 sudo[311244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:28:21 compute-0 sudo[311244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:21 compute-0 sudo[311244]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.946 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.947 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.947 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.947 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.952 2 DEBUG nova.virt.libvirt.vif [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.952 2 DEBUG nova.network.os_vif_util [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.953 2 DEBUG nova.network.os_vif_util [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.954 2 DEBUG os_vif [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f45b100-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:21 compute-0 nova_compute[260603]: 2025-10-02 08:28:21.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f45b100-9b, col_values=(('external_ids', {'iface-id': '2f45b100-9bc2-4853-87ff-324e74ddfee5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:c8:97', 'vm-uuid': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:21 compute-0 sudo[311282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:21 compute-0 sudo[311282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:21 compute-0 sudo[311282]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:21 compute-0 NetworkManager[45129]: <info>  [1759393701.9940] manager: (tap2f45b100-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.014 2 INFO os_vif [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.015 2 DEBUG nova.virt.libvirt.vif [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.015 2 DEBUG nova.network.os_vif_util [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.016 2 DEBUG nova.network.os_vif_util [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.019 2 DEBUG nova.virt.libvirt.guest [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:69:c8:97"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <target dev="tap2f45b100-9b"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]: </interface>
Oct 02 08:28:22 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:28:22 compute-0 NetworkManager[45129]: <info>  [1759393702.0293] manager: (tap2f45b100-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Oct 02 08:28:22 compute-0 systemd-udevd[310778]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:22 compute-0 kernel: tap2f45b100-9b: entered promiscuous mode
Oct 02 08:28:22 compute-0 ovn_controller[152344]: 2025-10-02T08:28:22Z|00369|binding|INFO|Claiming lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 for this chassis.
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 ovn_controller[152344]: 2025-10-02T08:28:22Z|00370|binding|INFO|2f45b100-9bc2-4853-87ff-324e74ddfee5: Claiming fa:16:3e:69:c8:97 10.100.0.5
Oct 02 08:28:22 compute-0 NetworkManager[45129]: <info>  [1759393702.0466] device (tap2f45b100-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:28:22 compute-0 NetworkManager[45129]: <info>  [1759393702.0476] device (tap2f45b100-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.048 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c8:97 10.100.0.5'], port_security=['fa:16:3e:69:c8:97 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2f45b100-9bc2-4853-87ff-324e74ddfee5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.050 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.051 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:28:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:28:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3393871243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:28:22 compute-0 sudo[311313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:28:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:28:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3393871243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:28:22 compute-0 sudo[311313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:22 compute-0 ovn_controller[152344]: 2025-10-02T08:28:22Z|00371|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 ovn-installed in OVS
Oct 02 08:28:22 compute-0 ovn_controller[152344]: 2025-10-02T08:28:22Z|00372|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 up in Southbound
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.067 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf990dc9-3e35-4a79-81e0-70790e2727e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.077 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.078 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Deleting local config drive /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config because it was imported into RBD.
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.106 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[00c77050-a377-45d4-8bc4-3efd715fe663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.109 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c677de0c-fb41-460a-b7d2-78b6be542be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 kernel: tap922a693b-1c: entered promiscuous mode
Oct 02 08:28:22 compute-0 NetworkManager[45129]: <info>  [1759393702.1274] manager: (tap922a693b-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 ovn_controller[152344]: 2025-10-02T08:28:22Z|00373|binding|INFO|Claiming lport 922a693b-1cb4-42e8-a97d-78973183c774 for this chassis.
Oct 02 08:28:22 compute-0 ovn_controller[152344]: 2025-10-02T08:28:22Z|00374|binding|INFO|922a693b-1cb4-42e8-a97d-78973183c774: Claiming fa:16:3e:bd:10:ae 10.100.0.7
Oct 02 08:28:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:28:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 17K writes, 70K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 17K writes, 5936 syncs, 3.03 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 47.61 MB, 0.08 MB/s
                                           Interval WAL: 11K writes, 4765 syncs, 2.47 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:28:22 compute-0 NetworkManager[45129]: <info>  [1759393702.1396] device (tap922a693b-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:28:22 compute-0 NetworkManager[45129]: <info>  [1759393702.1404] device (tap922a693b-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.144 2 DEBUG nova.virt.libvirt.driver [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.144 2 DEBUG nova.virt.libvirt.driver [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.145 2 DEBUG nova.virt.libvirt.driver [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:0b:59:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.145 2 DEBUG nova.virt.libvirt.driver [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:69:c8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.145 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[46f95bb0-1cbd-46f6-be11-f7284c2ad179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_controller[152344]: 2025-10-02T08:28:22Z|00375|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 ovn-installed in OVS
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 ovn_controller[152344]: 2025-10-02T08:28:22Z|00376|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 up in Southbound
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.153 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:10:ae 10.100.0.7'], port_security=['fa:16:3e:bd:10:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=922a693b-1cb4-42e8-a97d-78973183c774) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[695d9731-e950-42bc-aa05-06dec29cd5cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311365, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 systemd-machined[214636]: New machine qemu-52-instance-00000030.
Oct 02 08:28:22 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-00000030.
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.181 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf06144-e928-4aeb-baa9-cb95ecb5fc83]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311368, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311368, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.183 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.186 2 DEBUG nova.virt.libvirt.guest [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:28:22</nova:creationTime>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct 02 08:28:22 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct 02 08:28:22 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:28:22 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:22 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:28:22 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:28:22 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.186 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.186 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.191 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 922a693b-1cb4-42e8-a97d-78973183c774 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 unbound from our chassis
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.193 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.210 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.215 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[408de17d-8ce4-4d6d-b483-1548817d7817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.243 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d8cc7a-1eb5-4d9b-abca-b21599845f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.246 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[37e6ff8d-9c6f-4476-9746-eb715062c0ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.269 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c43bddb3-2364-4107-b1e8-e02a99342527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.290 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0dd734-e8d1-44df-a25b-2dfb8d9964ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311403, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab860fd-99ef-4b1e-9073-0c908c499020]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453909, 'tstamp': 453909}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311406, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453913, 'tstamp': 453913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311406, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.306 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.310 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:22 compute-0 podman[311419]: 2025-10-02 08:28:22.418619203 +0000 UTC m=+0.043889616 container create 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 08:28:22 compute-0 systemd[1]: Started libpod-conmon-3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7.scope.
Oct 02 08:28:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:22 compute-0 podman[311419]: 2025-10-02 08:28:22.400725086 +0000 UTC m=+0.025995519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:28:22 compute-0 podman[311419]: 2025-10-02 08:28:22.510009722 +0000 UTC m=+0.135280185 container init 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:28:22 compute-0 podman[311419]: 2025-10-02 08:28:22.517144474 +0000 UTC m=+0.142414887 container start 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 08:28:22 compute-0 podman[311419]: 2025-10-02 08:28:22.520476887 +0000 UTC m=+0.145747310 container attach 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 08:28:22 compute-0 thirsty_knuth[311435]: 167 167
Oct 02 08:28:22 compute-0 podman[311419]: 2025-10-02 08:28:22.528333821 +0000 UTC m=+0.153604234 container died 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:28:22 compute-0 systemd[1]: libpod-3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7.scope: Deactivated successfully.
Oct 02 08:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-694240d09dcdd3d202a9b83c5bdb68817147e9480559bb31723d3ad6cea5a108-merged.mount: Deactivated successfully.
Oct 02 08:28:22 compute-0 podman[311419]: 2025-10-02 08:28:22.577937173 +0000 UTC m=+0.203207626 container remove 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:28:22 compute-0 systemd[1]: libpod-conmon-3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7.scope: Deactivated successfully.
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.764 2 DEBUG nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.765 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.766 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.766 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.767 2 DEBUG nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Processing event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.767 2 DEBUG nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.768 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.768 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.769 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.769 2 DEBUG nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] No waiting events found dispatching network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.769 2 WARNING nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received unexpected event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f for instance with vm_state building and task_state spawning.
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.771 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.776 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393702.7757068, e3ae3c82-7eb4-4727-a846-92afca9a8330 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.776 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] VM Resumed (Lifecycle Event)
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.779 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.784 2 INFO nova.virt.libvirt.driver [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance spawned successfully.
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.785 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.798 2 DEBUG nova.compute.manager [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.798 2 DEBUG oslo_concurrency.lockutils [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.799 2 DEBUG oslo_concurrency.lockutils [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.800 2 DEBUG oslo_concurrency.lockutils [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.800 2 DEBUG nova.compute.manager [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.800 2 WARNING nova.compute.manager [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.803 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.811 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.817 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.818 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.818 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.819 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.820 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.821 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.833 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:22 compute-0 podman[311501]: 2025-10-02 08:28:22.839590465 +0000 UTC m=+0.050994506 container create 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 08:28:22 compute-0 systemd[1]: Started libpod-conmon-65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1.scope.
Oct 02 08:28:22 compute-0 ceph-mon[74477]: pgmap v1418: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 216 op/s
Oct 02 08:28:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3393871243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:28:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3393871243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.893 2 INFO nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Took 12.04 seconds to spawn the instance on the hypervisor.
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.894 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:22 compute-0 podman[311501]: 2025-10-02 08:28:22.821134341 +0000 UTC m=+0.032538412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:28:22 compute-0 podman[311501]: 2025-10-02 08:28:22.920386756 +0000 UTC m=+0.131790817 container init 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 08:28:22 compute-0 podman[311501]: 2025-10-02 08:28:22.929998784 +0000 UTC m=+0.141402825 container start 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:28:22 compute-0 podman[311501]: 2025-10-02 08:28:22.932505532 +0000 UTC m=+0.143909573 container attach 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:28:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 222 op/s
Oct 02 08:28:22 compute-0 nova_compute[260603]: 2025-10-02 08:28:22.984 2 INFO nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Took 13.21 seconds to build instance.
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.006 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.182 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393703.1825626, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.183 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Started (Lifecycle Event)
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.208 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.212 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393703.1826618, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.212 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Paused (Lifecycle Event)
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.231 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.235 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.254 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.450 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.450 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.472 2 DEBUG nova.objects.instance [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.495 2 DEBUG nova.virt.libvirt.vif [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.495 2 DEBUG nova.network.os_vif_util [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.496 2 DEBUG nova.network.os_vif_util [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.499 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.501 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.504 2 DEBUG nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Attempting to detach device tap2f45b100-9b from instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.504 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:69:c8:97"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <target dev="tap2f45b100-9b"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]: </interface>
Oct 02 08:28:23 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.511 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.514 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface>not found in domain: <domain type='kvm' id='47'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <name>instance-0000002a</name>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <uuid>f13ff7c1-d7d3-443e-9f06-69f8c466af30</uuid>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:28:22</nova:creationTime>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:28:23 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='serial'>f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='uuid'>f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk' index='2'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config' index='1'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:0b:59:2a'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target dev='tap136aeb8e-de'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:69:c8:97'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target dev='tap2f45b100-9b'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='net1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <source path='/dev/pts/1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log' append='off'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </target>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <source path='/dev/pts/1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log' append='off'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </console>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c737,c790</label>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c737,c790</imagelabel>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:28:23 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:23 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.514 2 INFO nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap2f45b100-9b from instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 from the persistent domain config.
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.514 2 DEBUG nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] (1/8): Attempting to detach device tap2f45b100-9b with device alias net1 from instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.514 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:69:c8:97"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <target dev="tap2f45b100-9b"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]: </interface>
Oct 02 08:28:23 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:28:23 compute-0 kernel: tap2f45b100-9b (unregistering): left promiscuous mode
Oct 02 08:28:23 compute-0 NetworkManager[45129]: <info>  [1759393703.6249] device (tap2f45b100-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.636 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759393703.6363068, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 08:28:23 compute-0 ovn_controller[152344]: 2025-10-02T08:28:23Z|00377|binding|INFO|Releasing lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 from this chassis (sb_readonly=0)
Oct 02 08:28:23 compute-0 ovn_controller[152344]: 2025-10-02T08:28:23Z|00378|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 down in Southbound
Oct 02 08:28:23 compute-0 ovn_controller[152344]: 2025-10-02T08:28:23Z|00379|binding|INFO|Removing iface tap2f45b100-9b ovn-installed in OVS
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.651 2 DEBUG nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Start waiting for the detach event from libvirt for device tap2f45b100-9b with device alias net1 for instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.652 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.651 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c8:97 10.100.0.5'], port_security=['fa:16:3e:69:c8:97 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2f45b100-9bc2-4853-87ff-324e74ddfee5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.653 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.655 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.662 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface>not found in domain: <domain type='kvm' id='47'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <name>instance-0000002a</name>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <uuid>f13ff7c1-d7d3-443e-9f06-69f8c466af30</uuid>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:28:22</nova:creationTime>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:28:23 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='serial'>f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='uuid'>f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk' index='2'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config' index='1'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:0b:59:2a'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target dev='tap136aeb8e-de'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <source path='/dev/pts/1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log' append='off'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       </target>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <source path='/dev/pts/1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log' append='off'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </console>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c737,c790</label>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c737,c790</imagelabel>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:28:23 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:23 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.662 2 INFO nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap2f45b100-9b from instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 from the live domain config.
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.663 2 DEBUG nova.virt.libvirt.vif [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.664 2 DEBUG nova.network.os_vif_util [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.664 2 DEBUG nova.network.os_vif_util [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.665 2 DEBUG os_vif [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f45b100-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.680 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8e622b2a-c3e8-4c07-96db-22f3fcf46597]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.687 2 INFO os_vif [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.688 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:28:23</nova:creationTime>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct 02 08:28:23 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:28:23 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:23 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:28:23 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:28:23 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.729 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[689e8f16-332c-4672-be97-5efac5ee471d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.734 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c70a41df-1a4c-4742-aa31-f4430821c928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:23 compute-0 keen_kilby[311519]: {
Oct 02 08:28:23 compute-0 keen_kilby[311519]:     "0": [
Oct 02 08:28:23 compute-0 keen_kilby[311519]:         {
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "devices": [
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "/dev/loop3"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             ],
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_name": "ceph_lv0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_size": "21470642176",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "name": "ceph_lv0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "tags": {
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cluster_name": "ceph",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.crush_device_class": "",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.encrypted": "0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osd_id": "0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.type": "block",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.vdo": "0"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             },
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "type": "block",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "vg_name": "ceph_vg0"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:         }
Oct 02 08:28:23 compute-0 keen_kilby[311519]:     ],
Oct 02 08:28:23 compute-0 keen_kilby[311519]:     "1": [
Oct 02 08:28:23 compute-0 keen_kilby[311519]:         {
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "devices": [
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "/dev/loop4"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             ],
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_name": "ceph_lv1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_size": "21470642176",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "name": "ceph_lv1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "tags": {
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cluster_name": "ceph",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.crush_device_class": "",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.encrypted": "0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osd_id": "1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.type": "block",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.vdo": "0"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             },
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "type": "block",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "vg_name": "ceph_vg1"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:         }
Oct 02 08:28:23 compute-0 keen_kilby[311519]:     ],
Oct 02 08:28:23 compute-0 keen_kilby[311519]:     "2": [
Oct 02 08:28:23 compute-0 keen_kilby[311519]:         {
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "devices": [
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "/dev/loop5"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             ],
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_name": "ceph_lv2",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_size": "21470642176",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "name": "ceph_lv2",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "tags": {
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.cluster_name": "ceph",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.crush_device_class": "",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.encrypted": "0",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osd_id": "2",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.type": "block",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:                 "ceph.vdo": "0"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             },
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "type": "block",
Oct 02 08:28:23 compute-0 keen_kilby[311519]:             "vg_name": "ceph_vg2"
Oct 02 08:28:23 compute-0 keen_kilby[311519]:         }
Oct 02 08:28:23 compute-0 keen_kilby[311519]:     ]
Oct 02 08:28:23 compute-0 keen_kilby[311519]: }
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.791 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb5ece2-4ea5-4c2e-8d10-c77f72387e95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:23 compute-0 systemd[1]: libpod-65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1.scope: Deactivated successfully.
Oct 02 08:28:23 compute-0 conmon[311519]: conmon 65d1e04b93ce5a77efcf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1.scope/container/memory.events
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a63b15-3f4c-4937-862e-2b536fa5cf1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311538, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.829 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa9c667b-92dd-4aeb-9cc9-5447f94770e6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311543, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311543, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.831 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:23 compute-0 nova_compute[260603]: 2025-10-02 08:28:23.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.834 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:23 compute-0 podman[311539]: 2025-10-02 08:28:23.857738614 +0000 UTC m=+0.037721262 container died 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:28:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6-merged.mount: Deactivated successfully.
Oct 02 08:28:23 compute-0 podman[311539]: 2025-10-02 08:28:23.909472852 +0000 UTC m=+0.089455480 container remove 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 08:28:23 compute-0 systemd[1]: libpod-conmon-65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1.scope: Deactivated successfully.
Oct 02 08:28:23 compute-0 sudo[311313]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:24 compute-0 sudo[311556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:24 compute-0 sudo[311556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:24 compute-0 sudo[311556]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:24 compute-0 sudo[311581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:28:24 compute-0 sudo[311581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:24 compute-0 sudo[311581]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:24 compute-0 sudo[311606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:24 compute-0 sudo[311606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:24 compute-0 sudo[311606]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:24 compute-0 sudo[311631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:28:24 compute-0 sudo[311631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:24 compute-0 podman[311699]: 2025-10-02 08:28:24.535954101 +0000 UTC m=+0.036585797 container create 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:28:24 compute-0 systemd[1]: Started libpod-conmon-4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c.scope.
Oct 02 08:28:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:24 compute-0 podman[311699]: 2025-10-02 08:28:24.613691408 +0000 UTC m=+0.114323114 container init 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:28:24 compute-0 podman[311699]: 2025-10-02 08:28:24.52143598 +0000 UTC m=+0.022067716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:28:24 compute-0 podman[311699]: 2025-10-02 08:28:24.620873591 +0000 UTC m=+0.121505337 container start 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 08:28:24 compute-0 pensive_stonebraker[311715]: 167 167
Oct 02 08:28:24 compute-0 systemd[1]: libpod-4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c.scope: Deactivated successfully.
Oct 02 08:28:24 compute-0 podman[311699]: 2025-10-02 08:28:24.624502993 +0000 UTC m=+0.125134739 container attach 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:28:24 compute-0 podman[311699]: 2025-10-02 08:28:24.624905556 +0000 UTC m=+0.125537282 container died 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 08:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cfadec55263f5b1123564c1eda634ec668e07a3fb52b1d8c89fd64da04c4087-merged.mount: Deactivated successfully.
Oct 02 08:28:24 compute-0 podman[311699]: 2025-10-02 08:28:24.668505941 +0000 UTC m=+0.169137657 container remove 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:28:24 compute-0 systemd[1]: libpod-conmon-4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c.scope: Deactivated successfully.
Oct 02 08:28:24 compute-0 ceph-mon[74477]: pgmap v1419: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 222 op/s
Oct 02 08:28:24 compute-0 podman[311738]: 2025-10-02 08:28:24.89373377 +0000 UTC m=+0.053134422 container create 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:28:24 compute-0 systemd[1]: Started libpod-conmon-0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768.scope.
Oct 02 08:28:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 206 op/s
Oct 02 08:28:24 compute-0 podman[311738]: 2025-10-02 08:28:24.861735076 +0000 UTC m=+0.021135778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:28:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:24 compute-0 podman[311738]: 2025-10-02 08:28:24.977127482 +0000 UTC m=+0.136528154 container init 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 08:28:24 compute-0 podman[311738]: 2025-10-02 08:28:24.989177016 +0000 UTC m=+0.148577668 container start 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:28:24 compute-0 podman[311738]: 2025-10-02 08:28:24.992892711 +0000 UTC m=+0.152293363 container attach 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.236 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.237 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.238 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.238 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.238 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.239 2 WARNING nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.239 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.239 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.239 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.240 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.240 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Processing event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.240 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.241 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.241 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.241 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.242 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.243 2 WARNING nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state building and task_state spawning.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.243 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.244 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.244 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.245 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.245 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.246 2 WARNING nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.247 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.247 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.248 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.248 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.249 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.249 2 WARNING nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.256 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.261 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393705.2612765, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.262 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Resumed (Lifecycle Event)
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.265 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.269 2 INFO nova.virt.libvirt.driver [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance spawned successfully.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.270 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.301 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.313 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.319 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.320 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.321 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.322 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.323 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.324 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.367 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.409 2 INFO nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Took 12.64 seconds to spawn the instance on the hypervisor.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.410 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.485 2 INFO nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Took 13.72 seconds to build instance.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.504 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.603 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.633 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.633 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.634 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.635 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 2f45b100-9bc2-4853-87ff-324e74ddfee5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.637 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.638 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.639 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.640 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.674 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.676 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.676 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.677 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.678 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:25 compute-0 ovn_controller[152344]: 2025-10-02T08:28:25Z|00380|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:28:25 compute-0 ovn_controller[152344]: 2025-10-02T08:28:25Z|00381|binding|INFO|Releasing lport 2a218cce-83be-4768-9f4e-7d61802765d4 from this chassis (sb_readonly=0)
Oct 02 08:28:25 compute-0 ovn_controller[152344]: 2025-10-02T08:28:25Z|00382|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.814 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:25 compute-0 ceph-mon[74477]: pgmap v1420: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 206 op/s
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.936 2 INFO nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.937 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.965 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.965 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-changed-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.966 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing instance network info cache due to event network-changed-922a693b-1cb4-42e8-a97d-78973183c774. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.966 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.966 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.967 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.967 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:25 compute-0 nova_compute[260603]: 2025-10-02 08:28:25.968 2 DEBUG nova.network.neutron [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:26 compute-0 clever_vaughan[311754]: {
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "osd_id": 2,
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "type": "bluestore"
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:     },
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "osd_id": 1,
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "type": "bluestore"
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:     },
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "osd_id": 0,
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:         "type": "bluestore"
Oct 02 08:28:26 compute-0 clever_vaughan[311754]:     }
Oct 02 08:28:26 compute-0 clever_vaughan[311754]: }
Oct 02 08:28:26 compute-0 systemd[1]: libpod-0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768.scope: Deactivated successfully.
Oct 02 08:28:26 compute-0 podman[311738]: 2025-10-02 08:28:26.055862035 +0000 UTC m=+1.215262697 container died 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 08:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e-merged.mount: Deactivated successfully.
Oct 02 08:28:26 compute-0 podman[311738]: 2025-10-02 08:28:26.114647662 +0000 UTC m=+1.274048314 container remove 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 08:28:26 compute-0 systemd[1]: libpod-conmon-0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768.scope: Deactivated successfully.
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.136 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.137 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.137 2 INFO nova.compute.manager [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Shelving
Oct 02 08:28:26 compute-0 sudo[311631]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793012690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.170 2 DEBUG nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:28:26 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:28:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:28:26 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:28:26 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 422b3fd1-85ee-4046-a1cb-4882c15357c1 does not exist
Oct 02 08:28:26 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev fc74a7da-e589-4db7-8f44-0b84a91af12c does not exist
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.188 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:26 compute-0 sudo[311820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:28:26 compute-0 sudo[311820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:26 compute-0 sudo[311820]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.283 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.285 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.290 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.290 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.295 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.296 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.299 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.299 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.302 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.303 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:26 compute-0 sudo[311846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:28:26 compute-0 sudo[311846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:28:26 compute-0 sudo[311846]: pam_unix(sudo:session): session closed for user root
Oct 02 08:28:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:28:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 73K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 5973 syncs, 3.14 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 44.97 MB, 0.07 MB/s
                                           Interval WAL: 11K writes, 4472 syncs, 2.55 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.586 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.587 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3503MB free_disk=59.809844970703125GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.587 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.587 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 05cc7244-c419-4c24-b995-95ca760837a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 247e32e5-5f07-4db4-9e6f-dcfade745228 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance e3ae3c82-7eb4-4727-a846-92afca9a8330 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.680 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.680 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.680 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:28:26 compute-0 nova_compute[260603]: 2025-10-02 08:28:26.823 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2793012690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:26 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:28:26 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:28:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Oct 02 08:28:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096039856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:27 compute-0 nova_compute[260603]: 2025-10-02 08:28:27.282 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:27 compute-0 nova_compute[260603]: 2025-10-02 08:28:27.293 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:27 compute-0 nova_compute[260603]: 2025-10-02 08:28:27.321 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:27 compute-0 nova_compute[260603]: 2025-10-02 08:28:27.362 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:28:27 compute-0 nova_compute[260603]: 2025-10-02 08:28:27.363 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:27 compute-0 nova_compute[260603]: 2025-10-02 08:28:27.365 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:27 compute-0 nova_compute[260603]: 2025-10-02 08:28:27.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:28:27 compute-0 ceph-mon[74477]: pgmap v1421: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Oct 02 08:28:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3096039856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:28:27
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta']
Oct 02 08:28:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:28:28 compute-0 nova_compute[260603]: 2025-10-02 08:28:28.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:28 compute-0 nova_compute[260603]: 2025-10-02 08:28:28.902 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393693.9021835, 197184a1-4270-40c9-87b5-6eca7e832812 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:28 compute-0 nova_compute[260603]: 2025-10-02 08:28:28.903 2 INFO nova.compute.manager [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] VM Stopped (Lifecycle Event)
Oct 02 08:28:28 compute-0 nova_compute[260603]: 2025-10-02 08:28:28.928 2 DEBUG nova.compute.manager [None req-842e3b19-080d-4a21-b0e8-3757c7d0e8a3 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 304 op/s
Oct 02 08:28:29 compute-0 nova_compute[260603]: 2025-10-02 08:28:29.852 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updated VIF entry in instance network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:29 compute-0 nova_compute[260603]: 2025-10-02 08:28:29.853 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:29 compute-0 nova_compute[260603]: 2025-10-02 08:28:29.877 2 DEBUG nova.network.neutron [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:29 compute-0 nova_compute[260603]: 2025-10-02 08:28:29.881 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:29 compute-0 nova_compute[260603]: 2025-10-02 08:28:29.890 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:29 compute-0 nova_compute[260603]: 2025-10-02 08:28:29.922 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:29 compute-0 ceph-mon[74477]: pgmap v1422: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 304 op/s
Oct 02 08:28:30 compute-0 nova_compute[260603]: 2025-10-02 08:28:30.376 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:30 compute-0 nova_compute[260603]: 2025-10-02 08:28:30.377 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:28:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 149 op/s
Oct 02 08:28:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:28:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 16K writes, 62K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 5152 syncs, 3.11 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9973 writes, 37K keys, 9973 commit groups, 1.0 writes per commit group, ingest: 39.85 MB, 0.07 MB/s
                                           Interval WAL: 9973 writes, 4074 syncs, 2.45 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:28:31 compute-0 nova_compute[260603]: 2025-10-02 08:28:31.443 2 DEBUG nova.compute.manager [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-changed-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:31 compute-0 nova_compute[260603]: 2025-10-02 08:28:31.443 2 DEBUG nova.compute.manager [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing instance network info cache due to event network-changed-922a693b-1cb4-42e8-a97d-78973183c774. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:31 compute-0 nova_compute[260603]: 2025-10-02 08:28:31.443 2 DEBUG oslo_concurrency.lockutils [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:31 compute-0 nova_compute[260603]: 2025-10-02 08:28:31.444 2 DEBUG oslo_concurrency.lockutils [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:31 compute-0 nova_compute[260603]: 2025-10-02 08:28:31.444 2 DEBUG nova.network.neutron [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:32 compute-0 ceph-mon[74477]: pgmap v1423: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 149 op/s
Oct 02 08:28:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 08:28:32 compute-0 nova_compute[260603]: 2025-10-02 08:28:32.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:32 compute-0 nova_compute[260603]: 2025-10-02 08:28:32.713 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:32 compute-0 nova_compute[260603]: 2025-10-02 08:28:32.714 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:32 compute-0 nova_compute[260603]: 2025-10-02 08:28:32.715 2 DEBUG nova.objects.instance [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 149 op/s
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.828 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.829 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.829 2 INFO nova.compute.manager [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Rebooting instance
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.852 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.887 2 DEBUG nova.compute.manager [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.888 2 DEBUG nova.compute.manager [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.888 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.889 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.889 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.916 2 DEBUG nova.objects.instance [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:33 compute-0 nova_compute[260603]: 2025-10-02 08:28:33.951 2 DEBUG nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:28:34 compute-0 ceph-mon[74477]: pgmap v1424: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 149 op/s
Oct 02 08:28:34 compute-0 nova_compute[260603]: 2025-10-02 08:28:34.386 2 DEBUG nova.policy [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:28:34 compute-0 nova_compute[260603]: 2025-10-02 08:28:34.442 2 DEBUG nova.network.neutron [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updated VIF entry in instance network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:34 compute-0 nova_compute[260603]: 2025-10-02 08:28:34.443 2 DEBUG nova.network.neutron [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:34 compute-0 nova_compute[260603]: 2025-10-02 08:28:34.468 2 DEBUG oslo_concurrency.lockutils [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:34 compute-0 nova_compute[260603]: 2025-10-02 08:28:34.469 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:34 compute-0 nova_compute[260603]: 2025-10-02 08:28:34.470 2 DEBUG nova.network.neutron [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:34.815 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:34.815 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 143 op/s
Oct 02 08:28:35 compute-0 ovn_controller[152344]: 2025-10-02T08:28:35Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:62:72 10.100.0.5
Oct 02 08:28:35 compute-0 ovn_controller[152344]: 2025-10-02T08:28:35Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:62:72 10.100.0.5
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.299 2 DEBUG nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Successfully updated port: 2f45b100-9bc2-4853-87ff-324e74ddfee5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.326 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.326 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.326 2 DEBUG nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.439 2 DEBUG nova.compute.manager [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.440 2 DEBUG nova.compute.manager [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-2f45b100-9bc2-4853-87ff-324e74ddfee5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.440 2 DEBUG oslo_concurrency.lockutils [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.494 2 WARNING nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.801 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.802 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.824 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.825 2 DEBUG nova.compute.manager [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.825 2 DEBUG nova.compute.manager [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.825 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.947 2 DEBUG nova.network.neutron [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.969 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:35 compute-0 nova_compute[260603]: 2025-10-02 08:28:35.971 2 DEBUG nova.compute.manager [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:36 compute-0 ceph-mon[74477]: pgmap v1425: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 143 op/s
Oct 02 08:28:36 compute-0 kernel: tap922a693b-1c (unregistering): left promiscuous mode
Oct 02 08:28:36 compute-0 NetworkManager[45129]: <info>  [1759393716.1836] device (tap922a693b-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:36 compute-0 ovn_controller[152344]: 2025-10-02T08:28:36Z|00383|binding|INFO|Releasing lport 922a693b-1cb4-42e8-a97d-78973183c774 from this chassis (sb_readonly=0)
Oct 02 08:28:36 compute-0 ovn_controller[152344]: 2025-10-02T08:28:36Z|00384|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 down in Southbound
Oct 02 08:28:36 compute-0 ovn_controller[152344]: 2025-10-02T08:28:36Z|00385|binding|INFO|Removing iface tap922a693b-1c ovn-installed in OVS
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.210 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:10:ae 10.100.0.7'], port_security=['fa:16:3e:bd:10:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '474d8864-210a-46b9-a362-54729ada24f1 aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=922a693b-1cb4-42e8-a97d-78973183c774) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.216 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 922a693b-1cb4-42e8-a97d-78973183c774 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 unbound from our chassis
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.222 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.230 2 DEBUG nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.259 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfeaf58-c222-4ee5-bbf6-187a6639e4ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:36 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct 02 08:28:36 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000030.scope: Consumed 11.813s CPU time.
Oct 02 08:28:36 compute-0 systemd-machined[214636]: Machine qemu-52-instance-00000030 terminated.
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.316 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[40ae5c6c-b789-4d30-8e4c-f975973db248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.322 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad147b8-3d76-4f88-ad68-1e059777019d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.359 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b2e06c-e37a-45a8-bcc6-733e4bb079a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.367 2 INFO nova.virt.libvirt.driver [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance destroyed successfully.
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.368 2 DEBUG nova.objects.instance [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'resources' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.389 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[181d1855-1e52-48c6-a814-4f1fc0366e81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311914, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.391 2 DEBUG nova.virt.libvirt.vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:36Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.392 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.393 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.394 2 DEBUG os_vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap922a693b-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.404 2 INFO os_vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c')
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0f4c7e-5b3f-4baf-be98-98e0c56caa96]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453909, 'tstamp': 453909}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311916, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453913, 'tstamp': 453913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311916, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.411 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.414 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.415 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.415 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.416 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.417 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start _get_guest_xml network_info=[{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.424 2 WARNING nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.431 2 DEBUG nova.virt.libvirt.host [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.432 2 DEBUG nova.virt.libvirt.host [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.439 2 DEBUG nova.virt.libvirt.host [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.439 2 DEBUG nova.virt.libvirt.host [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.440 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.440 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.441 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.441 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.442 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.442 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.442 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.443 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.443 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.443 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.444 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.444 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.444 2 DEBUG nova.objects.instance [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.465 2 DEBUG oslo_concurrency.processutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.7 KiB/s wr, 130 op/s
Oct 02 08:28:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833570645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:36 compute-0 nova_compute[260603]: 2025-10-02 08:28:36.980 2 DEBUG oslo_concurrency.processutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.012 2 DEBUG oslo_concurrency.processutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/833570645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268203493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.516 2 DEBUG oslo_concurrency.processutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.517 2 DEBUG nova.virt.libvirt.vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:36Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.518 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.518 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.519 2 DEBUG nova.objects.instance [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'pci_devices' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.538 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <uuid>21802142-fcf7-4eb2-b43b-e0fa48cab4d6</uuid>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <name>instance-00000030</name>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <nova:name>tempest-SecurityGroupsTestJSON-server-2008796594</nova:name>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:28:36</nova:creationTime>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <nova:user uuid="019cd25dce6249ce9c2cf326ec62df28">tempest-SecurityGroupsTestJSON-2081142325-project-member</nova:user>
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <nova:project uuid="35a4ab7cf79e41f68a1ea888c2a3592e">tempest-SecurityGroupsTestJSON-2081142325</nova:project>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <nova:port uuid="922a693b-1cb4-42e8-a97d-78973183c774">
Oct 02 08:28:37 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <entry name="serial">21802142-fcf7-4eb2-b43b-e0fa48cab4d6</entry>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <entry name="uuid">21802142-fcf7-4eb2-b43b-e0fa48cab4d6</entry>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk">
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config">
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:37 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:bd:10:ae"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <target dev="tap922a693b-1c"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/console.log" append="off"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:28:37 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:37 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:37 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.539 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.539 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.540 2 DEBUG nova.virt.libvirt.vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:36Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.540 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.540 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.541 2 DEBUG os_vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap922a693b-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.545 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap922a693b-1c, col_values=(('external_ids', {'iface-id': '922a693b-1cb4-42e8-a97d-78973183c774', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:10:ae', 'vm-uuid': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 NetworkManager[45129]: <info>  [1759393717.5471] manager: (tap922a693b-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.551 2 INFO os_vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c')
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.572 2 DEBUG nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.573 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.573 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.573 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.574 2 WARNING nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.575 2 DEBUG nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.575 2 WARNING nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 08:28:37 compute-0 kernel: tap922a693b-1c: entered promiscuous mode
Oct 02 08:28:37 compute-0 NetworkManager[45129]: <info>  [1759393717.6277] manager: (tap922a693b-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Oct 02 08:28:37 compute-0 systemd-udevd[311897]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00386|binding|INFO|Claiming lport 922a693b-1cb4-42e8-a97d-78973183c774 for this chassis.
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00387|binding|INFO|922a693b-1cb4-42e8-a97d-78973183c774: Claiming fa:16:3e:bd:10:ae 10.100.0.7
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.644 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:10:ae 10.100.0.7'], port_security=['fa:16:3e:bd:10:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '474d8864-210a-46b9-a362-54729ada24f1 aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=922a693b-1cb4-42e8-a97d-78973183c774) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.645 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 922a693b-1cb4-42e8-a97d-78973183c774 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 bound to our chassis
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.646 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct 02 08:28:37 compute-0 NetworkManager[45129]: <info>  [1759393717.6491] device (tap922a693b-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00388|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 ovn-installed in OVS
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00389|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 up in Southbound
Oct 02 08:28:37 compute-0 NetworkManager[45129]: <info>  [1759393717.6520] device (tap922a693b-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00390|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00391|binding|INFO|Releasing lport 2a218cce-83be-4768-9f4e-7d61802765d4 from this chassis (sb_readonly=0)
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00392|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c379113-fe22-4565-b77c-c9323ba8bbef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:37 compute-0 systemd-machined[214636]: New machine qemu-53-instance-00000030.
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.702 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3a405217-804b-4d3b-b071-db6b4802633e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:37 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000030.
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.705 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad82583-cf8d-43d2-adee-fe8104ed2819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.741 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aa4f94-90ed-47f0-bca2-a76de6a22c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aedcc586-4f30-45ea-92d1-69fdaa34ecca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312000, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d23e9403-cb64-46fa-8ec3-21b9f21dc609]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453909, 'tstamp': 453909}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312004, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453913, 'tstamp': 453913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312004, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.793 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.797 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.797 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.797 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.882 2 DEBUG nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.916 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.917 2 DEBUG oslo_concurrency.lockutils [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.917 2 DEBUG nova.network.neutron [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 2f45b100-9bc2-4853-87ff-324e74ddfee5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.920 2 DEBUG nova.virt.libvirt.vif [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.920 2 DEBUG nova.network.os_vif_util [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.921 2 DEBUG nova.network.os_vif_util [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.921 2 DEBUG os_vif [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f45b100-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f45b100-9b, col_values=(('external_ids', {'iface-id': '2f45b100-9bc2-4853-87ff-324e74ddfee5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:c8:97', 'vm-uuid': '247e32e5-5f07-4db4-9e6f-dcfade745228'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 NetworkManager[45129]: <info>  [1759393717.9276] manager: (tap2f45b100-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.934 2 INFO os_vif [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.935 2 DEBUG nova.virt.libvirt.vif [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.935 2 DEBUG nova.network.os_vif_util [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.936 2 DEBUG nova.network.os_vif_util [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.938 2 DEBUG nova.virt.libvirt.guest [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:69:c8:97"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]:   <target dev="tap2f45b100-9b"/>
Oct 02 08:28:37 compute-0 nova_compute[260603]: </interface>
Oct 02 08:28:37 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:28:37 compute-0 kernel: tap2f45b100-9b: entered promiscuous mode
Oct 02 08:28:37 compute-0 NetworkManager[45129]: <info>  [1759393717.9510] manager: (tap2f45b100-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00393|binding|INFO|Claiming lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 for this chassis.
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00394|binding|INFO|2f45b100-9bc2-4853-87ff-324e74ddfee5: Claiming fa:16:3e:69:c8:97 10.100.0.5
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.958 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c8:97 10.100.0.5'], port_security=['fa:16:3e:69:c8:97 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '247e32e5-5f07-4db4-9e6f-dcfade745228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2f45b100-9bc2-4853-87ff-324e74ddfee5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.959 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.960 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00395|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 ovn-installed in OVS
Oct 02 08:28:37 compute-0 ovn_controller[152344]: 2025-10-02T08:28:37Z|00396|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 up in Southbound
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 nova_compute[260603]: 2025-10-02 08:28:37.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:37 compute-0 NetworkManager[45129]: <info>  [1759393717.9776] device (tap2f45b100-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:28:37 compute-0 NetworkManager[45129]: <info>  [1759393717.9795] device (tap2f45b100-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:28:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.990 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3fa298-da09-4888-a91f-b4e6d2f71fff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.027 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d07349-73f7-4367-b447-43f5ebae2868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.030 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9c1fec-69ea-4b6e-8da3-f345fd720217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ceph-mon[74477]: pgmap v1426: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.7 KiB/s wr, 130 op/s
Oct 02 08:28:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2268203493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.045 2 DEBUG nova.virt.libvirt.driver [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.045 2 DEBUG nova.virt.libvirt.driver [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.045 2 DEBUG nova.virt.libvirt.driver [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:df:f8:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.046 2 DEBUG nova.virt.libvirt.driver [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:69:c8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.056 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bf49c3e3-1d78-450a-8c8b-e23576945ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.073 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86c248e9-73e9-4a85-a686-ea97e97f5fd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312019, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.085 2 DEBUG nova.virt.libvirt.guest [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:38 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:28:38</nova:creationTime>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct 02 08:28:38 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct 02 08:28:38 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:28:38 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:38 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:28:38 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:28:38 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.088 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[54f7f0c5-5eb5-4b4b-912f-be342f19eef4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312020, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312020, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.089 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.091 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.111 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:38 compute-0 kernel: tap8e7deb4e-9f (unregistering): left promiscuous mode
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:28:38 compute-0 NetworkManager[45129]: <info>  [1759393718.5055] device (tap8e7deb4e-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002974836902552582 of space, bias 1.0, pg target 0.8924510707657746 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 ovn_controller[152344]: 2025-10-02T08:28:38Z|00397|binding|INFO|Releasing lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f from this chassis (sb_readonly=0)
Oct 02 08:28:38 compute-0 ovn_controller[152344]: 2025-10-02T08:28:38Z|00398|binding|INFO|Setting lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f down in Southbound
Oct 02 08:28:38 compute-0 ovn_controller[152344]: 2025-10-02T08:28:38Z|00399|binding|INFO|Removing iface tap8e7deb4e-9f ovn-installed in OVS
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.519 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:62:72 10.100.0.5'], port_security=['fa:16:3e:5a:62:72 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3ae3c82-7eb4-4727-a846-92afca9a8330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.520 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.521 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.522 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1d4528-569e-4e20-9d2d-d6905d0fec97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.522 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct 02 08:28:38 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002f.scope: Consumed 13.295s CPU time.
Oct 02 08:28:38 compute-0 systemd-machined[214636]: Machine qemu-51-instance-0000002f terminated.
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.655 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.656 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393718.6528234, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.656 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Resumed (Lifecycle Event)
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.657 2 DEBUG nova.compute.manager [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.660 2 INFO nova.virt.libvirt.driver [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance rebooted successfully.
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.661 2 DEBUG nova.compute.manager [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:38 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [NOTICE]   (311032) : haproxy version is 2.8.14-c23fe91
Oct 02 08:28:38 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [NOTICE]   (311032) : path to executable is /usr/sbin/haproxy
Oct 02 08:28:38 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [WARNING]  (311032) : Exiting Master process...
Oct 02 08:28:38 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [ALERT]    (311032) : Current worker (311043) exited with code 143 (Terminated)
Oct 02 08:28:38 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [WARNING]  (311032) : All workers exited. Exiting... (0)
Oct 02 08:28:38 compute-0 systemd[1]: libpod-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22.scope: Deactivated successfully.
Oct 02 08:28:38 compute-0 conmon[311019]: conmon 039a6ca8595be57c963d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22.scope/container/memory.events
Oct 02 08:28:38 compute-0 podman[312083]: 2025-10-02 08:28:38.682544869 +0000 UTC m=+0.046913440 container died 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22-userdata-shm.mount: Deactivated successfully.
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.711 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-14ebb512a42e1b9f43784c36b0857c486bcadaa4002099f22dfeb9568e9eabfd-merged.mount: Deactivated successfully.
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.716 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:38 compute-0 podman[312083]: 2025-10-02 08:28:38.721967644 +0000 UTC m=+0.086336205 container cleanup 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:28:38 compute-0 systemd[1]: libpod-conmon-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22.scope: Deactivated successfully.
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.772 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.773 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393718.6546621, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.773 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Started (Lifecycle Event)
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.785 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:38 compute-0 podman[312112]: 2025-10-02 08:28:38.800001529 +0000 UTC m=+0.055710252 container remove 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.803 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.806 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.806 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[666ec414-ad8b-4366-8ab3-96b09c1b607f]: (4, ('Thu Oct  2 08:28:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22)\n039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22\nThu Oct  2 08:28:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22)\n039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.808 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d80f7c-47eb-417e-aeda-182841f989ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.809 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 kernel: tapa72ac8c9-10: left promiscuous mode
Oct 02 08:28:38 compute-0 nova_compute[260603]: 2025-10-02 08:28:38.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea29e01-b04f-4f07-bb52-a77c0ee414a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.859 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[784ce043-8258-419a-8b79-4b5a5228d10e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.860 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a69c024-79cf-4bc3-be69-db91f686ef70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1915ad5-be5a-4c6a-9026-7d55b498ee9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457788, 'reachable_time': 27285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312140, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.884 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:28:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.884 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0862626c-72dc-4e82-b026-0a5eb7a71428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 405 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 199 op/s
Oct 02 08:28:39 compute-0 ovn_controller[152344]: 2025-10-02T08:28:39Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:c8:97 10.100.0.5
Oct 02 08:28:39 compute-0 ovn_controller[152344]: 2025-10-02T08:28:39Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:c8:97 10.100.0.5
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.248 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance shutdown successfully after 13 seconds.
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.255 2 INFO nova.virt.libvirt.driver [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance destroyed successfully.
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.256 2 DEBUG nova.objects.instance [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'numa_topology' on Instance uuid e3ae3c82-7eb4-4727-a846-92afca9a8330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.434 2 DEBUG nova.compute.manager [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-vif-unplugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.435 2 DEBUG oslo_concurrency.lockutils [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.435 2 DEBUG oslo_concurrency.lockutils [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.435 2 DEBUG oslo_concurrency.lockutils [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.436 2 DEBUG nova.compute.manager [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] No waiting events found dispatching network-vif-unplugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.436 2 WARNING nova.compute.manager [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received unexpected event network-vif-unplugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f for instance with vm_state active and task_state shelving.
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.731 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Beginning cold snapshot process
Oct 02 08:28:39 compute-0 nova_compute[260603]: 2025-10-02 08:28:39.903 2 DEBUG nova.virt.libvirt.imagebackend [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:28:40 compute-0 ceph-mon[74477]: pgmap v1427: 305 pgs: 305 active+clean; 405 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 199 op/s
Oct 02 08:28:40 compute-0 nova_compute[260603]: 2025-10-02 08:28:40.190 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] creating snapshot(72558ed2fc5944b8953d2fe6d0214bb6) on rbd image(e3ae3c82-7eb4-4727-a846-92afca9a8330_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:28:40 compute-0 nova_compute[260603]: 2025-10-02 08:28:40.665 2 DEBUG nova.network.neutron [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 2f45b100-9bc2-4853-87ff-324e74ddfee5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:40 compute-0 nova_compute[260603]: 2025-10-02 08:28:40.666 2 DEBUG nova.network.neutron [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:40 compute-0 nova_compute[260603]: 2025-10-02 08:28:40.698 2 DEBUG oslo_concurrency.lockutils [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:40 compute-0 nova_compute[260603]: 2025-10-02 08:28:40.698 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:40 compute-0 nova_compute[260603]: 2025-10-02 08:28:40.699 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 405 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 08:28:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 02 08:28:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 02 08:28:41 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 02 08:28:41 compute-0 nova_compute[260603]: 2025-10-02 08:28:41.096 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] cloning vms/e3ae3c82-7eb4-4727-a846-92afca9a8330_disk@72558ed2fc5944b8953d2fe6d0214bb6 to images/42c47313-5859-4db4-83db-08656e9d2bfd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:28:41 compute-0 nova_compute[260603]: 2025-10-02 08:28:41.205 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] flattening images/42c47313-5859-4db4-83db-08656e9d2bfd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:28:41 compute-0 nova_compute[260603]: 2025-10-02 08:28:41.520 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] removing snapshot(72558ed2fc5944b8953d2fe6d0214bb6) on rbd image(e3ae3c82-7eb4-4727-a846-92afca9a8330_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:28:42 compute-0 podman[312265]: 2025-10-02 08:28:42.015128674 +0000 UTC m=+0.075007703 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 08:28:42 compute-0 podman[312264]: 2025-10-02 08:28:42.023970848 +0000 UTC m=+0.085678993 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 08:28:42 compute-0 ceph-mon[74477]: pgmap v1428: 305 pgs: 305 active+clean; 405 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 08:28:42 compute-0 ceph-mon[74477]: osdmap e202: 3 total, 3 up, 3 in
Oct 02 08:28:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 02 08:28:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 02 08:28:42 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.057 2 DEBUG nova.compute.manager [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.057 2 DEBUG oslo_concurrency.lockutils [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.057 2 DEBUG oslo_concurrency.lockutils [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.058 2 DEBUG oslo_concurrency.lockutils [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.058 2 DEBUG nova.compute.manager [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.058 2 WARNING nova.compute.manager [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state active and task_state None.
Oct 02 08:28:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.092 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] creating snapshot(snap) on rbd image(42c47313-5859-4db4-83db-08656e9d2bfd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.124 2 DEBUG nova.compute.manager [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.125 2 DEBUG oslo_concurrency.lockutils [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.125 2 DEBUG oslo_concurrency.lockutils [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.126 2 DEBUG oslo_concurrency.lockutils [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.126 2 DEBUG nova.compute.manager [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] No waiting events found dispatching network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.127 2 WARNING nova.compute.manager [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received unexpected event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.590 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.591 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.591 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.591 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.591 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.592 2 INFO nova.compute.manager [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Terminating instance
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.593 2 DEBUG nova.compute.manager [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:28:42 compute-0 kernel: tap922a693b-1c (unregistering): left promiscuous mode
Oct 02 08:28:42 compute-0 NetworkManager[45129]: <info>  [1759393722.6378] device (tap922a693b-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:42 compute-0 ovn_controller[152344]: 2025-10-02T08:28:42Z|00400|binding|INFO|Releasing lport 922a693b-1cb4-42e8-a97d-78973183c774 from this chassis (sb_readonly=0)
Oct 02 08:28:42 compute-0 ovn_controller[152344]: 2025-10-02T08:28:42Z|00401|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 down in Southbound
Oct 02 08:28:42 compute-0 ovn_controller[152344]: 2025-10-02T08:28:42Z|00402|binding|INFO|Removing iface tap922a693b-1c ovn-installed in OVS
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.655 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:10:ae 10.100.0.7'], port_security=['fa:16:3e:bd:10:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '8', 'neutron:security_group_ids': '40bf9bb5-121b-4489-a712-e32650e47ab0 474d8864-210a-46b9-a362-54729ada24f1 aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=922a693b-1cb4-42e8-a97d-78973183c774) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.656 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 922a693b-1cb4-42e8-a97d-78973183c774 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 unbound from our chassis
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.657 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.671 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[798a5992-d077-42ab-945e-af1a1e5d48fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.703 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3c632d-dcb2-419a-860a-f25ce9fbadab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.706 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6985fa3b-f7c9-491c-842d-34fae300934b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:42 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct 02 08:28:42 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 4.865s CPU time.
Oct 02 08:28:42 compute-0 systemd-machined[214636]: Machine qemu-53-instance-00000030 terminated.
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.735 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4367f060-b2bd-420d-b6ce-cb8b095e1a69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.749 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d82bf79-c19a-4ca3-b93a-99b502f4b29b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312335, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a0dba7-a017-4eb6-a9a0-3f39752b010d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453909, 'tstamp': 453909}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312336, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453913, 'tstamp': 453913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312336, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.764 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.772 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.772 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.772 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.773 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.831 2 INFO nova.virt.libvirt.driver [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance destroyed successfully.
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.831 2 DEBUG nova.objects.instance [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'resources' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.858 2 DEBUG nova.virt.libvirt.vif [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:38Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.859 2 DEBUG nova.network.os_vif_util [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.859 2 DEBUG nova.network.os_vif_util [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.860 2 DEBUG os_vif [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap922a693b-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:42 compute-0 nova_compute[260603]: 2025-10-02 08:28:42.870 2 INFO os_vif [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c')
Oct 02 08:28:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 459 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 6.8 MiB/s wr, 246 op/s
Oct 02 08:28:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 02 08:28:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 02 08:28:43 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 02 08:28:43 compute-0 ceph-mon[74477]: osdmap e203: 3 total, 3 up, 3 in
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.240 2 INFO nova.virt.libvirt.driver [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Deleting instance files /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_del
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.241 2 INFO nova.virt.libvirt.driver [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Deletion of /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_del complete
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.308 2 INFO nova.compute.manager [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.309 2 DEBUG oslo.service.loopingcall [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.310 2 DEBUG nova.compute.manager [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.310 2 DEBUG nova.network.neutron [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.646 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.647 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.665 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.700 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.700 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.716 2 DEBUG nova.objects.instance [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.733 2 DEBUG nova.virt.libvirt.vif [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.734 2 DEBUG nova.network.os_vif_util [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.734 2 DEBUG nova.network.os_vif_util [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.739 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.742 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.745 2 DEBUG nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Attempting to detach device tap2f45b100-9b from instance 247e32e5-5f07-4db4-9e6f-dcfade745228 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.745 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:69:c8:97"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <target dev="tap2f45b100-9b"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]: </interface>
Oct 02 08:28:43 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.754 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.760 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface>not found in domain: <domain type='kvm' id='49'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <name>instance-0000002d</name>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <uuid>247e32e5-5f07-4db4-9e6f-dcfade745228</uuid>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:28:38</nova:creationTime>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:28:43 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='serial'>247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='uuid'>247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk' index='2'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config' index='1'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:df:f8:c0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target dev='tap262f44f5-df'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:69:c8:97'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target dev='tap2f45b100-9b'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='net1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <source path='/dev/pts/2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log' append='off'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </target>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/2'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <source path='/dev/pts/2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log' append='off'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </console>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c132,c415</label>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c132,c415</imagelabel>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:28:43 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:43 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.760 2 INFO nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap2f45b100-9b from instance 247e32e5-5f07-4db4-9e6f-dcfade745228 from the persistent domain config.
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.760 2 DEBUG nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] (1/8): Attempting to detach device tap2f45b100-9b with device alias net1 from instance 247e32e5-5f07-4db4-9e6f-dcfade745228 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.761 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:69:c8:97"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <target dev="tap2f45b100-9b"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]: </interface>
Oct 02 08:28:43 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:28:43 compute-0 kernel: tap2f45b100-9b (unregistering): left promiscuous mode
Oct 02 08:28:43 compute-0 NetworkManager[45129]: <info>  [1759393723.8336] device (tap2f45b100-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.845 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759393723.8448665, 247e32e5-5f07-4db4-9e6f-dcfade745228 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.848 2 DEBUG nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Start waiting for the detach event from libvirt for device tap2f45b100-9b with device alias net1 for instance 247e32e5-5f07-4db4-9e6f-dcfade745228 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 08:28:43 compute-0 ovn_controller[152344]: 2025-10-02T08:28:43Z|00403|binding|INFO|Releasing lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 from this chassis (sb_readonly=0)
Oct 02 08:28:43 compute-0 ovn_controller[152344]: 2025-10-02T08:28:43Z|00404|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 down in Southbound
Oct 02 08:28:43 compute-0 ovn_controller[152344]: 2025-10-02T08:28:43Z|00405|binding|INFO|Removing iface tap2f45b100-9b ovn-installed in OVS
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.848 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.853 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface>not found in domain: <domain type='kvm' id='49'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <name>instance-0000002d</name>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <uuid>247e32e5-5f07-4db4-9e6f-dcfade745228</uuid>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:28:38</nova:creationTime>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:28:43 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='serial'>247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='uuid'>247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk' index='2'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config' index='1'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:df:f8:c0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target dev='tap262f44f5-df'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <source path='/dev/pts/2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log' append='off'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       </target>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/2'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <source path='/dev/pts/2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log' append='off'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </console>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </input>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c132,c415</label>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c132,c415</imagelabel>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:28:43 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:43 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.853 2 INFO nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap2f45b100-9b from instance 247e32e5-5f07-4db4-9e6f-dcfade745228 from the live domain config.
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.854 2 DEBUG nova.virt.libvirt.vif [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.854 2 DEBUG nova.network.os_vif_util [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.854 2 DEBUG nova.network.os_vif_util [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.855 2 DEBUG os_vif [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f45b100-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.858 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c8:97 10.100.0.5'], port_security=['fa:16:3e:69:c8:97 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '247e32e5-5f07-4db4-9e6f-dcfade745228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2f45b100-9bc2-4853-87ff-324e74ddfee5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.861 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:28:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.863 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.878 2 INFO os_vif [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.878 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:28:43</nova:creationTime>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct 02 08:28:43 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:28:43 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:28:43 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:28:43 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:28:43 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:28:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab2fe14-ba93-4504-8668-b3845c50aec8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.941 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bc790085-192c-4d83-ad0c-5e0368417dcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.947 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[66e5f779-7557-41a6-a830-ca520ea47a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.980 2 DEBUG nova.network.neutron [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:43 compute-0 nova_compute[260603]: 2025-10-02 08:28:43.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.990 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e95648ff-6f1e-44e5-9ebb-5d1ba2f08023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.000 2 INFO nova.compute.manager [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Took 0.69 seconds to deallocate network for instance.
Oct 02 08:28:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.017 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1aaad4d4-4602-4427-b7d8-f691e2ec8cb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 1042, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 1042, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312383, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.046 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8fe60e-71f8-4ed6-a1ff-2b3c53aaf651]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312384, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312384, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.047 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.057 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.057 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.057 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:44 compute-0 ceph-mon[74477]: pgmap v1431: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 459 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 6.8 MiB/s wr, 246 op/s
Oct 02 08:28:44 compute-0 ceph-mon[74477]: osdmap e204: 3 total, 3 up, 3 in
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.197 2 DEBUG oslo_concurrency.processutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.382 2 DEBUG nova.compute.manager [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-changed-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.383 2 DEBUG nova.compute.manager [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing instance network info cache due to event network-changed-922a693b-1cb4-42e8-a97d-78973183c774. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.384 2 DEBUG oslo_concurrency.lockutils [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.384 2 DEBUG oslo_concurrency.lockutils [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.385 2 DEBUG nova.network.neutron [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.509 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.510 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.511 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.511 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.512 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.512 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state deleted and task_state None.
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.513 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.513 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.514 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.514 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.515 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.515 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.516 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.516 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.517 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.517 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.518 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.518 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.519 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.519 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.520 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.520 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.521 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.521 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state deleted and task_state None.
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.522 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.522 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.523 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.523 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.524 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.524 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state deleted and task_state None.
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.525 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.526 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.526 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.526 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.526 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.527 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.527 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.527 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.527 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.527 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.528 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.528 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.631 2 DEBUG nova.network.neutron [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:28:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1894505686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.690 2 DEBUG oslo_concurrency.processutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.696 2 DEBUG nova.compute.provider_tree [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.723 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Snapshot image upload complete
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.723 2 DEBUG nova.compute.manager [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.728 2 DEBUG nova.scheduler.client.report [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.750 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.779 2 INFO nova.scheduler.client.report [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Deleted allocations for instance 21802142-fcf7-4eb2-b43b-e0fa48cab4d6
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.790 2 INFO nova.compute.manager [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Shelve offloading
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.806 2 INFO nova.virt.libvirt.driver [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance destroyed successfully.
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.806 2 DEBUG nova.compute.manager [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.809 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.809 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.809 2 DEBUG nova.network.neutron [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:44 compute-0 nova_compute[260603]: 2025-10-02 08:28:44.852 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 475 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 02 08:28:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1894505686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:45 compute-0 nova_compute[260603]: 2025-10-02 08:28:45.146 2 DEBUG nova.network.neutron [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:45 compute-0 nova_compute[260603]: 2025-10-02 08:28:45.172 2 DEBUG oslo_concurrency.lockutils [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:45 compute-0 nova_compute[260603]: 2025-10-02 08:28:45.173 2 DEBUG nova.compute.manager [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-deleted-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:45 compute-0 nova_compute[260603]: 2025-10-02 08:28:45.390 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:45 compute-0 nova_compute[260603]: 2025-10-02 08:28:45.391 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:45 compute-0 nova_compute[260603]: 2025-10-02 08:28:45.391 2 DEBUG nova.network.neutron [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:46 compute-0 ceph-mon[74477]: pgmap v1433: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 475 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 02 08:28:46 compute-0 nova_compute[260603]: 2025-10-02 08:28:46.293 2 DEBUG nova.network.neutron [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updating instance_info_cache with network_info: [{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:46 compute-0 nova_compute[260603]: 2025-10-02 08:28:46.317 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 475 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 02 08:28:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.251 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.251 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.252 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.253 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.253 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.255 2 INFO nova.compute.manager [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Terminating instance
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.258 2 DEBUG nova.compute.manager [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:28:47 compute-0 kernel: tap262f44f5-df (unregistering): left promiscuous mode
Oct 02 08:28:47 compute-0 NetworkManager[45129]: <info>  [1759393727.3257] device (tap262f44f5-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:47 compute-0 ovn_controller[152344]: 2025-10-02T08:28:47Z|00406|binding|INFO|Releasing lport 262f44f5-df56-4176-96b2-4819d8b7e258 from this chassis (sb_readonly=0)
Oct 02 08:28:47 compute-0 ovn_controller[152344]: 2025-10-02T08:28:47Z|00407|binding|INFO|Setting lport 262f44f5-df56-4176-96b2-4819d8b7e258 down in Southbound
Oct 02 08:28:47 compute-0 ovn_controller[152344]: 2025-10-02T08:28:47Z|00408|binding|INFO|Removing iface tap262f44f5-df ovn-installed in OVS
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.380 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:f8:c0 10.100.0.14'], port_security=['fa:16:3e:df:f8:c0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '247e32e5-5f07-4db4-9e6f-dcfade745228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1616ad3a-ff5f-4423-8dd9-f2ff5717f8c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=262f44f5-df56-4176-96b2-4819d8b7e258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.383 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 262f44f5-df56-4176-96b2-4819d8b7e258 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.386 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd47eb2b-8ab0-45b6-b459-d04cc972ff32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:47 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct 02 08:28:47 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002d.scope: Consumed 14.496s CPU time.
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.438 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4635f16b-92ae-47bb-838c-ce415e6c3a5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:47 compute-0 systemd-machined[214636]: Machine qemu-49-instance-0000002d terminated.
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.444 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3e689c1e-780f-4929-a3e8-e21965177e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.485 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dadc372e-b1a1-40c4-99fa-a3df2fa38606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.501 2 INFO nova.virt.libvirt.driver [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance destroyed successfully.
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.502 2 DEBUG nova.objects.instance [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'resources' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.510 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07dc5876-ba0c-466d-94d1-79f4005219f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 1042, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 1042, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312428, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.520 2 DEBUG nova.virt.libvirt.vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.520 2 DEBUG nova.network.os_vif_util [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.521 2 DEBUG nova.network.os_vif_util [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.521 2 DEBUG os_vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap262f44f5-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.532 2 INFO os_vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df')
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.533 2 DEBUG nova.virt.libvirt.vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.533 2 DEBUG nova.network.os_vif_util [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.533 2 DEBUG nova.network.os_vif_util [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.534 2 DEBUG os_vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f45b100-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.536 2 INFO os_vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.540 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[20b13bb4-edb0-40b4-8135-74d683fc8909]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312435, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312435, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.541 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.881 2 INFO nova.virt.libvirt.driver [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Deleting instance files /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228_del
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.882 2 INFO nova.virt.libvirt.driver [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Deletion of /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228_del complete
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.934 2 INFO nova.compute.manager [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Took 0.68 seconds to destroy the instance on the hypervisor.
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.934 2 DEBUG oslo.service.loopingcall [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.934 2 DEBUG nova.compute.manager [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:28:47 compute-0 nova_compute[260603]: 2025-10-02 08:28:47.935 2 DEBUG nova.network.neutron [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:28:48 compute-0 podman[312457]: 2025-10-02 08:28:48.018132247 +0000 UTC m=+0.071148121 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.040 2 INFO nova.network.neutron [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.040 2 DEBUG nova.network.neutron [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.076 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:48 compute-0 ceph-mon[74477]: pgmap v1434: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 475 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.106 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.279 2 INFO nova.virt.libvirt.driver [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance destroyed successfully.
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.279 2 DEBUG nova.objects.instance [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid e3ae3c82-7eb4-4727-a846-92afca9a8330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.295 2 DEBUG nova.virt.libvirt.vif [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-379096976',display_name='tempest-DeleteServersTestJSON-server-379096976',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-379096976',id=47,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-f7cj6hb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member',shelved_at='2025-10-02T08:28:44.723439',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='42c47313-5859-4db4-83db-08656e9d2bfd'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:39Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=e3ae3c82-7eb4-4727-a846-92afca9a8330,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.296 2 DEBUG nova.network.os_vif_util [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.296 2 DEBUG nova.network.os_vif_util [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.296 2 DEBUG os_vif [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e7deb4e-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.303 2 INFO os_vif [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f')
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.339 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.340 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.343 2 DEBUG nova.compute.manager [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-unplugged-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.343 2 DEBUG oslo_concurrency.lockutils [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.344 2 DEBUG oslo_concurrency.lockutils [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.344 2 DEBUG oslo_concurrency.lockutils [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.345 2 DEBUG nova.compute.manager [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-unplugged-262f44f5-df56-4176-96b2-4819d8b7e258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.345 2 DEBUG nova.compute.manager [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-unplugged-262f44f5-df56-4176-96b2-4819d8b7e258 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.367 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.443 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.443 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.450 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.450 2 INFO nova.compute.claims [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.647 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Deleting instance files /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330_del
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.648 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Deletion of /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330_del complete
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.672 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.760 2 INFO nova.scheduler.client.report [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance e3ae3c82-7eb4-4727-a846-92afca9a8330
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.837 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 407 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 5.9 MiB/s wr, 282 op/s
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.980 2 DEBUG nova.compute.manager [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-changed-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.981 2 DEBUG nova.compute.manager [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Refreshing instance network info cache due to event network-changed-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.982 2 DEBUG oslo_concurrency.lockutils [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.982 2 DEBUG oslo_concurrency.lockutils [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:48 compute-0 nova_compute[260603]: 2025-10-02 08:28:48.982 2 DEBUG nova.network.neutron [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Refreshing network info cache for port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620264730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.117 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.125 2 DEBUG nova.compute.provider_tree [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.148 2 DEBUG nova.scheduler.client.report [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.175 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.176 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.181 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.245 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.246 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.273 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.297 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.381 2 DEBUG oslo_concurrency.processutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.427 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.430 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.430 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Creating image(s)
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.466 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.502 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.538 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.543 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.616 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.617 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.617 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.618 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.640 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.643 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 331bfae3-95e5-4c18-96ca-56597994c6b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.682 2 DEBUG nova.policy [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b3acbcd92044b7a9df5bd8748ab7599', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '226ee42bc28344d49ab6b0000485ab4d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:28:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388165678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.916 2 DEBUG oslo_concurrency.processutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.923 2 DEBUG nova.compute.provider_tree [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.935 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 331bfae3-95e5-4c18-96ca-56597994c6b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:49 compute-0 nova_compute[260603]: 2025-10-02 08:28:49.981 2 DEBUG nova.scheduler.client.report [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.035 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.047 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] resizing rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:28:50 compute-0 ceph-mon[74477]: pgmap v1435: 305 pgs: 305 active+clean; 407 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 5.9 MiB/s wr, 282 op/s
Oct 02 08:28:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3620264730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1388165678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.116 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 23.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.165 2 DEBUG nova.objects.instance [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lazy-loading 'migration_context' on Instance uuid 331bfae3-95e5-4c18-96ca-56597994c6b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.184 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.185 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Ensure instance console log exists: /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.186 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.186 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.187 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:50 compute-0 nova_compute[260603]: 2025-10-02 08:28:50.846 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Successfully created port: e4363d00-4c59-46bb-ac20-98bde0cd0d4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:28:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 407 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.2 MiB/s wr, 251 op/s
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.001 2 DEBUG nova.compute.manager [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.001 2 DEBUG oslo_concurrency.lockutils [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.001 2 DEBUG oslo_concurrency.lockutils [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.002 2 DEBUG oslo_concurrency.lockutils [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.002 2 DEBUG nova.compute.manager [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.002 2 WARNING nova.compute.manager [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 for instance with vm_state active and task_state deleting.
Oct 02 08:28:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 02 08:28:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 02 08:28:51 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.149 2 DEBUG nova.network.neutron [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.180 2 INFO nova.compute.manager [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Took 3.25 seconds to deallocate network for instance.
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.248 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.249 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.377 2 DEBUG nova.network.neutron [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updated VIF entry in instance network info cache for port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.378 2 DEBUG nova.network.neutron [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updating instance_info_cache with network_info: [{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": null, "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.401 2 DEBUG oslo_concurrency.lockutils [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.414 2 DEBUG oslo_concurrency.processutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927921251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.857 2 DEBUG nova.compute.manager [req-0341af7a-c006-4f80-ac95-66a1a14a4e3c req-27167bbb-f2d4-45db-a6ac-241ef46bf487 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-deleted-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.859 2 DEBUG oslo_concurrency.processutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.870 2 DEBUG nova.compute.provider_tree [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.892 2 DEBUG nova.scheduler.client.report [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.912 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.966 2 INFO nova.scheduler.client.report [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Deleted allocations for instance 247e32e5-5f07-4db4-9e6f-dcfade745228
Oct 02 08:28:51 compute-0 nova_compute[260603]: 2025-10-02 08:28:51.986 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Successfully updated port: e4363d00-4c59-46bb-ac20-98bde0cd0d4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.034 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.034 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquired lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.035 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:28:52 compute-0 podman[312728]: 2025-10-02 08:28:52.042897624 +0000 UTC m=+0.090497364 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.068 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 02 08:28:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 02 08:28:52 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 02 08:28:52 compute-0 ceph-mon[74477]: pgmap v1436: 305 pgs: 305 active+clean; 407 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.2 MiB/s wr, 251 op/s
Oct 02 08:28:52 compute-0 ceph-mon[74477]: osdmap e205: 3 total, 3 up, 3 in
Oct 02 08:28:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2927921251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:52 compute-0 ceph-mon[74477]: osdmap e206: 3 total, 3 up, 3 in
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.199 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.598 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.599 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.599 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.600 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.600 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.601 2 INFO nova.compute.manager [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Terminating instance
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.602 2 DEBUG nova.compute.manager [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:28:52 compute-0 kernel: tap136aeb8e-de (unregistering): left promiscuous mode
Oct 02 08:28:52 compute-0 NetworkManager[45129]: <info>  [1759393732.6546] device (tap136aeb8e-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 ovn_controller[152344]: 2025-10-02T08:28:52Z|00409|binding|INFO|Releasing lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf from this chassis (sb_readonly=0)
Oct 02 08:28:52 compute-0 ovn_controller[152344]: 2025-10-02T08:28:52Z|00410|binding|INFO|Setting lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf down in Southbound
Oct 02 08:28:52 compute-0 ovn_controller[152344]: 2025-10-02T08:28:52Z|00411|binding|INFO|Removing iface tap136aeb8e-de ovn-installed in OVS
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.679 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:59:2a 10.100.0.3'], port_security=['fa:16:3e:0b:59:2a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1616ad3a-ff5f-4423-8dd9-f2ff5717f8c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=136aeb8e-dedd-4cd8-a72d-1c4309716daf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.680 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 136aeb8e-dedd-4cd8-a72d-1c4309716daf in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis
Oct 02 08:28:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.682 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa1bff6d-19fb-4792-a261-4da1165d95a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:28:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.686 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca3beb4-f86f-4054-b219-326fa13a5478]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.687 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace which is not needed anymore
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 02 08:28:52 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 14.759s CPU time.
Oct 02 08:28:52 compute-0 systemd-machined[214636]: Machine qemu-47-instance-0000002a terminated.
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [NOTICE]   (308407) : haproxy version is 2.8.14-c23fe91
Oct 02 08:28:52 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [NOTICE]   (308407) : path to executable is /usr/sbin/haproxy
Oct 02 08:28:52 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [WARNING]  (308407) : Exiting Master process...
Oct 02 08:28:52 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [WARNING]  (308407) : Exiting Master process...
Oct 02 08:28:52 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [ALERT]    (308407) : Current worker (308409) exited with code 143 (Terminated)
Oct 02 08:28:52 compute-0 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [WARNING]  (308407) : All workers exited. Exiting... (0)
Oct 02 08:28:52 compute-0 systemd[1]: libpod-2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57.scope: Deactivated successfully.
Oct 02 08:28:52 compute-0 podman[312772]: 2025-10-02 08:28:52.84756557 +0000 UTC m=+0.052763822 container died 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.851 2 INFO nova.virt.libvirt.driver [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance destroyed successfully.
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.852 2 DEBUG nova.objects.instance [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'resources' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.867 2 DEBUG nova.virt.libvirt.vif [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.869 2 DEBUG nova.network.os_vif_util [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.871 2 DEBUG nova.network.os_vif_util [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.872 2 DEBUG os_vif [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap136aeb8e-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.887 2 INFO os_vif [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de')
Oct 02 08:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d474468b501cd6334914f2b4a9f295c2f9e45b4d98ca3d1d88fd0ebc5aaafcb2-merged.mount: Deactivated successfully.
Oct 02 08:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57-userdata-shm.mount: Deactivated successfully.
Oct 02 08:28:52 compute-0 podman[312772]: 2025-10-02 08:28:52.910458544 +0000 UTC m=+0.115656796 container cleanup 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:28:52 compute-0 systemd[1]: libpod-conmon-2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57.scope: Deactivated successfully.
Oct 02 08:28:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 291 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.2 MiB/s wr, 150 op/s
Oct 02 08:28:52 compute-0 podman[312828]: 2025-10-02 08:28:52.982021078 +0000 UTC m=+0.045908488 container remove 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 08:28:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.991 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b77ab02-25f0-4cdf-9edf-f0d545705e9c]: (4, ('Thu Oct  2 08:28:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57)\n2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57\nThu Oct  2 08:28:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57)\n2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.993 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba9f73e-9b0e-4cfa-8ddf-9ccfc52921f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:52 compute-0 nova_compute[260603]: 2025-10-02 08:28:52.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:52 compute-0 kernel: tapfa1bff6d-10: left promiscuous mode
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.023 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e00d242f-6d13-469d-97a9-8a8b13484c3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.053 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca9ca9b-e0f3-4d72-b58f-b11d86c46a76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.054 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9fae1cd-31fd-4485-80c7-b9f6e84b4532]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.071 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9d438b-e294-4618-bc79-e714d8118b9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453990, 'reachable_time': 19448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312846, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:53 compute-0 systemd[1]: run-netns-ovnmeta\x2dfa1bff6d\x2d19fb\x2d4792\x2da261\x2d4da1165d95a1.mount: Deactivated successfully.
Oct 02 08:28:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.079 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:28:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.080 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[2e00b9e3-7a64-45e8-9167-17e76e859e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.202 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Updating instance_info_cache with network_info: [{"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.230 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Releasing lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.231 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance network_info: |[{"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.234 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start _get_guest_xml network_info=[{"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.240 2 WARNING nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.246 2 DEBUG nova.virt.libvirt.host [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.247 2 DEBUG nova.virt.libvirt.host [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.253 2 DEBUG nova.virt.libvirt.host [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.254 2 DEBUG nova.virt.libvirt.host [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.255 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.255 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.256 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.256 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.257 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.257 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.257 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.258 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.259 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.259 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.259 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.260 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.264 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.314 2 INFO nova.virt.libvirt.driver [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Deleting instance files /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30_del
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.315 2 INFO nova.virt.libvirt.driver [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Deletion of /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30_del complete
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.392 2 INFO nova.compute.manager [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.394 2 DEBUG oslo.service.loopingcall [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.394 2 DEBUG nova.compute.manager [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.395 2 DEBUG nova.network.neutron [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.405 2 DEBUG nova.compute.manager [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-unplugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.405 2 DEBUG oslo_concurrency.lockutils [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.406 2 DEBUG oslo_concurrency.lockutils [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.406 2 DEBUG oslo_concurrency.lockutils [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.407 2 DEBUG nova.compute.manager [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-unplugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.407 2 DEBUG nova.compute.manager [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-unplugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:28:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937517978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.737 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.758 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.761 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.829 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393718.7640457, e3ae3c82-7eb4-4727-a846-92afca9a8330 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.830 2 INFO nova.compute.manager [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] VM Stopped (Lifecycle Event)
Oct 02 08:28:53 compute-0 nova_compute[260603]: 2025-10-02 08:28:53.859 2 DEBUG nova.compute.manager [None req-662b77da-1cad-4d6d-a213-5ac0cbc1df56 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:54 compute-0 ceph-mon[74477]: pgmap v1439: 305 pgs: 305 active+clean; 291 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.2 MiB/s wr, 150 op/s
Oct 02 08:28:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/937517978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:28:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727180314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.161 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.164 2 DEBUG nova.virt.libvirt.vif [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1491987986',display_name='tempest-InstanceActionsNegativeTestJSON-server-1491987986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1491987986',id=49,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226ee42bc28344d49ab6b0000485ab4d',ramdisk_id='',reservation_id='r-w4z7ku8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-472656503',owner_user_name='tempest-InstanceActionsNegativeTestJSON-472656503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:49Z,user_data=None,user_id='2b3acbcd92044b7a9df5bd8748ab7599',uuid=331bfae3-95e5-4c18-96ca-56597994c6b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.165 2 DEBUG nova.network.os_vif_util [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converting VIF {"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.166 2 DEBUG nova.network.os_vif_util [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.168 2 DEBUG nova.objects.instance [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lazy-loading 'pci_devices' on Instance uuid 331bfae3-95e5-4c18-96ca-56597994c6b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.200 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <uuid>331bfae3-95e5-4c18-96ca-56597994c6b7</uuid>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <name>instance-00000031</name>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1491987986</nova:name>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:28:53</nova:creationTime>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <nova:user uuid="2b3acbcd92044b7a9df5bd8748ab7599">tempest-InstanceActionsNegativeTestJSON-472656503-project-member</nova:user>
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <nova:project uuid="226ee42bc28344d49ab6b0000485ab4d">tempest-InstanceActionsNegativeTestJSON-472656503</nova:project>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <nova:port uuid="e4363d00-4c59-46bb-ac20-98bde0cd0d4f">
Oct 02 08:28:54 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <system>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <entry name="serial">331bfae3-95e5-4c18-96ca-56597994c6b7</entry>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <entry name="uuid">331bfae3-95e5-4c18-96ca-56597994c6b7</entry>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </system>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <os>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   </os>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <features>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   </features>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/331bfae3-95e5-4c18-96ca-56597994c6b7_disk">
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config">
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       </source>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:28:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:ba:e0:ec"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <target dev="tape4363d00-4c"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/console.log" append="off"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <video>
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </video>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:28:54 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:28:54 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:28:54 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:28:54 compute-0 nova_compute[260603]: </domain>
Oct 02 08:28:54 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.203 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Preparing to wait for external event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.203 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.204 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.205 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.206 2 DEBUG nova.virt.libvirt.vif [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1491987986',display_name='tempest-InstanceActionsNegativeTestJSON-server-1491987986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1491987986',id=49,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226ee42bc28344d49ab6b0000485ab4d',ramdisk_id='',reservation_id='r-w4z7ku8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-472656503',owner_user_name='tempest-InstanceActionsNegativeTestJSON-472656503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:49Z,user_data=None,user_id='2b3acbcd92044b7a9df5bd8748ab7599',uuid=331bfae3-95e5-4c18-96ca-56597994c6b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.207 2 DEBUG nova.network.os_vif_util [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converting VIF {"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.208 2 DEBUG nova.network.os_vif_util [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.209 2 DEBUG os_vif [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.213 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4363d00-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4363d00-4c, col_values=(('external_ids', {'iface-id': 'e4363d00-4c59-46bb-ac20-98bde0cd0d4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:e0:ec', 'vm-uuid': '331bfae3-95e5-4c18-96ca-56597994c6b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:54 compute-0 NetworkManager[45129]: <info>  [1759393734.2222] manager: (tape4363d00-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.227 2 INFO os_vif [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c')
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.234 2 DEBUG nova.compute.manager [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-changed-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.235 2 DEBUG nova.compute.manager [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Refreshing instance network info cache due to event network-changed-e4363d00-4c59-46bb-ac20-98bde0cd0d4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.236 2 DEBUG oslo_concurrency.lockutils [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.236 2 DEBUG oslo_concurrency.lockutils [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.237 2 DEBUG nova.network.neutron [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Refreshing network info cache for port e4363d00-4c59-46bb-ac20-98bde0cd0d4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.299 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.299 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.300 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] No VIF found with MAC fa:16:3e:ba:e0:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.300 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Using config drive
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.324 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:54.446 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:54.447 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.493 2 DEBUG nova.network.neutron [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.509 2 INFO nova.compute.manager [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Took 1.11 seconds to deallocate network for instance.
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.598 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.598 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:54 compute-0 nova_compute[260603]: 2025-10-02 08:28:54.714 2 DEBUG oslo_concurrency.processutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 208 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 2.7 MiB/s wr, 231 op/s
Oct 02 08:28:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140997525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1727180314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:28:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1140997525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.135 2 DEBUG oslo_concurrency.processutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.142 2 DEBUG nova.compute.provider_tree [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.166 2 DEBUG nova.scheduler.client.report [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.187 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Creating config drive at /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.192 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6lyi017 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.227 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.267 2 INFO nova.scheduler.client.report [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Deleted allocations for instance f13ff7c1-d7d3-443e-9f06-69f8c466af30
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.328 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6lyi017" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.349 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.352 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.386 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.498 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.499 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Deleting local config drive /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config because it was imported into RBD.
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.533 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.533 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.534 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.534 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.534 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.536 2 INFO nova.compute.manager [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Terminating instance
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.537 2 DEBUG nova.compute.manager [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:28:55 compute-0 kernel: tape4363d00-4c: entered promiscuous mode
Oct 02 08:28:55 compute-0 systemd-udevd[312751]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:55 compute-0 NetworkManager[45129]: <info>  [1759393735.5593] manager: (tape4363d00-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Oct 02 08:28:55 compute-0 NetworkManager[45129]: <info>  [1759393735.5715] device (tape4363d00-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:28:55 compute-0 NetworkManager[45129]: <info>  [1759393735.5722] device (tape4363d00-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:28:55 compute-0 ovn_controller[152344]: 2025-10-02T08:28:55Z|00412|binding|INFO|Claiming lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f for this chassis.
Oct 02 08:28:55 compute-0 ovn_controller[152344]: 2025-10-02T08:28:55Z|00413|binding|INFO|e4363d00-4c59-46bb-ac20-98bde0cd0d4f: Claiming fa:16:3e:ba:e0:ec 10.100.0.5
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.610 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e0:ec 10.100.0.5'], port_security=['fa:16:3e:ba:e0:ec 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '331bfae3-95e5-4c18-96ca-56597994c6b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226ee42bc28344d49ab6b0000485ab4d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cadeb1ae-9c39-4da5-a2f0-8ecfaebb7175', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20da51ff-5df7-43b0-b08f-68d65fa498f7, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e4363d00-4c59-46bb-ac20-98bde0cd0d4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.612 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e4363d00-4c59-46bb-ac20-98bde0cd0d4f in datapath 6b09ac7d-d406-46ea-b446-c4810bf0f847 bound to our chassis
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.614 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b09ac7d-d406-46ea-b446-c4810bf0f847
Oct 02 08:28:55 compute-0 ovn_controller[152344]: 2025-10-02T08:28:55Z|00414|binding|INFO|Setting lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f ovn-installed in OVS
Oct 02 08:28:55 compute-0 ovn_controller[152344]: 2025-10-02T08:28:55Z|00415|binding|INFO|Setting lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f up in Southbound
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.627 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2af9c8d5-192f-4e1d-adfe-b1ba311617e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.628 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6b09ac7d-d1 in ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.631 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6b09ac7d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dc3ae6-3f6b-42f6-a3ec-2a8ec0daac09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.632 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f6539a-ec12-4ce6-97cc-a0295002b566]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 systemd-machined[214636]: New machine qemu-54-instance-00000031.
Oct 02 08:28:55 compute-0 kernel: tape978c120-9b (unregistering): left promiscuous mode
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.647 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e2dd1344-3f4b-4da6-8ad9-1b8c4aa97b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000031.
Oct 02 08:28:55 compute-0 NetworkManager[45129]: <info>  [1759393735.6546] device (tape978c120-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 ovn_controller[152344]: 2025-10-02T08:28:55Z|00416|binding|INFO|Releasing lport e978c120-9b3a-4a48-b553-c38b05073ad9 from this chassis (sb_readonly=0)
Oct 02 08:28:55 compute-0 ovn_controller[152344]: 2025-10-02T08:28:55Z|00417|binding|INFO|Setting lport e978c120-9b3a-4a48-b553-c38b05073ad9 down in Southbound
Oct 02 08:28:55 compute-0 ovn_controller[152344]: 2025-10-02T08:28:55Z|00418|binding|INFO|Removing iface tape978c120-9b ovn-installed in OVS
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.673 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:a6:e3 10.100.0.10'], port_security=['fa:16:3e:86:a6:e3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '05cc7244-c419-4c24-b995-95ca760837a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2cf2a9c1-4dd0-4e0e-854c-d9add7d70aee 36042788-300a-4f5b-b3a8-970ef5264604 aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e978c120-9b3a-4a48-b553-c38b05073ad9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.675 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b0316b-2246-4a48-8d7b-d578f3576d89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 02 08:28:55 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002b.scope: Consumed 15.563s CPU time.
Oct 02 08:28:55 compute-0 systemd-machined[214636]: Machine qemu-46-instance-0000002b terminated.
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.713 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8e8679-ec1e-4450-bb86-e3fed4b22ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 NetworkManager[45129]: <info>  [1759393735.7203] manager: (tap6b09ac7d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.718 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc79099a-024d-46f3-be60-52892be807fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 systemd-udevd[313024]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.753 2 DEBUG nova.compute.manager [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.753 2 DEBUG oslo_concurrency.lockutils [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.754 2 DEBUG oslo_concurrency.lockutils [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.754 2 DEBUG oslo_concurrency.lockutils [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.754 2 DEBUG nova.compute.manager [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.754 2 WARNING nova.compute.manager [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf for instance with vm_state deleted and task_state None.
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.758 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[919c83a9-45df-4c67-94df-fd8d19a4c93c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 NetworkManager[45129]: <info>  [1759393735.7602] manager: (tape978c120-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.762 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c982de3f-1bd6-4040-8693-d79022ba70d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.784 2 INFO nova.virt.libvirt.driver [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance destroyed successfully.
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.784 2 DEBUG nova.objects.instance [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'resources' on Instance uuid 05cc7244-c419-4c24-b995-95ca760837a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:55 compute-0 NetworkManager[45129]: <info>  [1759393735.7907] device (tap6b09ac7d-d0): carrier: link connected
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.795 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[84a3ba84-0d3e-44c2-8cbf-bcfbcd0333ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.801 2 DEBUG nova.virt.libvirt.vif [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1736454109',display_name='tempest-SecurityGroupsTestJSON-server-1736454109',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1736454109',id=43,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-hb3fhgan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:41Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=05cc7244-c419-4c24-b995-95ca760837a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.802 2 DEBUG nova.network.os_vif_util [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.802 2 DEBUG nova.network.os_vif_util [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.802 2 DEBUG os_vif [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape978c120-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.812 2 INFO os_vif [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b')
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.828 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc3a670-a7f2-40a3-8437-1e64170ab926]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b09ac7d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:3f:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461439, 'reachable_time': 19762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313056, 'error': None, 'target': 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.846 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2f30d6-5d23-4be5-aa88-3fa70ad62cca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:3fa5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461439, 'tstamp': 461439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313072, 'error': None, 'target': 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.864 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbd48c8-c311-4252-957c-68841abf9e48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b09ac7d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:3f:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461439, 'reachable_time': 19762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313073, 'error': None, 'target': 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.894 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5799c030-3448-476b-8028-9c1e898a9e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da1512d0-c11a-4f8a-a26b-644763a43b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.963 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b09ac7d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.963 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.964 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b09ac7d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:55 compute-0 NetworkManager[45129]: <info>  [1759393735.9664] manager: (tap6b09ac7d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Oct 02 08:28:55 compute-0 kernel: tap6b09ac7d-d0: entered promiscuous mode
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.974 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b09ac7d-d0, col_values=(('external_ids', {'iface-id': '90b3a048-b044-4d5f-b423-1da282a65ad4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:55 compute-0 ovn_controller[152344]: 2025-10-02T08:28:55Z|00419|binding|INFO|Releasing lport 90b3a048-b044-4d5f-b423-1da282a65ad4 from this chassis (sb_readonly=0)
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.980 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6b09ac7d-d406-46ea-b446-c4810bf0f847.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6b09ac7d-d406-46ea-b446-c4810bf0f847.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.980 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[684dc2cd-3b8e-42ec-9bd4-c2809021e208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.981 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-6b09ac7d-d406-46ea-b446-c4810bf0f847
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/6b09ac7d-d406-46ea-b446-c4810bf0f847.pid.haproxy
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 6b09ac7d-d406-46ea-b446-c4810bf0f847
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:28:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.982 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'env', 'PROCESS_TAG=haproxy-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6b09ac7d-d406-46ea-b446-c4810bf0f847.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:28:55 compute-0 nova_compute[260603]: 2025-10-02 08:28:55.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:56 compute-0 ceph-mon[74477]: pgmap v1440: 305 pgs: 305 active+clean; 208 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 2.7 MiB/s wr, 231 op/s
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.203 2 INFO nova.virt.libvirt.driver [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Deleting instance files /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4_del
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.204 2 INFO nova.virt.libvirt.driver [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Deletion of /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4_del complete
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.289 2 INFO nova.compute.manager [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.292 2 DEBUG oslo.service.loopingcall [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.292 2 DEBUG nova.compute.manager [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.293 2 DEBUG nova.network.neutron [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:28:56 compute-0 podman[313151]: 2025-10-02 08:28:56.364852294 +0000 UTC m=+0.045589797 container create cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 08:28:56 compute-0 systemd[1]: Started libpod-conmon-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc.scope.
Oct 02 08:28:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d04520e0e8b2823fcf24f311ce97671288cd92dcafceef9e2538eda23350bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:28:56 compute-0 podman[313151]: 2025-10-02 08:28:56.427652966 +0000 UTC m=+0.108390489 container init cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:28:56 compute-0 podman[313151]: 2025-10-02 08:28:56.433479937 +0000 UTC m=+0.114217430 container start cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:28:56 compute-0 podman[313151]: 2025-10-02 08:28:56.341510849 +0000 UTC m=+0.022248372 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:28:56 compute-0 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [NOTICE]   (313170) : New worker (313172) forked
Oct 02 08:28:56 compute-0 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [NOTICE]   (313170) : Loading success.
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.494 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e978c120-9b3a-4a48-b553-c38b05073ad9 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 unbound from our chassis
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.496 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.496 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dfa740-136a-4529-9079-ec2705e7d2cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.497 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 namespace which is not needed anymore
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.570 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393736.5706172, 331bfae3-95e5-4c18-96ca-56597994c6b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.571 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] VM Started (Lifecycle Event)
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.607 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.610 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393736.570777, 331bfae3-95e5-4c18-96ca-56597994c6b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.611 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] VM Paused (Lifecycle Event)
Oct 02 08:28:56 compute-0 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [NOTICE]   (308332) : haproxy version is 2.8.14-c23fe91
Oct 02 08:28:56 compute-0 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [NOTICE]   (308332) : path to executable is /usr/sbin/haproxy
Oct 02 08:28:56 compute-0 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [WARNING]  (308332) : Exiting Master process...
Oct 02 08:28:56 compute-0 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [WARNING]  (308332) : Exiting Master process...
Oct 02 08:28:56 compute-0 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [ALERT]    (308332) : Current worker (308334) exited with code 143 (Terminated)
Oct 02 08:28:56 compute-0 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [WARNING]  (308332) : All workers exited. Exiting... (0)
Oct 02 08:28:56 compute-0 systemd[1]: libpod-685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96.scope: Deactivated successfully.
Oct 02 08:28:56 compute-0 podman[313198]: 2025-10-02 08:28:56.625722811 +0000 UTC m=+0.044626318 container died 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.641 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.644 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96-userdata-shm.mount: Deactivated successfully.
Oct 02 08:28:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4204cebb47775c2a9fbcd07d14a4cbaafc0d93423478e2e45ec500726da6ccd-merged.mount: Deactivated successfully.
Oct 02 08:28:56 compute-0 podman[313198]: 2025-10-02 08:28:56.655468255 +0000 UTC m=+0.074371762 container cleanup 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.668 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:56 compute-0 systemd[1]: libpod-conmon-685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96.scope: Deactivated successfully.
Oct 02 08:28:56 compute-0 podman[313227]: 2025-10-02 08:28:56.741571851 +0000 UTC m=+0.057908430 container remove 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.748 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52832537-23b3-4845-854e-75e2d44cfc86]: (4, ('Thu Oct  2 08:28:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 (685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96)\n685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96\nThu Oct  2 08:28:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 (685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96)\n685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[036df5e2-9da1-4eaf-8611-e0e031ca71ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.750 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:56 compute-0 kernel: tapcce7e8b6-90: left promiscuous mode
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.777 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee4ec7f-3d40-4f8d-abc4-ce2e9e33506b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a78ee65-71e4-493b-8a6a-691f385fd06b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.811 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78b6fc57-22a1-4b45-bc81-ff6df1eb6a0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:56 compute-0 nova_compute[260603]: 2025-10-02 08:28:56.818 2 DEBUG nova.compute.manager [req-00013593-556a-4aa2-9c5b-4e26bcd8c990 req-b03e9c77-a7b0-4217-997f-6dba96024bba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-deleted-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.828 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf97633-1249-4b34-a8cd-43f97fc12144]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453885, 'reachable_time': 23874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313242, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.831 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:28:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.831 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee47c42-46da-4bcb-9223-91a6bd76b1a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:28:56 compute-0 systemd[1]: run-netns-ovnmeta\x2dcce7e8b6\x2d93a5\x2d4ed0\x2db6da\x2dabfd081fb2e9.mount: Deactivated successfully.
Oct 02 08:28:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 208 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.7 MiB/s wr, 181 op/s
Oct 02 08:28:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:28:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 02 08:28:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 02 08:28:57 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.177 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.178 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.202 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.307 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.308 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.319 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.320 2 INFO nova.compute.claims [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:28:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:28:57.449 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.460 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.830 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393722.8281705, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.831 2 INFO nova.compute.manager [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Stopped (Lifecycle Event)
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.859 2 DEBUG nova.compute.manager [None req-8e5eda7b-779b-45c1-b6d7-d40de5090fb8 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.882 2 DEBUG nova.network.neutron [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Updated VIF entry in instance network info cache for port e4363d00-4c59-46bb-ac20-98bde0cd0d4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.883 2 DEBUG nova.network.neutron [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Updating instance_info_cache with network_info: [{"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.904 2 DEBUG oslo_concurrency.lockutils [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:28:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694643309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.937 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.946 2 DEBUG nova.compute.provider_tree [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:57 compute-0 nova_compute[260603]: 2025-10-02 08:28:57.976 2 DEBUG nova.scheduler.client.report [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.045 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.046 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:28:58 compute-0 ceph-mon[74477]: pgmap v1441: 305 pgs: 305 active+clean; 208 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.7 MiB/s wr, 181 op/s
Oct 02 08:28:58 compute-0 ceph-mon[74477]: osdmap e207: 3 total, 3 up, 3 in
Oct 02 08:28:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/694643309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.111 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.112 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.120 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-unplugged-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.121 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.122 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.122 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.122 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] No waiting events found dispatching network-vif-unplugged-e978c120-9b3a-4a48-b553-c38b05073ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.123 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-unplugged-e978c120-9b3a-4a48-b553-c38b05073ad9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.123 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.124 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.124 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.125 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.125 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] No waiting events found dispatching network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.126 2 WARNING nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received unexpected event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 for instance with vm_state active and task_state deleting.
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.128 2 DEBUG nova.network.neutron [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.147 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.166 2 INFO nova.compute.manager [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Took 1.87 seconds to deallocate network for instance.
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.173 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.252 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.253 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.298 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.300 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.301 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Creating image(s)
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.332 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.363 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.396 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.400 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.440 2 DEBUG nova.policy [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.471 2 DEBUG oslo_concurrency.processutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.502 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.504 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.506 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.507 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.538 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.543 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.835 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.908 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:28:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:28:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245333166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.947 2 DEBUG oslo_concurrency.processutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:28:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 88 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 2.7 MiB/s wr, 253 op/s
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.959 2 DEBUG nova.compute.provider_tree [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.972 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.973 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.974 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.974 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.975 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Processing event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.975 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.975 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.976 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.976 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.976 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] No waiting events found dispatching network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.976 2 WARNING nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received unexpected event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f for instance with vm_state building and task_state spawning.
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.977 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-deleted-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:28:58 compute-0 nova_compute[260603]: 2025-10-02 08:28:58.978 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.015 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393738.980666, 331bfae3-95e5-4c18-96ca-56597994c6b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.015 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] VM Resumed (Lifecycle Event)
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.019 2 DEBUG nova.scheduler.client.report [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.023 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.028 2 DEBUG nova.objects.instance [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.031 2 INFO nova.virt.libvirt.driver [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance spawned successfully.
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.032 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.049 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.052 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.057 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.067 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.067 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.068 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.068 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.069 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.069 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:28:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4245333166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.100 2 INFO nova.scheduler.client.report [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Deleted allocations for instance 05cc7244-c419-4c24-b995-95ca760837a4
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.119 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.127 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.127 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Ensure instance console log exists: /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.128 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.128 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.128 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.140 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Successfully created port: f6ec21c0-188d-4c89-8b6b-a64a6d85f131 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.175 2 INFO nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Took 9.75 seconds to spawn the instance on the hypervisor.
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.176 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.187 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.246 2 INFO nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Took 10.83 seconds to build instance.
Oct 02 08:28:59 compute-0 nova_compute[260603]: 2025-10-02 08:28:59.267 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:00 compute-0 ceph-mon[74477]: pgmap v1443: 305 pgs: 305 active+clean; 88 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 2.7 MiB/s wr, 253 op/s
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.138 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.139 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.139 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.140 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.140 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.141 2 INFO nova.compute.manager [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Terminating instance
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.142 2 DEBUG nova.compute.manager [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:29:00 compute-0 kernel: tape4363d00-4c (unregistering): left promiscuous mode
Oct 02 08:29:00 compute-0 NetworkManager[45129]: <info>  [1759393740.1784] device (tape4363d00-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:29:00 compute-0 ovn_controller[152344]: 2025-10-02T08:29:00Z|00420|binding|INFO|Releasing lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f from this chassis (sb_readonly=0)
Oct 02 08:29:00 compute-0 ovn_controller[152344]: 2025-10-02T08:29:00Z|00421|binding|INFO|Setting lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f down in Southbound
Oct 02 08:29:00 compute-0 ovn_controller[152344]: 2025-10-02T08:29:00Z|00422|binding|INFO|Removing iface tape4363d00-4c ovn-installed in OVS
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.194 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e0:ec 10.100.0.5'], port_security=['fa:16:3e:ba:e0:ec 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '331bfae3-95e5-4c18-96ca-56597994c6b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226ee42bc28344d49ab6b0000485ab4d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cadeb1ae-9c39-4da5-a2f0-8ecfaebb7175', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20da51ff-5df7-43b0-b08f-68d65fa498f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e4363d00-4c59-46bb-ac20-98bde0cd0d4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.195 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e4363d00-4c59-46bb-ac20-98bde0cd0d4f in datapath 6b09ac7d-d406-46ea-b446-c4810bf0f847 unbound from our chassis
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.196 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b09ac7d-d406-46ea-b446-c4810bf0f847, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.197 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4825376-ec17-459a-8751-3c45a1a59f58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.197 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 namespace which is not needed anymore
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct 02 08:29:00 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 1.999s CPU time.
Oct 02 08:29:00 compute-0 systemd-machined[214636]: Machine qemu-54-instance-00000031 terminated.
Oct 02 08:29:00 compute-0 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [NOTICE]   (313170) : haproxy version is 2.8.14-c23fe91
Oct 02 08:29:00 compute-0 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [NOTICE]   (313170) : path to executable is /usr/sbin/haproxy
Oct 02 08:29:00 compute-0 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [WARNING]  (313170) : Exiting Master process...
Oct 02 08:29:00 compute-0 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [WARNING]  (313170) : Exiting Master process...
Oct 02 08:29:00 compute-0 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [ALERT]    (313170) : Current worker (313172) exited with code 143 (Terminated)
Oct 02 08:29:00 compute-0 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [WARNING]  (313170) : All workers exited. Exiting... (0)
Oct 02 08:29:00 compute-0 systemd[1]: libpod-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc.scope: Deactivated successfully.
Oct 02 08:29:00 compute-0 conmon[313166]: conmon cd51998dccadb453ae7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc.scope/container/memory.events
Oct 02 08:29:00 compute-0 podman[313479]: 2025-10-02 08:29:00.327222281 +0000 UTC m=+0.050221192 container died cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 08:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc-userdata-shm.mount: Deactivated successfully.
Oct 02 08:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2d04520e0e8b2823fcf24f311ce97671288cd92dcafceef9e2538eda23350bf-merged.mount: Deactivated successfully.
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.377 2 INFO nova.virt.libvirt.driver [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance destroyed successfully.
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.377 2 DEBUG nova.objects.instance [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lazy-loading 'resources' on Instance uuid 331bfae3-95e5-4c18-96ca-56597994c6b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:00 compute-0 podman[313479]: 2025-10-02 08:29:00.377945827 +0000 UTC m=+0.100944718 container cleanup cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:29:00 compute-0 systemd[1]: libpod-conmon-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc.scope: Deactivated successfully.
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.401 2 DEBUG nova.virt.libvirt.vif [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1491987986',display_name='tempest-InstanceActionsNegativeTestJSON-server-1491987986',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1491987986',id=49,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='226ee42bc28344d49ab6b0000485ab4d',ramdisk_id='',reservation_id='r-w4z7ku8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-472656503',owner_user_name='tempest-InstanceActionsNegativeTestJSON-472656503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:59Z,user_data=None,user_id='2b3acbcd92044b7a9df5bd8748ab7599',uuid=331bfae3-95e5-4c18-96ca-56597994c6b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.402 2 DEBUG nova.network.os_vif_util [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converting VIF {"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.403 2 DEBUG nova.network.os_vif_util [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.403 2 DEBUG os_vif [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4363d00-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.414 2 INFO os_vif [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c')
Oct 02 08:29:00 compute-0 podman[313517]: 2025-10-02 08:29:00.467601623 +0000 UTC m=+0.058387095 container remove cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.474 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10875fd3-687a-4e99-a911-efc7131aff42]: (4, ('Thu Oct  2 08:29:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 (cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc)\ncd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc\nThu Oct  2 08:29:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 (cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc)\ncd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78fcdded-ec99-44d5-9b29-b5c1398a1691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.478 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b09ac7d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 kernel: tap6b09ac7d-d0: left promiscuous mode
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.498 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[41314d30-92e7-4978-b1c8-a135a6da2ff2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f69c8d0-321a-4617-93c7-6333702677d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.524 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e51b9cb8-7372-4f5a-8754-d5cc84caa84d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.539 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0586570-cca0-45be-b644-cd67993fe6ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461431, 'reachable_time': 15981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313551, 'error': None, 'target': 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d6b09ac7d\x2dd406\x2d46ea\x2db446\x2dc4810bf0f847.mount: Deactivated successfully.
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.542 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:29:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.542 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e8159a-0d47-40b4-9e82-99540346a73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.719 2 INFO nova.virt.libvirt.driver [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Deleting instance files /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7_del
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.721 2 INFO nova.virt.libvirt.driver [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Deletion of /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7_del complete
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.781 2 INFO nova.compute.manager [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Took 0.64 seconds to destroy the instance on the hypervisor.
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.782 2 DEBUG oslo.service.loopingcall [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.782 2 DEBUG nova.compute.manager [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.782 2 DEBUG nova.network.neutron [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:29:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 88 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 2.4 MiB/s wr, 224 op/s
Oct 02 08:29:00 compute-0 nova_compute[260603]: 2025-10-02 08:29:00.983 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Successfully updated port: f6ec21c0-188d-4c89-8b6b-a64a6d85f131 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.000 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.001 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.001 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.244 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.356 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-unplugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.358 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.358 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.359 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.359 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] No waiting events found dispatching network-vif-unplugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.360 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-unplugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.360 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.361 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.361 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.361 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.362 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] No waiting events found dispatching network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.362 2 WARNING nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received unexpected event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f for instance with vm_state active and task_state deleting.
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.362 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-changed-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.363 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Refreshing instance network info cache due to event network-changed-f6ec21c0-188d-4c89-8b6b-a64a6d85f131. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:01 compute-0 nova_compute[260603]: 2025-10-02 08:29:01.363 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.070 2 DEBUG nova.network.neutron [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.097 2 INFO nova.compute.manager [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Took 1.32 seconds to deallocate network for instance.
Oct 02 08:29:02 compute-0 ceph-mon[74477]: pgmap v1444: 305 pgs: 305 active+clean; 88 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 2.4 MiB/s wr, 224 op/s
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.161 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.162 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.222 2 DEBUG oslo_concurrency.processutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.500 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393727.4986176, 247e32e5-5f07-4db4-9e6f-dcfade745228 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.500 2 INFO nova.compute.manager [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] VM Stopped (Lifecycle Event)
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.524 2 DEBUG nova.compute.manager [None req-7fecf3d3-42cb-42fa-9a4f-9d25ac9985bf - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722373738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.655 2 DEBUG oslo_concurrency.processutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.660 2 DEBUG nova.compute.provider_tree [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.692 2 DEBUG nova.scheduler.client.report [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.720 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.757 2 INFO nova.scheduler.client.report [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Deleted allocations for instance 331bfae3-95e5-4c18-96ca-56597994c6b7
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.817 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Updating instance_info_cache with network_info: [{"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.855 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.856 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance network_info: |[{"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.856 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.856 2 DEBUG nova.network.neutron [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Refreshing network info cache for port f6ec21c0-188d-4c89-8b6b-a64a6d85f131 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.858 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start _get_guest_xml network_info=[{"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.862 2 WARNING nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.867 2 DEBUG nova.virt.libvirt.host [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.868 2 DEBUG nova.virt.libvirt.host [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.871 2 DEBUG nova.virt.libvirt.host [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.872 2 DEBUG nova.virt.libvirt.host [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.872 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.873 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.873 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.874 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.874 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.874 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.875 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.875 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.875 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.876 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.876 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.876 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.882 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:02 compute-0 nova_compute[260603]: 2025-10-02 08:29:02.932 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 98 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 210 op/s
Oct 02 08:29:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1722373738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3733634119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.421 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.443 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.447 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.889 2 DEBUG nova.compute.manager [req-7ffd1886-6356-45e9-ba3b-46cdfeb63700 req-e8c367d7-39a3-49e0-93eb-b6fc12bddbad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-deleted-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940109715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.946 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.947 2 DEBUG nova.virt.libvirt.vif [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-17603736',display_name='tempest-DeleteServersTestJSON-server-17603736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-17603736',id=50,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-jo24t19c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-pr
oject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:58Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=640bb5bf-5ae3-455f-82e7-3e6d647a0fbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.948 2 DEBUG nova.network.os_vif_util [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.949 2 DEBUG nova.network.os_vif_util [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.950 2 DEBUG nova.objects.instance [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.968 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <uuid>640bb5bf-5ae3-455f-82e7-3e6d647a0fbf</uuid>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <name>instance-00000032</name>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <nova:name>tempest-DeleteServersTestJSON-server-17603736</nova:name>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:02</nova:creationTime>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <nova:port uuid="f6ec21c0-188d-4c89-8b6b-a64a6d85f131">
Oct 02 08:29:03 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <entry name="serial">640bb5bf-5ae3-455f-82e7-3e6d647a0fbf</entry>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <entry name="uuid">640bb5bf-5ae3-455f-82e7-3e6d647a0fbf</entry>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk">
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config">
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:03 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:5f:37:2c"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <target dev="tapf6ec21c0-18"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/console.log" append="off"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:03 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:03 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:03 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:03 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:03 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.970 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Preparing to wait for external event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.971 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.972 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.973 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.975 2 DEBUG nova.virt.libvirt.vif [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-17603736',display_name='tempest-DeleteServersTestJSON-server-17603736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-17603736',id=50,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-jo24t19c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-81
2177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:58Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=640bb5bf-5ae3-455f-82e7-3e6d647a0fbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.975 2 DEBUG nova.network.os_vif_util [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.977 2 DEBUG nova.network.os_vif_util [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.978 2 DEBUG os_vif [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.981 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6ec21c0-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6ec21c0-18, col_values=(('external_ids', {'iface-id': 'f6ec21c0-188d-4c89-8b6b-a64a6d85f131', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:37:2c', 'vm-uuid': '640bb5bf-5ae3-455f-82e7-3e6d647a0fbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:03 compute-0 NetworkManager[45129]: <info>  [1759393743.9919] manager: (tapf6ec21c0-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:03 compute-0 nova_compute[260603]: 2025-10-02 08:29:03.998 2 INFO os_vif [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18')
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.076 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.077 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.077 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:5f:37:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.078 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Using config drive
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.103 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:04 compute-0 ceph-mon[74477]: pgmap v1445: 305 pgs: 305 active+clean; 98 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 210 op/s
Oct 02 08:29:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3733634119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1940109715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.462 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Creating config drive at /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.467 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7jsjyrfy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.606 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7jsjyrfy" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.646 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.651 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.828 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.831 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Deleting local config drive /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config because it was imported into RBD.
Oct 02 08:29:04 compute-0 kernel: tapf6ec21c0-18: entered promiscuous mode
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:04 compute-0 ovn_controller[152344]: 2025-10-02T08:29:04Z|00423|binding|INFO|Claiming lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 for this chassis.
Oct 02 08:29:04 compute-0 ovn_controller[152344]: 2025-10-02T08:29:04Z|00424|binding|INFO|f6ec21c0-188d-4c89-8b6b-a64a6d85f131: Claiming fa:16:3e:5f:37:2c 10.100.0.12
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.912 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:37:2c 10.100.0.12'], port_security=['fa:16:3e:5f:37:2c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '640bb5bf-5ae3-455f-82e7-3e6d647a0fbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f6ec21c0-188d-4c89-8b6b-a64a6d85f131) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:04 compute-0 NetworkManager[45129]: <info>  [1759393744.9136] manager: (tapf6ec21c0-18): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.914 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f6ec21c0-188d-4c89-8b6b-a64a6d85f131 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.916 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:29:04 compute-0 ovn_controller[152344]: 2025-10-02T08:29:04Z|00425|binding|INFO|Setting lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 ovn-installed in OVS
Oct 02 08:29:04 compute-0 ovn_controller[152344]: 2025-10-02T08:29:04Z|00426|binding|INFO|Setting lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 up in Southbound
Oct 02 08:29:04 compute-0 nova_compute[260603]: 2025-10-02 08:29:04.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.936 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94325111-df38-4853-bc93-088abbe6dced]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.936 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.939 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3490c72-d15b-45ca-a645-0201fed9a9df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eaea69aa-83c5-4a8c-8de7-c29cbd2fde5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:04 compute-0 systemd-machined[214636]: New machine qemu-55-instance-00000032.
Oct 02 08:29:04 compute-0 systemd-udevd[313711]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 88 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Oct 02 08:29:04 compute-0 NetworkManager[45129]: <info>  [1759393744.9560] device (tapf6ec21c0-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:04 compute-0 NetworkManager[45129]: <info>  [1759393744.9575] device (tapf6ec21c0-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:04 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000032.
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.965 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[df5bf083-e58f-40b8-98b2-7ecc536d0dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.987 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da90d58c-efc2-46ae-9753-272811c1bc40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.028 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[90d024cb-4f48-4582-b6d7-26d0b1ffad8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.037 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[131208ae-5100-499c-8a03-2b7825d32564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 systemd-udevd[313714]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:05 compute-0 NetworkManager[45129]: <info>  [1759393745.0395] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/185)
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.087 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3e27de11-1fa8-4278-9baa-c80d0f5616f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.093 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b29bf1a-9f10-4199-9732-143e50d68f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 NetworkManager[45129]: <info>  [1759393745.1211] device (tapa72ac8c9-10): carrier: link connected
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.132 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0c206b-b703-4dd3-a8c8-2f13d87f5547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.170 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c65ae93e-4e29-4a68-96c3-e421ba9cf2be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462372, 'reachable_time': 18026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313743, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.196 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e52a0c06-aab0-46a9-a57b-e35176cde302]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462372, 'tstamp': 462372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313744, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.223 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[35523270-628e-4b76-9dc9-eac6f3af613e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462372, 'reachable_time': 18026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313745, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6631673-82ea-4c3f-b7be-7e05891e48e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.259 2 DEBUG nova.compute.manager [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.260 2 DEBUG oslo_concurrency.lockutils [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.262 2 DEBUG oslo_concurrency.lockutils [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.262 2 DEBUG oslo_concurrency.lockutils [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.263 2 DEBUG nova.compute.manager [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Processing event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.354 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd75ab5a-77f7-4c7d-8cbb-582d5bc171e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.356 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.357 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.358 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:05 compute-0 NetworkManager[45129]: <info>  [1759393745.3625] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Oct 02 08:29:05 compute-0 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.369 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:05 compute-0 ovn_controller[152344]: 2025-10-02T08:29:05Z|00427|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.375 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0aecff28-da39-4e25-a3ca-1f8b41ece230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.385 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:29:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.386 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.512 2 DEBUG nova.network.neutron [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Updated VIF entry in instance network info cache for port f6ec21c0-188d-4c89-8b6b-a64a6d85f131. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.513 2 DEBUG nova.network.neutron [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Updating instance_info_cache with network_info: [{"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:05 compute-0 ovn_controller[152344]: 2025-10-02T08:29:05Z|00428|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.534 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:05 compute-0 ovn_controller[152344]: 2025-10-02T08:29:05Z|00429|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:29:05 compute-0 nova_compute[260603]: 2025-10-02 08:29:05.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:05 compute-0 podman[313817]: 2025-10-02 08:29:05.858620728 +0000 UTC m=+0.098286476 container create e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:29:05 compute-0 podman[313817]: 2025-10-02 08:29:05.790570313 +0000 UTC m=+0.030236101 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:29:05 compute-0 systemd[1]: Started libpod-conmon-e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f.scope.
Oct 02 08:29:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7366a1c716223f76f05604a8db394b41e4fc284cf97ee714770130aac3014769/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:05 compute-0 podman[313817]: 2025-10-02 08:29:05.967050877 +0000 UTC m=+0.206716685 container init e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:29:05 compute-0 podman[313817]: 2025-10-02 08:29:05.972795196 +0000 UTC m=+0.212460964 container start e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:29:05 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [NOTICE]   (313834) : New worker (313836) forked
Oct 02 08:29:05 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [NOTICE]   (313834) : Loading success.
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.044 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393746.0439987, 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.045 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] VM Started (Lifecycle Event)
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.049 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.054 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.058 2 INFO nova.virt.libvirt.driver [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance spawned successfully.
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.059 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.076 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.085 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.092 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.093 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.093 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.094 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.095 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.096 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.107 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.108 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393746.0443287, 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.109 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] VM Paused (Lifecycle Event)
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.131 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.137 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393746.053895, 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:06 compute-0 ceph-mon[74477]: pgmap v1446: 305 pgs: 305 active+clean; 88 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.137 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] VM Resumed (Lifecycle Event)
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.155 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.163 2 INFO nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Took 7.86 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.164 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.165 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.191 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.220 2 INFO nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Took 8.94 seconds to build instance.
Oct 02 08:29:06 compute-0 nova_compute[260603]: 2025-10-02 08:29:06.235 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 88 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Oct 02 08:29:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.380 2 DEBUG nova.compute.manager [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.381 2 DEBUG oslo_concurrency.lockutils [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.383 2 DEBUG oslo_concurrency.lockutils [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.383 2 DEBUG oslo_concurrency.lockutils [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.384 2 DEBUG nova.compute.manager [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] No waiting events found dispatching network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.384 2 WARNING nova.compute.manager [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received unexpected event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 for instance with vm_state active and task_state None.
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.846 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393732.8450491, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.846 2 INFO nova.compute.manager [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] VM Stopped (Lifecycle Event)
Oct 02 08:29:07 compute-0 nova_compute[260603]: 2025-10-02 08:29:07.879 2 DEBUG nova.compute.manager [None req-db802f98-76db-495f-83d9-912f59dec966 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:08 compute-0 ceph-mon[74477]: pgmap v1447: 305 pgs: 305 active+clean; 88 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Oct 02 08:29:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 237 op/s
Oct 02 08:29:09 compute-0 nova_compute[260603]: 2025-10-02 08:29:09.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:09 compute-0 nova_compute[260603]: 2025-10-02 08:29:09.157 2 DEBUG oslo_concurrency.lockutils [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:09 compute-0 nova_compute[260603]: 2025-10-02 08:29:09.158 2 DEBUG oslo_concurrency.lockutils [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:09 compute-0 nova_compute[260603]: 2025-10-02 08:29:09.159 2 DEBUG nova.compute.manager [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:09 compute-0 nova_compute[260603]: 2025-10-02 08:29:09.164 2 DEBUG nova.compute.manager [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:29:09 compute-0 nova_compute[260603]: 2025-10-02 08:29:09.166 2 DEBUG nova.objects.instance [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'flavor' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:09 compute-0 nova_compute[260603]: 2025-10-02 08:29:09.190 2 DEBUG nova.virt.libvirt.driver [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:29:10 compute-0 ceph-mon[74477]: pgmap v1448: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 237 op/s
Oct 02 08:29:10 compute-0 nova_compute[260603]: 2025-10-02 08:29:10.782 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393735.7808106, 05cc7244-c419-4c24-b995-95ca760837a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:10 compute-0 nova_compute[260603]: 2025-10-02 08:29:10.783 2 INFO nova.compute.manager [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] VM Stopped (Lifecycle Event)
Oct 02 08:29:10 compute-0 nova_compute[260603]: 2025-10-02 08:29:10.812 2 DEBUG nova.compute.manager [None req-ea9283d0-0150-4dd2-aeec-b442a86b2a39 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct 02 08:29:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:12 compute-0 ceph-mon[74477]: pgmap v1449: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct 02 08:29:12 compute-0 nova_compute[260603]: 2025-10-02 08:29:12.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct 02 08:29:13 compute-0 podman[313846]: 2025-10-02 08:29:13.057504824 +0000 UTC m=+0.108277376 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:29:13 compute-0 podman[313845]: 2025-10-02 08:29:13.108973373 +0000 UTC m=+0.165042139 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 08:29:13 compute-0 nova_compute[260603]: 2025-10-02 08:29:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:13 compute-0 nova_compute[260603]: 2025-10-02 08:29:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:29:14 compute-0 nova_compute[260603]: 2025-10-02 08:29:14.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:14 compute-0 ceph-mon[74477]: pgmap v1450: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct 02 08:29:14 compute-0 nova_compute[260603]: 2025-10-02 08:29:14.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 422 KiB/s wr, 113 op/s
Oct 02 08:29:15 compute-0 nova_compute[260603]: 2025-10-02 08:29:15.375 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393740.3743992, 331bfae3-95e5-4c18-96ca-56597994c6b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:15 compute-0 nova_compute[260603]: 2025-10-02 08:29:15.376 2 INFO nova.compute.manager [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] VM Stopped (Lifecycle Event)
Oct 02 08:29:15 compute-0 nova_compute[260603]: 2025-10-02 08:29:15.397 2 DEBUG nova.compute.manager [None req-72bcb7af-3afd-4534-ad47-fdb44da7f3e7 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:16 compute-0 ceph-mon[74477]: pgmap v1451: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 422 KiB/s wr, 113 op/s
Oct 02 08:29:16 compute-0 nova_compute[260603]: 2025-10-02 08:29:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:16 compute-0 nova_compute[260603]: 2025-10-02 08:29:16.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:29:16 compute-0 nova_compute[260603]: 2025-10-02 08:29:16.586 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:29:16 compute-0 nova_compute[260603]: 2025-10-02 08:29:16.588 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:29:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:17 compute-0 ovn_controller[152344]: 2025-10-02T08:29:17Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:37:2c 10.100.0.12
Oct 02 08:29:17 compute-0 ovn_controller[152344]: 2025-10-02T08:29:17Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:37:2c 10.100.0.12
Oct 02 08:29:17 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 02 08:29:17 compute-0 nova_compute[260603]: 2025-10-02 08:29:17.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:18 compute-0 ceph-mon[74477]: pgmap v1452: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:29:18 compute-0 nova_compute[260603]: 2025-10-02 08:29:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:18 compute-0 nova_compute[260603]: 2025-10-02 08:29:18.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 115 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 02 08:29:18 compute-0 nova_compute[260603]: 2025-10-02 08:29:18.960 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 08:29:18 compute-0 nova_compute[260603]: 2025-10-02 08:29:18.961 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:19 compute-0 nova_compute[260603]: 2025-10-02 08:29:19.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:19 compute-0 podman[313891]: 2025-10-02 08:29:19.097206029 +0000 UTC m=+0.146202936 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 08:29:19 compute-0 nova_compute[260603]: 2025-10-02 08:29:19.238 2 DEBUG nova.virt.libvirt.driver [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:29:19 compute-0 nova_compute[260603]: 2025-10-02 08:29:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:19 compute-0 nova_compute[260603]: 2025-10-02 08:29:19.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:19 compute-0 nova_compute[260603]: 2025-10-02 08:29:19.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:19 compute-0 nova_compute[260603]: 2025-10-02 08:29:19.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:19 compute-0 nova_compute[260603]: 2025-10-02 08:29:19.561 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:29:19 compute-0 nova_compute[260603]: 2025-10-02 08:29:19.562 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635095801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.014 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:20 compute-0 ceph-mon[74477]: pgmap v1453: 305 pgs: 305 active+clean; 115 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 02 08:29:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/635095801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.287 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.287 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.511 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.512 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4075MB free_disk=59.94316864013672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.512 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.512 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.656 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.682 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.683 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.699 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.728 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 08:29:20 compute-0 nova_compute[260603]: 2025-10-02 08:29:20.773 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 115 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.133 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.134 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.166 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.204799) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761204865, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2382, "num_deletes": 512, "total_data_size": 3276056, "memory_usage": 3348240, "flush_reason": "Manual Compaction"}
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761224310, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3218090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28078, "largest_seqno": 30459, "table_properties": {"data_size": 3207875, "index_size": 6005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 24708, "raw_average_key_size": 19, "raw_value_size": 3185100, "raw_average_value_size": 2546, "num_data_blocks": 265, "num_entries": 1251, "num_filter_entries": 1251, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393556, "oldest_key_time": 1759393556, "file_creation_time": 1759393761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 19567 microseconds, and 12506 cpu microseconds.
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.224373) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3218090 bytes OK
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.224402) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.226936) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.226962) EVENT_LOG_v1 {"time_micros": 1759393761226953, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.226987) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3264990, prev total WAL file size 3264990, number of live WAL files 2.
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.228398) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3142KB)], [62(8447KB)]
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761228461, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11868188, "oldest_snapshot_seqno": -1}
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.264 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364771817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5506 keys, 10252855 bytes, temperature: kUnknown
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761275421, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10252855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10211747, "index_size": 26214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 138090, "raw_average_key_size": 25, "raw_value_size": 10108555, "raw_average_value_size": 1835, "num_data_blocks": 1073, "num_entries": 5506, "num_filter_entries": 5506, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.275716) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10252855 bytes
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.277138) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.2 rd, 217.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6547, records dropped: 1041 output_compression: NoCompression
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.277166) EVENT_LOG_v1 {"time_micros": 1759393761277153, "job": 34, "event": "compaction_finished", "compaction_time_micros": 47051, "compaction_time_cpu_micros": 28159, "output_level": 6, "num_output_files": 1, "total_output_size": 10252855, "num_input_records": 6547, "num_output_records": 5506, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761278218, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761280830, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.228316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.283 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.290 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.308 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.333 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.333 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.334 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.343 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.344 2 INFO nova.compute.claims [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.471 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:21 compute-0 kernel: tapf6ec21c0-18 (unregistering): left promiscuous mode
Oct 02 08:29:21 compute-0 NetworkManager[45129]: <info>  [1759393761.5634] device (tapf6ec21c0-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:29:21 compute-0 ovn_controller[152344]: 2025-10-02T08:29:21Z|00430|binding|INFO|Releasing lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 from this chassis (sb_readonly=0)
Oct 02 08:29:21 compute-0 ovn_controller[152344]: 2025-10-02T08:29:21Z|00431|binding|INFO|Setting lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 down in Southbound
Oct 02 08:29:21 compute-0 ovn_controller[152344]: 2025-10-02T08:29:21Z|00432|binding|INFO|Removing iface tapf6ec21c0-18 ovn-installed in OVS
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.580 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:37:2c 10.100.0.12'], port_security=['fa:16:3e:5f:37:2c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '640bb5bf-5ae3-455f-82e7-3e6d647a0fbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f6ec21c0-188d-4c89-8b6b-a64a6d85f131) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.583 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f6ec21c0-188d-4c89-8b6b-a64a6d85f131 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.585 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.586 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72218843-8cdb-4b80-9e9f-a417af50e71e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.589 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:21 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 02 08:29:21 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Consumed 12.628s CPU time.
Oct 02 08:29:21 compute-0 systemd-machined[214636]: Machine qemu-55-instance-00000032 terminated.
Oct 02 08:29:21 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [NOTICE]   (313834) : haproxy version is 2.8.14-c23fe91
Oct 02 08:29:21 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [NOTICE]   (313834) : path to executable is /usr/sbin/haproxy
Oct 02 08:29:21 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [WARNING]  (313834) : Exiting Master process...
Oct 02 08:29:21 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [ALERT]    (313834) : Current worker (313836) exited with code 143 (Terminated)
Oct 02 08:29:21 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [WARNING]  (313834) : All workers exited. Exiting... (0)
Oct 02 08:29:21 compute-0 systemd[1]: libpod-e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f.scope: Deactivated successfully.
Oct 02 08:29:21 compute-0 podman[313999]: 2025-10-02 08:29:21.769369413 +0000 UTC m=+0.065094084 container died e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 08:29:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7366a1c716223f76f05604a8db394b41e4fc284cf97ee714770130aac3014769-merged.mount: Deactivated successfully.
Oct 02 08:29:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f-userdata-shm.mount: Deactivated successfully.
Oct 02 08:29:21 compute-0 podman[313999]: 2025-10-02 08:29:21.809739648 +0000 UTC m=+0.105464249 container cleanup e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:29:21 compute-0 systemd[1]: libpod-conmon-e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f.scope: Deactivated successfully.
Oct 02 08:29:21 compute-0 podman[314034]: 2025-10-02 08:29:21.883788779 +0000 UTC m=+0.040082487 container remove e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.889 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[134e0441-a6f1-4555-96e8-9ca1377dfe85]: (4, ('Thu Oct  2 08:29:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f)\ne2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f\nThu Oct  2 08:29:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f)\ne2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.892 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0b810f-655f-40d1-89dd-a6b13a4d1fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.893 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:21 compute-0 kernel: tapa72ac8c9-10: left promiscuous mode
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.926 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ebef7566-83f7-439b-8bbf-e2e2298397f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976252696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.954 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.960 2 DEBUG nova.compute.provider_tree [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ed2e13-4459-48e9-bfae-f0bfb39832ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.964 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a09c6a56-f296-4376-975e-d88f7ac3bfde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:21 compute-0 nova_compute[260603]: 2025-10-02 08:29:21.979 2 DEBUG nova.scheduler.client.report [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.980 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b960da6a-a273-4d04-95c1-5c96656975ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462361, 'reachable_time': 25182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314057, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.983 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:29:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.983 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7f8b92-b220-4168-93de-a6ee1ed4dd7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:21 compute-0 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.004 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.005 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.050 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.050 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:29:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:29:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/70845979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:29:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:29:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/70845979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.071 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:29:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.098 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:29:22 compute-0 ceph-mon[74477]: pgmap v1454: 305 pgs: 305 active+clean; 115 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Oct 02 08:29:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1364771817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/976252696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/70845979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:29:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/70845979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.216 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.218 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.218 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Creating image(s)
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.251 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.283 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.312 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.317 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.372 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.373 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.377 2 INFO nova.virt.libvirt.driver [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance shutdown successfully after 13 seconds.
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.382 2 INFO nova.virt.libvirt.driver [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance destroyed successfully.
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.383 2 DEBUG nova.objects.instance [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'numa_topology' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.413 2 DEBUG nova.policy [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1d66932c11043b5b90140cd2dde53d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.420 2 DEBUG nova.compute.manager [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.426 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.428 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.430 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.431 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.468 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.473 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9924ce7f-b701-4560-b2c5-67f673b45807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.654 2 DEBUG oslo_concurrency.lockutils [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.655 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.656 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.656 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] During sync_power_state the instance has a pending task (powering-off). Skip.
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.656 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.799 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9924ce7f-b701-4560-b2c5-67f673b45807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:22 compute-0 nova_compute[260603]: 2025-10-02 08:29:22.873 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] resizing rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:29:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.008 2 DEBUG nova.objects.instance [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'migration_context' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:23 compute-0 podman[314206]: 2025-10-02 08:29:23.019196153 +0000 UTC m=+0.082854755 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.025 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.026 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Ensure instance console log exists: /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.027 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.028 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.028 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.087 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.088 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.114 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.201 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.202 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.213 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.213 2 INFO nova.compute.claims [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.454 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.729 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Successfully created port: bf9cdb7f-4cda-403b-b27e-12385e93db02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:29:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754519048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.904 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.913 2 DEBUG nova.compute.provider_tree [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.936 2 DEBUG nova.scheduler.client.report [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.964 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:23 compute-0 nova_compute[260603]: 2025-10-02 08:29:23.965 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.013 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.014 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.043 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.070 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.166 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.168 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.169 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Creating image(s)
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.200 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:24 compute-0 ceph-mon[74477]: pgmap v1455: 305 pgs: 305 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 08:29:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/754519048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.233 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.259 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.264 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.365 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.366 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.366 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.367 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.389 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.393 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.712 2 DEBUG nova.policy [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1d66932c11043b5b90140cd2dde53d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.726 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.826 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] resizing rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.947 2 DEBUG nova.objects.instance [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'migration_context' on Instance uuid 797fde07-e88a-4d6e-a1a3-25e22c66097c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 137 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.966 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.967 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Ensure instance console log exists: /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.967 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.968 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:24 compute-0 nova_compute[260603]: 2025-10-02 08:29:24.968 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.010 2 DEBUG nova.compute.manager [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-unplugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.011 2 DEBUG oslo_concurrency.lockutils [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.011 2 DEBUG oslo_concurrency.lockutils [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.012 2 DEBUG oslo_concurrency.lockutils [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.012 2 DEBUG nova.compute.manager [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] No waiting events found dispatching network-vif-unplugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.012 2 WARNING nova.compute.manager [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received unexpected event network-vif-unplugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 for instance with vm_state stopped and task_state None.
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.598 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.599 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.599 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.600 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.601 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.603 2 INFO nova.compute.manager [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Terminating instance
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.606 2 DEBUG nova.compute.manager [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.618 2 INFO nova.virt.libvirt.driver [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance destroyed successfully.
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.619 2 DEBUG nova.objects.instance [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.641 2 DEBUG nova.virt.libvirt.vif [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-17603736',display_name='tempest-DeleteServersTestJSON-server-17603736',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-17603736',id=50,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-jo24t19c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:22Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=640bb5bf-5ae3-455f-82e7-3e6d647a0fbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.642 2 DEBUG nova.network.os_vif_util [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.643 2 DEBUG nova.network.os_vif_util [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.644 2 DEBUG os_vif [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6ec21c0-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:25 compute-0 nova_compute[260603]: 2025-10-02 08:29:25.656 2 INFO os_vif [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18')
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.003 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.003 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.045 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.163 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.165 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.174 2 INFO nova.virt.libvirt.driver [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Deleting instance files /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_del
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.175 2 INFO nova.virt.libvirt.driver [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Deletion of /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_del complete
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.184 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.185 2 INFO nova.compute.claims [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:29:26 compute-0 ceph-mon[74477]: pgmap v1456: 305 pgs: 305 active+clean; 137 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.270 2 INFO nova.compute.manager [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Took 0.66 seconds to destroy the instance on the hypervisor.
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.271 2 DEBUG oslo.service.loopingcall [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.271 2 DEBUG nova.compute.manager [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.272 2 DEBUG nova.network.neutron [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.326 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Successfully updated port: bf9cdb7f-4cda-403b-b27e-12385e93db02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.348 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.348 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquired lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.348 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.407 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:26 compute-0 sudo[314452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:26 compute-0 sudo[314452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:26 compute-0 sudo[314452]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.452 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Successfully created port: 29a765f0-6b44-4aad-9974-a0845658d5f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.472 2 DEBUG nova.compute.manager [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-changed-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.473 2 DEBUG nova.compute.manager [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Refreshing instance network info cache due to event network-changed-bf9cdb7f-4cda-403b-b27e-12385e93db02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.473 2 DEBUG oslo_concurrency.lockutils [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:26 compute-0 sudo[314478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:29:26 compute-0 sudo[314478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:26 compute-0 sudo[314478]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.522 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:26 compute-0 sudo[314503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:26 compute-0 sudo[314503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:26 compute-0 sudo[314503]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:26 compute-0 sudo[314547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 08:29:26 compute-0 sudo[314547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.616 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.616 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.634 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.659 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.660 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.693 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.715 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.715 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.742 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.753 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.801 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:26 compute-0 sudo[314547]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023867281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.824 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:26 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:29:26 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.836 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.841 2 DEBUG nova.compute.provider_tree [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.856 2 DEBUG nova.scheduler.client.report [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.876 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.877 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.879 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.886 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.886 2 INFO nova.compute.claims [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:29:26 compute-0 sudo[314594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:26 compute-0 sudo[314594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:26 compute-0 sudo[314594]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.941 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.941 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:29:26 compute-0 sudo[314619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:29:26 compute-0 sudo[314619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 137 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct 02 08:29:26 compute-0 sudo[314619]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.969 2 DEBUG nova.network.neutron [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:26 compute-0 nova_compute[260603]: 2025-10-02 08:29:26.980 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.004 2 INFO nova.compute.manager [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Took 0.73 seconds to deallocate network for instance.
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.009 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:29:27 compute-0 sudo[314644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:27 compute-0 sudo[314644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:27 compute-0 sudo[314644]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.066 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:27 compute-0 sudo[314669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:29:27 compute-0 sudo[314669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.109 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.110 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.111 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Creating image(s)
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.130 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.152 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.175 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.178 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.213 2 DEBUG nova.policy [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1d66932c11043b5b90140cd2dde53d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.217 2 DEBUG nova.compute.manager [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.217 2 DEBUG oslo_concurrency.lockutils [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.218 2 DEBUG oslo_concurrency.lockutils [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.218 2 DEBUG oslo_concurrency.lockutils [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.218 2 DEBUG nova.compute.manager [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] No waiting events found dispatching network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.218 2 WARNING nova.compute.manager [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received unexpected event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 for instance with vm_state deleted and task_state None.
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.218 2 DEBUG nova.compute.manager [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-deleted-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3023867281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:27 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:27 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.268 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.269 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.270 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.270 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.307 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.313 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.359 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:27 compute-0 sudo[314669]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.624 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.675 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] resizing rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:29:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:29:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:29:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:29:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:29:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:29:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 715c637f-14fa-413c-8aab-2b6750112854 does not exist
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d85788bb-6dda-4eb0-a595-c34742f8858d does not exist
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 174529aa-a377-4cad-9796-ba5426119614 does not exist
Oct 02 08:29:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:29:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:29:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:29:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:29:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:29:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:27 compute-0 sudo[314893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:27 compute-0 sudo[314893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:27 compute-0 sudo[314893]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.770 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Successfully created port: 257d115c-e196-4921-a9d3-942604825516 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.781 2 DEBUG nova.objects.instance [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'migration_context' on Instance uuid d15c7c6a-e6a1-4538-9db0-ee1aef10f38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.800 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.800 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Ensure instance console log exists: /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.801 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.801 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.801 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996189806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:27 compute-0 sudo[314936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:29:27 compute-0 sudo[314936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:27 compute-0 sudo[314936]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.842 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.847 2 DEBUG nova.compute.provider_tree [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.853 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.870 2 DEBUG nova.scheduler.client.report [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.874 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Releasing lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.875 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance network_info: |[{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.875 2 DEBUG oslo_concurrency.lockutils [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.875 2 DEBUG nova.network.neutron [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Refreshing network info cache for port bf9cdb7f-4cda-403b-b27e-12385e93db02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.878 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start _get_guest_xml network_info=[{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.882 2 WARNING nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:27 compute-0 sudo[314963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:27 compute-0 sudo[314963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:27 compute-0 sudo[314963]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.887 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.888 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.890 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.892 2 DEBUG nova.virt.libvirt.host [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.892 2 DEBUG nova.virt.libvirt.host [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.896 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.897 2 INFO nova.compute.claims [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.899 2 DEBUG nova.virt.libvirt.host [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.900 2 DEBUG nova.virt.libvirt.host [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.900 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.900 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.901 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.901 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.901 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.901 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.905 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:29:27
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'images', 'vms']
Oct 02 08:29:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:29:27 compute-0 sudo[314988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:29:27 compute-0 sudo[314988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.987 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:29:27 compute-0 nova_compute[260603]: 2025-10-02 08:29:27.988 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.017 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.035 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.159 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.160 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.161 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Creating image(s)
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.182 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.204 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.225 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:28 compute-0 ceph-mon[74477]: pgmap v1457: 305 pgs: 305 active+clean; 137 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct 02 08:29:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:29:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:29:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:29:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:29:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:29:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1996189806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.230 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:28 compute-0 podman[315103]: 2025-10-02 08:29:28.268702509 +0000 UTC m=+0.054666259 container create d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.298 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:28 compute-0 systemd[1]: Started libpod-conmon-d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7.scope.
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.328 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.329 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.329 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.330 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:28 compute-0 podman[315103]: 2025-10-02 08:29:28.244635772 +0000 UTC m=+0.030599542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:29:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.359 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272301993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.367 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 73e8c7a5-4621-4f07-824a-b81ea314a672_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:28 compute-0 podman[315103]: 2025-10-02 08:29:28.374328722 +0000 UTC m=+0.160292512 container init d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:29:28 compute-0 podman[315103]: 2025-10-02 08:29:28.387856353 +0000 UTC m=+0.173820093 container start d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:28 compute-0 podman[315103]: 2025-10-02 08:29:28.391454534 +0000 UTC m=+0.177418374 container attach d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:29:28 compute-0 xenodochial_vaughan[315144]: 167 167
Oct 02 08:29:28 compute-0 systemd[1]: libpod-d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7.scope: Deactivated successfully.
Oct 02 08:29:28 compute-0 podman[315103]: 2025-10-02 08:29:28.396821701 +0000 UTC m=+0.182785441 container died d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.398 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e6b5bd3f72aa4371caad0a29cbb36dbd70856ebf5c8d0742d4f56e42d5784a-merged.mount: Deactivated successfully.
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.432 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.440 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:28 compute-0 podman[315103]: 2025-10-02 08:29:28.445517514 +0000 UTC m=+0.231481254 container remove d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:29:28 compute-0 systemd[1]: libpod-conmon-d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7.scope: Deactivated successfully.
Oct 02 08:29:28 compute-0 podman[315243]: 2025-10-02 08:29:28.642036912 +0000 UTC m=+0.040367096 container create 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.648 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 73e8c7a5-4621-4f07-824a-b81ea314a672_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:28 compute-0 systemd[1]: Started libpod-conmon-32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b.scope.
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.685 2 DEBUG nova.policy [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1057882eff8f490d837773415bf65a8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.688 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Successfully updated port: 29a765f0-6b44-4aad-9974-a0845658d5f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:28 compute-0 podman[315243]: 2025-10-02 08:29:28.627135998 +0000 UTC m=+0.025466202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:29:28 compute-0 podman[315243]: 2025-10-02 08:29:28.729316594 +0000 UTC m=+0.127646798 container init 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 08:29:28 compute-0 podman[315243]: 2025-10-02 08:29:28.743332789 +0000 UTC m=+0.141663003 container start 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 08:29:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815389092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:28 compute-0 podman[315243]: 2025-10-02 08:29:28.749314885 +0000 UTC m=+0.147645149 container attach 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.760 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.760 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquired lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.760 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.766 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] resizing rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.811 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.818 2 DEBUG nova.compute.provider_tree [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.849 2 DEBUG nova.scheduler.client.report [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.858 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'migration_context' on Instance uuid 73e8c7a5-4621-4f07-824a-b81ea314a672 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.874 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.875 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Ensure instance console log exists: /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.875 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.875 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.875 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.877 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.878 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.880 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.887 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.887 2 INFO nova.compute.claims [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:29:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941853479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.933 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.935 2 DEBUG nova.virt.libvirt.vif [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:22Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.935 2 DEBUG nova.network.os_vif_util [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.935 2 DEBUG nova.network.os_vif_util [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.936 2 DEBUG nova.objects.instance [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 143 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 6.2 MiB/s wr, 214 op/s
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.969 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.970 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.975 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <uuid>9924ce7f-b701-4560-b2c5-67f673b45807</uuid>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <name>instance-00000033</name>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-769762436</nova:name>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:27</nova:creationTime>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <nova:user uuid="c1d66932c11043b5b90140cd2dde53d2">tempest-ListServerFiltersTestJSON-1545892750-project-member</nova:user>
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <nova:project uuid="e7c4373fe01a4a14bea07af6dba4d170">tempest-ListServerFiltersTestJSON-1545892750</nova:project>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <nova:port uuid="bf9cdb7f-4cda-403b-b27e-12385e93db02">
Oct 02 08:29:28 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <entry name="serial">9924ce7f-b701-4560-b2c5-67f673b45807</entry>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <entry name="uuid">9924ce7f-b701-4560-b2c5-67f673b45807</entry>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9924ce7f-b701-4560-b2c5-67f673b45807_disk">
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9924ce7f-b701-4560-b2c5-67f673b45807_disk.config">
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:28 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:64:b2:ea"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <target dev="tapbf9cdb7f-4c"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/console.log" append="off"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:28 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:28 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:28 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:28 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:28 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.975 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Preparing to wait for external event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.975 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.975 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.976 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.976 2 DEBUG nova.virt.libvirt.vif [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:22Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.976 2 DEBUG nova.network.os_vif_util [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.977 2 DEBUG nova.network.os_vif_util [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.977 2 DEBUG os_vif [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9cdb7f-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf9cdb7f-4c, col_values=(('external_ids', {'iface-id': 'bf9cdb7f-4cda-403b-b27e-12385e93db02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:b2:ea', 'vm-uuid': '9924ce7f-b701-4560-b2c5-67f673b45807'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:28 compute-0 NetworkManager[45129]: <info>  [1759393768.9909] manager: (tapbf9cdb7f-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.991 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:29:28 compute-0 nova_compute[260603]: 2025-10-02 08:29:28.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.002 2 INFO os_vif [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c')
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.019 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.074 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.074 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.074 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No VIF found with MAC fa:16:3e:64:b2:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.075 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Using config drive
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.098 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.231 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.232 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.232 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Creating image(s)
Oct 02 08:29:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3272301993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/815389092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/941853479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.261 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.287 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.315 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.319 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.376 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.379 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.416 2 DEBUG nova.compute.manager [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received event network-changed-29a765f0-6b44-4aad-9974-a0845658d5f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.417 2 DEBUG nova.compute.manager [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Refreshing instance network info cache due to event network-changed-29a765f0-6b44-4aad-9974-a0845658d5f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.417 2 DEBUG oslo_concurrency.lockutils [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.425 2 DEBUG nova.policy [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1057882eff8f490d837773415bf65a8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.449 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.450 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.451 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.451 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.482 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.490 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.787 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Successfully created port: b4dfefd6-6971-4450-ae0e-50f4bf7eaafa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.795 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Successfully updated port: 257d115c-e196-4921-a9d3-942604825516 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.804 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Creating config drive at /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.814 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2ebn_xy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702013294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.865 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.900 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.901 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquired lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.901 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.907 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.909 2 DEBUG nova.compute.manager [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received event network-changed-257d115c-e196-4921-a9d3-942604825516 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.910 2 DEBUG nova.compute.manager [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Refreshing instance network info cache due to event network-changed-257d115c-e196-4921-a9d3-942604825516. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.910 2 DEBUG oslo_concurrency.lockutils [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.957 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] resizing rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:29:29 compute-0 musing_proskuriakova[315293]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:29:29 compute-0 nova_compute[260603]: 2025-10-02 08:29:29.991 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2ebn_xy" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:29 compute-0 musing_proskuriakova[315293]: --> relative data size: 1.0
Oct 02 08:29:29 compute-0 musing_proskuriakova[315293]: --> All data devices are unavailable
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.038 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:30 compute-0 systemd[1]: libpod-32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b.scope: Deactivated successfully.
Oct 02 08:29:30 compute-0 systemd[1]: libpod-32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b.scope: Consumed 1.185s CPU time.
Oct 02 08:29:30 compute-0 podman[315243]: 2025-10-02 08:29:30.046296761 +0000 UTC m=+1.444626985 container died 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.054 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85-merged.mount: Deactivated successfully.
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.103 2 DEBUG nova.compute.provider_tree [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.117 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:30 compute-0 podman[315243]: 2025-10-02 08:29:30.121448267 +0000 UTC m=+1.519778461 container remove 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.126 2 DEBUG nova.scheduler.client.report [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:30 compute-0 systemd[1]: libpod-conmon-32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b.scope: Deactivated successfully.
Oct 02 08:29:30 compute-0 sudo[314988]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.180 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.181 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.184 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 3.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.196 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'migration_context' on Instance uuid f7005e7b-8982-4d23-b12a-4b67c90a6c89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:30 compute-0 ceph-mon[74477]: pgmap v1458: 305 pgs: 305 active+clean; 143 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 6.2 MiB/s wr, 214 op/s
Oct 02 08:29:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/702013294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.261 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.262 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Deleting local config drive /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config because it was imported into RBD.
Oct 02 08:29:30 compute-0 sudo[315639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.271 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.271 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Ensure instance console log exists: /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.272 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.272 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.272 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:30 compute-0 sudo[315639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:30 compute-0 sudo[315639]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:30 compute-0 kernel: tapbf9cdb7f-4c: entered promiscuous mode
Oct 02 08:29:30 compute-0 NetworkManager[45129]: <info>  [1759393770.3416] manager: (tapbf9cdb7f-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:30 compute-0 ovn_controller[152344]: 2025-10-02T08:29:30Z|00433|binding|INFO|Claiming lport bf9cdb7f-4cda-403b-b27e-12385e93db02 for this chassis.
Oct 02 08:29:30 compute-0 ovn_controller[152344]: 2025-10-02T08:29:30Z|00434|binding|INFO|bf9cdb7f-4cda-403b-b27e-12385e93db02: Claiming fa:16:3e:64:b2:ea 10.100.0.12
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:30 compute-0 sudo[315670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:29:30 compute-0 sudo[315670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:30 compute-0 sudo[315670]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:30 compute-0 systemd-udevd[315711]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:30 compute-0 systemd-machined[214636]: New machine qemu-56-instance-00000033.
Oct 02 08:29:30 compute-0 NetworkManager[45129]: <info>  [1759393770.4062] device (tapbf9cdb7f-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:30 compute-0 NetworkManager[45129]: <info>  [1759393770.4082] device (tapbf9cdb7f-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:30 compute-0 ovn_controller[152344]: 2025-10-02T08:29:30Z|00435|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 ovn-installed in OVS
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:30 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000033.
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.434 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:b2:ea 10.100.0.12'], port_security=['fa:16:3e:64:b2:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9924ce7f-b701-4560-b2c5-67f673b45807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf9cdb7f-4cda-403b-b27e-12385e93db02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:30 compute-0 ovn_controller[152344]: 2025-10-02T08:29:30Z|00436|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 up in Southbound
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.435 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf9cdb7f-4cda-403b-b27e-12385e93db02 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e bound to our chassis
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.436 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:29:30 compute-0 sudo[315704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:30 compute-0 sudo[315704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.455 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c06b27a-d163-49fc-bb2e-f77bf0dd0cd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.456 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00da8a36-b1 in ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.459 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00da8a36-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:29:30 compute-0 sudo[315704]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.460 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7a0183-0c9e-4948-9a21-f685ce673af3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[330ad6d0-e37c-4ed6-b560-5ee7838481c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.478 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[3384d561-bc7a-4bde-b26d-2b53fe4b4071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.482 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.483 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.504 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.511 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c8313f-3341-44eb-b037-932b0b6fbb62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.532 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:29:30 compute-0 sudo[315738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.545 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[87871e6a-b867-462b-a556-90a75ee8ca00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 sudo[315738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.554 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9667d6f-3833-490e-af8d-81723e456638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 NetworkManager[45129]: <info>  [1759393770.5554] manager: (tap00da8a36-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.602 2 DEBUG oslo_concurrency.processutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.614 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a4317239-601a-4c99-b25c-a43e920bdce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.621 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d6c09f-8f91-43f9-89cd-43278949b101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 NetworkManager[45129]: <info>  [1759393770.6523] device (tap00da8a36-b0): carrier: link connected
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.659 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[70e12a1f-c702-4af6-8b32-d21ac456977c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.659 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.662 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.662 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Creating image(s)
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.681 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c48daeb2-fba2-4eda-b685-cdad2093a6fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315791, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.693 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.701 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4e292ce7-f777-45b2-9f64-26744ea3eb0c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:d8ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464925, 'tstamp': 464925}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315815, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.725 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8284814e-b81c-42f1-bf09-177abb61ba60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315822, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.740 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.760 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[103faacb-318a-4f2d-9c7b-fc14f2db8c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.799 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.821 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.835 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[263d1fd8-6052-47f8-a073-5d0ff84877e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.837 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.837 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.838 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:30 compute-0 NetworkManager[45129]: <info>  [1759393770.8412] manager: (tap00da8a36-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct 02 08:29:30 compute-0 kernel: tap00da8a36-b0: entered promiscuous mode
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.847 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:30 compute-0 ovn_controller[152344]: 2025-10-02T08:29:30Z|00437|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.873 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00da8a36-bc54-4cc1-a0e2-53333358378e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00da8a36-bc54-4cc1-a0e2-53333358378e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0c24ebc1-24fc-47e3-84f2-892829947f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.878 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/00da8a36-bc54-4cc1-a0e2-53333358378e.pid.haproxy
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:29:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.880 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'env', 'PROCESS_TAG=haproxy-00da8a36-bc54-4cc1-a0e2-53333358378e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00da8a36-bc54-4cc1-a0e2-53333358378e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.919 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.921 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.922 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.923 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:30 compute-0 podman[315943]: 2025-10-02 08:29:30.923573163 +0000 UTC m=+0.047459595 container create 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 08:29:30 compute-0 systemd[1]: Started libpod-conmon-50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef.scope.
Oct 02 08:29:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 143 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 4.1 MiB/s wr, 167 op/s
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.966 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:30 compute-0 nova_compute[260603]: 2025-10-02 08:29:30.982 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:30 compute-0 podman[315943]: 2025-10-02 08:29:30.899854776 +0000 UTC m=+0.023741238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:29:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:31 compute-0 podman[315943]: 2025-10-02 08:29:31.015559483 +0000 UTC m=+0.139445955 container init 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:29:31 compute-0 podman[315943]: 2025-10-02 08:29:31.025057657 +0000 UTC m=+0.148944089 container start 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:29:31 compute-0 podman[315943]: 2025-10-02 08:29:31.028491134 +0000 UTC m=+0.152377586 container attach 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:29:31 compute-0 nifty_lewin[315986]: 167 167
Oct 02 08:29:31 compute-0 systemd[1]: libpod-50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef.scope: Deactivated successfully.
Oct 02 08:29:31 compute-0 podman[315943]: 2025-10-02 08:29:31.046788583 +0000 UTC m=+0.170675015 container died 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:29:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0b72e1e8c3ccde314c102a04cb41605891c0de5a87b7085408eaef46a8be132-merged.mount: Deactivated successfully.
Oct 02 08:29:31 compute-0 podman[315943]: 2025-10-02 08:29:31.085567468 +0000 UTC m=+0.209453890 container remove 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 02 08:29:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103214818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.094 2 DEBUG nova.policy [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1057882eff8f490d837773415bf65a8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.118 2 DEBUG oslo_concurrency.processutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:31 compute-0 systemd[1]: libpod-conmon-50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef.scope: Deactivated successfully.
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.131 2 DEBUG nova.compute.provider_tree [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.153 2 DEBUG nova.scheduler.client.report [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.200 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.232 2 INFO nova.scheduler.client.report [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf
Oct 02 08:29:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1103214818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:31 compute-0 podman[316057]: 2025-10-02 08:29:31.272261199 +0000 UTC m=+0.044538325 container create 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:31 compute-0 podman[316047]: 2025-10-02 08:29:31.285255984 +0000 UTC m=+0.069202392 container create b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.285 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:31 compute-0 systemd[1]: Started libpod-conmon-57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7.scope.
Oct 02 08:29:31 compute-0 systemd[1]: Started libpod-conmon-b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2.scope.
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.330 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:31 compute-0 podman[316047]: 2025-10-02 08:29:31.250520484 +0000 UTC m=+0.034466932 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:29:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:31 compute-0 podman[316057]: 2025-10-02 08:29:31.252676681 +0000 UTC m=+0.024953837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b5c5ae73f0794dadb32620ddb0a1a034413642c4424df120944407778f4d1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:31 compute-0 podman[316047]: 2025-10-02 08:29:31.380878995 +0000 UTC m=+0.164825453 container init b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:29:31 compute-0 podman[316057]: 2025-10-02 08:29:31.386024535 +0000 UTC m=+0.158301691 container init 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:31 compute-0 podman[316047]: 2025-10-02 08:29:31.389918256 +0000 UTC m=+0.173864684 container start b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:29:31 compute-0 podman[316057]: 2025-10-02 08:29:31.394304563 +0000 UTC m=+0.166581679 container start 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 08:29:31 compute-0 podman[316057]: 2025-10-02 08:29:31.39710544 +0000 UTC m=+0.169382586 container attach 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.416 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] resizing rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:29:31 compute-0 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [NOTICE]   (316129) : New worker (316132) forked
Oct 02 08:29:31 compute-0 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [NOTICE]   (316129) : Loading success.
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.513 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'migration_context' on Instance uuid f56dc5d2-b1f8-42ef-882c-62bcbd600954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.529 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.529 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Ensure instance console log exists: /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.529 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.530 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.530 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.550 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Updating instance_info_cache with network_info: [{"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.555 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393771.5548983, 9924ce7f-b701-4560-b2c5-67f673b45807 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.556 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Started (Lifecycle Event)
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.577 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.581 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393771.5559416, 9924ce7f-b701-4560-b2c5-67f673b45807 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.583 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Paused (Lifecycle Event)
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.585 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Releasing lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.585 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance network_info: |[{"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.586 2 DEBUG oslo_concurrency.lockutils [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.586 2 DEBUG nova.network.neutron [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Refreshing network info cache for port 29a765f0-6b44-4aad-9974-a0845658d5f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.589 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start _get_guest_xml network_info=[{"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'eeb8c9a4-e143-4b44-a997-e04d544bc537'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.594 2 WARNING nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.601 2 DEBUG nova.virt.libvirt.host [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.601 2 DEBUG nova.virt.libvirt.host [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.604 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.612 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.615 2 DEBUG nova.virt.libvirt.host [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.616 2 DEBUG nova.virt.libvirt.host [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.616 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.616 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.617 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.617 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.617 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.617 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.619 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.622 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.656 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Updating instance_info_cache with network_info: [{"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.658 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Successfully created port: 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.661 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.694 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Releasing lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.695 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance network_info: |[{"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.695 2 DEBUG oslo_concurrency.lockutils [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.695 2 DEBUG nova.network.neutron [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Refreshing network info cache for port 257d115c-e196-4921-a9d3-942604825516 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.698 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start _get_guest_xml network_info=[{"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.703 2 WARNING nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.707 2 DEBUG nova.virt.libvirt.host [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.708 2 DEBUG nova.virt.libvirt.host [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.713 2 DEBUG nova.virt.libvirt.host [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.714 2 DEBUG nova.virt.libvirt.host [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.714 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.714 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2bd3a3ae-55dd-4609-80ca-4b6ab15f763d',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.716 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.716 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.716 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.718 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.869 2 DEBUG nova.network.neutron [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updated VIF entry in instance network info cache for port bf9cdb7f-4cda-403b-b27e-12385e93db02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.869 2 DEBUG nova.network.neutron [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:31 compute-0 nova_compute[260603]: 2025-10-02 08:29:31.885 2 DEBUG oslo_concurrency.lockutils [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272438370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.073 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.111 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971208537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.118 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:32 compute-0 peaceful_bell[316101]: {
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:     "0": [
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:         {
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "devices": [
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "/dev/loop3"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             ],
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_name": "ceph_lv0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_size": "21470642176",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "name": "ceph_lv0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "tags": {
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cluster_name": "ceph",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.crush_device_class": "",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.encrypted": "0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osd_id": "0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.type": "block",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.vdo": "0"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             },
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "type": "block",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "vg_name": "ceph_vg0"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:         }
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:     ],
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:     "1": [
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:         {
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "devices": [
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "/dev/loop4"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             ],
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_name": "ceph_lv1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_size": "21470642176",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "name": "ceph_lv1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "tags": {
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cluster_name": "ceph",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.crush_device_class": "",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.encrypted": "0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osd_id": "1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.type": "block",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.vdo": "0"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             },
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "type": "block",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "vg_name": "ceph_vg1"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:         }
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:     ],
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:     "2": [
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:         {
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "devices": [
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "/dev/loop5"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             ],
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_name": "ceph_lv2",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_size": "21470642176",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "name": "ceph_lv2",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "tags": {
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.cluster_name": "ceph",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.crush_device_class": "",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.encrypted": "0",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osd_id": "2",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.type": "block",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:                 "ceph.vdo": "0"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             },
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "type": "block",
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:             "vg_name": "ceph_vg2"
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:         }
Oct 02 08:29:32 compute-0 peaceful_bell[316101]:     ]
Oct 02 08:29:32 compute-0 peaceful_bell[316101]: }
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.172 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:32 compute-0 systemd[1]: libpod-57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7.scope: Deactivated successfully.
Oct 02 08:29:32 compute-0 podman[316057]: 2025-10-02 08:29:32.179642258 +0000 UTC m=+0.951919474 container died 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:29:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a-merged.mount: Deactivated successfully.
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.222 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.234 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:32 compute-0 podman[316057]: 2025-10-02 08:29:32.237256698 +0000 UTC m=+1.009533824 container remove 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 08:29:32 compute-0 systemd[1]: libpod-conmon-57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7.scope: Deactivated successfully.
Oct 02 08:29:32 compute-0 ceph-mon[74477]: pgmap v1459: 305 pgs: 305 active+clean; 143 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 4.1 MiB/s wr, 167 op/s
Oct 02 08:29:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1272438370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3971208537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:32 compute-0 sudo[315738]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:32 compute-0 sudo[316291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:32 compute-0 sudo[316291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:32 compute-0 sudo[316291]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:32 compute-0 sudo[316332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:29:32 compute-0 sudo[316332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:32 compute-0 sudo[316332]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:32 compute-0 sudo[316360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:32 compute-0 sudo[316360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:32 compute-0 sudo[316360]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3092069418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:32 compute-0 sudo[316385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:29:32 compute-0 sudo[316385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.598 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.601 2 DEBUG nova.virt.libvirt.vif [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1372609845',display_name='tempest-ListServerFiltersTestJSON-instance-1372609845',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1372609845',id=52,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-2sfo7if8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:24Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=797fde07-e88a-4d6e-a1a3-25e22c66097c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.601 2 DEBUG nova.network.os_vif_util [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.602 2 DEBUG nova.network.os_vif_util [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.603 2 DEBUG nova.objects.instance [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'pci_devices' on Instance uuid 797fde07-e88a-4d6e-a1a3-25e22c66097c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.622 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <uuid>797fde07-e88a-4d6e-a1a3-25e22c66097c</uuid>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <name>instance-00000034</name>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1372609845</nova:name>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:31</nova:creationTime>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:user uuid="c1d66932c11043b5b90140cd2dde53d2">tempest-ListServerFiltersTestJSON-1545892750-project-member</nova:user>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:project uuid="e7c4373fe01a4a14bea07af6dba4d170">tempest-ListServerFiltersTestJSON-1545892750</nova:project>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:port uuid="29a765f0-6b44-4aad-9974-a0845658d5f2">
Oct 02 08:29:32 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="serial">797fde07-e88a-4d6e-a1a3-25e22c66097c</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="uuid">797fde07-e88a-4d6e-a1a3-25e22c66097c</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/797fde07-e88a-4d6e-a1a3-25e22c66097c_disk">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:c6:7e:1d"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <target dev="tap29a765f0-6b"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/console.log" append="off"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:32 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:32 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.623 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Preparing to wait for external event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.624 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.624 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.624 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.625 2 DEBUG nova.virt.libvirt.vif [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1372609845',display_name='tempest-ListServerFiltersTestJSON-instance-1372609845',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1372609845',id=52,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-2sfo7if8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:24Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=797fde07-e88a-4d6e-a1a3-25e22c66097c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.625 2 DEBUG nova.network.os_vif_util [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.626 2 DEBUG nova.network.os_vif_util [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.627 2 DEBUG os_vif [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.635 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29a765f0-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29a765f0-6b, col_values=(('external_ids', {'iface-id': '29a765f0-6b44-4aad-9974-a0845658d5f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:7e:1d', 'vm-uuid': '797fde07-e88a-4d6e-a1a3-25e22c66097c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:32 compute-0 NetworkManager[45129]: <info>  [1759393772.6394] manager: (tap29a765f0-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.652 2 INFO os_vif [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b')
Oct 02 08:29:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4232591236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.695 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.697 2 DEBUG nova.virt.libvirt.vif [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1454521037',display_name='tempest-ListServerFiltersTestJSON-instance-1454521037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1454521037',id=53,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-w2kc410q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:27Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=d15c7c6a-e6a1-4538-9db0-ee1aef10f38b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.698 2 DEBUG nova.network.os_vif_util [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.699 2 DEBUG nova.network.os_vif_util [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.700 2 DEBUG nova.objects.instance [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'pci_devices' on Instance uuid d15c7c6a-e6a1-4538-9db0-ee1aef10f38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.723 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <uuid>d15c7c6a-e6a1-4538-9db0-ee1aef10f38b</uuid>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <name>instance-00000035</name>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <memory>196608</memory>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1454521037</nova:name>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:31</nova:creationTime>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:flavor name="m1.micro">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:memory>192</nova:memory>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:user uuid="c1d66932c11043b5b90140cd2dde53d2">tempest-ListServerFiltersTestJSON-1545892750-project-member</nova:user>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:project uuid="e7c4373fe01a4a14bea07af6dba4d170">tempest-ListServerFiltersTestJSON-1545892750</nova:project>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <nova:port uuid="257d115c-e196-4921-a9d3-942604825516">
Oct 02 08:29:32 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="serial">d15c7c6a-e6a1-4538-9db0-ee1aef10f38b</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="uuid">d15c7c6a-e6a1-4538-9db0-ee1aef10f38b</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:32 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:5f:eb:54"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <target dev="tap257d115c-e1"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/console.log" append="off"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:32 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:32 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:32 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:32 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:32 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.726 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Preparing to wait for external event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.726 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.726 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.727 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.728 2 DEBUG nova.virt.libvirt.vif [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1454521037',display_name='tempest-ListServerFiltersTestJSON-instance-1454521037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1454521037',id=53,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-w2kc410q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:27Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=d15c7c6a-e6a1-4538-9db0-ee1aef10f38b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.729 2 DEBUG nova.network.os_vif_util [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.729 2 DEBUG nova.network.os_vif_util [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.730 2 DEBUG os_vif [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap257d115c-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap257d115c-e1, col_values=(('external_ids', {'iface-id': '257d115c-e196-4921-a9d3-942604825516', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:eb:54', 'vm-uuid': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:32 compute-0 NetworkManager[45129]: <info>  [1759393772.7430] manager: (tap257d115c-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.751 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.751 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.751 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No VIF found with MAC fa:16:3e:c6:7e:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.752 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Using config drive
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.772 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.777 2 INFO os_vif [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1')
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.830 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.830 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.831 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No VIF found with MAC fa:16:3e:5f:eb:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.831 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Using config drive
Oct 02 08:29:32 compute-0 nova_compute[260603]: 2025-10-02 08:29:32.852 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 280 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 9.1 MiB/s wr, 251 op/s
Oct 02 08:29:33 compute-0 podman[316496]: 2025-10-02 08:29:33.002605572 +0000 UTC m=+0.055502386 container create 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 08:29:33 compute-0 systemd[1]: Started libpod-conmon-1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2.scope.
Oct 02 08:29:33 compute-0 podman[316496]: 2025-10-02 08:29:32.975459949 +0000 UTC m=+0.028356823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:29:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:33 compute-0 podman[316496]: 2025-10-02 08:29:33.095265892 +0000 UTC m=+0.148162766 container init 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.099 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Successfully updated port: b4dfefd6-6971-4450-ae0e-50f4bf7eaafa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:33 compute-0 podman[316496]: 2025-10-02 08:29:33.107492662 +0000 UTC m=+0.160389436 container start 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:29:33 compute-0 podman[316496]: 2025-10-02 08:29:33.111088564 +0000 UTC m=+0.163985398 container attach 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:29:33 compute-0 amazing_burnell[316512]: 167 167
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.118 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.120 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquired lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.120 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:33 compute-0 systemd[1]: libpod-1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2.scope: Deactivated successfully.
Oct 02 08:29:33 compute-0 podman[316496]: 2025-10-02 08:29:33.125608075 +0000 UTC m=+0.178504889 container died 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 08:29:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-371426c3d7a95a6f166309f0d3659297f1d2a84105b79a789093f79b7b5adc5e-merged.mount: Deactivated successfully.
Oct 02 08:29:33 compute-0 podman[316496]: 2025-10-02 08:29:33.171306986 +0000 UTC m=+0.224203790 container remove 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 08:29:33 compute-0 systemd[1]: libpod-conmon-1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2.scope: Deactivated successfully.
Oct 02 08:29:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3092069418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4232591236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.315 2 DEBUG nova.network.neutron [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Updated VIF entry in instance network info cache for port 257d115c-e196-4921-a9d3-942604825516. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.316 2 DEBUG nova.network.neutron [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Updating instance_info_cache with network_info: [{"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.334 2 DEBUG oslo_concurrency.lockutils [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:33 compute-0 podman[316537]: 2025-10-02 08:29:33.369936958 +0000 UTC m=+0.049394646 container create 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:29:33 compute-0 systemd[1]: Started libpod-conmon-4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885.scope.
Oct 02 08:29:33 compute-0 podman[316537]: 2025-10-02 08:29:33.347976936 +0000 UTC m=+0.027434634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:29:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.471 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.496 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Successfully created port: 12abaaed-2f93-40bd-bddd-8143c3709480 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:29:33 compute-0 podman[316537]: 2025-10-02 08:29:33.507023878 +0000 UTC m=+0.186481566 container init 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 08:29:33 compute-0 podman[316537]: 2025-10-02 08:29:33.519113754 +0000 UTC m=+0.198571452 container start 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:33 compute-0 podman[316537]: 2025-10-02 08:29:33.523324615 +0000 UTC m=+0.202782283 container attach 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.661 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Creating config drive at /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.673 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx3qjrauc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.723 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Creating config drive at /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.733 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcyxobxao execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.831 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx3qjrauc" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.872 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.876 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.920 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcyxobxao" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.960 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:33 compute-0 nova_compute[260603]: 2025-10-02 08:29:33.967 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.038 2 DEBUG nova.compute.manager [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-changed-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.039 2 DEBUG nova.compute.manager [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Refreshing instance network info cache due to event network-changed-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.039 2 DEBUG oslo_concurrency.lockutils [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.053 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.054 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Deleting local config drive /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config because it was imported into RBD.
Oct 02 08:29:34 compute-0 kernel: tap29a765f0-6b: entered promiscuous mode
Oct 02 08:29:34 compute-0 NetworkManager[45129]: <info>  [1759393774.1380] manager: (tap29a765f0-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Oct 02 08:29:34 compute-0 ovn_controller[152344]: 2025-10-02T08:29:34Z|00438|binding|INFO|Claiming lport 29a765f0-6b44-4aad-9974-a0845658d5f2 for this chassis.
Oct 02 08:29:34 compute-0 ovn_controller[152344]: 2025-10-02T08:29:34Z|00439|binding|INFO|29a765f0-6b44-4aad-9974-a0845658d5f2: Claiming fa:16:3e:c6:7e:1d 10.100.0.9
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.150 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:7e:1d 10.100.0.9'], port_security=['fa:16:3e:c6:7e:1d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '797fde07-e88a-4d6e-a1a3-25e22c66097c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=29a765f0-6b44-4aad-9974-a0845658d5f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.154 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 29a765f0-6b44-4aad-9974-a0845658d5f2 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e bound to our chassis
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.158 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.163 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.163 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Deleting local config drive /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config because it was imported into RBD.
Oct 02 08:29:34 compute-0 systemd-udevd[316650]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.176 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f8a143-f295-4a29-94ec-15475f3fcc7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 NetworkManager[45129]: <info>  [1759393774.1813] device (tap29a765f0-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:34 compute-0 NetworkManager[45129]: <info>  [1759393774.1839] device (tap29a765f0-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:34 compute-0 ovn_controller[152344]: 2025-10-02T08:29:34Z|00440|binding|INFO|Setting lport 29a765f0-6b44-4aad-9974-a0845658d5f2 ovn-installed in OVS
Oct 02 08:29:34 compute-0 ovn_controller[152344]: 2025-10-02T08:29:34Z|00441|binding|INFO|Setting lport 29a765f0-6b44-4aad-9974-a0845658d5f2 up in Southbound
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 systemd-machined[214636]: New machine qemu-57-instance-00000034.
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.215 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ad6f8e-32b5-45db-9a64-d9b2ef1b9333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000034.
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.220 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[88b8536b-dfd0-4b8d-9c17-e9c7ebc86ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 kernel: tap257d115c-e1: entered promiscuous mode
Oct 02 08:29:34 compute-0 NetworkManager[45129]: <info>  [1759393774.2456] manager: (tap257d115c-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct 02 08:29:34 compute-0 ovn_controller[152344]: 2025-10-02T08:29:34Z|00442|binding|INFO|Claiming lport 257d115c-e196-4921-a9d3-942604825516 for this chassis.
Oct 02 08:29:34 compute-0 ovn_controller[152344]: 2025-10-02T08:29:34Z|00443|binding|INFO|257d115c-e196-4921-a9d3-942604825516: Claiming fa:16:3e:5f:eb:54 10.100.0.8
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 systemd-udevd[316658]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.257 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:eb:54 10.100.0.8'], port_security=['fa:16:3e:5f:eb:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=257d115c-e196-4921-a9d3-942604825516) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.260 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[47bb788b-6173-495e-9462-435777df8e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 NetworkManager[45129]: <info>  [1759393774.2635] device (tap257d115c-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:34 compute-0 ovn_controller[152344]: 2025-10-02T08:29:34Z|00444|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 ovn-installed in OVS
Oct 02 08:29:34 compute-0 ovn_controller[152344]: 2025-10-02T08:29:34Z|00445|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 up in Southbound
Oct 02 08:29:34 compute-0 NetworkManager[45129]: <info>  [1759393774.2666] device (tap257d115c-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.279 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8af3b069-1979-4115-b39c-25c00277c75c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316684, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 ceph-mon[74477]: pgmap v1460: 305 pgs: 305 active+clean; 280 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 9.1 MiB/s wr, 251 op/s
Oct 02 08:29:34 compute-0 systemd-machined[214636]: New machine qemu-58-instance-00000035.
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.300 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f6adc80b-ff4d-4e2b-b59b-19098ad43f0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316687, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316687, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.302 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:34 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000035.
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.314 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 257d115c-e196-4921-a9d3-942604825516 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.316 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.333 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f250bc14-0a13-474b-8ac0-4990e079f03a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.386 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd098071-f7dc-401f-8ad7-0db9be26e045]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.389 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8156778a-849b-46eb-a4d4-818c30f797e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.430 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9f81915a-c77f-437a-ad81-297d921a75d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1efa0550-5c7b-40e5-87b1-769885cf7ca0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316717, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.481 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Successfully updated port: 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4778c77b-bace-451e-a172-37a9791db2b5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316721, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316721, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.486 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.489 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.489 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.490 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.490 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]: {
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "osd_id": 2,
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "type": "bluestore"
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:     },
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "osd_id": 1,
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "type": "bluestore"
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:     },
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "osd_id": 0,
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:         "type": "bluestore"
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]:     }
Oct 02 08:29:34 compute-0 romantic_chebyshev[316553]: }
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.508 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.508 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquired lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.508 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:34 compute-0 systemd[1]: libpod-4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885.scope: Deactivated successfully.
Oct 02 08:29:34 compute-0 podman[316537]: 2025-10-02 08:29:34.533878049 +0000 UTC m=+1.213335747 container died 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 08:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51-merged.mount: Deactivated successfully.
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.609 2 DEBUG nova.network.neutron [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Updated VIF entry in instance network info cache for port 29a765f0-6b44-4aad-9974-a0845658d5f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:34 compute-0 podman[316537]: 2025-10-02 08:29:34.610679486 +0000 UTC m=+1.290137164 container remove 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.611 2 DEBUG nova.network.neutron [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Updating instance_info_cache with network_info: [{"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:34 compute-0 systemd[1]: libpod-conmon-4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885.scope: Deactivated successfully.
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.635 2 DEBUG oslo_concurrency.lockutils [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:34 compute-0 sudo[316385]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:29:34 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:29:34 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:34 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a8fe8d3c-ecd0-40ab-ae7a-e858f5222f60 does not exist
Oct 02 08:29:34 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4fe53165-8757-410d-852d-c154b96651a3 does not exist
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.668 2 DEBUG nova.compute.manager [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.669 2 DEBUG oslo_concurrency.lockutils [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.670 2 DEBUG oslo_concurrency.lockutils [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.670 2 DEBUG oslo_concurrency.lockutils [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.670 2 DEBUG nova.compute.manager [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Processing event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:34 compute-0 sudo[316735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:29:34 compute-0 sudo[316735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:34 compute-0 sudo[316735]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:34 compute-0 sudo[316760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:29:34 compute-0 sudo[316760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:29:34 compute-0 sudo[316760]: pam_unix(sudo:session): session closed for user root
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.801 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.815 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.817 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.862 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.862 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.888 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:29:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 319 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 11 MiB/s wr, 259 op/s
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.975 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.976 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.988 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:29:34 compute-0 nova_compute[260603]: 2025-10-02 08:29:34.988 2 INFO nova.compute.claims [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.156 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Updating instance_info_cache with network_info: [{"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.174 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Releasing lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.174 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance network_info: |[{"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.175 2 DEBUG oslo_concurrency.lockutils [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.175 2 DEBUG nova.network.neutron [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Refreshing network info cache for port b4dfefd6-6971-4450-ae0e-50f4bf7eaafa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.178 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start _get_guest_xml network_info=[{"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.183 2 WARNING nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.192 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.193 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.197 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.197 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.198 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.198 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.200 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.200 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.200 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.201 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.201 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.204 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.246 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.551 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.5501964, 797fde07-e88a-4d6e-a1a3-25e22c66097c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.553 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] VM Started (Lifecycle Event)
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.560 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.568 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.577 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.597 2 INFO nova.virt.libvirt.driver [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance spawned successfully.
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.598 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.601 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.621 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.622 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.55034, 797fde07-e88a-4d6e-a1a3-25e22c66097c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.622 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] VM Paused (Lifecycle Event)
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.626 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.626 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.627 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.627 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.627 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.628 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:35 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:35 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.656 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.665 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.570418, 797fde07-e88a-4d6e-a1a3-25e22c66097c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864614248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.665 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] VM Resumed (Lifecycle Event)
Oct 02 08:29:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122715789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.685 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.690 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.692 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.717 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.722 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.754 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.756 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.757 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.5980434, d15c7c6a-e6a1-4538-9db0-ee1aef10f38b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.757 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] VM Started (Lifecycle Event)
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.761 2 INFO nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Took 11.59 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.761 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.773 2 DEBUG nova.compute.provider_tree [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.796 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.801 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.5988185, d15c7c6a-e6a1-4538-9db0-ee1aef10f38b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.801 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] VM Paused (Lifecycle Event)
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.809 2 DEBUG nova.scheduler.client.report [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.828 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.831 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.846 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.846 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.870 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.893 2 INFO nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Took 12.72 seconds to build instance.
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.907 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.908 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.915 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.929 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:29:35 compute-0 nova_compute[260603]: 2025-10-02 08:29:35.949 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.043 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Successfully updated port: 12abaaed-2f93-40bd-bddd-8143c3709480 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.069 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.071 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.072 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Creating image(s)
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.105 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.142 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3923769575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.176 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.182 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.238 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.239 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquired lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.239 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.241 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.244 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-1',id=54,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:28Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=73e8c7a5-4621-4f07-824a-b81ea314a672,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.244 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.245 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.247 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73e8c7a5-4621-4f07-824a-b81ea314a672 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.263 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.263 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.264 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.264 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.294 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.299 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 923e00cc-7494-46f3-93e2-3c223705aff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.351 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <uuid>73e8c7a5-4621-4f07-824a-b81ea314a672</uuid>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <name>instance-00000036</name>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1504413762-1</nova:name>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:35</nova:creationTime>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <nova:user uuid="1057882eff8f490d837773415bf65a8a">tempest-ListServersNegativeTestJSON-21742049-project-member</nova:user>
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <nova:project uuid="f6f9056bf44b4bd8859c73e3cb645683">tempest-ListServersNegativeTestJSON-21742049</nova:project>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <nova:port uuid="b4dfefd6-6971-4450-ae0e-50f4bf7eaafa">
Oct 02 08:29:36 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <entry name="serial">73e8c7a5-4621-4f07-824a-b81ea314a672</entry>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <entry name="uuid">73e8c7a5-4621-4f07-824a-b81ea314a672</entry>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/73e8c7a5-4621-4f07-824a-b81ea314a672_disk">
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config">
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:36 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:6e:74:a7"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <target dev="tapb4dfefd6-69"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/console.log" append="off"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:36 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:36 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:36 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:36 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:36 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.354 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Preparing to wait for external event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.354 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.354 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.355 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.356 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-1',id=54,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name=
'tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:28Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=73e8c7a5-4621-4f07-824a-b81ea314a672,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.356 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.357 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.358 2 DEBUG os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4dfefd6-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4dfefd6-69, col_values=(('external_ids', {'iface-id': 'b4dfefd6-6971-4450-ae0e-50f4bf7eaafa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:74:a7', 'vm-uuid': '73e8c7a5-4621-4f07-824a-b81ea314a672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:36 compute-0 NetworkManager[45129]: <info>  [1759393776.4182] manager: (tapb4dfefd6-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.428 2 INFO os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69')
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.512 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.513 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.513 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No VIF found with MAC fa:16:3e:6e:74:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.514 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Using config drive
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.580 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.591 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.625 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 923e00cc-7494-46f3-93e2-3c223705aff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:36 compute-0 ceph-mon[74477]: pgmap v1461: 305 pgs: 305 active+clean; 319 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 11 MiB/s wr, 259 op/s
Oct 02 08:29:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2864614248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/122715789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3923769575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.717 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] resizing rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.776 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-changed-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.777 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Refreshing instance network info cache due to event network-changed-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.777 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.780 2 DEBUG nova.policy [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8f0a6fb1d224a979db4b4a738bbf453', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f942883d5794a5c8e3cd2b5ef44a863', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.851 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393761.8258083, 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.852 2 INFO nova.compute.manager [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] VM Stopped (Lifecycle Event)
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.861 2 DEBUG nova.objects.instance [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lazy-loading 'migration_context' on Instance uuid 923e00cc-7494-46f3-93e2-3c223705aff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.882 2 DEBUG nova.compute.manager [None req-b69d4143-5029-41a0-bc13-2744d0febebf - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.890 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.891 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Ensure instance console log exists: /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.891 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.892 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:36 compute-0 nova_compute[260603]: 2025-10-02 08:29:36.892 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 319 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 10 MiB/s wr, 246 op/s
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.091 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.092 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.118 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.164 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Updating instance_info_cache with network_info: [{"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.188 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.189 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.190 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Releasing lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.191 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance network_info: |[{"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.192 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.192 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Refreshing network info cache for port 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.198 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start _get_guest_xml network_info=[{"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.208 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.209 2 INFO nova.compute.claims [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.214 2 WARNING nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.225 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.226 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.230 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.231 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.231 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.232 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.233 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.233 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.234 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.234 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.235 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.235 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.236 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.236 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.239 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.240 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.244 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.305 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.306 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.306 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.307 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.307 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] No waiting events found dispatching network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.307 2 WARNING nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received unexpected event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 for instance with vm_state active and task_state None.
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.307 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.308 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.308 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.308 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.309 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Processing event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.309 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.309 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.310 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.310 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.310 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] No waiting events found dispatching network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.310 2 WARNING nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received unexpected event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 for instance with vm_state building and task_state spawning.
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.312 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.326 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393777.3258932, d15c7c6a-e6a1-4538-9db0-ee1aef10f38b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.326 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] VM Resumed (Lifecycle Event)
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.345 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.346 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.350 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.354 2 INFO nova.virt.libvirt.driver [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance spawned successfully.
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.354 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.378 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.394 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.395 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.395 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.396 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.396 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.397 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.464 2 INFO nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Took 10.35 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.465 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.492 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.539 2 INFO nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Took 11.41 seconds to build instance.
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.554 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915258954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.747 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Creating config drive at /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.754 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp323k3a08 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.791 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.821 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.826 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.857 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Updating instance_info_cache with network_info: [{"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.879 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Releasing lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.879 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance network_info: |[{"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.882 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start _get_guest_xml network_info=[{"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.890 2 WARNING nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.894 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.894 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.898 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.898 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.899 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.899 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.900 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.900 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.900 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.901 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.901 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.901 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.902 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.902 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.902 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.902 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.906 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300001568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.941 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp323k3a08" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.983 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:37 compute-0 nova_compute[260603]: 2025-10-02 08:29:37.990 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.028 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.033 2 DEBUG nova.compute.provider_tree [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.048 2 DEBUG nova.scheduler.client.report [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.067 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.068 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.131 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.132 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.144 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.145 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Deleting local config drive /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config because it was imported into RBD.
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.154 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.171 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:29:38 compute-0 kernel: tapb4dfefd6-69: entered promiscuous mode
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 NetworkManager[45129]: <info>  [1759393778.1997] manager: (tapb4dfefd6-69): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 ovn_controller[152344]: 2025-10-02T08:29:38Z|00446|binding|INFO|Claiming lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa for this chassis.
Oct 02 08:29:38 compute-0 ovn_controller[152344]: 2025-10-02T08:29:38Z|00447|binding|INFO|b4dfefd6-6971-4450-ae0e-50f4bf7eaafa: Claiming fa:16:3e:6e:74:a7 10.100.0.11
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.213 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:74:a7 10.100.0.11'], port_security=['fa:16:3e:6e:74:a7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '73e8c7a5-4621-4f07-824a-b81ea314a672', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.215 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b4dfefd6-6971-4450-ae0e-50f4bf7eaafa in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c bound to our chassis
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.219 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.233 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[697f74a1-4952-4ded-9d5c-05af5c75e447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.234 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9e6563dd-51 in ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.236 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9e6563dd-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.236 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4274756-d7e2-44d5-8999-8fed106c7c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 systemd-udevd[317295]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.240 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42608d45-57a3-47a8-95bc-69b6fd209522]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 NetworkManager[45129]: <info>  [1759393778.2509] device (tapb4dfefd6-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:38 compute-0 NetworkManager[45129]: <info>  [1759393778.2518] device (tapb4dfefd6-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.257 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a111766b-f760-45db-99b7-dd36ed7b35fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 systemd-machined[214636]: New machine qemu-59-instance-00000036.
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.274 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2792f12-d1f5-41ad-8866-ee4a115c8c3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000036.
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.285 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.287 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.288 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Creating image(s)
Oct 02 08:29:38 compute-0 ovn_controller[152344]: 2025-10-02T08:29:38Z|00448|binding|INFO|Setting lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa ovn-installed in OVS
Oct 02 08:29:38 compute-0 ovn_controller[152344]: 2025-10-02T08:29:38Z|00449|binding|INFO|Setting lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa up in Southbound
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.304 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c90ac602-6dd0-40b2-951a-afe285d64390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 NetworkManager[45129]: <info>  [1759393778.3120] manager: (tap9e6563dd-50): new Veth device (/org/freedesktop/NetworkManager/Devices/197)
Oct 02 08:29:38 compute-0 systemd-udevd[317299]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7a867b90-71f4-4776-8b9a-a46d996cd8e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586218149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.350 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef57ab7d-ba64-41eb-9ef7-da7159684e1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406393232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.354 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf05ea2-376d-4b89-af0d-ee4241fcf9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.365 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:38 compute-0 NetworkManager[45129]: <info>  [1759393778.3778] device (tap9e6563dd-50): carrier: link connected
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.384 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[335eae55-5bf3-408f-8085-b52b1610c3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.401 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf5dd87-f6d1-4d09-929f-75bf020b64ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317350, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.416 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.427 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2226a4b7-03b9-4673-8b58-c4d2cdd194de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:8ba1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465698, 'tstamp': 465698}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317367, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.451 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.454 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[70c71518-408f-4175-901f-ed345f157622]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317370, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.457 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11fd5dad-039b-445b-8c2f-294303710565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.510 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.511 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002078080740867357 of space, bias 1.0, pg target 0.6234242222602071 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.546 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.555 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.593 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b099e310-b47a-4b6f-957c-f77e801f6505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.594 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.594 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.595 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:38 compute-0 kernel: tap9e6563dd-50: entered promiscuous mode
Oct 02 08:29:38 compute-0 NetworkManager[45129]: <info>  [1759393778.6358] manager: (tap9e6563dd-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.637 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-2',id=55,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-L
istServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:29Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f7005e7b-8982-4d23-b12a-4b67c90a6c89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.637 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.638 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:38 compute-0 ovn_controller[152344]: 2025-10-02T08:29:38Z|00450|binding|INFO|Releasing lport 39e267b1-3dd0-4688-b6e5-f1bccf651722 from this chassis (sb_readonly=0)
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.639 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.640 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9e6563dd-5ecf-4759-9df8-5b501617e75c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9e6563dd-5ecf-4759-9df8-5b501617e75c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.641 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'pci_devices' on Instance uuid f7005e7b-8982-4d23-b12a-4b67c90a6c89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.641 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5aae81-f333-4fe1-89db-4f27bba944c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.642 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/9e6563dd-5ecf-4759-9df8-5b501617e75c.pid.haproxy
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:29:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.642 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'env', 'PROCESS_TAG=haproxy-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9e6563dd-5ecf-4759-9df8-5b501617e75c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.650 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.651 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.652 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.652 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:38 compute-0 ceph-mon[74477]: pgmap v1462: 305 pgs: 305 active+clean; 319 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 10 MiB/s wr, 246 op/s
Oct 02 08:29:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3915258954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3300001568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1586218149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2406393232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.679 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.684 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d634251d-b484-4af7-b102-fe8015603660_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.723 2 DEBUG nova.policy [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.727 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <uuid>f7005e7b-8982-4d23-b12a-4b67c90a6c89</uuid>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <name>instance-00000037</name>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1504413762-2</nova:name>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:37</nova:creationTime>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <nova:user uuid="1057882eff8f490d837773415bf65a8a">tempest-ListServersNegativeTestJSON-21742049-project-member</nova:user>
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <nova:project uuid="f6f9056bf44b4bd8859c73e3cb645683">tempest-ListServersNegativeTestJSON-21742049</nova:project>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <nova:port uuid="5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c">
Oct 02 08:29:38 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <entry name="serial">f7005e7b-8982-4d23-b12a-4b67c90a6c89</entry>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <entry name="uuid">f7005e7b-8982-4d23-b12a-4b67c90a6c89</entry>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk">
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config">
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:95:86:2c"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <target dev="tap5bbef33f-36"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/console.log" append="off"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:38 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:38 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:38 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:38 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:38 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.732 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Preparing to wait for external event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.733 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.734 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.734 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.736 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-2',id=55,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:29Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f7005e7b-8982-4d23-b12a-4b67c90a6c89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.736 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.737 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.737 2 DEBUG os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bbef33f-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5bbef33f-36, col_values=(('external_ids', {'iface-id': '5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:86:2c', 'vm-uuid': 'f7005e7b-8982-4d23-b12a-4b67c90a6c89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 NetworkManager[45129]: <info>  [1759393778.7454] manager: (tap5bbef33f-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.751 2 INFO os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36')
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.810 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Successfully created port: 044e4f76-db30-47b6-b277-8c3a13743b9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.831 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.831 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.831 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No VIF found with MAC fa:16:3e:95:86:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.832 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Using config drive
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.888 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 366 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 12 MiB/s wr, 359 op/s
Oct 02 08:29:38 compute-0 nova_compute[260603]: 2025-10-02 08:29:38.978 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d634251d-b484-4af7-b102-fe8015603660_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076390817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.080 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image d634251d-b484-4af7-b102-fe8015603660_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.127 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.129 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-3',id=56,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:30Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f56dc5d2-b1f8-42ef-882c-62bcbd600954,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.131 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.132 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.136 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'pci_devices' on Instance uuid f56dc5d2-b1f8-42ef-882c-62bcbd600954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:39 compute-0 podman[317596]: 2025-10-02 08:29:39.140761825 +0000 UTC m=+0.052453321 container create 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.157 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <uuid>f56dc5d2-b1f8-42ef-882c-62bcbd600954</uuid>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <name>instance-00000038</name>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1504413762-3</nova:name>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:37</nova:creationTime>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <nova:user uuid="1057882eff8f490d837773415bf65a8a">tempest-ListServersNegativeTestJSON-21742049-project-member</nova:user>
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <nova:project uuid="f6f9056bf44b4bd8859c73e3cb645683">tempest-ListServersNegativeTestJSON-21742049</nova:project>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <nova:port uuid="12abaaed-2f93-40bd-bddd-8143c3709480">
Oct 02 08:29:39 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <entry name="serial">f56dc5d2-b1f8-42ef-882c-62bcbd600954</entry>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <entry name="uuid">f56dc5d2-b1f8-42ef-882c-62bcbd600954</entry>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk">
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config">
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:24:bc:66"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <target dev="tap12abaaed-2f"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/console.log" append="off"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:39 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:39 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:39 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:39 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:39 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.159 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Preparing to wait for external event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.159 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.160 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.160 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.160 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-3',id=56,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name=
'tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:30Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f56dc5d2-b1f8-42ef-882c-62bcbd600954,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.161 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.161 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.162 2 DEBUG os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:39 compute-0 systemd[1]: Started libpod-conmon-2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce.scope.
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.197 2 DEBUG nova.network.neutron [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Updated VIF entry in instance network info cache for port b4dfefd6-6971-4450-ae0e-50f4bf7eaafa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.198 2 DEBUG nova.network.neutron [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Updating instance_info_cache with network_info: [{"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12abaaed-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12abaaed-2f, col_values=(('external_ids', {'iface-id': '12abaaed-2f93-40bd-bddd-8143c3709480', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:bc:66', 'vm-uuid': 'f56dc5d2-b1f8-42ef-882c-62bcbd600954'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:39 compute-0 NetworkManager[45129]: <info>  [1759393779.2021] manager: (tap12abaaed-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b410b66f213cbfe1b811aab1f58708a1f7a49c73b1959803809bb9cdb00d8fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:39 compute-0 podman[317596]: 2025-10-02 08:29:39.116269284 +0000 UTC m=+0.027960810 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.219 2 DEBUG nova.objects.instance [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid d634251d-b484-4af7-b102-fe8015603660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.222 2 INFO os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f')
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.223 2 DEBUG oslo_concurrency.lockutils [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:39 compute-0 podman[317596]: 2025-10-02 08:29:39.225347904 +0000 UTC m=+0.137039430 container init 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:29:39 compute-0 podman[317596]: 2025-10-02 08:29:39.23230856 +0000 UTC m=+0.144000066 container start 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.237 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.237 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Ensure instance console log exists: /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.237 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.238 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.238 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:39 compute-0 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [NOTICE]   (317653) : New worker (317655) forked
Oct 02 08:29:39 compute-0 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [NOTICE]   (317653) : Loading success.
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.268 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.268 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.268 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No VIF found with MAC fa:16:3e:24:bc:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.269 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Using config drive
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.292 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.325 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393779.3247273, 73e8c7a5-4621-4f07-824a-b81ea314a672 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.325 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] VM Started (Lifecycle Event)
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.352 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.355 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393779.3248746, 73e8c7a5-4621-4f07-824a-b81ea314a672 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.355 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] VM Paused (Lifecycle Event)
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.376 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.379 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.398 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4076390817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.975 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Creating config drive at /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config
Oct 02 08:29:39 compute-0 nova_compute[260603]: 2025-10-02 08:29:39.981 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3cc5plsn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.049 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Creating config drive at /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.063 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxvqz63i5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.148 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3cc5plsn" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.181 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.184 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.227 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxvqz63i5" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.263 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.272 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.345 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.346 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Deleting local config drive /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config because it was imported into RBD.
Oct 02 08:29:40 compute-0 systemd-udevd[317312]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:40 compute-0 NetworkManager[45129]: <info>  [1759393780.3961] manager: (tap5bbef33f-36): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Oct 02 08:29:40 compute-0 kernel: tap5bbef33f-36: entered promiscuous mode
Oct 02 08:29:40 compute-0 ovn_controller[152344]: 2025-10-02T08:29:40Z|00451|binding|INFO|Claiming lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c for this chassis.
Oct 02 08:29:40 compute-0 ovn_controller[152344]: 2025-10-02T08:29:40Z|00452|binding|INFO|5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c: Claiming fa:16:3e:95:86:2c 10.100.0.5
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:40 compute-0 NetworkManager[45129]: <info>  [1759393780.4077] device (tap5bbef33f-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:40 compute-0 NetworkManager[45129]: <info>  [1759393780.4087] device (tap5bbef33f-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:40 compute-0 systemd-machined[214636]: New machine qemu-60-instance-00000037.
Oct 02 08:29:40 compute-0 ovn_controller[152344]: 2025-10-02T08:29:40Z|00453|binding|INFO|Setting lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c ovn-installed in OVS
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:40 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000037.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.460 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.460 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Deleting local config drive /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config because it was imported into RBD.
Oct 02 08:29:40 compute-0 ovn_controller[152344]: 2025-10-02T08:29:40Z|00454|binding|INFO|Setting lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c up in Southbound
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.464 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:86:2c 10.100.0.5'], port_security=['fa:16:3e:95:86:2c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f7005e7b-8982-4d23-b12a-4b67c90a6c89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.466 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c bound to our chassis
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.468 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b726f5-66e9-474f-a992-5a3fa297053e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.490 2 DEBUG nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.491 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.491 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.491 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Processing event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.493 2 DEBUG nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] No waiting events found dispatching network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.493 2 WARNING nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received unexpected event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa for instance with vm_state building and task_state spawning.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.493 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.500 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393780.4994123, 73e8c7a5-4621-4f07-824a-b81ea314a672 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.501 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] VM Resumed (Lifecycle Event)
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.511 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8b594f-4c4e-43ed-bb4f-63a5d931ffd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.511 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Updated VIF entry in instance network info cache for port 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.513 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Updating instance_info_cache with network_info: [{"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.516 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.515 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[934494fb-84ea-4ab5-8da5-d9ce24a71155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.519 2 INFO nova.virt.libvirt.driver [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance spawned successfully.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.519 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.522 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:40 compute-0 kernel: tap12abaaed-2f: entered promiscuous mode
Oct 02 08:29:40 compute-0 NetworkManager[45129]: <info>  [1759393780.5267] manager: (tap12abaaed-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Oct 02 08:29:40 compute-0 ovn_controller[152344]: 2025-10-02T08:29:40Z|00455|binding|INFO|Claiming lport 12abaaed-2f93-40bd-bddd-8143c3709480 for this chassis.
Oct 02 08:29:40 compute-0 ovn_controller[152344]: 2025-10-02T08:29:40Z|00456|binding|INFO|12abaaed-2f93-40bd-bddd-8143c3709480: Claiming fa:16:3e:24:bc:66 10.100.0.4
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.531 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.531 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.531 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Processing event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.533 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.533 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.533 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.534 2 WARNING nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state building and task_state spawning.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.534 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-changed-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.534 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Refreshing instance network info cache due to event network-changed-12abaaed-2f93-40bd-bddd-8143c3709480. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.534 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.534 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.535 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Refreshing network info cache for port 12abaaed-2f93-40bd-bddd-8143c3709480 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.537 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:40 compute-0 NetworkManager[45129]: <info>  [1759393780.5398] device (tap12abaaed-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.540 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:bc:66 10.100.0.4'], port_security=['fa:16:3e:24:bc:66 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f56dc5d2-b1f8-42ef-882c-62bcbd600954', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=12abaaed-2f93-40bd-bddd-8143c3709480) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:40 compute-0 NetworkManager[45129]: <info>  [1759393780.5409] device (tap12abaaed-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.546 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:40 compute-0 ovn_controller[152344]: 2025-10-02T08:29:40Z|00457|binding|INFO|Setting lport 12abaaed-2f93-40bd-bddd-8143c3709480 ovn-installed in OVS
Oct 02 08:29:40 compute-0 ovn_controller[152344]: 2025-10-02T08:29:40Z|00458|binding|INFO|Setting lport 12abaaed-2f93-40bd-bddd-8143c3709480 up in Southbound
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.552 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.560 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[56ddc0a2-4692-4766-a067-3c2ee07454e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.566 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.566 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.567 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.568 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.569 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.570 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.573 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.574 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393780.5458484, 9924ce7f-b701-4560-b2c5-67f673b45807 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.574 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Resumed (Lifecycle Event)
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.576 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance spawned successfully.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.576 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:40 compute-0 systemd-machined[214636]: New machine qemu-61-instance-00000038.
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94dd12b2-60e9-4056-aa9f-cd8e9b381d44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317802, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000038.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.601 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.607 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.611 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.611 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.612 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.614 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9026513e-df0c-4444-8016-f3641be34731]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465714, 'tstamp': 465714}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317804, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465718, 'tstamp': 465718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317804, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.615 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.616 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.618 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.618 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.618 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.619 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.620 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 12abaaed-2f93-40bd-bddd-8143c3709480 in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c unbound from our chassis
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.622 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.623 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.632 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.638 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[153d5d02-5cba-487b-822f-e77cf83ef058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.650 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Took 12.49 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.650 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.682 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[11d4c6df-cbd9-4735-a1bf-79d31ba7b315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.685 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4709a2b4-8c3d-456b-bc26-995c284d3fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 ceph-mon[74477]: pgmap v1463: 305 pgs: 305 active+clean; 366 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 12 MiB/s wr, 359 op/s
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.711 2 INFO nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Took 18.49 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.710 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1c244396-b74f-4fb9-be61-d176d1ec8c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.713 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.736 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Took 14.03 seconds to build instance.
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b4a461-04b1-434a-a808-2692c439c1dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317849, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.773 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dff2b997-c2a0-4699-b70a-1ba7124d5d79]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465714, 'tstamp': 465714}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317854, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465718, 'tstamp': 465718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317854, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.774 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.774 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.829 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.835 2 INFO nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Took 19.61 seconds to build instance.
Oct 02 08:29:40 compute-0 nova_compute[260603]: 2025-10-02 08:29:40.859 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 366 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 8.4 MiB/s wr, 219 op/s
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.268 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393781.2684371, f7005e7b-8982-4d23-b12a-4b67c90a6c89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.269 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] VM Started (Lifecycle Event)
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.285 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Successfully created port: f79edf8d-90b8-47b7-b366-244c63439a64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.295 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.299 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393781.2685282, f7005e7b-8982-4d23-b12a-4b67c90a6c89 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.300 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] VM Paused (Lifecycle Event)
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.322 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.326 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.353 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.589 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393781.5886009, f56dc5d2-b1f8-42ef-882c-62bcbd600954 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.592 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] VM Started (Lifecycle Event)
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.617 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.620 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393781.5889506, f56dc5d2-b1f8-42ef-882c-62bcbd600954 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.621 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] VM Paused (Lifecycle Event)
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.640 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.645 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:41 compute-0 nova_compute[260603]: 2025-10-02 08:29:41.663 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.099915) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782099965, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 502, "num_deletes": 252, "total_data_size": 421686, "memory_usage": 432016, "flush_reason": "Manual Compaction"}
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782104216, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 353741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30460, "largest_seqno": 30961, "table_properties": {"data_size": 350975, "index_size": 738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7655, "raw_average_key_size": 20, "raw_value_size": 345241, "raw_average_value_size": 943, "num_data_blocks": 32, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393762, "oldest_key_time": 1759393762, "file_creation_time": 1759393782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4347 microseconds, and 1930 cpu microseconds.
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.104266) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 353741 bytes OK
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.104286) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.105806) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.105822) EVENT_LOG_v1 {"time_micros": 1759393782105817, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.105842) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 418688, prev total WAL file size 418688, number of live WAL files 2.
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.106295) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(345KB)], [65(10012KB)]
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782106331, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10606596, "oldest_snapshot_seqno": -1}
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5361 keys, 7332008 bytes, temperature: kUnknown
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782140791, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7332008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7296564, "index_size": 20934, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 135376, "raw_average_key_size": 25, "raw_value_size": 7200534, "raw_average_value_size": 1343, "num_data_blocks": 853, "num_entries": 5361, "num_filter_entries": 5361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.140977) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7332008 bytes
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.142286) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 307.2 rd, 212.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.8 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(50.7) write-amplify(20.7) OK, records in: 5872, records dropped: 511 output_compression: NoCompression
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.142306) EVENT_LOG_v1 {"time_micros": 1759393782142296, "job": 36, "event": "compaction_finished", "compaction_time_micros": 34524, "compaction_time_cpu_micros": 17932, "output_level": 6, "num_output_files": 1, "total_output_size": 7332008, "num_input_records": 5872, "num_output_records": 5361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782142478, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.143 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Successfully updated port: f79edf8d-90b8-47b7-b366-244c63439a64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782144580, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.106251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:42 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.160 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.160 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.161 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.169 2 DEBUG nova.compute.manager [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.169 2 DEBUG oslo_concurrency.lockutils [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.170 2 DEBUG oslo_concurrency.lockutils [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.170 2 DEBUG oslo_concurrency.lockutils [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.170 2 DEBUG nova.compute.manager [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Processing event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.171 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.180 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393782.179207, f7005e7b-8982-4d23-b12a-4b67c90a6c89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.180 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] VM Resumed (Lifecycle Event)
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.184 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.200 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Successfully updated port: 044e4f76-db30-47b6-b277-8c3a13743b9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.202 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.209 2 INFO nova.virt.libvirt.driver [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance spawned successfully.
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.210 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.212 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.223 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.223 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquired lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.223 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.261 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.271 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.273 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.273 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.274 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.275 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.275 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.368 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Took 13.14 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.369 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.455 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Took 15.68 seconds to build instance.
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.484 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.595 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.624 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:29:42 compute-0 ceph-mon[74477]: pgmap v1464: 305 pgs: 305 active+clean; 366 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 8.4 MiB/s wr, 219 op/s
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.795 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.796 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.797 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.798 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.799 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Processing event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.799 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.800 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.801 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.801 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.802 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] No waiting events found dispatching network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.802 2 WARNING nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received unexpected event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 for instance with vm_state building and task_state spawning.
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.803 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-changed-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.804 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Refreshing instance network info cache due to event network-changed-f79edf8d-90b8-47b7-b366-244c63439a64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.804 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.811 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.816 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393782.8157337, f56dc5d2-b1f8-42ef-882c-62bcbd600954 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.816 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] VM Resumed (Lifecycle Event)
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.820 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.828 2 INFO nova.virt.libvirt.driver [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance spawned successfully.
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.829 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.837 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.840 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.850 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.851 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.851 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.852 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.852 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.853 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.860 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.919 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Took 12.26 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:42 compute-0 nova_compute[260603]: 2025-10-02 08:29:42.920 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 383 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 8.8 MiB/s wr, 399 op/s
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.035 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Took 16.24 seconds to build instance.
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.060 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.065 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Updated VIF entry in instance network info cache for port 12abaaed-2f93-40bd-bddd-8143c3709480. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.066 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Updating instance_info_cache with network_info: [{"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.089 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.668 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Updating instance_info_cache with network_info: [{"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.692 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.693 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance network_info: |[{"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.693 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.693 2 DEBUG nova.network.neutron [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Refreshing network info cache for port f79edf8d-90b8-47b7-b366-244c63439a64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.696 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start _get_guest_xml network_info=[{"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.702 2 WARNING nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.709 2 DEBUG nova.virt.libvirt.host [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.710 2 DEBUG nova.virt.libvirt.host [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.720 2 DEBUG nova.virt.libvirt.host [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.721 2 DEBUG nova.virt.libvirt.host [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.721 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.721 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.728 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.853 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updating instance_info_cache with network_info: [{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.873 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Releasing lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.874 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance network_info: |[{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.876 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start _get_guest_xml network_info=[{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.903 2 WARNING nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.921 2 DEBUG nova.virt.libvirt.host [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.937 2 DEBUG nova.virt.libvirt.host [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.952 2 DEBUG nova.virt.libvirt.host [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.953 2 DEBUG nova.virt.libvirt.host [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.953 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.953 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.954 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.954 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.954 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.954 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.955 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.955 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.955 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.955 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.956 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.966 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:29:43 compute-0 nova_compute[260603]: 2025-10-02 08:29:43.983 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:44 compute-0 podman[317913]: 2025-10-02 08:29:44.07135164 +0000 UTC m=+0.136428260 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:29:44 compute-0 podman[317914]: 2025-10-02 08:29:44.102292812 +0000 UTC m=+0.165680189 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.269 2 DEBUG nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.269 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] No waiting events found dispatching network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.270 2 WARNING nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received unexpected event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c for instance with vm_state active and task_state None.
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-changed-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Refreshing instance network info cache due to event network-changed-044e4f76-db30-47b6-b277-8c3a13743b9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.271 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.271 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.271 2 DEBUG nova.network.neutron [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Refreshing network info cache for port 044e4f76-db30-47b6-b277-8c3a13743b9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027909258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.447 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.719s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.619 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.647 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:44 compute-0 ceph-mon[74477]: pgmap v1465: 305 pgs: 305 active+clean; 383 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 8.8 MiB/s wr, 399 op/s
Oct 02 08:29:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1027909258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952174159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.859 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.876s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:44 compute-0 nova_compute[260603]: 2025-10-02 08:29:44.943 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 413 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.2 MiB/s wr, 390 op/s
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.002 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.237 2 DEBUG nova.network.neutron [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Updated VIF entry in instance network info cache for port f79edf8d-90b8-47b7-b366-244c63439a64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.238 2 DEBUG nova.network.neutron [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Updating instance_info_cache with network_info: [{"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.261 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236559884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.455 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.808s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.457 2 DEBUG nova.virt.libvirt.vif [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-916266897',display_name='tempest-DeleteServersTestJSON-server-916266897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-916266897',id=58,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-aeqxqe0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:38Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=d634251d-b484-4af7-b102-fe8015603660,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.458 2 DEBUG nova.network.os_vif_util [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.458 2 DEBUG nova.network.os_vif_util [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.461 2 DEBUG nova.objects.instance [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid d634251d-b484-4af7-b102-fe8015603660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.474 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <uuid>d634251d-b484-4af7-b102-fe8015603660</uuid>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <name>instance-0000003a</name>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:name>tempest-DeleteServersTestJSON-server-916266897</nova:name>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:43</nova:creationTime>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:port uuid="f79edf8d-90b8-47b7-b366-244c63439a64">
Oct 02 08:29:45 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="serial">d634251d-b484-4af7-b102-fe8015603660</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="uuid">d634251d-b484-4af7-b102-fe8015603660</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d634251d-b484-4af7-b102-fe8015603660_disk">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d634251d-b484-4af7-b102-fe8015603660_disk.config">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:a8:48:7d"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <target dev="tapf79edf8d-90"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/console.log" append="off"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:45 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:45 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.474 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Preparing to wait for external event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.474 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.475 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.475 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.476 2 DEBUG nova.virt.libvirt.vif [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-916266897',display_name='tempest-DeleteServersTestJSON-server-916266897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-916266897',id=58,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-aeqxqe0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON
-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:38Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=d634251d-b484-4af7-b102-fe8015603660,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.477 2 DEBUG nova.network.os_vif_util [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.477 2 DEBUG nova.network.os_vif_util [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.477 2 DEBUG os_vif [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf79edf8d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf79edf8d-90, col_values=(('external_ids', {'iface-id': 'f79edf8d-90b8-47b7-b366-244c63439a64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:48:7d', 'vm-uuid': 'd634251d-b484-4af7-b102-fe8015603660'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:45 compute-0 NetworkManager[45129]: <info>  [1759393785.4842] manager: (tapf79edf8d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.491 2 INFO os_vif [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90')
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.577 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.578 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.579 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:a8:48:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.579 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Using config drive
Oct 02 08:29:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2952174159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2236559884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.716 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:29:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596397078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.781 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.808 2 DEBUG nova.virt.libvirt.vif [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-568591211',display_name='tempest-ServersTestManualDisk-server-568591211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-568591211',id=57,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARlZgY/0EkzEMepYNnm4b01nWIMecq2XG9MuajTSjQZi/SZ8DIEdLBXsx3DCy0ARgTpk4vDQEJ3TsL+ZNLPSjyILPCIRt4tiYIZmsXwTZOquFcYjN59rB2JnY5UB/nfNw==',key_name='tempest-keypair-510296598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f942883d5794a5c8e3cd2b5ef44a863',ramdisk_id='',reservation_id='r-9syab2uj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-2010618382',owner_user_name='tempest-ServersTestManualDisk-2010618382-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8f0a6fb1d224a979db4b4a738bbf453',uuid=923e00cc-7494-46f3-93e2-3c223705aff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.808 2 DEBUG nova.network.os_vif_util [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converting VIF {"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.809 2 DEBUG nova.network.os_vif_util [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.810 2 DEBUG nova.objects.instance [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lazy-loading 'pci_devices' on Instance uuid 923e00cc-7494-46f3-93e2-3c223705aff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.827 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <uuid>923e00cc-7494-46f3-93e2-3c223705aff1</uuid>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <name>instance-00000039</name>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestManualDisk-server-568591211</nova:name>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:29:43</nova:creationTime>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:user uuid="e8f0a6fb1d224a979db4b4a738bbf453">tempest-ServersTestManualDisk-2010618382-project-member</nova:user>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:project uuid="1f942883d5794a5c8e3cd2b5ef44a863">tempest-ServersTestManualDisk-2010618382</nova:project>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <nova:port uuid="044e4f76-db30-47b6-b277-8c3a13743b9c">
Oct 02 08:29:45 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <system>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="serial">923e00cc-7494-46f3-93e2-3c223705aff1</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="uuid">923e00cc-7494-46f3-93e2-3c223705aff1</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </system>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <os>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </os>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <features>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </features>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/923e00cc-7494-46f3-93e2-3c223705aff1_disk">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/923e00cc-7494-46f3-93e2-3c223705aff1_disk.config">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </source>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:29:45 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:46:b0:76"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <target dev="tap044e4f76-db"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/console.log" append="off"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <video>
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </video>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:29:45 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:29:45 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:29:45 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:29:45 compute-0 nova_compute[260603]: </domain>
Oct 02 08:29:45 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.827 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Preparing to wait for external event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.828 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.828 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.828 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.829 2 DEBUG nova.virt.libvirt.vif [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-568591211',display_name='tempest-ServersTestManualDisk-server-568591211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-568591211',id=57,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARlZgY/0EkzEMepYNnm4b01nWIMecq2XG9MuajTSjQZi/SZ8DIEdLBXsx3DCy0ARgTpk4vDQEJ3TsL+ZNLPSjyILPCIRt4tiYIZmsXwTZOquFcYjN59rB2JnY5UB/nfNw==',key_name='tempest-keypair-510296598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f942883d5794a5c8e3cd2b5ef44a863',ramdisk_id='',reservation_id='r-9syab2uj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-2010618382',owner_user_name='tempest-ServersTestManualDisk-2010618382-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8f0a6fb1d224a979db4b4a738bbf453',uuid=923e00cc-7494-46f3-93e2-3c223705aff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.829 2 DEBUG nova.network.os_vif_util [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converting VIF {"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.829 2 DEBUG nova.network.os_vif_util [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.830 2 DEBUG os_vif [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap044e4f76-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap044e4f76-db, col_values=(('external_ids', {'iface-id': '044e4f76-db30-47b6-b277-8c3a13743b9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:b0:76', 'vm-uuid': '923e00cc-7494-46f3-93e2-3c223705aff1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:45 compute-0 NetworkManager[45129]: <info>  [1759393785.8355] manager: (tap044e4f76-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.849 2 INFO os_vif [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db')
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.892 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.893 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.893 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] No VIF found with MAC fa:16:3e:46:b0:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.893 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Using config drive
Oct 02 08:29:45 compute-0 nova_compute[260603]: 2025-10-02 08:29:45.952 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.281 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.282 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.282 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.283 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.283 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.285 2 INFO nova.compute.manager [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Terminating instance
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.289 2 DEBUG nova.compute.manager [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.313 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Creating config drive at /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.320 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph69hzfdu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:46 compute-0 kernel: tapb4dfefd6-69 (unregistering): left promiscuous mode
Oct 02 08:29:46 compute-0 NetworkManager[45129]: <info>  [1759393786.3431] device (tapb4dfefd6-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.360 2 DEBUG nova.network.neutron [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updated VIF entry in instance network info cache for port 044e4f76-db30-47b6-b277-8c3a13743b9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.361 2 DEBUG nova.network.neutron [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updating instance_info_cache with network_info: [{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:46 compute-0 ovn_controller[152344]: 2025-10-02T08:29:46Z|00459|binding|INFO|Releasing lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa from this chassis (sb_readonly=0)
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:46 compute-0 ovn_controller[152344]: 2025-10-02T08:29:46Z|00460|binding|INFO|Setting lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa down in Southbound
Oct 02 08:29:46 compute-0 ovn_controller[152344]: 2025-10-02T08:29:46Z|00461|binding|INFO|Removing iface tapb4dfefd6-69 ovn-installed in OVS
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.383 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:74:a7 10.100.0.11'], port_security=['fa:16:3e:6e:74:a7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '73e8c7a5-4621-4f07-824a-b81ea314a672', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.384 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b4dfefd6-6971-4450-ae0e-50f4bf7eaafa in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c unbound from our chassis
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.385 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.388 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:46 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 02 08:29:46 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000036.scope: Consumed 6.070s CPU time.
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.405 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[515e0ba2-bbbd-4597-9eb2-8388e7ecb0a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 systemd-machined[214636]: Machine qemu-59-instance-00000036 terminated.
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.447 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbb6bf5-687f-4d36-8c18-8a6fae037e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.450 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[13d1d85e-4a5d-4af6-b891-02b55293b1dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.476 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[02e2d79b-92c8-450b-a895-fec01b792a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.484 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph69hzfdu" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[40105d36-5fd3-4ec1-a8ec-a608b1bf6cd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318128, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.510 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dbed3c6b-eab3-4b4b-9763-af1a8037e1fe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465714, 'tstamp': 465714}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318142, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465718, 'tstamp': 465718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318142, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.533 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.533 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.534 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.534 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.539 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config d634251d-b484-4af7-b102-fe8015603660_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.576 2 INFO nova.virt.libvirt.driver [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance destroyed successfully.
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.576 2 DEBUG nova.objects.instance [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'resources' on Instance uuid 73e8c7a5-4621-4f07-824a-b81ea314a672 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.602 2 DEBUG nova.virt.libvirt.vif [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-1',id=54,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:40Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=73e8c7a5-4621-4f07-824a-b81ea314a672,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.603 2 DEBUG nova.network.os_vif_util [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.608 2 DEBUG nova.network.os_vif_util [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.612 2 DEBUG os_vif [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4dfefd6-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.625 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Creating config drive at /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.630 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdps5pjcr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.669 2 INFO os_vif [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69')
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.691 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config d634251d-b484-4af7-b102-fe8015603660_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.692 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Deleting local config drive /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config because it was imported into RBD.
Oct 02 08:29:46 compute-0 ceph-mon[74477]: pgmap v1466: 305 pgs: 305 active+clean; 413 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.2 MiB/s wr, 390 op/s
Oct 02 08:29:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2596397078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:29:46 compute-0 systemd-udevd[318119]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:46 compute-0 NetworkManager[45129]: <info>  [1759393786.7530] manager: (tapf79edf8d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/205)
Oct 02 08:29:46 compute-0 kernel: tapf79edf8d-90: entered promiscuous mode
Oct 02 08:29:46 compute-0 ovn_controller[152344]: 2025-10-02T08:29:46Z|00462|binding|INFO|Claiming lport f79edf8d-90b8-47b7-b366-244c63439a64 for this chassis.
Oct 02 08:29:46 compute-0 ovn_controller[152344]: 2025-10-02T08:29:46Z|00463|binding|INFO|f79edf8d-90b8-47b7-b366-244c63439a64: Claiming fa:16:3e:a8:48:7d 10.100.0.4
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.765 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:48:7d 10.100.0.4'], port_security=['fa:16:3e:a8:48:7d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd634251d-b484-4af7-b102-fe8015603660', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f79edf8d-90b8-47b7-b366-244c63439a64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:46 compute-0 NetworkManager[45129]: <info>  [1759393786.7657] device (tapf79edf8d-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.766 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f79edf8d-90b8-47b7-b366-244c63439a64 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis
Oct 02 08:29:46 compute-0 NetworkManager[45129]: <info>  [1759393786.7671] device (tapf79edf8d-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.767 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.774 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdps5pjcr" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.778 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f03f37d5-abcb-450d-bafc-21978ede8894]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.781 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:29:46 compute-0 ovn_controller[152344]: 2025-10-02T08:29:46Z|00464|binding|INFO|Setting lport f79edf8d-90b8-47b7-b366-244c63439a64 ovn-installed in OVS
Oct 02 08:29:46 compute-0 ovn_controller[152344]: 2025-10-02T08:29:46Z|00465|binding|INFO|Setting lport f79edf8d-90b8-47b7-b366-244c63439a64 up in Southbound
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.784 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.784 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bb04f3-37e0-4c4f-a18f-9ffb6840d133]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.785 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa57311-9e10-4d87-98b6-42058a1f4e69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.806 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ac203fec-8bfb-42df-a9e5-c07d2f8bf95e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 systemd-machined[214636]: New machine qemu-62-instance-0000003a.
Oct 02 08:29:46 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-0000003a.
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.827 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0605fa8b-3b81-4624-889c-7b1013cb830f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.874 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b19788cc-ec39-486e-98dd-73b53e64ba50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.881 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:29:46 compute-0 NetworkManager[45129]: <info>  [1759393786.8878] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/206)
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.889 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1893aaa6-3a88-4317-b12f-87dee05acfed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.907 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.933 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7e0e5a-17c7-4109-9001-a657c6ff00a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.938 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3cff5d34-11db-4c76-828e-e36ac39f18c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG nova.compute.manager [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-unplugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG oslo_concurrency.lockutils [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG oslo_concurrency.lockutils [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG oslo_concurrency.lockutils [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG nova.compute.manager [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] No waiting events found dispatching network-vif-unplugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:46 compute-0 nova_compute[260603]: 2025-10-02 08:29:46.951 2 DEBUG nova.compute.manager [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-unplugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:29:46 compute-0 NetworkManager[45129]: <info>  [1759393786.9590] device (tapa72ac8c9-10): carrier: link connected
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.968 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[08f3f2f7-1604-40fa-919d-fd5fe8d4d327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 413 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 3.6 MiB/s wr, 367 op/s
Oct 02 08:29:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.992 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcec84b-468c-483e-9c32-1c2aab03784b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466556, 'reachable_time': 43237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318278, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.011 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfb8e3b-cd37-495f-98e3-108e617bf7f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466556, 'tstamp': 466556}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318293, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.039 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1a9fc6-8feb-4319-8d5c-9476052a180a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466556, 'reachable_time': 43237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318294, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.114 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29e9d93a-544c-4603-8d81-57d65dfdcb47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.169 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.170 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Deleting local config drive /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config because it was imported into RBD.
Oct 02 08:29:47 compute-0 virtqemud[260328]: End of file while reading data: Input/output error
Oct 02 08:29:47 compute-0 kernel: tap044e4f76-db: entered promiscuous mode
Oct 02 08:29:47 compute-0 NetworkManager[45129]: <info>  [1759393787.2172] manager: (tap044e4f76-db): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Oct 02 08:29:47 compute-0 systemd-udevd[318259]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:47 compute-0 ovn_controller[152344]: 2025-10-02T08:29:47Z|00466|binding|INFO|Claiming lport 044e4f76-db30-47b6-b277-8c3a13743b9c for this chassis.
Oct 02 08:29:47 compute-0 ovn_controller[152344]: 2025-10-02T08:29:47Z|00467|binding|INFO|044e4f76-db30-47b6-b277-8c3a13743b9c: Claiming fa:16:3e:46:b0:76 10.100.0.11
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.217 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bca16ceb-0240-4267-8cb5-3e8725f80477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.226 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.226 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.226 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:47 compute-0 NetworkManager[45129]: <info>  [1759393787.2289] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.230 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:b0:76 10.100.0.11'], port_security=['fa:16:3e:46:b0:76 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '923e00cc-7494-46f3-93e2-3c223705aff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bd8f146-d090-40d8-8651-21c92934a6ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f942883d5794a5c8e3cd2b5ef44a863', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3c9d772-e37a-48f1-89cb-39eaf88eda56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dec8975f-9e7e-451f-957f-04aee213c5b3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=044e4f76-db30-47b6-b277-8c3a13743b9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:47 compute-0 NetworkManager[45129]: <info>  [1759393787.2337] device (tap044e4f76-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:29:47 compute-0 NetworkManager[45129]: <info>  [1759393787.2355] device (tap044e4f76-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:29:47 compute-0 systemd-machined[214636]: New machine qemu-63-instance-00000039.
Oct 02 08:29:47 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000039.
Oct 02 08:29:47 compute-0 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct 02 08:29:47 compute-0 ovn_controller[152344]: 2025-10-02T08:29:47Z|00468|binding|INFO|Setting lport 044e4f76-db30-47b6-b277-8c3a13743b9c ovn-installed in OVS
Oct 02 08:29:47 compute-0 ovn_controller[152344]: 2025-10-02T08:29:47Z|00469|binding|INFO|Setting lport 044e4f76-db30-47b6-b277-8c3a13743b9c up in Southbound
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:47 compute-0 ovn_controller[152344]: 2025-10-02T08:29:47Z|00470|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.364 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.366 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.367 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0338394d-2bdd-4b16-939d-d93e7430c12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.368 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:29:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.368 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.494 2 INFO nova.virt.libvirt.driver [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Deleting instance files /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672_del
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.495 2 INFO nova.virt.libvirt.driver [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Deletion of /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672_del complete
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.554 2 INFO nova.compute.manager [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Took 1.26 seconds to destroy the instance on the hypervisor.
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.555 2 DEBUG oslo.service.loopingcall [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.556 2 DEBUG nova.compute.manager [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.556 2 DEBUG nova.network.neutron [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:29:47 compute-0 nova_compute[260603]: 2025-10-02 08:29:47.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:47 compute-0 podman[318391]: 2025-10-02 08:29:47.795266697 +0000 UTC m=+0.099307478 container create 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:29:47 compute-0 podman[318391]: 2025-10-02 08:29:47.759113153 +0000 UTC m=+0.063153954 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:29:47 compute-0 systemd[1]: Started libpod-conmon-7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8.scope.
Oct 02 08:29:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa27ff810503b5c16a82c7c0b62fcacee12457623b0ef10cc50f690d346bfd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:47 compute-0 podman[318391]: 2025-10-02 08:29:47.949623104 +0000 UTC m=+0.253663905 container init 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:47 compute-0 podman[318391]: 2025-10-02 08:29:47.960638986 +0000 UTC m=+0.264679767 container start 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:29:47 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [NOTICE]   (318442) : New worker (318447) forked
Oct 02 08:29:47 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [NOTICE]   (318442) : Loading success.
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.108 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 044e4f76-db30-47b6-b277-8c3a13743b9c in datapath 9bd8f146-d090-40d8-8651-21c92934a6ff unbound from our chassis
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.110 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bd8f146-d090-40d8-8651-21c92934a6ff
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.126 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[937d314a-ab7e-4f27-a259-b75ebc6dc340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.131 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bd8f146-d1 in ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.134 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bd8f146-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.134 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0abc4e6-5354-4c8b-919c-ea6c4a3f4285]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22cf968d-d4a0-48ac-bea8-006b7ade18de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.149 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c42892ee-32d5-4f7f-bd28-02fe36d942e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ad20b7-d96b-4bf4-af75-04ac907597e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.220 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bd70fa6a-8162-40da-8e96-6781d37f9ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 NetworkManager[45129]: <info>  [1759393788.2309] manager: (tap9bd8f146-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fa5d2b-4952-424f-bc81-da31c8597f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.253 2 DEBUG nova.network.neutron [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.275 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9902383e-75af-441b-8030-48faf64a9fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.278 2 INFO nova.compute.manager [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Took 0.72 seconds to deallocate network for instance.
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.282 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[be3d8a3d-c18f-4e26-933c-49b5e7ff3463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.303 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393788.301911, d634251d-b484-4af7-b102-fe8015603660 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.303 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Started (Lifecycle Event)
Oct 02 08:29:48 compute-0 NetworkManager[45129]: <info>  [1759393788.3185] device (tap9bd8f146-d0): carrier: link connected
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.323 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d67dc4-f282-4558-b596-6524ccfecf65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.347 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.348 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.358 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.362 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393788.3021214, d634251d-b484-4af7-b102-fe8015603660 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.363 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Paused (Lifecycle Event)
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.362 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc8d37b-4982-4f51-a400-28018773fea4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bd8f146-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:55:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466692, 'reachable_time': 25414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318470, 'error': None, 'target': 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.379 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b429a62e-5e4a-476c-a1c3-d23fad46ba7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:5540'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466692, 'tstamp': 466692}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318471, 'error': None, 'target': 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.392 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.396 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3f8c56-4129-4eab-b9b1-835297a5dde1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bd8f146-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:55:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466692, 'reachable_time': 25414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318472, 'error': None, 'target': 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.416 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[026b1444-a0f9-4a2f-95bf-fb3858276952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.510 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bca4d1-be01-4ba8-998a-2d20fb91b470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.511 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bd8f146-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.511 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.512 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bd8f146-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:48 compute-0 NetworkManager[45129]: <info>  [1759393788.5145] manager: (tap9bd8f146-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Oct 02 08:29:48 compute-0 kernel: tap9bd8f146-d0: entered promiscuous mode
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.516 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bd8f146-d0, col_values=(('external_ids', {'iface-id': '21940774-b256-44e1-b604-ddb5b66aa4a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:48 compute-0 ovn_controller[152344]: 2025-10-02T08:29:48Z|00471|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.535 2 DEBUG oslo_concurrency.processutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.547 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bd8f146-d090-40d8-8651-21c92934a6ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bd8f146-d090-40d8-8651-21c92934a6ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.549 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62b2fba1-417e-4beb-8f2c-d5215a9c556f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.550 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-9bd8f146-d090-40d8-8651-21c92934a6ff
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/9bd8f146-d090-40d8-8651-21c92934a6ff.pid.haproxy
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 9bd8f146-d090-40d8-8651-21c92934a6ff
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:29:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.550 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'env', 'PROCESS_TAG=haproxy-9bd8f146-d090-40d8-8651-21c92934a6ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bd8f146-d090-40d8-8651-21c92934a6ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:29:48 compute-0 ovn_controller[152344]: 2025-10-02T08:29:48Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:7e:1d 10.100.0.9
Oct 02 08:29:48 compute-0 ovn_controller[152344]: 2025-10-02T08:29:48Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:7e:1d 10.100.0.9
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.616 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393788.6118574, 923e00cc-7494-46f3-93e2-3c223705aff1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.617 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] VM Started (Lifecycle Event)
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.639 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.648 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393788.611991, 923e00cc-7494-46f3-93e2-3c223705aff1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.649 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] VM Paused (Lifecycle Event)
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.667 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.687 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:48 compute-0 nova_compute[260603]: 2025-10-02 08:29:48.712 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:48 compute-0 ceph-mon[74477]: pgmap v1467: 305 pgs: 305 active+clean; 413 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 3.6 MiB/s wr, 367 op/s
Oct 02 08:29:48 compute-0 podman[318521]: 2025-10-02 08:29:48.938283558 +0000 UTC m=+0.044906687 container create 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:29:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 388 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.6 MiB/s wr, 553 op/s
Oct 02 08:29:48 compute-0 systemd[1]: Started libpod-conmon-7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21.scope.
Oct 02 08:29:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db0e7b9516bd1a447408cbffb5be27ab3618c8243a87ad12b0787ded02cc8cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:29:49 compute-0 podman[318521]: 2025-10-02 08:29:48.917787231 +0000 UTC m=+0.024410370 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:29:49 compute-0 podman[318521]: 2025-10-02 08:29:49.02234447 +0000 UTC m=+0.128967599 container init 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 08:29:49 compute-0 podman[318521]: 2025-10-02 08:29:49.031968599 +0000 UTC m=+0.138591728 container start 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086178548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.068 2 DEBUG oslo_concurrency.processutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:49 compute-0 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [NOTICE]   (318539) : New worker (318543) forked
Oct 02 08:29:49 compute-0 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [NOTICE]   (318539) : Loading success.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.084 2 DEBUG nova.compute.provider_tree [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.101 2 DEBUG nova.scheduler.client.report [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.117 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.166 2 INFO nova.scheduler.client.report [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Deleted allocations for instance 73e8c7a5-4621-4f07-824a-b81ea314a672
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.223 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.331 2 DEBUG nova.compute.manager [req-60dfc5e9-b385-45c2-a24e-46ae03f21b98 req-aca5cc43-6fc8-4bdf-8da2-b7bb1d7e9670 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-deleted-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.499 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.499 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.499 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.499 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.500 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] No waiting events found dispatching network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.500 2 WARNING nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received unexpected event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa for instance with vm_state deleted and task_state None.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.500 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.500 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.501 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.501 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.502 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Processing event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.502 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.503 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.503 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.504 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.505 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] No waiting events found dispatching network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.505 2 WARNING nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received unexpected event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 for instance with vm_state building and task_state spawning.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.505 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.505 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.506 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.507 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.508 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Processing event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] No waiting events found dispatching network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.510 2 WARNING nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received unexpected event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c for instance with vm_state building and task_state spawning.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.510 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.511 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.515 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393789.515318, d634251d-b484-4af7-b102-fe8015603660 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.515 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Resumed (Lifecycle Event)
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.517 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.523 2 INFO nova.virt.libvirt.driver [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance spawned successfully.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.524 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.527 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.532 2 INFO nova.virt.libvirt.driver [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance spawned successfully.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.533 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.542 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.554 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.562 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.563 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.564 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.564 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.567 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.567 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.573 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.577 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.578 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.578 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.579 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.579 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.598 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.598 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393789.5168705, 923e00cc-7494-46f3-93e2-3c223705aff1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.599 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] VM Resumed (Lifecycle Event)
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.647 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.651 2 INFO nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Took 11.37 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.652 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.652 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.678 2 INFO nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Took 13.61 seconds to spawn the instance on the hypervisor.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.678 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.685 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:29:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1086178548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.792 2 INFO nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Took 12.62 seconds to build instance.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.796 2 INFO nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Took 14.84 seconds to build instance.
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.812 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 nova_compute[260603]: 2025-10-02 08:29:49.813 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:49 compute-0 podman[318553]: 2025-10-02 08:29:49.991058914 +0000 UTC m=+0.059631034 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:29:50 compute-0 nova_compute[260603]: 2025-10-02 08:29:50.085 2 DEBUG oslo_concurrency.lockutils [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:50 compute-0 nova_compute[260603]: 2025-10-02 08:29:50.086 2 DEBUG oslo_concurrency.lockutils [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:50 compute-0 nova_compute[260603]: 2025-10-02 08:29:50.086 2 DEBUG nova.compute.manager [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:50 compute-0 nova_compute[260603]: 2025-10-02 08:29:50.090 2 DEBUG nova.compute.manager [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:29:50 compute-0 nova_compute[260603]: 2025-10-02 08:29:50.090 2 DEBUG nova.objects.instance [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'flavor' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:50 compute-0 ovn_controller[152344]: 2025-10-02T08:29:50Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:eb:54 10.100.0.8
Oct 02 08:29:50 compute-0 nova_compute[260603]: 2025-10-02 08:29:50.119 2 DEBUG nova.virt.libvirt.driver [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:29:50 compute-0 ovn_controller[152344]: 2025-10-02T08:29:50Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:eb:54 10.100.0.8
Oct 02 08:29:50 compute-0 ceph-mon[74477]: pgmap v1468: 305 pgs: 305 active+clean; 388 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.6 MiB/s wr, 553 op/s
Oct 02 08:29:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 388 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 3.8 MiB/s wr, 440 op/s
Oct 02 08:29:51 compute-0 nova_compute[260603]: 2025-10-02 08:29:51.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.029 2 DEBUG nova.objects.instance [None req-fe056b8f-a117-43cf-9e05-c65576ab3523 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid d634251d-b484-4af7-b102-fe8015603660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.049 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393792.0496888, d634251d-b484-4af7-b102-fe8015603660 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.050 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Paused (Lifecycle Event)
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.072 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.095 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:29:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.122 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 08:29:52 compute-0 kernel: tapf79edf8d-90 (unregistering): left promiscuous mode
Oct 02 08:29:52 compute-0 NetworkManager[45129]: <info>  [1759393792.4103] device (tapf79edf8d-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:29:52 compute-0 ovn_controller[152344]: 2025-10-02T08:29:52Z|00472|binding|INFO|Releasing lport f79edf8d-90b8-47b7-b366-244c63439a64 from this chassis (sb_readonly=0)
Oct 02 08:29:52 compute-0 ovn_controller[152344]: 2025-10-02T08:29:52Z|00473|binding|INFO|Setting lport f79edf8d-90b8-47b7-b366-244c63439a64 down in Southbound
Oct 02 08:29:52 compute-0 ovn_controller[152344]: 2025-10-02T08:29:52Z|00474|binding|INFO|Removing iface tapf79edf8d-90 ovn-installed in OVS
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.486 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:48:7d 10.100.0.4'], port_security=['fa:16:3e:a8:48:7d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd634251d-b484-4af7-b102-fe8015603660', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f79edf8d-90b8-47b7-b366-244c63439a64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.487 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f79edf8d-90b8-47b7-b366-244c63439a64 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.490 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fca171eb-9d2a-4da7-bda5-c806008d110e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.493 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore
Oct 02 08:29:52 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct 02 08:29:52 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000003a.scope: Consumed 3.489s CPU time.
Oct 02 08:29:52 compute-0 systemd-machined[214636]: Machine qemu-62-instance-0000003a terminated.
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.595 2 DEBUG nova.compute.manager [None req-fe056b8f-a117-43cf-9e05-c65576ab3523 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:29:52 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [NOTICE]   (318442) : haproxy version is 2.8.14-c23fe91
Oct 02 08:29:52 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [NOTICE]   (318442) : path to executable is /usr/sbin/haproxy
Oct 02 08:29:52 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [WARNING]  (318442) : Exiting Master process...
Oct 02 08:29:52 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [WARNING]  (318442) : Exiting Master process...
Oct 02 08:29:52 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [ALERT]    (318442) : Current worker (318447) exited with code 143 (Terminated)
Oct 02 08:29:52 compute-0 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [WARNING]  (318442) : All workers exited. Exiting... (0)
Oct 02 08:29:52 compute-0 systemd[1]: libpod-7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8.scope: Deactivated successfully.
Oct 02 08:29:52 compute-0 podman[318596]: 2025-10-02 08:29:52.643859924 +0000 UTC m=+0.061474381 container died 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8-userdata-shm.mount: Deactivated successfully.
Oct 02 08:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2aa27ff810503b5c16a82c7c0b62fcacee12457623b0ef10cc50f690d346bfd3-merged.mount: Deactivated successfully.
Oct 02 08:29:52 compute-0 podman[318596]: 2025-10-02 08:29:52.695128147 +0000 UTC m=+0.112742624 container cleanup 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:29:52 compute-0 systemd[1]: libpod-conmon-7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8.scope: Deactivated successfully.
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.720 2 DEBUG nova.compute.manager [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-unplugged-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.723 2 DEBUG oslo_concurrency.lockutils [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.723 2 DEBUG oslo_concurrency.lockutils [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.724 2 DEBUG oslo_concurrency.lockutils [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.725 2 DEBUG nova.compute.manager [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] No waiting events found dispatching network-vif-unplugged-f79edf8d-90b8-47b7-b366-244c63439a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.725 2 WARNING nova.compute.manager [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received unexpected event network-vif-unplugged-f79edf8d-90b8-47b7-b366-244c63439a64 for instance with vm_state suspended and task_state None.
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:52 compute-0 ceph-mon[74477]: pgmap v1469: 305 pgs: 305 active+clean; 388 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 3.8 MiB/s wr, 440 op/s
Oct 02 08:29:52 compute-0 podman[318633]: 2025-10-02 08:29:52.77564751 +0000 UTC m=+0.042907475 container remove 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.781 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd60fcf-0e96-45e4-bcc0-7105e89a237c]: (4, ('Thu Oct  2 08:29:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8)\n7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8\nThu Oct  2 08:29:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8)\n7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.787 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ba72bc-b481-489f-8a0c-a1d2ddfa5b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.791 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:52 compute-0 kernel: tapa72ac8c9-10: left promiscuous mode
Oct 02 08:29:52 compute-0 nova_compute[260603]: 2025-10-02 08:29:52.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.825 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0a447018-dbf1-4ef8-8046-a6d4d45eff5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.845 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b76c75da-0015-4c92-8f52-166bfe6a7cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.846 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef2cec5-dc99-4599-94cd-8a6370e59125]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.865 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd9329c-2236-4318-89ca-9f0057530d07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466547, 'reachable_time': 41808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318649, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:52 compute-0 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.868 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:29:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.868 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb59dbf-3cdb-443d-8f74-722de555ebce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 424 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 586 op/s
Oct 02 08:29:53 compute-0 ovn_controller[152344]: 2025-10-02T08:29:53Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:b2:ea 10.100.0.12
Oct 02 08:29:53 compute-0 ovn_controller[152344]: 2025-10-02T08:29:53Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:b2:ea 10.100.0.12
Oct 02 08:29:54 compute-0 podman[318653]: 2025-10-02 08:29:54.012673622 +0000 UTC m=+0.072220995 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:29:54 compute-0 NetworkManager[45129]: <info>  [1759393794.2959] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 NetworkManager[45129]: <info>  [1759393794.2967] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00475|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00476|binding|INFO|Releasing lport 39e267b1-3dd0-4688-b6e5-f1bccf651722 from this chassis (sb_readonly=0)
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00477|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00478|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00479|binding|INFO|Releasing lport 39e267b1-3dd0-4688-b6e5-f1bccf651722 from this chassis (sb_readonly=0)
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00480|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.377 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.378 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.379 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.380 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.380 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.383 2 INFO nova.compute.manager [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Terminating instance
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.386 2 DEBUG nova.compute.manager [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.394 2 INFO nova.virt.libvirt.driver [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance destroyed successfully.
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.396 2 DEBUG nova.objects.instance [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid d634251d-b484-4af7-b102-fe8015603660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.414 2 DEBUG nova.virt.libvirt.vif [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-916266897',display_name='tempest-DeleteServersTestJSON-server-916266897',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-916266897',id=58,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-aeqxqe0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:52Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=d634251d-b484-4af7-b102-fe8015603660,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.416 2 DEBUG nova.network.os_vif_util [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.418 2 DEBUG nova.network.os_vif_util [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.419 2 DEBUG os_vif [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf79edf8d-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.435 2 INFO os_vif [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90')
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.484 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.485 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.485 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.486 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.486 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.488 2 INFO nova.compute.manager [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Terminating instance
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.489 2 DEBUG nova.compute.manager [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:29:54 compute-0 kernel: tap5bbef33f-36 (unregistering): left promiscuous mode
Oct 02 08:29:54 compute-0 NetworkManager[45129]: <info>  [1759393794.5457] device (tap5bbef33f-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00481|binding|INFO|Releasing lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c from this chassis (sb_readonly=0)
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00482|binding|INFO|Setting lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c down in Southbound
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00483|binding|INFO|Removing iface tap5bbef33f-36 ovn-installed in OVS
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.561 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:86:2c 10.100.0.5'], port_security=['fa:16:3e:95:86:2c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f7005e7b-8982-4d23-b12a-4b67c90a6c89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.563 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c unbound from our chassis
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.567 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.584 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.585 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.585 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.586 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.586 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.587 2 INFO nova.compute.manager [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Terminating instance
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.588 2 DEBUG nova.compute.manager [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.589 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a4ff29-015d-4b65-b74a-891c04c430ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:54 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct 02 08:29:54 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000037.scope: Consumed 11.669s CPU time.
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.616 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[92e30d0d-95d8-43a6-b3af-ec76e40a3749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:54 compute-0 systemd-machined[214636]: Machine qemu-60-instance-00000037 terminated.
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.619 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf16824-c8b2-40e1-ba74-5efd0ae2d846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:54 compute-0 kernel: tap12abaaed-2f (unregistering): left promiscuous mode
Oct 02 08:29:54 compute-0 NetworkManager[45129]: <info>  [1759393794.6281] device (tap12abaaed-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.651 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ad41fd69-5c70-48c2-987d-0d97ea140e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.667 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e0f39e-13b9-4f09-9a62-c7ad4e10cef6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318706, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.682 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3402e64f-ed83-46f4-967b-f63dfcad78ab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465714, 'tstamp': 465714}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318707, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465718, 'tstamp': 465718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318707, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.689 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00484|binding|INFO|Releasing lport 12abaaed-2f93-40bd-bddd-8143c3709480 from this chassis (sb_readonly=0)
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00485|binding|INFO|Setting lport 12abaaed-2f93-40bd-bddd-8143c3709480 down in Southbound
Oct 02 08:29:54 compute-0 ovn_controller[152344]: 2025-10-02T08:29:54Z|00486|binding|INFO|Removing iface tap12abaaed-2f ovn-installed in OVS
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.698 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:bc:66 10.100.0.4'], port_security=['fa:16:3e:24:bc:66 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f56dc5d2-b1f8-42ef-882c-62bcbd600954', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=12abaaed-2f93-40bd-bddd-8143c3709480) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 02 08:29:54 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000038.scope: Consumed 11.931s CPU time.
Oct 02 08:29:54 compute-0 systemd-machined[214636]: Machine qemu-61-instance-00000038 terminated.
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.718 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.718 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.718 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.719 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.719 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 12abaaed-2f93-40bd-bddd-8143c3709480 in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c unbound from our chassis
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.720 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9e6563dd-5ecf-4759-9df8-5b501617e75c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[751843c6-0d79-4449-b069-3199f51f769f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.721 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c namespace which is not needed anymore
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.734 2 INFO nova.virt.libvirt.driver [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance destroyed successfully.
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.734 2 DEBUG nova.objects.instance [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'resources' on Instance uuid f7005e7b-8982-4d23-b12a-4b67c90a6c89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:54 compute-0 ceph-mon[74477]: pgmap v1470: 305 pgs: 305 active+clean; 424 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 586 op/s
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.754 2 DEBUG nova.virt.libvirt.vif [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-2',id=55,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T08:29:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:42Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f7005e7b-8982-4d23-b12a-4b67c90a6c89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.755 2 DEBUG nova.network.os_vif_util [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.755 2 DEBUG nova.network.os_vif_util [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.756 2 DEBUG os_vif [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.758 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bbef33f-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.766 2 INFO os_vif [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36')
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.831 2 INFO nova.virt.libvirt.driver [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance destroyed successfully.
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.831 2 DEBUG nova.objects.instance [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'resources' on Instance uuid f56dc5d2-b1f8-42ef-882c-62bcbd600954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.848 2 DEBUG nova.virt.libvirt.vif [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-3',id=56,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-02T08:29:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:43Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f56dc5d2-b1f8-42ef-882c-62bcbd600954,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.849 2 DEBUG nova.network.os_vif_util [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.849 2 DEBUG nova.network.os_vif_util [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.850 2 DEBUG os_vif [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12abaaed-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.857 2 INFO os_vif [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f')
Oct 02 08:29:54 compute-0 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [NOTICE]   (317653) : haproxy version is 2.8.14-c23fe91
Oct 02 08:29:54 compute-0 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [NOTICE]   (317653) : path to executable is /usr/sbin/haproxy
Oct 02 08:29:54 compute-0 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [WARNING]  (317653) : Exiting Master process...
Oct 02 08:29:54 compute-0 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [WARNING]  (317653) : Exiting Master process...
Oct 02 08:29:54 compute-0 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [ALERT]    (317653) : Current worker (317655) exited with code 143 (Terminated)
Oct 02 08:29:54 compute-0 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [WARNING]  (317653) : All workers exited. Exiting... (0)
Oct 02 08:29:54 compute-0 systemd[1]: libpod-2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce.scope: Deactivated successfully.
Oct 02 08:29:54 compute-0 podman[318760]: 2025-10-02 08:29:54.883203045 +0000 UTC m=+0.061511743 container died 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.887 2 DEBUG nova.compute.manager [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-changed-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.888 2 DEBUG nova.compute.manager [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Refreshing instance network info cache due to event network-changed-044e4f76-db30-47b6-b277-8c3a13743b9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.888 2 DEBUG oslo_concurrency.lockutils [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.888 2 DEBUG oslo_concurrency.lockutils [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.888 2 DEBUG nova.network.neutron [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Refreshing network info cache for port 044e4f76-db30-47b6-b277-8c3a13743b9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce-userdata-shm.mount: Deactivated successfully.
Oct 02 08:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b410b66f213cbfe1b811aab1f58708a1f7a49c73b1959803809bb9cdb00d8fa-merged.mount: Deactivated successfully.
Oct 02 08:29:54 compute-0 podman[318760]: 2025-10-02 08:29:54.930304699 +0000 UTC m=+0.108613387 container cleanup 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.937 2 INFO nova.virt.libvirt.driver [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Deleting instance files /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660_del
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.937 2 INFO nova.virt.libvirt.driver [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Deletion of /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660_del complete
Oct 02 08:29:54 compute-0 systemd[1]: libpod-conmon-2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce.scope: Deactivated successfully.
Oct 02 08:29:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 454 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 7.7 MiB/s wr, 533 op/s
Oct 02 08:29:54 compute-0 podman[318816]: 2025-10-02 08:29:54.996807575 +0000 UTC m=+0.041111158 container remove 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.997 2 INFO nova.compute.manager [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Took 0.61 seconds to destroy the instance on the hypervisor.
Oct 02 08:29:54 compute-0 nova_compute[260603]: 2025-10-02 08:29:54.998 2 DEBUG oslo.service.loopingcall [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.001 2 DEBUG nova.compute.manager [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.004 2 DEBUG nova.network.neutron [-] [instance: d634251d-b484-4af7-b102-fe8015603660] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.002 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[71dda71e-0a5a-4c6c-8594-cec8c3844a84]: (4, ('Thu Oct  2 08:29:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c (2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce)\n2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce\nThu Oct  2 08:29:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c (2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce)\n2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.006 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b648ecb1-59e0-40cf-8fe8-535d470fb562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.007 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:55 compute-0 kernel: tap9e6563dd-50: left promiscuous mode
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.030 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2049ee3e-9d39-4620-84b9-9db427db117e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.047 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f74fe7f6-6685-4828-b927-c496fadd8fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.053 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60d01837-5055-429a-adac-9c80f93fdcee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.071 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7030ff61-70c6-4ab4-a9ea-5e1b3a2d91f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465690, 'reachable_time': 31338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318830, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.077 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.077 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8379a2-3ddb-4dd7-9dbb-26229db6f13c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:29:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d9e6563dd\x2d5ecf\x2d4759\x2d9df8\x2d5b501617e75c.mount: Deactivated successfully.
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG nova.compute.manager [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG oslo_concurrency.lockutils [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG oslo_concurrency.lockutils [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG oslo_concurrency.lockutils [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG nova.compute.manager [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] No waiting events found dispatching network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.087 2 WARNING nova.compute.manager [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received unexpected event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 for instance with vm_state suspended and task_state deleting.
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.105 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:29:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.106 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.184 2 INFO nova.virt.libvirt.driver [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Deleting instance files /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89_del
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.185 2 INFO nova.virt.libvirt.driver [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Deletion of /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89_del complete
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.234 2 INFO nova.compute.manager [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.235 2 DEBUG oslo.service.loopingcall [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.235 2 DEBUG nova.compute.manager [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.235 2 DEBUG nova.network.neutron [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.285 2 INFO nova.virt.libvirt.driver [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Deleting instance files /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954_del
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.286 2 INFO nova.virt.libvirt.driver [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Deletion of /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954_del complete
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.335 2 INFO nova.compute.manager [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.335 2 DEBUG oslo.service.loopingcall [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.338 2 DEBUG nova.compute.manager [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.339 2 DEBUG nova.network.neutron [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.901 2 DEBUG nova.network.neutron [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.919 2 INFO nova.compute.manager [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Took 0.58 seconds to deallocate network for instance.
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.979 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.979 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:55 compute-0 nova_compute[260603]: 2025-10-02 08:29:55.980 2 DEBUG nova.network.neutron [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.000 2 INFO nova.compute.manager [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Took 1.00 seconds to deallocate network for instance.
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.074 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.076 2 DEBUG nova.network.neutron [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.091 2 INFO nova.compute.manager [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Took 0.86 seconds to deallocate network for instance.
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.131 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.177 2 DEBUG oslo_concurrency.processutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419304229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.653 2 DEBUG oslo_concurrency.processutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.672 2 DEBUG nova.compute.provider_tree [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.704 2 DEBUG nova.scheduler.client.report [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.746 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.750 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:56 compute-0 ceph-mon[74477]: pgmap v1471: 305 pgs: 305 active+clean; 454 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 7.7 MiB/s wr, 533 op/s
Oct 02 08:29:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3419304229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.781 2 INFO nova.scheduler.client.report [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Deleted allocations for instance f56dc5d2-b1f8-42ef-882c-62bcbd600954
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.872 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.942 2 DEBUG oslo_concurrency.processutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 454 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 6.4 MiB/s wr, 458 op/s
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.992 2 DEBUG nova.network.neutron [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updated VIF entry in instance network info cache for port 044e4f76-db30-47b6-b277-8c3a13743b9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:29:56 compute-0 nova_compute[260603]: 2025-10-02 08:29:56.993 2 DEBUG nova.network.neutron [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updating instance_info_cache with network_info: [{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.032 2 DEBUG oslo_concurrency.lockutils [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.098 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-unplugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.099 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.099 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.100 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.100 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] No waiting events found dispatching network-vif-unplugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.100 2 WARNING nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received unexpected event network-vif-unplugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c for instance with vm_state deleted and task_state None.
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.101 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.101 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.101 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.102 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.102 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] No waiting events found dispatching network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.102 2 WARNING nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received unexpected event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c for instance with vm_state deleted and task_state None.
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.103 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-deleted-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.103 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-deleted-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.170 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-unplugged-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.171 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.171 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.171 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.172 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] No waiting events found dispatching network-vif-unplugged-12abaaed-2f93-40bd-bddd-8143c3709480 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.172 2 WARNING nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received unexpected event network-vif-unplugged-12abaaed-2f93-40bd-bddd-8143c3709480 for instance with vm_state deleted and task_state None.
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.172 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.173 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.173 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.173 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.174 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] No waiting events found dispatching network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.174 2 WARNING nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received unexpected event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 for instance with vm_state deleted and task_state None.
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.174 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-deleted-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:29:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279043566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.421 2 DEBUG oslo_concurrency.processutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.429 2 DEBUG nova.compute.provider_tree [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.459 2 DEBUG nova.scheduler.client.report [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.495 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.498 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.527 2 INFO nova.scheduler.client.report [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance d634251d-b484-4af7-b102-fe8015603660
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.630 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.636 2 DEBUG oslo_concurrency.processutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:29:57 compute-0 nova_compute[260603]: 2025-10-02 08:29:57.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:29:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3279043566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:29:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:29:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589170335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:58 compute-0 nova_compute[260603]: 2025-10-02 08:29:58.076 2 DEBUG oslo_concurrency.processutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:29:58 compute-0 nova_compute[260603]: 2025-10-02 08:29:58.084 2 DEBUG nova.compute.provider_tree [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:29:58 compute-0 nova_compute[260603]: 2025-10-02 08:29:58.104 2 DEBUG nova.scheduler.client.report [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:29:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:29:58.108 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:29:58 compute-0 nova_compute[260603]: 2025-10-02 08:29:58.132 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:58 compute-0 nova_compute[260603]: 2025-10-02 08:29:58.169 2 INFO nova.scheduler.client.report [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Deleted allocations for instance f7005e7b-8982-4d23-b12a-4b67c90a6c89
Oct 02 08:29:58 compute-0 nova_compute[260603]: 2025-10-02 08:29:58.240 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:29:58 compute-0 ceph-mon[74477]: pgmap v1472: 305 pgs: 305 active+clean; 454 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 6.4 MiB/s wr, 458 op/s
Oct 02 08:29:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1589170335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:29:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 326 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.5 MiB/s wr, 597 op/s
Oct 02 08:29:59 compute-0 nova_compute[260603]: 2025-10-02 08:29:59.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:00 compute-0 nova_compute[260603]: 2025-10-02 08:30:00.181 2 DEBUG nova.virt.libvirt.driver [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:30:00 compute-0 ceph-mon[74477]: pgmap v1473: 305 pgs: 305 active+clean; 326 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.5 MiB/s wr, 597 op/s
Oct 02 08:30:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 326 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.5 MiB/s wr, 412 op/s
Oct 02 08:30:01 compute-0 nova_compute[260603]: 2025-10-02 08:30:01.572 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393786.532904, 73e8c7a5-4621-4f07-824a-b81ea314a672 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:01 compute-0 nova_compute[260603]: 2025-10-02 08:30:01.572 2 INFO nova.compute.manager [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] VM Stopped (Lifecycle Event)
Oct 02 08:30:01 compute-0 nova_compute[260603]: 2025-10-02 08:30:01.603 2 DEBUG nova.compute.manager [None req-275a379f-29bc-4047-b8ee-485d0f2990e4 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:01 compute-0 ovn_controller[152344]: 2025-10-02T08:30:01Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:b0:76 10.100.0.11
Oct 02 08:30:01 compute-0 ovn_controller[152344]: 2025-10-02T08:30:01Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:b0:76 10.100.0.11
Oct 02 08:30:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:02 compute-0 ovn_controller[152344]: 2025-10-02T08:30:02Z|00487|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct 02 08:30:02 compute-0 ovn_controller[152344]: 2025-10-02T08:30:02Z|00488|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:02 compute-0 kernel: tapbf9cdb7f-4c (unregistering): left promiscuous mode
Oct 02 08:30:02 compute-0 NetworkManager[45129]: <info>  [1759393802.4946] device (tapbf9cdb7f-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:30:02 compute-0 ovn_controller[152344]: 2025-10-02T08:30:02Z|00489|binding|INFO|Releasing lport bf9cdb7f-4cda-403b-b27e-12385e93db02 from this chassis (sb_readonly=0)
Oct 02 08:30:02 compute-0 ovn_controller[152344]: 2025-10-02T08:30:02Z|00490|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 down in Southbound
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:02 compute-0 ovn_controller[152344]: 2025-10-02T08:30:02Z|00491|binding|INFO|Removing iface tapbf9cdb7f-4c ovn-installed in OVS
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.517 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:b2:ea 10.100.0.12'], port_security=['fa:16:3e:64:b2:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9924ce7f-b701-4560-b2c5-67f673b45807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf9cdb7f-4cda-403b-b27e-12385e93db02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.520 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf9cdb7f-4cda-403b-b27e-12385e93db02 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.523 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.548 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3326ba05-7658-481c-9a3d-25db008f53df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:02 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 02 08:30:02 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Consumed 13.000s CPU time.
Oct 02 08:30:02 compute-0 systemd-machined[214636]: Machine qemu-56-instance-00000033 terminated.
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.583 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[63437c9d-6234-4f06-a830-2ac85233fcb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.588 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3d483035-fc0c-4f3c-9957-83c48dd97e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.628 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee67fa10-0086-4e30-b42e-5426d8c32416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.646 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fee7dc-cafd-4d3f-b53d-6f5f8feeaba8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 9, 'rx_bytes': 1042, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 9, 'rx_bytes': 1042, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318909, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.664 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[120d11cc-63fa-4563-81a0-882e1a1dc00c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318910, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318910, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.665 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.671 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.672 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.672 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.673 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:02 compute-0 ceph-mon[74477]: pgmap v1474: 305 pgs: 305 active+clean; 326 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.5 MiB/s wr, 412 op/s
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.889 2 DEBUG nova.compute.manager [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.889 2 DEBUG oslo_concurrency.lockutils [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.889 2 DEBUG oslo_concurrency.lockutils [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.890 2 DEBUG oslo_concurrency.lockutils [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.890 2 DEBUG nova.compute.manager [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:02 compute-0 nova_compute[260603]: 2025-10-02 08:30:02.890 2 WARNING nova.compute.manager [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state active and task_state powering-off.
Oct 02 08:30:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 342 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.8 MiB/s wr, 443 op/s
Oct 02 08:30:03 compute-0 nova_compute[260603]: 2025-10-02 08:30:03.195 2 INFO nova.virt.libvirt.driver [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance shutdown successfully after 13 seconds.
Oct 02 08:30:03 compute-0 nova_compute[260603]: 2025-10-02 08:30:03.204 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance destroyed successfully.
Oct 02 08:30:03 compute-0 nova_compute[260603]: 2025-10-02 08:30:03.204 2 DEBUG nova.objects.instance [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:03 compute-0 nova_compute[260603]: 2025-10-02 08:30:03.222 2 DEBUG nova.compute.manager [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:03 compute-0 nova_compute[260603]: 2025-10-02 08:30:03.268 2 DEBUG oslo_concurrency.lockutils [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:03 compute-0 ovn_controller[152344]: 2025-10-02T08:30:03Z|00492|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct 02 08:30:03 compute-0 ovn_controller[152344]: 2025-10-02T08:30:03Z|00493|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct 02 08:30:03 compute-0 nova_compute[260603]: 2025-10-02 08:30:03.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.463 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'flavor' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.489 2 DEBUG oslo_concurrency.lockutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.490 2 DEBUG oslo_concurrency.lockutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquired lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.490 2 DEBUG nova.network.neutron [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.491 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'info_cache' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:04 compute-0 ceph-mon[74477]: pgmap v1475: 305 pgs: 305 active+clean; 342 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.8 MiB/s wr, 443 op/s
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.979 2 DEBUG nova.compute.manager [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.979 2 DEBUG oslo_concurrency.lockutils [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 353 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 321 op/s
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.979 2 DEBUG oslo_concurrency.lockutils [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.980 2 DEBUG oslo_concurrency.lockutils [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.980 2 DEBUG nova.compute.manager [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:04 compute-0 nova_compute[260603]: 2025-10-02 08:30:04.980 2 WARNING nova.compute.manager [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state stopped and task_state powering-on.
Oct 02 08:30:06 compute-0 ceph-mon[74477]: pgmap v1476: 305 pgs: 305 active+clean; 353 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 321 op/s
Oct 02 08:30:06 compute-0 nova_compute[260603]: 2025-10-02 08:30:06.917 2 DEBUG nova.network.neutron [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:06 compute-0 nova_compute[260603]: 2025-10-02 08:30:06.944 2 DEBUG oslo_concurrency.lockutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Releasing lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:30:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 353 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 4.2 MiB/s wr, 195 op/s
Oct 02 08:30:06 compute-0 nova_compute[260603]: 2025-10-02 08:30:06.981 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance destroyed successfully.
Oct 02 08:30:06 compute-0 nova_compute[260603]: 2025-10-02 08:30:06.981 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:06 compute-0 nova_compute[260603]: 2025-10-02 08:30:06.997 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'resources' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.014 2 DEBUG nova.virt.libvirt.vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:03Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.014 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.015 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.015 2 DEBUG os_vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9cdb7f-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.050 2 INFO os_vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c')
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.063 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start _get_guest_xml network_info=[{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.070 2 WARNING nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.078 2 DEBUG nova.virt.libvirt.host [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.081 2 DEBUG nova.virt.libvirt.host [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.086 2 DEBUG nova.virt.libvirt.host [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.086 2 DEBUG nova.virt.libvirt.host [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.087 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.087 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.089 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.089 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.089 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.089 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.090 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.090 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.106 2 DEBUG oslo_concurrency.processutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.597 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393792.59524, d634251d-b484-4af7-b102-fe8015603660 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.598 2 INFO nova.compute.manager [-] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Stopped (Lifecycle Event)
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.616 2 DEBUG nova.compute.manager [None req-3e2d5ced-789f-4425-a47a-06fc4239fca1 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:30:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235783216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.653 2 DEBUG oslo_concurrency.processutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.694 2 DEBUG oslo_concurrency.processutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:07 compute-0 nova_compute[260603]: 2025-10-02 08:30:07.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/235783216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:30:08 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2263374193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.219 2 DEBUG oslo_concurrency.processutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.221 2 DEBUG nova.virt.libvirt.vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:03Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.222 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.225 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.228 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.254 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <uuid>9924ce7f-b701-4560-b2c5-67f673b45807</uuid>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <name>instance-00000033</name>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-769762436</nova:name>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:30:07</nova:creationTime>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <nova:user uuid="c1d66932c11043b5b90140cd2dde53d2">tempest-ListServerFiltersTestJSON-1545892750-project-member</nova:user>
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <nova:project uuid="e7c4373fe01a4a14bea07af6dba4d170">tempest-ListServerFiltersTestJSON-1545892750</nova:project>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <nova:port uuid="bf9cdb7f-4cda-403b-b27e-12385e93db02">
Oct 02 08:30:08 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <system>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <entry name="serial">9924ce7f-b701-4560-b2c5-67f673b45807</entry>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <entry name="uuid">9924ce7f-b701-4560-b2c5-67f673b45807</entry>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </system>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <os>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   </os>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <features>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   </features>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9924ce7f-b701-4560-b2c5-67f673b45807_disk">
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       </source>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9924ce7f-b701-4560-b2c5-67f673b45807_disk.config">
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       </source>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:30:08 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:64:b2:ea"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <target dev="tapbf9cdb7f-4c"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/console.log" append="off"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <video>
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </video>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:30:08 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:30:08 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:30:08 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:30:08 compute-0 nova_compute[260603]: </domain>
Oct 02 08:30:08 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.255 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.256 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.257 2 DEBUG nova.virt.libvirt.vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:03Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.258 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.259 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.259 2 DEBUG os_vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9cdb7f-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf9cdb7f-4c, col_values=(('external_ids', {'iface-id': 'bf9cdb7f-4cda-403b-b27e-12385e93db02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:b2:ea', 'vm-uuid': '9924ce7f-b701-4560-b2c5-67f673b45807'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 NetworkManager[45129]: <info>  [1759393808.3004] manager: (tapbf9cdb7f-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.307 2 INFO os_vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c')
Oct 02 08:30:08 compute-0 kernel: tapbf9cdb7f-4c: entered promiscuous mode
Oct 02 08:30:08 compute-0 NetworkManager[45129]: <info>  [1759393808.3966] manager: (tapbf9cdb7f-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct 02 08:30:08 compute-0 ovn_controller[152344]: 2025-10-02T08:30:08Z|00494|binding|INFO|Claiming lport bf9cdb7f-4cda-403b-b27e-12385e93db02 for this chassis.
Oct 02 08:30:08 compute-0 ovn_controller[152344]: 2025-10-02T08:30:08Z|00495|binding|INFO|bf9cdb7f-4cda-403b-b27e-12385e93db02: Claiming fa:16:3e:64:b2:ea 10.100.0.12
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.407 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:b2:ea 10.100.0.12'], port_security=['fa:16:3e:64:b2:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9924ce7f-b701-4560-b2c5-67f673b45807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf9cdb7f-4cda-403b-b27e-12385e93db02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.409 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf9cdb7f-4cda-403b-b27e-12385e93db02 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e bound to our chassis
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.410 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 ovn_controller[152344]: 2025-10-02T08:30:08Z|00496|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 ovn-installed in OVS
Oct 02 08:30:08 compute-0 ovn_controller[152344]: 2025-10-02T08:30:08Z|00497|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 up in Southbound
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 systemd-udevd[318999]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2398ff9d-15af-43e6-bf20-48aac04d5f1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:08 compute-0 systemd-machined[214636]: New machine qemu-64-instance-00000033.
Oct 02 08:30:08 compute-0 NetworkManager[45129]: <info>  [1759393808.4489] device (tapbf9cdb7f-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:30:08 compute-0 NetworkManager[45129]: <info>  [1759393808.4498] device (tapbf9cdb7f-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:30:08 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000033.
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.479 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb12ec3-5158-4106-b160-412ba9b741f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.482 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3129e11e-8951-4311-8c58-1ade1c256642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.519 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c26e93-e1f9-42d2-8129-fd3c16842cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.537 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[668f5f30-3d37-4b3f-b851-6b6d8ed1eaed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 11, 'rx_bytes': 1042, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 11, 'rx_bytes': 1042, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319012, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.555 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca9dcdd-baa7-4e60-b58e-8d5a5d519b57]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319013, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319013, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.557 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.560 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.561 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.561 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.562 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.738 2 DEBUG nova.compute.manager [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.738 2 DEBUG oslo_concurrency.lockutils [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.739 2 DEBUG oslo_concurrency.lockutils [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.739 2 DEBUG oslo_concurrency.lockutils [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.740 2 DEBUG nova.compute.manager [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:08 compute-0 nova_compute[260603]: 2025-10-02 08:30:08.740 2 WARNING nova.compute.manager [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state stopped and task_state powering-on.
Oct 02 08:30:08 compute-0 ceph-mon[74477]: pgmap v1477: 305 pgs: 305 active+clean; 353 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 4.2 MiB/s wr, 195 op/s
Oct 02 08:30:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2263374193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 585 KiB/s rd, 4.3 MiB/s wr, 208 op/s
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.350 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 9924ce7f-b701-4560-b2c5-67f673b45807 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.351 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393809.3501732, 9924ce7f-b701-4560-b2c5-67f673b45807 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.352 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Resumed (Lifecycle Event)
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.354 2 DEBUG nova.compute.manager [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.359 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance rebooted successfully.
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.360 2 DEBUG nova.compute.manager [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.403 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.407 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.450 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393809.3504496, 9924ce7f-b701-4560-b2c5-67f673b45807 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.451 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Started (Lifecycle Event)
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.474 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.478 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.729 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393794.7278774, f7005e7b-8982-4d23-b12a-4b67c90a6c89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.729 2 INFO nova.compute.manager [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] VM Stopped (Lifecycle Event)
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.760 2 DEBUG nova.compute.manager [None req-8f520af6-859b-472f-bb89-acdf64d0bd8b - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.820 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393794.819385, f56dc5d2-b1f8-42ef-882c-62bcbd600954 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.820 2 INFO nova.compute.manager [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] VM Stopped (Lifecycle Event)
Oct 02 08:30:09 compute-0 nova_compute[260603]: 2025-10-02 08:30:09.846 2 DEBUG nova.compute.manager [None req-0a6d91b2-fad1-465f-b653-216d724df4e4 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:10 compute-0 ceph-mon[74477]: pgmap v1478: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 585 KiB/s rd, 4.3 MiB/s wr, 208 op/s
Oct 02 08:30:10 compute-0 nova_compute[260603]: 2025-10-02 08:30:10.839 2 DEBUG nova.compute.manager [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:10 compute-0 nova_compute[260603]: 2025-10-02 08:30:10.840 2 DEBUG oslo_concurrency.lockutils [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:10 compute-0 nova_compute[260603]: 2025-10-02 08:30:10.844 2 DEBUG oslo_concurrency.lockutils [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:10 compute-0 nova_compute[260603]: 2025-10-02 08:30:10.844 2 DEBUG oslo_concurrency.lockutils [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:10 compute-0 nova_compute[260603]: 2025-10-02 08:30:10.844 2 DEBUG nova.compute.manager [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:10 compute-0 nova_compute[260603]: 2025-10-02 08:30:10.845 2 WARNING nova.compute.manager [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state active and task_state None.
Oct 02 08:30:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 08:30:11 compute-0 nova_compute[260603]: 2025-10-02 08:30:11.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.234 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.235 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.236 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.237 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.237 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.239 2 INFO nova.compute.manager [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Terminating instance
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.241 2 DEBUG nova.compute.manager [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:30:12 compute-0 kernel: tap044e4f76-db (unregistering): left promiscuous mode
Oct 02 08:30:12 compute-0 NetworkManager[45129]: <info>  [1759393812.3014] device (tap044e4f76-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:30:12 compute-0 ovn_controller[152344]: 2025-10-02T08:30:12Z|00498|binding|INFO|Releasing lport 044e4f76-db30-47b6-b277-8c3a13743b9c from this chassis (sb_readonly=0)
Oct 02 08:30:12 compute-0 ovn_controller[152344]: 2025-10-02T08:30:12Z|00499|binding|INFO|Setting lport 044e4f76-db30-47b6-b277-8c3a13743b9c down in Southbound
Oct 02 08:30:12 compute-0 ovn_controller[152344]: 2025-10-02T08:30:12Z|00500|binding|INFO|Removing iface tap044e4f76-db ovn-installed in OVS
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.367 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:b0:76 10.100.0.11'], port_security=['fa:16:3e:46:b0:76 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '923e00cc-7494-46f3-93e2-3c223705aff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bd8f146-d090-40d8-8651-21c92934a6ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f942883d5794a5c8e3cd2b5ef44a863', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3c9d772-e37a-48f1-89cb-39eaf88eda56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dec8975f-9e7e-451f-957f-04aee213c5b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=044e4f76-db30-47b6-b277-8c3a13743b9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.370 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 044e4f76-db30-47b6-b277-8c3a13743b9c in datapath 9bd8f146-d090-40d8-8651-21c92934a6ff unbound from our chassis
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.372 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bd8f146-d090-40d8-8651-21c92934a6ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.374 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ab0738-8727-45c5-b366-f8b88a987041]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.375 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff namespace which is not needed anymore
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:12 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct 02 08:30:12 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000039.scope: Consumed 13.706s CPU time.
Oct 02 08:30:12 compute-0 systemd-machined[214636]: Machine qemu-63-instance-00000039 terminated.
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.489 2 INFO nova.virt.libvirt.driver [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance destroyed successfully.
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.491 2 DEBUG nova.objects.instance [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lazy-loading 'resources' on Instance uuid 923e00cc-7494-46f3-93e2-3c223705aff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.503 2 DEBUG nova.virt.libvirt.vif [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-568591211',display_name='tempest-ServersTestManualDisk-server-568591211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-568591211',id=57,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARlZgY/0EkzEMepYNnm4b01nWIMecq2XG9MuajTSjQZi/SZ8DIEdLBXsx3DCy0ARgTpk4vDQEJ3TsL+ZNLPSjyILPCIRt4tiYIZmsXwTZOquFcYjN59rB2JnY5UB/nfNw==',key_name='tempest-keypair-510296598',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f942883d5794a5c8e3cd2b5ef44a863',ramdisk_id='',reservation_id='r-9syab2uj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-2010618382',owner_user_name='tempest-ServersTestManualDisk-2010618382-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8f0a6fb1d224a979db4b4a738bbf453',uuid=923e00cc-7494-46f3-93e2-3c223705aff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.504 2 DEBUG nova.network.os_vif_util [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converting VIF {"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.505 2 DEBUG nova.network.os_vif_util [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.506 2 DEBUG os_vif [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap044e4f76-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.522 2 INFO os_vif [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db')
Oct 02 08:30:12 compute-0 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [NOTICE]   (318539) : haproxy version is 2.8.14-c23fe91
Oct 02 08:30:12 compute-0 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [NOTICE]   (318539) : path to executable is /usr/sbin/haproxy
Oct 02 08:30:12 compute-0 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [WARNING]  (318539) : Exiting Master process...
Oct 02 08:30:12 compute-0 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [ALERT]    (318539) : Current worker (318543) exited with code 143 (Terminated)
Oct 02 08:30:12 compute-0 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [WARNING]  (318539) : All workers exited. Exiting... (0)
Oct 02 08:30:12 compute-0 systemd[1]: libpod-7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21.scope: Deactivated successfully.
Oct 02 08:30:12 compute-0 podman[319088]: 2025-10-02 08:30:12.584874196 +0000 UTC m=+0.070419876 container died 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21-userdata-shm.mount: Deactivated successfully.
Oct 02 08:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6db0e7b9516bd1a447408cbffb5be27ab3618c8243a87ad12b0787ded02cc8cb-merged.mount: Deactivated successfully.
Oct 02 08:30:12 compute-0 podman[319088]: 2025-10-02 08:30:12.6330101 +0000 UTC m=+0.118555780 container cleanup 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:30:12 compute-0 systemd[1]: libpod-conmon-7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21.scope: Deactivated successfully.
Oct 02 08:30:12 compute-0 podman[319136]: 2025-10-02 08:30:12.718700479 +0000 UTC m=+0.055324988 container remove 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.737 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30d1a35f-7911-4248-ad93-fe8b8f86d085]: (4, ('Thu Oct  2 08:30:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff (7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21)\n7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21\nThu Oct  2 08:30:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff (7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21)\n7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22bb530c-a976-4aa7-93a3-c0efaadc1389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.742 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bd8f146-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:12 compute-0 kernel: tap9bd8f146-d0: left promiscuous mode
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.782 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[919cdca2-7073-4cd2-8af7-fce4f6e3d769]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.815 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9415c8c2-60bc-414c-8e03-51fac194f181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.816 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30085d62-466f-4911-a2ea-1664f15f649e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6160b4ad-43e0-4c2c-b3d8-7759e6aa3243]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466681, 'reachable_time': 19997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319152, 'error': None, 'target': 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bd8f146\x2dd090\x2d40d8\x2d8651\x2d21c92934a6ff.mount: Deactivated successfully.
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.836 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:30:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.837 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4df70d9b-0d82-4877-8dae-7822f5704e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:12 compute-0 ceph-mon[74477]: pgmap v1479: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.931 2 INFO nova.virt.libvirt.driver [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Deleting instance files /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1_del
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.932 2 INFO nova.virt.libvirt.driver [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Deletion of /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1_del complete
Oct 02 08:30:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.993 2 INFO nova.compute.manager [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.994 2 DEBUG oslo.service.loopingcall [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.994 2 DEBUG nova.compute.manager [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:30:12 compute-0 nova_compute[260603]: 2025-10-02 08:30:12.994 2 DEBUG nova.network.neutron [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:30:13 compute-0 nova_compute[260603]: 2025-10-02 08:30:13.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:13 compute-0 nova_compute[260603]: 2025-10-02 08:30:13.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:13 compute-0 nova_compute[260603]: 2025-10-02 08:30:13.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.531 2 DEBUG nova.network.neutron [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.549 2 INFO nova.compute.manager [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Took 1.56 seconds to deallocate network for instance.
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.616 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.617 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.639 2 DEBUG nova.compute.manager [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.640 2 DEBUG oslo_concurrency.lockutils [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.641 2 DEBUG oslo_concurrency.lockutils [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.641 2 DEBUG oslo_concurrency.lockutils [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.642 2 DEBUG nova.compute.manager [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] No waiting events found dispatching network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.643 2 WARNING nova.compute.manager [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received unexpected event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c for instance with vm_state deleted and task_state None.
Oct 02 08:30:14 compute-0 nova_compute[260603]: 2025-10-02 08:30:14.756 2 DEBUG oslo_concurrency.processutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:14 compute-0 ceph-mon[74477]: pgmap v1480: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 02 08:30:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 330 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 816 KiB/s wr, 109 op/s
Oct 02 08:30:15 compute-0 podman[319172]: 2025-10-02 08:30:15.007931742 +0000 UTC m=+0.068513477 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 08:30:15 compute-0 podman[319156]: 2025-10-02 08:30:15.043298339 +0000 UTC m=+0.107865388 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 08:30:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2641797910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:15 compute-0 nova_compute[260603]: 2025-10-02 08:30:15.246 2 DEBUG oslo_concurrency.processutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:15 compute-0 nova_compute[260603]: 2025-10-02 08:30:15.257 2 DEBUG nova.compute.provider_tree [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:15 compute-0 nova_compute[260603]: 2025-10-02 08:30:15.282 2 DEBUG nova.scheduler.client.report [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:15 compute-0 nova_compute[260603]: 2025-10-02 08:30:15.321 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:15 compute-0 nova_compute[260603]: 2025-10-02 08:30:15.370 2 INFO nova.scheduler.client.report [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Deleted allocations for instance 923e00cc-7494-46f3-93e2-3c223705aff1
Oct 02 08:30:15 compute-0 nova_compute[260603]: 2025-10-02 08:30:15.470 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2641797910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.443 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.444 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.445 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.445 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.446 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.448 2 INFO nova.compute.manager [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Terminating instance
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.450 2 DEBUG nova.compute.manager [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:30:16 compute-0 kernel: tap257d115c-e1 (unregistering): left promiscuous mode
Oct 02 08:30:16 compute-0 NetworkManager[45129]: <info>  [1759393816.5076] device (tap257d115c-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00501|binding|INFO|Releasing lport 257d115c-e196-4921-a9d3-942604825516 from this chassis (sb_readonly=0)
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00502|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 down in Southbound
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00503|binding|INFO|Removing iface tap257d115c-e1 ovn-installed in OVS
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.523 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.524 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.524 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.530 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:eb:54 10.100.0.8'], port_security=['fa:16:3e:5f:eb:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=257d115c-e196-4921-a9d3-942604825516) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.532 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 257d115c-e196-4921-a9d3-942604825516 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.534 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.560 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.570 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fdadb1eb-509b-40d5-b440-8ec1a22dc547]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct 02 08:30:16 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000035.scope: Consumed 14.736s CPU time.
Oct 02 08:30:16 compute-0 systemd-machined[214636]: Machine qemu-58-instance-00000035 terminated.
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.620 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1e194f6b-2695-4ca1-b40f-bbba76f298d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.625 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3027cdd1-e1fe-49b2-94ec-6a4a178edb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.666 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7a1ea9-6cec-4c76-a634-67f5b35e86af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 kernel: tap257d115c-e1: entered promiscuous mode
Oct 02 08:30:16 compute-0 kernel: tap257d115c-e1 (unregistering): left promiscuous mode
Oct 02 08:30:16 compute-0 NetworkManager[45129]: <info>  [1759393816.6810] manager: (tap257d115c-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00504|binding|INFO|Claiming lport 257d115c-e196-4921-a9d3-942604825516 for this chassis.
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00505|binding|INFO|257d115c-e196-4921-a9d3-942604825516: Claiming fa:16:3e:5f:eb:54 10.100.0.8
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.692 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[edf2a864-7f09-44e0-a3af-38031d80ba36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 1042, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 1042, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319234, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.696 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:eb:54 10.100.0.8'], port_security=['fa:16:3e:5f:eb:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=257d115c-e196-4921-a9d3-942604825516) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.703 2 INFO nova.virt.libvirt.driver [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance destroyed successfully.
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.704 2 DEBUG nova.objects.instance [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'resources' on Instance uuid d15c7c6a-e6a1-4538-9db0-ee1aef10f38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.714 2 DEBUG nova.compute.manager [req-e0edecc2-162c-428c-9316-9775539f2cdd req-66a0ef84-8678-4468-9c4e-e37bd2a110ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-vif-deleted-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00506|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 ovn-installed in OVS
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00507|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 up in Southbound
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.728 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[24c826f5-de3b-4aaa-9a1c-08b11031df2d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319238, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319238, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00508|binding|INFO|Releasing lport 257d115c-e196-4921-a9d3-942604825516 from this chassis (sb_readonly=1)
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00509|if_status|INFO|Dropped 2 log messages in last 460 seconds (most recently, 460 seconds ago) due to excessive rate
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00510|if_status|INFO|Not setting lport 257d115c-e196-4921-a9d3-942604825516 down as sb is readonly
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00511|binding|INFO|Removing iface tap257d115c-e1 ovn-installed in OVS
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00512|binding|INFO|Releasing lport 257d115c-e196-4921-a9d3-942604825516 from this chassis (sb_readonly=0)
Oct 02 08:30:16 compute-0 ovn_controller[152344]: 2025-10-02T08:30:16Z|00513|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 down in Southbound
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.743 2 DEBUG nova.virt.libvirt.vif [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1454521037',display_name='tempest-ListServerFiltersTestJSON-instance-1454521037',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1454521037',id=53,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-w2kc410q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:37Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=d15c7c6a-e6a1-4538-9db0-ee1aef10f38b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.743 2 DEBUG nova.network.os_vif_util [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.744 2 DEBUG nova.network.os_vif_util [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.743 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:eb:54 10.100.0.8'], port_security=['fa:16:3e:5f:eb:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=257d115c-e196-4921-a9d3-942604825516) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.744 2 DEBUG os_vif [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap257d115c-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.754 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.755 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.756 2 INFO os_vif [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1')
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.756 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.758 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.763 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 257d115c-e196-4921-a9d3-942604825516 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.768 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df2f9ba5-6cfa-4381-a1b8-40046b689564]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.824 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d39a7fe9-1771-4ec9-bb4b-3d3c9b5c14e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.829 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b8517928-63e5-42cb-a8c4-820fe2dc8389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ceph-mon[74477]: pgmap v1481: 305 pgs: 305 active+clean; 330 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 816 KiB/s wr, 109 op/s
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.875 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5557d033-3a63-41e8-8988-f0f34b6f47a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.904 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30dbc1d4-b304-4c96-a6ed-047653c0e11a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 1042, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 1042, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319267, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.925 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.926 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.926 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.926 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.929 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c568536-6555-47dc-9534-e8430bf0ef5c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319268, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319268, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.932 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:16 compute-0 nova_compute[260603]: 2025-10-02 08:30:16.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.936 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.937 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.938 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.939 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.941 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 257d115c-e196-4921-a9d3-942604825516 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.944 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:30:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.965 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8bc286-7841-402f-bc74-f1c919e30b2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 330 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 91 KiB/s wr, 84 op/s
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.011 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b9180a20-583b-4221-96eb-d43dcd3a2c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.015 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf3d34e-1567-4c26-bab9-6378a1cf921d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.056 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e5febcb1-626a-4ddb-a33b-7618941e45dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.082 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2cd09c-c09f-4e23-a62b-d635ae1c900a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 17, 'rx_bytes': 1042, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 17, 'rx_bytes': 1042, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319275, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.116 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f3e2de-09aa-4d93-8711-f2f36d5443eb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319276, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319276, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.120 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.158 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.159 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.159 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.160 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.213 2 INFO nova.virt.libvirt.driver [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Deleting instance files /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_del
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.214 2 INFO nova.virt.libvirt.driver [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Deletion of /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_del complete
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.267 2 INFO nova.compute.manager [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.267 2 DEBUG oslo.service.loopingcall [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.268 2 DEBUG nova.compute.manager [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.269 2 DEBUG nova.network.neutron [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:30:17 compute-0 nova_compute[260603]: 2025-10-02 08:30:17.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:18 compute-0 ceph-mon[74477]: pgmap v1482: 305 pgs: 305 active+clean; 330 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 91 KiB/s wr, 84 op/s
Oct 02 08:30:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 200 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 94 KiB/s wr, 137 op/s
Oct 02 08:30:19 compute-0 nova_compute[260603]: 2025-10-02 08:30:19.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:19 compute-0 nova_compute[260603]: 2025-10-02 08:30:19.661 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:19 compute-0 nova_compute[260603]: 2025-10-02 08:30:19.682 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:30:19 compute-0 nova_compute[260603]: 2025-10-02 08:30:19.682 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:30:19 compute-0 nova_compute[260603]: 2025-10-02 08:30:19.682 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:19 compute-0 nova_compute[260603]: 2025-10-02 08:30:19.683 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:19 compute-0 nova_compute[260603]: 2025-10-02 08:30:19.683 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.087 2 DEBUG nova.network.neutron [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.111 2 INFO nova.compute.manager [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Took 2.84 seconds to deallocate network for instance.
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.160 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.161 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.210 2 DEBUG nova.compute.manager [req-08dfcf65-d8f2-4140-8a17-e2b1a9cd04e8 req-552964e0-f955-4779-a021-2dbbe7c945eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received event network-vif-deleted-257d115c-e196-4921-a9d3-942604825516 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.250 2 DEBUG oslo_concurrency.processutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997612194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.686 2 DEBUG oslo_concurrency.processutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.697 2 DEBUG nova.compute.provider_tree [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.725 2 DEBUG nova.scheduler.client.report [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.754 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.812 2 INFO nova.scheduler.client.report [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Deleted allocations for instance d15c7c6a-e6a1-4538-9db0-ee1aef10f38b
Oct 02 08:30:20 compute-0 ceph-mon[74477]: pgmap v1483: 305 pgs: 305 active+clean; 200 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 94 KiB/s wr, 137 op/s
Oct 02 08:30:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/997612194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:20 compute-0 nova_compute[260603]: 2025-10-02 08:30:20.893 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 200 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 124 op/s
Oct 02 08:30:21 compute-0 podman[319299]: 2025-10-02 08:30:21.019335351 +0000 UTC m=+0.084248915 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:21 compute-0 nova_compute[260603]: 2025-10-02 08:30:21.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028904931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.007 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.012 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.012 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.013 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.013 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.013 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.015 2 INFO nova.compute.manager [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Terminating instance
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.016 2 DEBUG nova.compute.manager [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:30:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:30:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4012285713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:30:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:30:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4012285713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:30:22 compute-0 kernel: tap29a765f0-6b (unregistering): left promiscuous mode
Oct 02 08:30:22 compute-0 NetworkManager[45129]: <info>  [1759393822.0744] device (tap29a765f0-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:30:22 compute-0 ovn_controller[152344]: 2025-10-02T08:30:22Z|00514|binding|INFO|Releasing lport 29a765f0-6b44-4aad-9974-a0845658d5f2 from this chassis (sb_readonly=0)
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 ovn_controller[152344]: 2025-10-02T08:30:22Z|00515|binding|INFO|Setting lport 29a765f0-6b44-4aad-9974-a0845658d5f2 down in Southbound
Oct 02 08:30:22 compute-0 ovn_controller[152344]: 2025-10-02T08:30:22Z|00516|binding|INFO|Removing iface tap29a765f0-6b ovn-installed in OVS
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.093 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:7e:1d 10.100.0.9'], port_security=['fa:16:3e:c6:7e:1d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '797fde07-e88a-4d6e-a1a3-25e22c66097c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=29a765f0-6b44-4aad-9974-a0845658d5f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.095 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 29a765f0-6b44-4aad-9974-a0845658d5f2 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.096 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct 02 08:30:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.119 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[885e4ccc-095c-4691-b242-1ac1b9f1b8c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:22 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct 02 08:30:22 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Consumed 14.500s CPU time.
Oct 02 08:30:22 compute-0 systemd-machined[214636]: Machine qemu-57-instance-00000034 terminated.
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.161 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fed22bdf-4a44-4642-9f1a-a8148c410abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.165 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b14a84-c69e-4853-b0cb-5a67643daf49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.196 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4a0bba-f711-4031-a419-862c04aa2c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.215 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e6ee78-973c-4bf3-82e3-b87e3d6cbc91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 19, 'rx_bytes': 1042, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 19, 'rx_bytes': 1042, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319352, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.236 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4515dc7-d4d3-4b56-856b-08bb1c11d1f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319353, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319353, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.238 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.248 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.249 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.249 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.250 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.255 2 INFO nova.virt.libvirt.driver [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance destroyed successfully.
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.256 2 DEBUG nova.objects.instance [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'resources' on Instance uuid 797fde07-e88a-4d6e-a1a3-25e22c66097c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:22 compute-0 ovn_controller[152344]: 2025-10-02T08:30:22Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:b2:ea 10.100.0.12
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.270 2 DEBUG nova.virt.libvirt.vif [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1372609845',display_name='tempest-ListServerFiltersTestJSON-instance-1372609845',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1372609845',id=52,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-2sfo7if8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:35Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=797fde07-e88a-4d6e-a1a3-25e22c66097c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.271 2 DEBUG nova.network.os_vif_util [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.272 2 DEBUG nova.network.os_vif_util [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.272 2 DEBUG os_vif [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.274 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29a765f0-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.279 2 INFO os_vif [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b')
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.343 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.343 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.348 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.348 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.534 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.535 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4028MB free_disk=59.897247314453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.536 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.536 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.594 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 9924ce7f-b701-4560-b2c5-67f673b45807 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.594 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 797fde07-e88a-4d6e-a1a3-25e22c66097c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.595 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.595 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.632 2 INFO nova.virt.libvirt.driver [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Deleting instance files /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c_del
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.633 2 INFO nova.virt.libvirt.driver [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Deletion of /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c_del complete
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.682 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.725 2 INFO nova.compute.manager [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.726 2 DEBUG oslo.service.loopingcall [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.728 2 DEBUG nova.compute.manager [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.728 2 DEBUG nova.network.neutron [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:30:22 compute-0 nova_compute[260603]: 2025-10-02 08:30:22.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:22 compute-0 ceph-mon[74477]: pgmap v1484: 305 pgs: 305 active+clean; 200 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 124 op/s
Oct 02 08:30:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4028904931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4012285713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:30:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4012285713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:30:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 200 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 28 KiB/s wr, 146 op/s
Oct 02 08:30:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454869559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:23 compute-0 nova_compute[260603]: 2025-10-02 08:30:23.116 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:23 compute-0 nova_compute[260603]: 2025-10-02 08:30:23.122 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:23 compute-0 nova_compute[260603]: 2025-10-02 08:30:23.145 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:23 compute-0 nova_compute[260603]: 2025-10-02 08:30:23.168 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:30:23 compute-0 nova_compute[260603]: 2025-10-02 08:30:23.169 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1454869559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:24 compute-0 nova_compute[260603]: 2025-10-02 08:30:24.170 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:24 compute-0 nova_compute[260603]: 2025-10-02 08:30:24.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:24 compute-0 ceph-mon[74477]: pgmap v1485: 305 pgs: 305 active+clean; 200 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 28 KiB/s wr, 146 op/s
Oct 02 08:30:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 169 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 18 KiB/s wr, 128 op/s
Oct 02 08:30:25 compute-0 podman[319402]: 2025-10-02 08:30:25.024841697 +0000 UTC m=+0.079490018 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.241 2 DEBUG nova.network.neutron [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.278 2 INFO nova.compute.manager [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Took 2.55 seconds to deallocate network for instance.
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.358 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.359 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.369 2 DEBUG nova.compute.manager [req-dd1c297f-eef1-475f-ad7f-fb08536025bb req-a7614d00-c43e-4c9c-a55b-74fa08c134b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received event network-vif-deleted-29a765f0-6b44-4aad-9974-a0845658d5f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.463 2 DEBUG oslo_concurrency.processutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.520 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.521 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.548 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.645 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/142773208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:25 compute-0 ceph-mon[74477]: pgmap v1486: 305 pgs: 305 active+clean; 169 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 18 KiB/s wr, 128 op/s
Oct 02 08:30:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/142773208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.969 2 DEBUG oslo_concurrency.processutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:25 compute-0 nova_compute[260603]: 2025-10-02 08:30:25.976 2 DEBUG nova.compute.provider_tree [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.009 2 DEBUG nova.scheduler.client.report [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.066 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.068 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.074 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.074 2 INFO nova.compute.claims [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.106 2 INFO nova.scheduler.client.report [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Deleted allocations for instance 797fde07-e88a-4d6e-a1a3-25e22c66097c
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.210 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.271 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:26 compute-0 ovn_controller[152344]: 2025-10-02T08:30:26Z|00517|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.418 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.419 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.419 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.419 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.420 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.421 2 INFO nova.compute.manager [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Terminating instance
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.422 2 DEBUG nova.compute.manager [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:30:26 compute-0 kernel: tapbf9cdb7f-4c (unregistering): left promiscuous mode
Oct 02 08:30:26 compute-0 NetworkManager[45129]: <info>  [1759393826.5260] device (tapbf9cdb7f-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.582 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.582 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:26 compute-0 ovn_controller[152344]: 2025-10-02T08:30:26Z|00518|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct 02 08:30:26 compute-0 ovn_controller[152344]: 2025-10-02T08:30:26Z|00519|binding|INFO|Releasing lport bf9cdb7f-4cda-403b-b27e-12385e93db02 from this chassis (sb_readonly=0)
Oct 02 08:30:26 compute-0 ovn_controller[152344]: 2025-10-02T08:30:26Z|00520|binding|INFO|Removing iface tapbf9cdb7f-4c ovn-installed in OVS
Oct 02 08:30:26 compute-0 ovn_controller[152344]: 2025-10-02T08:30:26Z|00521|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 down in Southbound
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.593 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:b2:ea 10.100.0.12'], port_security=['fa:16:3e:64:b2:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9924ce7f-b701-4560-b2c5-67f673b45807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf9cdb7f-4cda-403b-b27e-12385e93db02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.594 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf9cdb7f-4cda-403b-b27e-12385e93db02 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.595 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00da8a36-bc54-4cc1-a0e2-53333358378e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d89edd1-2cc2-4a15-818d-4217e50f89ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.596 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e namespace which is not needed anymore
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.600 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:30:26 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 02 08:30:26 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000033.scope: Consumed 12.865s CPU time.
Oct 02 08:30:26 compute-0 systemd-machined[214636]: Machine qemu-64-instance-00000033 terminated.
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 NetworkManager[45129]: <info>  [1759393826.6525] manager: (tapbf9cdb7f-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.671 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.674 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance destroyed successfully.
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.675 2 DEBUG nova.objects.instance [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'resources' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.690 2 DEBUG nova.virt.libvirt.vif [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:09Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.691 2 DEBUG nova.network.os_vif_util [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.692 2 DEBUG nova.network.os_vif_util [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.693 2 DEBUG os_vif [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9cdb7f-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.705 2 INFO os_vif [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c')
Oct 02 08:30:26 compute-0 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [NOTICE]   (316129) : haproxy version is 2.8.14-c23fe91
Oct 02 08:30:26 compute-0 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [NOTICE]   (316129) : path to executable is /usr/sbin/haproxy
Oct 02 08:30:26 compute-0 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [WARNING]  (316129) : Exiting Master process...
Oct 02 08:30:26 compute-0 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [WARNING]  (316129) : Exiting Master process...
Oct 02 08:30:26 compute-0 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [ALERT]    (316129) : Current worker (316132) exited with code 143 (Terminated)
Oct 02 08:30:26 compute-0 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [WARNING]  (316129) : All workers exited. Exiting... (0)
Oct 02 08:30:26 compute-0 systemd[1]: libpod-b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2.scope: Deactivated successfully.
Oct 02 08:30:26 compute-0 podman[319497]: 2025-10-02 08:30:26.757790509 +0000 UTC m=+0.059275280 container died b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:30:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887373202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2-userdata-shm.mount: Deactivated successfully.
Oct 02 08:30:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b5c5ae73f0794dadb32620ddb0a1a034413642c4424df120944407778f4d1d-merged.mount: Deactivated successfully.
Oct 02 08:30:26 compute-0 podman[319497]: 2025-10-02 08:30:26.810713581 +0000 UTC m=+0.112198312 container cleanup b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:30:26 compute-0 systemd[1]: libpod-conmon-b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2.scope: Deactivated successfully.
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.828 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.836 2 DEBUG nova.compute.provider_tree [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.853 2 DEBUG nova.scheduler.client.report [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:26 compute-0 podman[319545]: 2025-10-02 08:30:26.885550683 +0000 UTC m=+0.047621789 container remove b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.891 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.892 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00869e8b-b4c1-4a16-a442-8f4bf95b155c]: (4, ('Thu Oct  2 08:30:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e (b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2)\nb95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2\nThu Oct  2 08:30:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e (b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2)\nb95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.895 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[907639ba-812a-45ae-97dd-3cb703b6e183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.896 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:26 compute-0 kernel: tap00da8a36-b0: left promiscuous mode
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.902 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.911 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.912 2 INFO nova.compute.claims [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.918 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d43f8430-4631-4187-93cb-8ca60f20b9e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc972fe-c3d1-4fe1-80d4-804976f3a757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9844fcb-9d94-47cc-866b-b1afec9beba3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.953 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.953 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.959 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[43ad2611-113e-49e1-8ee9-77626a868680]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464914, 'reachable_time': 19441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319561, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d00da8a36\x2dbc54\x2d4cc1\x2da0e2\x2d53333358378e.mount: Deactivated successfully.
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.964 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:30:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.964 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a6807-593a-47f3-95dd-96eabf79c5b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2887373202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:26 compute-0 nova_compute[260603]: 2025-10-02 08:30:26.978 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:30:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 169 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 568 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.017 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.082 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.133 2 INFO nova.virt.libvirt.driver [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Deleting instance files /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807_del
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.134 2 INFO nova.virt.libvirt.driver [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Deletion of /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807_del complete
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.176 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.178 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.179 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating image(s)
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.209 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.245 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.278 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.283 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.327 2 INFO nova.compute.manager [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.329 2 DEBUG oslo.service.loopingcall [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.329 2 DEBUG nova.compute.manager [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.329 2 DEBUG nova.network.neutron [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.368 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.369 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.370 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.370 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.393 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.397 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.486 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393812.485308, 923e00cc-7494-46f3-93e2-3c223705aff1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.486 2 INFO nova.compute.manager [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] VM Stopped (Lifecycle Event)
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.513 2 DEBUG nova.compute.manager [None req-f71b9520-9335-4048-a3c8-ba898a877ede - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:30:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/111856958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.555 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.566 2 DEBUG nova.compute.provider_tree [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.583 2 DEBUG nova.scheduler.client.report [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.607 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.608 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.636 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.660 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.661 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.692 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.700 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.729 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.790 2 DEBUG nova.objects.instance [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.827 2 DEBUG nova.policy [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '116b114f14f84e4cbd6cc966e29d82e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.877 2 DEBUG nova.compute.manager [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.878 2 DEBUG oslo_concurrency.lockutils [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.878 2 DEBUG oslo_concurrency.lockutils [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.878 2 DEBUG oslo_concurrency.lockutils [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.879 2 DEBUG nova.compute.manager [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.879 2 DEBUG nova.compute.manager [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.934 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.935 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Ensure instance console log exists: /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.935 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.936 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:27 compute-0 nova_compute[260603]: 2025-10-02 08:30:27.936 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:30:27
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Oct 02 08:30:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:30:27 compute-0 ceph-mon[74477]: pgmap v1487: 305 pgs: 305 active+clean; 169 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 568 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct 02 08:30:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/111856958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.071 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.073 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.074 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Creating image(s)
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.108 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.144 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.178 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.187 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.273 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.274 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.275 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.276 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.313 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.319 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.572 2 DEBUG nova.policy [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e7ed2cfbcca04b4ca0a07910a0319456', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '649aece3b28a477fa6e0d1dc7b1d5ade', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.580 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.668 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] resizing rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.789 2 DEBUG nova.objects.instance [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lazy-loading 'migration_context' on Instance uuid 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.813 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.814 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Ensure instance console log exists: /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.815 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.815 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:28 compute-0 nova_compute[260603]: 2025-10-02 08:30:28.816 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 50 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 330 KiB/s wr, 154 op/s
Oct 02 08:30:29 compute-0 nova_compute[260603]: 2025-10-02 08:30:29.376 2 DEBUG nova.network.neutron [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:29 compute-0 nova_compute[260603]: 2025-10-02 08:30:29.400 2 INFO nova.compute.manager [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Took 2.07 seconds to deallocate network for instance.
Oct 02 08:30:29 compute-0 nova_compute[260603]: 2025-10-02 08:30:29.452 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:29 compute-0 nova_compute[260603]: 2025-10-02 08:30:29.452 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:29 compute-0 nova_compute[260603]: 2025-10-02 08:30:29.580 2 DEBUG oslo_concurrency.processutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:29 compute-0 nova_compute[260603]: 2025-10-02 08:30:29.843 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Successfully created port: 4298d267-ede8-417b-9e26-a2533908497f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:30:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443963866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:30 compute-0 ceph-mon[74477]: pgmap v1488: 305 pgs: 305 active+clean; 50 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 330 KiB/s wr, 154 op/s
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.061 2 DEBUG nova.compute.manager [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.062 2 DEBUG oslo_concurrency.lockutils [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.062 2 DEBUG oslo_concurrency.lockutils [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.063 2 DEBUG oslo_concurrency.lockutils [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.063 2 DEBUG nova.compute.manager [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.064 2 WARNING nova.compute.manager [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state deleted and task_state None.
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.064 2 DEBUG nova.compute.manager [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-deleted-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.066 2 DEBUG oslo_concurrency.processutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.075 2 DEBUG nova.compute.provider_tree [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.105 2 DEBUG nova.scheduler.client.report [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.134 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.175 2 INFO nova.scheduler.client.report [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Deleted allocations for instance 9924ce7f-b701-4560-b2c5-67f673b45807
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.260 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:30 compute-0 nova_compute[260603]: 2025-10-02 08:30:30.268 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Successfully created port: 25be6c2d-0038-4133-8d21-845a3220d33e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:30:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 50 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 327 KiB/s wr, 101 op/s
Oct 02 08:30:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1443963866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:31 compute-0 nova_compute[260603]: 2025-10-02 08:30:31.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:31 compute-0 nova_compute[260603]: 2025-10-02 08:30:31.702 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393816.699861, d15c7c6a-e6a1-4538-9db0-ee1aef10f38b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:31 compute-0 nova_compute[260603]: 2025-10-02 08:30:31.702 2 INFO nova.compute.manager [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] VM Stopped (Lifecycle Event)
Oct 02 08:30:31 compute-0 nova_compute[260603]: 2025-10-02 08:30:31.726 2 DEBUG nova.compute.manager [None req-ebb709b4-588d-46f0-9d40-523e9bf7cd7e - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:31 compute-0 nova_compute[260603]: 2025-10-02 08:30:31.866 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Successfully updated port: 4298d267-ede8-417b-9e26-a2533908497f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:30:31 compute-0 nova_compute[260603]: 2025-10-02 08:30:31.902 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:30:31 compute-0 nova_compute[260603]: 2025-10-02 08:30:31.903 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquired lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:30:31 compute-0 nova_compute[260603]: 2025-10-02 08:30:31.903 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:30:32 compute-0 ceph-mon[74477]: pgmap v1489: 305 pgs: 305 active+clean; 50 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 327 KiB/s wr, 101 op/s
Oct 02 08:30:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.455 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Successfully updated port: 25be6c2d-0038-4133-8d21-845a3220d33e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.479 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.480 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquired lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.480 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.522 2 DEBUG nova.compute.manager [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-changed-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.523 2 DEBUG nova.compute.manager [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Refreshing instance network info cache due to event network-changed-4298d267-ede8-417b-9e26-a2533908497f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.523 2 DEBUG oslo_concurrency.lockutils [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.556 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:32 compute-0 nova_compute[260603]: 2025-10-02 08:30:32.883 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:30:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 118 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 2.9 MiB/s wr, 151 op/s
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.897 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Updating instance_info_cache with network_info: [{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.932 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Releasing lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.933 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance network_info: |[{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.934 2 DEBUG oslo_concurrency.lockutils [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.935 2 DEBUG nova.network.neutron [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Refreshing network info cache for port 4298d267-ede8-417b-9e26-a2533908497f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.940 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start _get_guest_xml network_info=[{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.948 2 WARNING nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.956 2 DEBUG nova.virt.libvirt.host [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.957 2 DEBUG nova.virt.libvirt.host [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.971 2 DEBUG nova.virt.libvirt.host [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.972 2 DEBUG nova.virt.libvirt.host [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.973 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.973 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.974 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.975 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.975 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.976 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.976 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.977 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.977 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.978 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.978 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.979 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:30:33 compute-0 nova_compute[260603]: 2025-10-02 08:30:33.984 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.057 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Updating instance_info_cache with network_info: [{"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:34 compute-0 ceph-mon[74477]: pgmap v1490: 305 pgs: 305 active+clean; 118 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 2.9 MiB/s wr, 151 op/s
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.086 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Releasing lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.087 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance network_info: |[{"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.092 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start _get_guest_xml network_info=[{"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.103 2 WARNING nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.108 2 DEBUG nova.virt.libvirt.host [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.109 2 DEBUG nova.virt.libvirt.host [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.116 2 DEBUG nova.virt.libvirt.host [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.117 2 DEBUG nova.virt.libvirt.host [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.118 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.118 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.119 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.120 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.121 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.121 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.122 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.122 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.123 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.123 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.124 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.124 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.130 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:30:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846897373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.485 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.518 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.523 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:30:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2278444661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.619 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.656 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:34 compute-0 nova_compute[260603]: 2025-10-02 08:30:34.661 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:34.817 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:34.819 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:34 compute-0 sudo[320040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:34 compute-0 sudo[320040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:34 compute-0 sudo[320040]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:34 compute-0 sudo[320083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:30:34 compute-0 sudo[320083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:34 compute-0 sudo[320083]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:30:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1539084807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct 02 08:30:34 compute-0 sudo[320108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:34 compute-0 sudo[320108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:34 compute-0 sudo[320108]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.010 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.013 2 DEBUG nova.virt.libvirt.vif [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTe
stJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:27Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.013 2 DEBUG nova.network.os_vif_util [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.014 2 DEBUG nova.network.os_vif_util [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.015 2 DEBUG nova.objects.instance [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.031 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <uuid>f8f36f36-817a-4e64-8c57-c211cfc7b0ba</uuid>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <name>instance-0000003b</name>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-949284758</nova:name>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:30:33</nova:creationTime>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:user uuid="116b114f14f84e4cbd6cc966e29d82e7">tempest-ServerDiskConfigTestJSON-1277806880-project-member</nova:user>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:project uuid="bce7493292bb47cfb7168bca89f78f4a">tempest-ServerDiskConfigTestJSON-1277806880</nova:project>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:port uuid="4298d267-ede8-417b-9e26-a2533908497f">
Oct 02 08:30:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <system>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="serial">f8f36f36-817a-4e64-8c57-c211cfc7b0ba</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="uuid">f8f36f36-817a-4e64-8c57-c211cfc7b0ba</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </system>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <os>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </os>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <features>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </features>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:50:87:db"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <target dev="tap4298d267-ed"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/console.log" append="off"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <video>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </video>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 sudo[320135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 sudo[320135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:30:35 compute-0 nova_compute[260603]: </domain>
Oct 02 08:30:35 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.031 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Preparing to wait for external event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.031 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.032 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.032 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.032 2 DEBUG nova.virt.libvirt.vif [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:27Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.033 2 DEBUG nova.network.os_vif_util [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.033 2 DEBUG nova.network.os_vif_util [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.033 2 DEBUG os_vif [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4298d267-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.038 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4298d267-ed, col_values=(('external_ids', {'iface-id': '4298d267-ede8-417b-9e26-a2533908497f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:87:db', 'vm-uuid': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1846897373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2278444661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1539084807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:35 compute-0 NetworkManager[45129]: <info>  [1759393835.0813] manager: (tap4298d267-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.087 2 INFO os_vif [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed')
Oct 02 08:30:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:30:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262397527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.131 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.132 2 DEBUG nova.virt.libvirt.vif [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-275620590',display_name='tempest-ServerMetadataTestJSON-server-275620590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-275620590',id=60,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='649aece3b28a477fa6e0d1dc7b1d5ade',ramdisk_id='',reservation_id='r-h0hzwcr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1153289932',owner_user_name='tempest-ServerMetadataTestJSON-1153289932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:27Z,user_data=None,user_id='e7ed2cfbcca04b4ca0a07910a0319456',uuid=3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.132 2 DEBUG nova.network.os_vif_util [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converting VIF {"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.133 2 DEBUG nova.network.os_vif_util [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.133 2 DEBUG nova.objects.instance [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lazy-loading 'pci_devices' on Instance uuid 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.152 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <uuid>3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5</uuid>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <name>instance-0000003c</name>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerMetadataTestJSON-server-275620590</nova:name>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:30:34</nova:creationTime>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:user uuid="e7ed2cfbcca04b4ca0a07910a0319456">tempest-ServerMetadataTestJSON-1153289932-project-member</nova:user>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:project uuid="649aece3b28a477fa6e0d1dc7b1d5ade">tempest-ServerMetadataTestJSON-1153289932</nova:project>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <nova:port uuid="25be6c2d-0038-4133-8d21-845a3220d33e">
Oct 02 08:30:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <system>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="serial">3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="uuid">3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </system>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <os>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </os>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <features>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </features>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:30:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:75:65:1d"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <target dev="tap25be6c2d-00"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/console.log" append="off"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <video>
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </video>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:30:35 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:30:35 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:30:35 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:30:35 compute-0 nova_compute[260603]: </domain>
Oct 02 08:30:35 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.153 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Preparing to wait for external event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.153 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.153 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.154 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.154 2 DEBUG nova.virt.libvirt.vif [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-275620590',display_name='tempest-ServerMetadataTestJSON-server-275620590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-275620590',id=60,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='649aece3b28a477fa6e0d1dc7b1d5ade',ramdisk_id='',reservation_id='r-h0hzwcr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1153289932',owner_user_name='tempest-ServerMetadataTe
stJSON-1153289932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:27Z,user_data=None,user_id='e7ed2cfbcca04b4ca0a07910a0319456',uuid=3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.155 2 DEBUG nova.network.os_vif_util [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converting VIF {"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.155 2 DEBUG nova.network.os_vif_util [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.155 2 DEBUG os_vif [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.156 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.157 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25be6c2d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25be6c2d-00, col_values=(('external_ids', {'iface-id': '25be6c2d-0038-4133-8d21-845a3220d33e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:65:1d', 'vm-uuid': '3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:35 compute-0 NetworkManager[45129]: <info>  [1759393835.1630] manager: (tap25be6c2d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.164 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.164 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.164 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No VIF found with MAC fa:16:3e:50:87:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.164 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Using config drive
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.184 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.192 2 INFO os_vif [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00')
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.206 2 DEBUG nova.compute.manager [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received event network-changed-25be6c2d-0038-4133-8d21-845a3220d33e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.207 2 DEBUG nova.compute.manager [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Refreshing instance network info cache due to event network-changed-25be6c2d-0038-4133-8d21-845a3220d33e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.207 2 DEBUG oslo_concurrency.lockutils [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.207 2 DEBUG oslo_concurrency.lockutils [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.207 2 DEBUG nova.network.neutron [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Refreshing network info cache for port 25be6c2d-0038-4133-8d21-845a3220d33e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.250 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.250 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.250 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] No VIF found with MAC fa:16:3e:75:65:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.251 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Using config drive
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.273 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.595 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Creating config drive at /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.605 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_vsolfl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:35 compute-0 podman[320276]: 2025-10-02 08:30:35.631807732 +0000 UTC m=+0.076823065 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:30:35 compute-0 podman[320276]: 2025-10-02 08:30:35.717802679 +0000 UTC m=+0.162818012 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.768 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_vsolfl" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.809 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.814 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.859 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating config drive at /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.867 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7myp0roh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.902 2 DEBUG nova.network.neutron [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Updated VIF entry in instance network info cache for port 4298d267-ede8-417b-9e26-a2533908497f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.903 2 DEBUG nova.network.neutron [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Updating instance_info_cache with network_info: [{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.928 2 DEBUG oslo_concurrency.lockutils [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.977 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:35 compute-0 nova_compute[260603]: 2025-10-02 08:30:35.977 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Deleting local config drive /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config because it was imported into RBD.
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.001 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7myp0roh" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.034 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.0431] manager: (tap25be6c2d-00): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.044 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:36 compute-0 kernel: tap25be6c2d-00: entered promiscuous mode
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00522|binding|INFO|Claiming lport 25be6c2d-0038-4133-8d21-845a3220d33e for this chassis.
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00523|binding|INFO|25be6c2d-0038-4133-8d21-845a3220d33e: Claiming fa:16:3e:75:65:1d 10.100.0.7
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.069 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:65:1d 10.100.0.7'], port_security=['fa:16:3e:75:65:1d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-688aa430-74b8-4500-af80-d797f0ec4310', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '649aece3b28a477fa6e0d1dc7b1d5ade', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4630aa5e-be12-47d4-ae0b-793f01b010a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f759ed2-fdc4-4b2b-ac94-db6a43d5407a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=25be6c2d-0038-4133-8d21-845a3220d33e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:36 compute-0 systemd-udevd[320447]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.072 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 25be6c2d-0038-4133-8d21-845a3220d33e in datapath 688aa430-74b8-4500-af80-d797f0ec4310 bound to our chassis
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.074 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 688aa430-74b8-4500-af80-d797f0ec4310
Oct 02 08:30:36 compute-0 systemd-machined[214636]: New machine qemu-65-instance-0000003c.
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.0833] device (tap25be6c2d-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.0841] device (tap25be6c2d-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.088 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10b5e7da-7d19-4f0a-b382-1d4eaa512a2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 ceph-mon[74477]: pgmap v1491: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct 02 08:30:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4262397527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:36 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-0000003c.
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.094 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap688aa430-71 in ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.096 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap688aa430-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.096 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed73cd6f-5bbc-49e8-ae9d-946c5376c3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.097 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e143579d-7493-4990-abcf-09de9824c001]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.110 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fe64ec47-86ea-4d42-b9d6-42f6d01e0110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.136 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[573de149-5ee9-4290-9a19-9d19d4a4ea53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00524|binding|INFO|Setting lport 25be6c2d-0038-4133-8d21-845a3220d33e ovn-installed in OVS
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00525|binding|INFO|Setting lport 25be6c2d-0038-4133-8d21-845a3220d33e up in Southbound
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.171 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2a751374-dd5a-49c9-abb5-6316fced5fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[975a9da8-21eb-4255-bec9-4d025760a370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.2072] manager: (tap688aa430-70): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Oct 02 08:30:36 compute-0 systemd-udevd[320455]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.246 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cb6cc7-366d-4c52-81f5-2c0400d5246d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.250 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1649689d-d3bf-4a5f-87e1-796a0c049961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.2744] device (tap688aa430-70): carrier: link connected
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.284 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f5d3d0-473d-4022-8d1e-0cfea245e2e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29463ace-95bb-4150-a04a-1a33406ebb0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap688aa430-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:29:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471487, 'reachable_time': 36328, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320532, 'error': None, 'target': 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.326 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.326 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deleting local config drive /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config because it was imported into RBD.
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9bac593f-5bb6-4ffe-94f4-edbb2f8f536a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:296f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471487, 'tstamp': 471487}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320543, 'error': None, 'target': 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.360 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2eef818-fb9a-433d-b4c3-0e34287fde55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap688aa430-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:29:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471487, 'reachable_time': 36328, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320546, 'error': None, 'target': 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.373 2 DEBUG nova.compute.manager [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.373 2 DEBUG oslo_concurrency.lockutils [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.374 2 DEBUG oslo_concurrency.lockutils [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.374 2 DEBUG oslo_concurrency.lockutils [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.374 2 DEBUG nova.compute.manager [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Processing event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.3893] manager: (tap4298d267-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Oct 02 08:30:36 compute-0 kernel: tap4298d267-ed: entered promiscuous mode
Oct 02 08:30:36 compute-0 systemd-udevd[320505]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00526|binding|INFO|Claiming lport 4298d267-ede8-417b-9e26-a2533908497f for this chassis.
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00527|binding|INFO|4298d267-ede8-417b-9e26-a2533908497f: Claiming fa:16:3e:50:87:db 10.100.0.3
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.405 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:87:db 10.100.0.3'], port_security=['fa:16:3e:50:87:db 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4298d267-ede8-417b-9e26-a2533908497f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.4070] device (tap4298d267-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.4076] device (tap4298d267-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc17135-475f-4079-8ffa-2fe012fb007a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 systemd-machined[214636]: New machine qemu-66-instance-0000003b.
Oct 02 08:30:36 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000003b.
Oct 02 08:30:36 compute-0 sudo[320135]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.477 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce507f4-108d-4c93-9734-7d53e642610e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.479 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap688aa430-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.479 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.480 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap688aa430-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 NetworkManager[45129]: <info>  [1759393836.4833] manager: (tap688aa430-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Oct 02 08:30:36 compute-0 kernel: tap688aa430-70: entered promiscuous mode
Oct 02 08:30:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.486 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap688aa430-70, col_values=(('external_ids', {'iface-id': '0d737469-5d43-4396-a0ae-3cb6fa3fbe67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00528|binding|INFO|Releasing lport 0d737469-5d43-4396-a0ae-3cb6fa3fbe67 from this chassis (sb_readonly=0)
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.492 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/688aa430-74b8-4500-af80-d797f0ec4310.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/688aa430-74b8-4500-af80-d797f0ec4310.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.495 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1420c5ba-3082-43ea-9d82-5322267954e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.496 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-688aa430-74b8-4500-af80-d797f0ec4310
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/688aa430-74b8-4500-af80-d797f0ec4310.pid.haproxy
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 688aa430-74b8-4500-af80-d797f0ec4310
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:30:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.498 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'env', 'PROCESS_TAG=haproxy-688aa430-74b8-4500-af80-d797f0ec4310', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/688aa430-74b8-4500-af80-d797f0ec4310.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00529|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f ovn-installed in OVS
Oct 02 08:30:36 compute-0 ovn_controller[152344]: 2025-10-02T08:30:36Z|00530|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f up in Southbound
Oct 02 08:30:36 compute-0 nova_compute[260603]: 2025-10-02 08:30:36.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:36 compute-0 sudo[320625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:36 compute-0 sudo[320625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:36 compute-0 sudo[320625]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:36 compute-0 sudo[320660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:30:36 compute-0 sudo[320660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:36 compute-0 sudo[320660]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:36 compute-0 sudo[320685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:36 compute-0 sudo[320685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:36 compute-0 sudo[320685]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:36 compute-0 sudo[320725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:30:36 compute-0 sudo[320725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:36 compute-0 podman[320798]: 2025-10-02 08:30:36.916844745 +0000 UTC m=+0.085505144 container create cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:30:36 compute-0 podman[320798]: 2025-10-02 08:30:36.862054515 +0000 UTC m=+0.030714934 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:30:36 compute-0 systemd[1]: Started libpod-conmon-cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3.scope.
Oct 02 08:30:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 105 op/s
Oct 02 08:30:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327ede22813a349afb9df738f72940c2bb921ee83034aa092f6f10580f45e456/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:37 compute-0 podman[320798]: 2025-10-02 08:30:37.016413475 +0000 UTC m=+0.185073924 container init cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:30:37 compute-0 podman[320798]: 2025-10-02 08:30:37.022768582 +0000 UTC m=+0.191428991 container start cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 08:30:37 compute-0 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [NOTICE]   (320830) : New worker (320832) forked
Oct 02 08:30:37 compute-0 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [NOTICE]   (320830) : Loading success.
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.095 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.096 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.0958679, 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.097 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] VM Started (Lifecycle Event)
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.102 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:30:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.108 2 INFO nova.virt.libvirt.driver [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance spawned successfully.
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.108 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.114 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.120 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.141 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.143 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.144 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.144 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.145 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.145 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.155 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.155 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.0959647, 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.155 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] VM Paused (Lifecycle Event)
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.142 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4298d267-ede8-417b-9e26-a2533908497f in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.159 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[89649e2c-a249-4a55-8738-1b6007d01b35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.174 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8df0af1-11 in ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.177 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8df0af1-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.177 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cabbcd90-fb6d-4c0a-8b16-26d06604179d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90835da8-279a-456f-b169-7940c2c704d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.184 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.193 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[28cbd4a4-3456-4d0e-b8e8-a6cb24a3d514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.198 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.1014993, 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.198 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] VM Resumed (Lifecycle Event)
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.211 2 INFO nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Took 9.14 seconds to spawn the instance on the hypervisor.
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.211 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.218 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.225 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.225 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2408ac-3a41-4b00-b2fb-4f628f241dfc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.251 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393822.2506952, 797fde07-e88a-4d6e-a1a3-25e22c66097c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.252 2 INFO nova.compute.manager [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] VM Stopped (Lifecycle Event)
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.256 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.275 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac58d8b-da1e-4787-b491-0821e2326a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.279 2 DEBUG nova.compute.manager [None req-8af441cb-d89e-430a-9b66-06914f7fbad8 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 NetworkManager[45129]: <info>  [1759393837.2893] manager: (tapf8df0af1-10): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.288 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cc22b7-77da-4861-a932-b40ceb6dceee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.296 2 INFO nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Took 10.65 seconds to build instance.
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.317 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:37 compute-0 sudo[320725]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.358 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e5f8f7-f554-4fa9-ac7b-951df42c468a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.364 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[48e24c3b-ed53-4dab-aec3-1755671199ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 08:30:37 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:30:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:30:37 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:30:37 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:37 compute-0 NetworkManager[45129]: <info>  [1759393837.4027] device (tapf8df0af1-10): carrier: link connected
Oct 02 08:30:37 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ac7bf907-dff9-4c4f-82dc-5e154a2f8001 does not exist
Oct 02 08:30:37 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b93ef181-3e31-4bef-8650-78da064594ab does not exist
Oct 02 08:30:37 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9f39f035-97f1-42af-9939-f69711c5730a does not exist
Oct 02 08:30:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:30:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:30:37 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:30:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.419 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3d9c3d-f5ba-4737-8600-f5adb26cfb31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.450 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e6186142-e491-441a-80dc-ea7784f2a1a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471600, 'reachable_time': 26145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320873, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.476 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.4763658, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.477 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Started (Lifecycle Event)
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.480 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[290ba45e-3131-46a5-9562-04780cc29c95]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:6ddc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471600, 'tstamp': 471600}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320892, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 sudo[320868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:30:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:30:37 compute-0 sudo[320868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:37 compute-0 sudo[320868]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.499 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.504 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.4772487, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.505 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Paused (Lifecycle Event)
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59232c45-77ad-427d-9298-9e7b6fdfd8dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471600, 'reachable_time': 26145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320895, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.521 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.527 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.552 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:30:37 compute-0 sudo[320896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.570 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9d2263-6672-4b30-b485-4664a3c3b028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 sudo[320896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:37 compute-0 sudo[320896]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.588 2 DEBUG nova.compute.manager [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.588 2 DEBUG oslo_concurrency.lockutils [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.588 2 DEBUG oslo_concurrency.lockutils [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.588 2 DEBUG oslo_concurrency.lockutils [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.589 2 DEBUG nova.compute.manager [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Processing event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.589 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.593 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.5928373, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.593 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Resumed (Lifecycle Event)
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.594 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.597 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance spawned successfully.
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.597 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.613 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.619 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.624 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.624 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.625 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.625 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.625 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.626 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.648 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:30:37 compute-0 sudo[320924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.655 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a855b2c8-8bac-4af4-8583-08b7e005eb8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.657 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:37 compute-0 sudo[320924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.657 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.658 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8df0af1-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:37 compute-0 sudo[320924]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:37 compute-0 kernel: tapf8df0af1-10: entered promiscuous mode
Oct 02 08:30:37 compute-0 NetworkManager[45129]: <info>  [1759393837.6607] manager: (tapf8df0af1-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.670 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8df0af1-10, col_values=(('external_ids', {'iface-id': '1405e724-f2f6-4a95-8848-550131e62910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:37 compute-0 ovn_controller[152344]: 2025-10-02T08:30:37Z|00531|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.685 2 INFO nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Took 10.51 seconds to spawn the instance on the hypervisor.
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.685 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.695 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.696 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0506ba88-9a55-4872-9409-439aacec2cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.697 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:30:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.698 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'env', 'PROCESS_TAG=haproxy-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8df0af1-1767-419a-8500-c28fbf45ae4b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:30:37 compute-0 sudo[320953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:30:37 compute-0 sudo[320953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.732 2 DEBUG nova.network.neutron [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Updated VIF entry in instance network info cache for port 25be6c2d-0038-4133-8d21-845a3220d33e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.732 2 DEBUG nova.network.neutron [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Updating instance_info_cache with network_info: [{"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.745 2 INFO nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Took 12.12 seconds to build instance.
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.747 2 DEBUG oslo_concurrency.lockutils [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.763 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:37 compute-0 nova_compute[260603]: 2025-10-02 08:30:37.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:38 compute-0 podman[321036]: 2025-10-02 08:30:38.072975809 +0000 UTC m=+0.044589115 container create b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:30:38 compute-0 podman[321048]: 2025-10-02 08:30:38.108406419 +0000 UTC m=+0.063084159 container create 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:30:38 compute-0 systemd[1]: Started libpod-conmon-b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72.scope.
Oct 02 08:30:38 compute-0 podman[321036]: 2025-10-02 08:30:38.053912327 +0000 UTC m=+0.025525663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:30:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:30:38 compute-0 systemd[1]: Started libpod-conmon-720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e.scope.
Oct 02 08:30:38 compute-0 podman[321036]: 2025-10-02 08:30:38.168824603 +0000 UTC m=+0.140437939 container init b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:30:38 compute-0 podman[321048]: 2025-10-02 08:30:38.072263367 +0000 UTC m=+0.026941117 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:30:38 compute-0 podman[321036]: 2025-10-02 08:30:38.178413801 +0000 UTC m=+0.150027137 container start b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 08:30:38 compute-0 podman[321036]: 2025-10-02 08:30:38.182343522 +0000 UTC m=+0.153956868 container attach b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 02 08:30:38 compute-0 cranky_visvesvaraya[321069]: 167 167
Oct 02 08:30:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:30:38 compute-0 podman[321036]: 2025-10-02 08:30:38.185423528 +0000 UTC m=+0.157036844 container died b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 08:30:38 compute-0 systemd[1]: libpod-b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72.scope: Deactivated successfully.
Oct 02 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11063ab83b9a9c00fd7baf0f7f968c64e020d49f1d6528b03e58b1a6ea92a56d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:38 compute-0 podman[321048]: 2025-10-02 08:30:38.209794344 +0000 UTC m=+0.164472114 container init 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-347c5ee217ac598c0db47c598a2201ee99a86436c0f65e31b6a74989a96c1561-merged.mount: Deactivated successfully.
Oct 02 08:30:38 compute-0 podman[321048]: 2025-10-02 08:30:38.222729625 +0000 UTC m=+0.177407365 container start 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:30:38 compute-0 podman[321036]: 2025-10-02 08:30:38.245510353 +0000 UTC m=+0.217123669 container remove b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:30:38 compute-0 systemd[1]: libpod-conmon-b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72.scope: Deactivated successfully.
Oct 02 08:30:38 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [NOTICE]   (321092) : New worker (321096) forked
Oct 02 08:30:38 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [NOTICE]   (321092) : Loading success.
Oct 02 08:30:38 compute-0 podman[321110]: 2025-10-02 08:30:38.429382628 +0000 UTC m=+0.042342805 container create 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 08:30:38 compute-0 systemd[1]: Started libpod-conmon-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope.
Oct 02 08:30:38 compute-0 ceph-mon[74477]: pgmap v1492: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 105 op/s
Oct 02 08:30:38 compute-0 nova_compute[260603]: 2025-10-02 08:30:38.499 2 DEBUG nova.compute.manager [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:38 compute-0 nova_compute[260603]: 2025-10-02 08:30:38.500 2 DEBUG oslo_concurrency.lockutils [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:38 compute-0 nova_compute[260603]: 2025-10-02 08:30:38.500 2 DEBUG oslo_concurrency.lockutils [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:38 compute-0 nova_compute[260603]: 2025-10-02 08:30:38.500 2 DEBUG oslo_concurrency.lockutils [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:38 compute-0 nova_compute[260603]: 2025-10-02 08:30:38.500 2 DEBUG nova.compute.manager [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] No waiting events found dispatching network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:38 compute-0 nova_compute[260603]: 2025-10-02 08:30:38.500 2 WARNING nova.compute.manager [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received unexpected event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e for instance with vm_state active and task_state None.
Oct 02 08:30:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:30:38 compute-0 podman[321110]: 2025-10-02 08:30:38.412169314 +0000 UTC m=+0.025129481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:30:38 compute-0 podman[321110]: 2025-10-02 08:30:38.529293508 +0000 UTC m=+0.142253705 container init 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006919304917952725 of space, bias 1.0, pg target 0.20757914753858175 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:30:38 compute-0 podman[321110]: 2025-10-02 08:30:38.536331586 +0000 UTC m=+0.149291763 container start 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 08:30:38 compute-0 podman[321110]: 2025-10-02 08:30:38.539738832 +0000 UTC m=+0.152699059 container attach 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:30:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 3.6 MiB/s wr, 139 op/s
Oct 02 08:30:39 compute-0 nifty_brattain[321126]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:30:39 compute-0 nifty_brattain[321126]: --> relative data size: 1.0
Oct 02 08:30:39 compute-0 nifty_brattain[321126]: --> All data devices are unavailable
Oct 02 08:30:39 compute-0 systemd[1]: libpod-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope: Deactivated successfully.
Oct 02 08:30:39 compute-0 systemd[1]: libpod-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope: Consumed 1.066s CPU time.
Oct 02 08:30:39 compute-0 conmon[321126]: conmon 5c010b6eab3482eedd6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope/container/memory.events
Oct 02 08:30:39 compute-0 podman[321110]: 2025-10-02 08:30:39.690561491 +0000 UTC m=+1.303521668 container died 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053-merged.mount: Deactivated successfully.
Oct 02 08:30:39 compute-0 podman[321110]: 2025-10-02 08:30:39.750387117 +0000 UTC m=+1.363347294 container remove 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:30:39 compute-0 systemd[1]: libpod-conmon-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope: Deactivated successfully.
Oct 02 08:30:39 compute-0 sudo[320953]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:39 compute-0 nova_compute[260603]: 2025-10-02 08:30:39.844 2 DEBUG nova.compute.manager [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:39 compute-0 nova_compute[260603]: 2025-10-02 08:30:39.845 2 DEBUG oslo_concurrency.lockutils [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:39 compute-0 nova_compute[260603]: 2025-10-02 08:30:39.845 2 DEBUG oslo_concurrency.lockutils [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:39 compute-0 nova_compute[260603]: 2025-10-02 08:30:39.846 2 DEBUG oslo_concurrency.lockutils [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:39 compute-0 nova_compute[260603]: 2025-10-02 08:30:39.846 2 DEBUG nova.compute.manager [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] No waiting events found dispatching network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:39 compute-0 nova_compute[260603]: 2025-10-02 08:30:39.846 2 WARNING nova.compute.manager [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received unexpected event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f for instance with vm_state active and task_state None.
Oct 02 08:30:39 compute-0 sudo[321167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:39 compute-0 sudo[321167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:39 compute-0 sudo[321167]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:39 compute-0 sudo[321192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:30:39 compute-0 sudo[321192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:39 compute-0 sudo[321192]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:40 compute-0 sudo[321217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:40 compute-0 sudo[321217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:40 compute-0 sudo[321217]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:40 compute-0 sudo[321242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:30:40 compute-0 sudo[321242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:40 compute-0 nova_compute[260603]: 2025-10-02 08:30:40.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:40 compute-0 ceph-mon[74477]: pgmap v1493: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 3.6 MiB/s wr, 139 op/s
Oct 02 08:30:40 compute-0 podman[321306]: 2025-10-02 08:30:40.591764725 +0000 UTC m=+0.061240261 container create 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:30:40 compute-0 systemd[1]: Started libpod-conmon-29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504.scope.
Oct 02 08:30:40 compute-0 podman[321306]: 2025-10-02 08:30:40.572203878 +0000 UTC m=+0.041679414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:30:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:30:40 compute-0 podman[321306]: 2025-10-02 08:30:40.682392387 +0000 UTC m=+0.151867953 container init 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:30:40 compute-0 podman[321306]: 2025-10-02 08:30:40.694038789 +0000 UTC m=+0.163514325 container start 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:30:40 compute-0 podman[321306]: 2025-10-02 08:30:40.698423505 +0000 UTC m=+0.167899061 container attach 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 08:30:40 compute-0 great_chaplygin[321322]: 167 167
Oct 02 08:30:40 compute-0 podman[321306]: 2025-10-02 08:30:40.700895091 +0000 UTC m=+0.170370627 container died 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:30:40 compute-0 systemd[1]: libpod-29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504.scope: Deactivated successfully.
Oct 02 08:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8130fa34f2286e2ac0ef4d32cb8a84a7e6388fb0f4468b1186d3952b7b025add-merged.mount: Deactivated successfully.
Oct 02 08:30:40 compute-0 podman[321306]: 2025-10-02 08:30:40.74082448 +0000 UTC m=+0.210300026 container remove 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:30:40 compute-0 systemd[1]: libpod-conmon-29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504.scope: Deactivated successfully.
Oct 02 08:30:40 compute-0 podman[321345]: 2025-10-02 08:30:40.973869431 +0000 UTC m=+0.060444707 container create d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:30:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 3.3 MiB/s wr, 87 op/s
Oct 02 08:30:41 compute-0 systemd[1]: Started libpod-conmon-d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99.scope.
Oct 02 08:30:41 compute-0 podman[321345]: 2025-10-02 08:30:40.950939649 +0000 UTC m=+0.037514955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:30:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:41 compute-0 podman[321345]: 2025-10-02 08:30:41.097010912 +0000 UTC m=+0.183586178 container init d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:30:41 compute-0 podman[321345]: 2025-10-02 08:30:41.110134259 +0000 UTC m=+0.196709515 container start d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:30:41 compute-0 podman[321345]: 2025-10-02 08:30:41.112839753 +0000 UTC m=+0.199415029 container attach d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:30:41 compute-0 nova_compute[260603]: 2025-10-02 08:30:41.544 2 INFO nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Rebuilding instance
Oct 02 08:30:41 compute-0 nova_compute[260603]: 2025-10-02 08:30:41.668 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393826.6670313, 9924ce7f-b701-4560-b2c5-67f673b45807 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:41 compute-0 nova_compute[260603]: 2025-10-02 08:30:41.669 2 INFO nova.compute.manager [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Stopped (Lifecycle Event)
Oct 02 08:30:41 compute-0 nova_compute[260603]: 2025-10-02 08:30:41.700 2 DEBUG nova.compute.manager [None req-6fdb6802-d71c-49bb-89c1-cb233ca7cee8 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:41 compute-0 cranky_kepler[321362]: {
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:     "0": [
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:         {
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "devices": [
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "/dev/loop3"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             ],
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_name": "ceph_lv0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_size": "21470642176",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "name": "ceph_lv0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "tags": {
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cluster_name": "ceph",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.crush_device_class": "",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.encrypted": "0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osd_id": "0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.type": "block",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.vdo": "0"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             },
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "type": "block",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "vg_name": "ceph_vg0"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:         }
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:     ],
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:     "1": [
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:         {
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "devices": [
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "/dev/loop4"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             ],
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_name": "ceph_lv1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_size": "21470642176",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "name": "ceph_lv1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "tags": {
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cluster_name": "ceph",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.crush_device_class": "",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.encrypted": "0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osd_id": "1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.type": "block",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.vdo": "0"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             },
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "type": "block",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "vg_name": "ceph_vg1"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:         }
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:     ],
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:     "2": [
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:         {
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "devices": [
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "/dev/loop5"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             ],
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_name": "ceph_lv2",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_size": "21470642176",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "name": "ceph_lv2",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "tags": {
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.cluster_name": "ceph",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.crush_device_class": "",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.encrypted": "0",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osd_id": "2",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.type": "block",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:                 "ceph.vdo": "0"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             },
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "type": "block",
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:             "vg_name": "ceph_vg2"
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:         }
Oct 02 08:30:41 compute-0 cranky_kepler[321362]:     ]
Oct 02 08:30:41 compute-0 cranky_kepler[321362]: }
Oct 02 08:30:41 compute-0 nova_compute[260603]: 2025-10-02 08:30:41.916 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'trusted_certs' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:41 compute-0 nova_compute[260603]: 2025-10-02 08:30:41.933 2 DEBUG nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:41 compute-0 systemd[1]: libpod-d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99.scope: Deactivated successfully.
Oct 02 08:30:41 compute-0 nova_compute[260603]: 2025-10-02 08:30:41.984 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_requests' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:41 compute-0 podman[321371]: 2025-10-02 08:30:41.997160213 +0000 UTC m=+0.033483600 container died d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.004 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.016 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'resources' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46-merged.mount: Deactivated successfully.
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.030 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.044 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.049 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:30:42 compute-0 podman[321371]: 2025-10-02 08:30:42.076457503 +0000 UTC m=+0.112780870 container remove d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 08:30:42 compute-0 systemd[1]: libpod-conmon-d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99.scope: Deactivated successfully.
Oct 02 08:30:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:42 compute-0 sudo[321242]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:42 compute-0 sudo[321384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:42 compute-0 sudo[321384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:42 compute-0 sudo[321384]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:42 compute-0 sudo[321409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:30:42 compute-0 sudo[321409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:42 compute-0 sudo[321409]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:42 compute-0 sudo[321434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:42 compute-0 sudo[321434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:42 compute-0 sudo[321434]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:42 compute-0 sudo[321459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:30:42 compute-0 sudo[321459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:42 compute-0 ceph-mon[74477]: pgmap v1494: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 3.3 MiB/s wr, 87 op/s
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.737 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.737 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.737 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.738 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.738 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.739 2 INFO nova.compute.manager [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Terminating instance
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.740 2 DEBUG nova.compute.manager [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:42 compute-0 kernel: tap25be6c2d-00 (unregistering): left promiscuous mode
Oct 02 08:30:42 compute-0 NetworkManager[45129]: <info>  [1759393842.8000] device (tap25be6c2d-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:42 compute-0 ovn_controller[152344]: 2025-10-02T08:30:42Z|00532|binding|INFO|Releasing lport 25be6c2d-0038-4133-8d21-845a3220d33e from this chassis (sb_readonly=0)
Oct 02 08:30:42 compute-0 ovn_controller[152344]: 2025-10-02T08:30:42Z|00533|binding|INFO|Setting lport 25be6c2d-0038-4133-8d21-845a3220d33e down in Southbound
Oct 02 08:30:42 compute-0 ovn_controller[152344]: 2025-10-02T08:30:42Z|00534|binding|INFO|Removing iface tap25be6c2d-00 ovn-installed in OVS
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.818 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:65:1d 10.100.0.7'], port_security=['fa:16:3e:75:65:1d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-688aa430-74b8-4500-af80-d797f0ec4310', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '649aece3b28a477fa6e0d1dc7b1d5ade', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4630aa5e-be12-47d4-ae0b-793f01b010a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f759ed2-fdc4-4b2b-ac94-db6a43d5407a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=25be6c2d-0038-4133-8d21-845a3220d33e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.819 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 25be6c2d-0038-4133-8d21-845a3220d33e in datapath 688aa430-74b8-4500-af80-d797f0ec4310 unbound from our chassis
Oct 02 08:30:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.821 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 688aa430-74b8-4500-af80-d797f0ec4310, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:30:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.823 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72bc6529-a3b9-4283-96c4-a030ddc74d57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.824 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 namespace which is not needed anymore
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:42 compute-0 podman[321523]: 2025-10-02 08:30:42.83370616 +0000 UTC m=+0.065257326 container create 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 08:30:42 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct 02 08:30:42 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003c.scope: Consumed 6.398s CPU time.
Oct 02 08:30:42 compute-0 systemd-machined[214636]: Machine qemu-65-instance-0000003c terminated.
Oct 02 08:30:42 compute-0 systemd[1]: Started libpod-conmon-97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46.scope.
Oct 02 08:30:42 compute-0 podman[321523]: 2025-10-02 08:30:42.793353028 +0000 UTC m=+0.024904284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:30:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:30:42 compute-0 podman[321523]: 2025-10-02 08:30:42.926716796 +0000 UTC m=+0.158267972 container init 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:30:42 compute-0 podman[321523]: 2025-10-02 08:30:42.936655134 +0000 UTC m=+0.168206310 container start 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:30:42 compute-0 podman[321523]: 2025-10-02 08:30:42.940078821 +0000 UTC m=+0.171630007 container attach 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 08:30:42 compute-0 heuristic_carver[321552]: 167 167
Oct 02 08:30:42 compute-0 systemd[1]: libpod-97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46.scope: Deactivated successfully.
Oct 02 08:30:42 compute-0 conmon[321552]: conmon 97f9ace54191eb034604 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46.scope/container/memory.events
Oct 02 08:30:42 compute-0 podman[321523]: 2025-10-02 08:30:42.945516379 +0000 UTC m=+0.177067545 container died 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ca7b1103246e96db7afcd9e3e00de1bfc189cf0af50443656553b0c7c50bea5-merged.mount: Deactivated successfully.
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.980 2 INFO nova.virt.libvirt.driver [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance destroyed successfully.
Oct 02 08:30:42 compute-0 nova_compute[260603]: 2025-10-02 08:30:42.982 2 DEBUG nova.objects.instance [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lazy-loading 'resources' on Instance uuid 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:42 compute-0 podman[321523]: 2025-10-02 08:30:42.994143658 +0000 UTC m=+0.225694824 container remove 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:30:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 180 op/s
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.000 2 DEBUG nova.virt.libvirt.vif [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-275620590',display_name='tempest-ServerMetadataTestJSON-server-275620590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-275620590',id=60,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='649aece3b28a477fa6e0d1dc7b1d5ade',ramdisk_id='',reservation_id='r-h0hzwcr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',ima
ge_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1153289932',owner_user_name='tempest-ServerMetadataTestJSON-1153289932-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:42Z,user_data=None,user_id='e7ed2cfbcca04b4ca0a07910a0319456',uuid=3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.000 2 DEBUG nova.network.os_vif_util [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converting VIF {"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.001 2 DEBUG nova.network.os_vif_util [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.001 2 DEBUG os_vif [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25be6c2d-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:43 compute-0 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [NOTICE]   (320830) : haproxy version is 2.8.14-c23fe91
Oct 02 08:30:43 compute-0 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [NOTICE]   (320830) : path to executable is /usr/sbin/haproxy
Oct 02 08:30:43 compute-0 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [WARNING]  (320830) : Exiting Master process...
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.012 2 INFO os_vif [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00')
Oct 02 08:30:43 compute-0 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [ALERT]    (320830) : Current worker (320832) exited with code 143 (Terminated)
Oct 02 08:30:43 compute-0 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [WARNING]  (320830) : All workers exited. Exiting... (0)
Oct 02 08:30:43 compute-0 systemd[1]: libpod-cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3.scope: Deactivated successfully.
Oct 02 08:30:43 compute-0 podman[321563]: 2025-10-02 08:30:43.024046146 +0000 UTC m=+0.083629296 container died cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:30:43 compute-0 systemd[1]: libpod-conmon-97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46.scope: Deactivated successfully.
Oct 02 08:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3-userdata-shm.mount: Deactivated successfully.
Oct 02 08:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-327ede22813a349afb9df738f72940c2bb921ee83034aa092f6f10580f45e456-merged.mount: Deactivated successfully.
Oct 02 08:30:43 compute-0 podman[321563]: 2025-10-02 08:30:43.070185118 +0000 UTC m=+0.129768268 container cleanup cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:30:43 compute-0 systemd[1]: libpod-conmon-cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3.scope: Deactivated successfully.
Oct 02 08:30:43 compute-0 podman[321638]: 2025-10-02 08:30:43.168408995 +0000 UTC m=+0.058368641 container remove cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.180 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cf56dd-ac9f-438c-9852-b46d546f827c]: (4, ('Thu Oct  2 08:30:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 (cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3)\ncf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3\nThu Oct  2 08:30:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 (cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3)\ncf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[97b7ba86-3027-460b-9070-cc298bf8e678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.188 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap688aa430-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:43 compute-0 kernel: tap688aa430-70: left promiscuous mode
Oct 02 08:30:43 compute-0 podman[321652]: 2025-10-02 08:30:43.209579323 +0000 UTC m=+0.069866509 container create 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.225 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6d1563-04aa-4e39-84f4-ffa04b60d83d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.255 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75487c4b-c519-4b6d-b2a5-878cffd2dd49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[74ee4902-7ad6-419a-9f14-3d0dfd02530f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:43 compute-0 podman[321652]: 2025-10-02 08:30:43.180936205 +0000 UTC m=+0.041223431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:30:43 compute-0 systemd[1]: Started libpod-conmon-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope.
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.281 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd630b2-fea2-4d17-b910-17f0c3170ee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471476, 'reachable_time': 36929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321676, 'error': None, 'target': 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d688aa430\x2d74b8\x2d4500\x2daf80\x2dd797f0ec4310.mount: Deactivated successfully.
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.284 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:30:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.285 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ec0e0d-362b-4d6f-a92a-4289263e0446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:30:43 compute-0 podman[321652]: 2025-10-02 08:30:43.341165556 +0000 UTC m=+0.201452762 container init 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:30:43 compute-0 podman[321652]: 2025-10-02 08:30:43.34997257 +0000 UTC m=+0.210259756 container start 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 08:30:43 compute-0 podman[321652]: 2025-10-02 08:30:43.354339345 +0000 UTC m=+0.214626551 container attach 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.464 2 INFO nova.virt.libvirt.driver [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Deleting instance files /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_del
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.465 2 INFO nova.virt.libvirt.driver [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Deletion of /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_del complete
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.519 2 INFO nova.compute.manager [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.520 2 DEBUG oslo.service.loopingcall [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.521 2 DEBUG nova.compute.manager [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:30:43 compute-0 nova_compute[260603]: 2025-10-02 08:30:43.521 2 DEBUG nova.network.neutron [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.217 2 DEBUG nova.network.neutron [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.242 2 INFO nova.compute.manager [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Took 0.72 seconds to deallocate network for instance.
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.297 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.299 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.320 2 DEBUG nova.compute.manager [req-a2a15143-9e97-4fe2-bc77-275ab77c45b0 req-6c2215cf-e4a8-4ced-9585-2addd2173fe4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received event network-vif-deleted-25be6c2d-0038-4133-8d21-845a3220d33e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.374 2 DEBUG oslo_concurrency.processutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]: {
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "osd_id": 2,
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "type": "bluestore"
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:     },
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "osd_id": 1,
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "type": "bluestore"
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:     },
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "osd_id": 0,
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:         "type": "bluestore"
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]:     }
Oct 02 08:30:44 compute-0 modest_dubinsky[321677]: }
Oct 02 08:30:44 compute-0 systemd[1]: libpod-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope: Deactivated successfully.
Oct 02 08:30:44 compute-0 conmon[321677]: conmon 53815ef94f8085ed38d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope/container/memory.events
Oct 02 08:30:44 compute-0 systemd[1]: libpod-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope: Consumed 1.134s CPU time.
Oct 02 08:30:44 compute-0 podman[321652]: 2025-10-02 08:30:44.480433667 +0000 UTC m=+1.340720863 container died 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35-merged.mount: Deactivated successfully.
Oct 02 08:30:44 compute-0 ceph-mon[74477]: pgmap v1495: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 180 op/s
Oct 02 08:30:44 compute-0 podman[321652]: 2025-10-02 08:30:44.551430029 +0000 UTC m=+1.411717215 container remove 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:30:44 compute-0 systemd[1]: libpod-conmon-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope: Deactivated successfully.
Oct 02 08:30:44 compute-0 sudo[321459]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:30:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:30:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 05bef98e-e718-4019-99eb-f7a32da2ad06 does not exist
Oct 02 08:30:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev dde8e290-268c-4bf2-a03f-fbf75d4771e0 does not exist
Oct 02 08:30:44 compute-0 sudo[321743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:30:44 compute-0 sudo[321743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:44 compute-0 sudo[321743]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:44 compute-0 sudo[321768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:30:44 compute-0 sudo[321768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:30:44 compute-0 sudo[321768]: pam_unix(sudo:session): session closed for user root
Oct 02 08:30:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191673915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.916 2 DEBUG oslo_concurrency.processutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.924 2 DEBUG nova.compute.provider_tree [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.942 2 DEBUG nova.scheduler.client.report [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.961 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:44 compute-0 nova_compute[260603]: 2025-10-02 08:30:44.987 2 INFO nova.scheduler.client.report [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Deleted allocations for instance 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5
Oct 02 08:30:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 118 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 668 KiB/s wr, 151 op/s
Oct 02 08:30:45 compute-0 nova_compute[260603]: 2025-10-02 08:30:45.054 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:30:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4191673915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:46 compute-0 podman[321796]: 2025-10-02 08:30:46.04392259 +0000 UTC m=+0.092013695 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 08:30:46 compute-0 podman[321795]: 2025-10-02 08:30:46.120547867 +0000 UTC m=+0.167605441 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:30:46 compute-0 ceph-mon[74477]: pgmap v1496: 305 pgs: 305 active+clean; 118 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 668 KiB/s wr, 151 op/s
Oct 02 08:30:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 118 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Oct 02 08:30:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:47 compute-0 nova_compute[260603]: 2025-10-02 08:30:47.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:48 compute-0 nova_compute[260603]: 2025-10-02 08:30:48.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:48 compute-0 ceph-mon[74477]: pgmap v1497: 305 pgs: 305 active+clean; 118 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Oct 02 08:30:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 88 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 174 op/s
Oct 02 08:30:49 compute-0 ovn_controller[152344]: 2025-10-02T08:30:49Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:87:db 10.100.0.3
Oct 02 08:30:49 compute-0 ovn_controller[152344]: 2025-10-02T08:30:49Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:87:db 10.100.0.3
Oct 02 08:30:50 compute-0 ovn_controller[152344]: 2025-10-02T08:30:50Z|00535|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:30:50 compute-0 nova_compute[260603]: 2025-10-02 08:30:50.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:50 compute-0 ceph-mon[74477]: pgmap v1498: 305 pgs: 305 active+clean; 88 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 174 op/s
Oct 02 08:30:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 88 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.2 KiB/s wr, 139 op/s
Oct 02 08:30:52 compute-0 podman[321840]: 2025-10-02 08:30:52.020480055 +0000 UTC m=+0.083729649 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true, config_id=iscsid)
Oct 02 08:30:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:52 compute-0 nova_compute[260603]: 2025-10-02 08:30:52.125 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:30:52 compute-0 ceph-mon[74477]: pgmap v1499: 305 pgs: 305 active+clean; 88 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.2 KiB/s wr, 139 op/s
Oct 02 08:30:52 compute-0 nova_compute[260603]: 2025-10-02 08:30:52.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 107 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.3 MiB/s wr, 186 op/s
Oct 02 08:30:53 compute-0 nova_compute[260603]: 2025-10-02 08:30:53.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:54 compute-0 kernel: tap4298d267-ed (unregistering): left promiscuous mode
Oct 02 08:30:54 compute-0 NetworkManager[45129]: <info>  [1759393854.4478] device (tap4298d267-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:30:54 compute-0 nova_compute[260603]: 2025-10-02 08:30:54.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:54 compute-0 ovn_controller[152344]: 2025-10-02T08:30:54Z|00536|binding|INFO|Releasing lport 4298d267-ede8-417b-9e26-a2533908497f from this chassis (sb_readonly=0)
Oct 02 08:30:54 compute-0 ovn_controller[152344]: 2025-10-02T08:30:54Z|00537|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f down in Southbound
Oct 02 08:30:54 compute-0 ovn_controller[152344]: 2025-10-02T08:30:54Z|00538|binding|INFO|Removing iface tap4298d267-ed ovn-installed in OVS
Oct 02 08:30:54 compute-0 nova_compute[260603]: 2025-10-02 08:30:54.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.468 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:87:db 10.100.0.3'], port_security=['fa:16:3e:50:87:db 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4298d267-ede8-417b-9e26-a2533908497f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.469 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4298d267-ede8-417b-9e26-a2533908497f in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.470 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8df0af1-1767-419a-8500-c28fbf45ae4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.471 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72267fe3-4f79-4985-b842-f9fc565746e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.472 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace which is not needed anymore
Oct 02 08:30:54 compute-0 nova_compute[260603]: 2025-10-02 08:30:54.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:54 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct 02 08:30:54 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Consumed 13.241s CPU time.
Oct 02 08:30:54 compute-0 systemd-machined[214636]: Machine qemu-66-instance-0000003b terminated.
Oct 02 08:30:54 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [NOTICE]   (321092) : haproxy version is 2.8.14-c23fe91
Oct 02 08:30:54 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [NOTICE]   (321092) : path to executable is /usr/sbin/haproxy
Oct 02 08:30:54 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [WARNING]  (321092) : Exiting Master process...
Oct 02 08:30:54 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [ALERT]    (321092) : Current worker (321096) exited with code 143 (Terminated)
Oct 02 08:30:54 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [WARNING]  (321092) : All workers exited. Exiting... (0)
Oct 02 08:30:54 compute-0 systemd[1]: libpod-720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e.scope: Deactivated successfully.
Oct 02 08:30:54 compute-0 podman[321885]: 2025-10-02 08:30:54.612638618 +0000 UTC m=+0.048266329 container died 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e-userdata-shm.mount: Deactivated successfully.
Oct 02 08:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-11063ab83b9a9c00fd7baf0f7f968c64e020d49f1d6528b03e58b1a6ea92a56d-merged.mount: Deactivated successfully.
Oct 02 08:30:54 compute-0 podman[321885]: 2025-10-02 08:30:54.657542641 +0000 UTC m=+0.093170342 container cleanup 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 08:30:54 compute-0 systemd[1]: libpod-conmon-720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e.scope: Deactivated successfully.
Oct 02 08:30:54 compute-0 ceph-mon[74477]: pgmap v1500: 305 pgs: 305 active+clean; 107 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.3 MiB/s wr, 186 op/s
Oct 02 08:30:54 compute-0 podman[321916]: 2025-10-02 08:30:54.755309885 +0000 UTC m=+0.064894485 container remove 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.762 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f202f875-cd52-41be-b5da-2b4d300f1c3e]: (4, ('Thu Oct  2 08:30:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e)\n720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e\nThu Oct  2 08:30:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e)\n720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[006692f7-9d43-4719-8251-b3dfaed7f7bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.765 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:54 compute-0 nova_compute[260603]: 2025-10-02 08:30:54.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:54 compute-0 kernel: tapf8df0af1-10: left promiscuous mode
Oct 02 08:30:54 compute-0 nova_compute[260603]: 2025-10-02 08:30:54.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.806 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba934cb2-3057-4846-9a51-d7e88ef3be72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.835 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[796c070c-6089-4bc1-9064-5b6d0ab7e0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.836 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9778050f-8acf-4f7c-a724-d147b515c554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.855 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[925e8405-6588-45a4-a032-e8cac74b0853]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471586, 'reachable_time': 30620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321946, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.857 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:30:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.857 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b7809e-a41f-4cd1-9d4d-708bd06aff5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:54 compute-0 systemd[1]: run-netns-ovnmeta\x2df8df0af1\x2d1767\x2d419a\x2d8500\x2dc28fbf45ae4b.mount: Deactivated successfully.
Oct 02 08:30:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 121 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 927 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.143 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance shutdown successfully after 13 seconds.
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.148 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance destroyed successfully.
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.152 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance destroyed successfully.
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.152 2 DEBUG nova.virt.libvirt.vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},t
ags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:41Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.153 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.153 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.153 2 DEBUG os_vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4298d267-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.209 2 INFO os_vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed')
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.238 2 DEBUG nova.compute.manager [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-unplugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.239 2 DEBUG oslo_concurrency.lockutils [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.240 2 DEBUG oslo_concurrency.lockutils [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.240 2 DEBUG oslo_concurrency.lockutils [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.241 2 DEBUG nova.compute.manager [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] No waiting events found dispatching network-vif-unplugged-4298d267-ede8-417b-9e26-a2533908497f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.241 2 WARNING nova.compute.manager [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received unexpected event network-vif-unplugged-4298d267-ede8-417b-9e26-a2533908497f for instance with vm_state active and task_state rebuilding.
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.579 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.579 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.589 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.589 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.599 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.603 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.677 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deleting instance files /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_del
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.678 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deletion of /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_del complete
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.705 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.706 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.707 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.717 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.717 2 INFO nova.compute.claims [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.813 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.814 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating image(s)
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.841 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.871 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.899 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:55 compute-0 nova_compute[260603]: 2025-10-02 08:30:55.904 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.004 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.005 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.005 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.006 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:56 compute-0 podman[322020]: 2025-10-02 08:30:56.035078454 +0000 UTC m=+0.089185699 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.040 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.048 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.089 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.424 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.489 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:30:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777630118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.561 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.568 2 DEBUG nova.compute.provider_tree [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.585 2 DEBUG nova.scheduler.client.report [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.636 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.637 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.640 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.646 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.647 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Ensure instance console log exists: /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.648 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.648 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.649 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.652 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start _get_guest_xml network_info=[{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.655 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.656 2 INFO nova.compute.claims [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.662 2 WARNING nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.667 2 DEBUG nova.virt.libvirt.host [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.668 2 DEBUG nova.virt.libvirt.host [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.671 2 DEBUG nova.virt.libvirt.host [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.671 2 DEBUG nova.virt.libvirt.host [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.672 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.672 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.673 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.673 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.674 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.674 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.674 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.675 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.675 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.676 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.676 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.676 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.677 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'vcpu_model' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:56 compute-0 ceph-mon[74477]: pgmap v1501: 305 pgs: 305 active+clean; 121 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 927 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 08:30:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3777630118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.702 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.740 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.740 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.766 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.781 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.855 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.894 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.896 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.897 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating image(s)
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.916 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.936 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.957 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:56 compute-0 nova_compute[260603]: 2025-10-02 08:30:56.960 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 121 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.004 2 DEBUG nova.policy [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9020ed38b31d46f88625374b2a76aef6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.041 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.042 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.043 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.043 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:57.056 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:57.058 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.066 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.069 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49e7e668-b62c-4e35-a4e2-bba540000961_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:30:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153148737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.125 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.149 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.153 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.312 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49e7e668-b62c-4e35-a4e2-bba540000961_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:30:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469259123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.356 2 DEBUG nova.compute.manager [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.357 2 DEBUG oslo_concurrency.lockutils [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.357 2 DEBUG oslo_concurrency.lockutils [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.357 2 DEBUG oslo_concurrency.lockutils [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.357 2 DEBUG nova.compute.manager [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] No waiting events found dispatching network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.358 2 WARNING nova.compute.manager [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received unexpected event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.358 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.392 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] resizing rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.421 2 DEBUG nova.compute.provider_tree [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.441 2 DEBUG nova.scheduler.client.report [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.473 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.474 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.481 2 DEBUG nova.objects.instance [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'migration_context' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.501 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.501 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Ensure instance console log exists: /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.502 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.502 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.502 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.519 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.519 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.534 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.555 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:30:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:30:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644203822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.642 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.644 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.645 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Creating image(s)
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.666 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.689 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/153148737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/469259123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:30:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/644203822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.713 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.718 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.760 2 DEBUG nova.policy [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'db9a3b1e6d93495f8c849658ffc4e535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.763 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.766 2 DEBUG nova.virt.libvirt.vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:55Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.766 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.768 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.772 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <uuid>f8f36f36-817a-4e64-8c57-c211cfc7b0ba</uuid>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <name>instance-0000003b</name>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-949284758</nova:name>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:30:56</nova:creationTime>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <nova:user uuid="116b114f14f84e4cbd6cc966e29d82e7">tempest-ServerDiskConfigTestJSON-1277806880-project-member</nova:user>
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <nova:project uuid="bce7493292bb47cfb7168bca89f78f4a">tempest-ServerDiskConfigTestJSON-1277806880</nova:project>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <nova:port uuid="4298d267-ede8-417b-9e26-a2533908497f">
Oct 02 08:30:57 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <system>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <entry name="serial">f8f36f36-817a-4e64-8c57-c211cfc7b0ba</entry>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <entry name="uuid">f8f36f36-817a-4e64-8c57-c211cfc7b0ba</entry>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </system>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <os>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   </os>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <features>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   </features>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk">
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       </source>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config">
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       </source>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:30:57 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:50:87:db"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <target dev="tap4298d267-ed"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/console.log" append="off"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <video>
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </video>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:30:57 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:30:57 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:30:57 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:30:57 compute-0 nova_compute[260603]: </domain>
Oct 02 08:30:57 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.773 2 DEBUG nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Preparing to wait for external event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.774 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.775 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.775 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.776 2 DEBUG nova.virt.libvirt.vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:55Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.777 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.778 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.778 2 DEBUG os_vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.781 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4298d267-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4298d267-ed, col_values=(('external_ids', {'iface-id': '4298d267-ede8-417b-9e26-a2533908497f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:87:db', 'vm-uuid': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:30:57 compute-0 NetworkManager[45129]: <info>  [1759393857.8165] manager: (tap4298d267-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.818 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.819 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.820 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.820 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.844 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.850 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 501f8cba-892f-489d-81b5-abb8669f49eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.902 2 INFO os_vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed')
Oct 02 08:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.978 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.979 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.980 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No VIF found with MAC fa:16:3e:50:87:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:30:57 compute-0 nova_compute[260603]: 2025-10-02 08:30:57.980 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Using config drive
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.014 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.025 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393842.974334, 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.025 2 INFO nova.compute.manager [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] VM Stopped (Lifecycle Event)
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.054 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'ec2_ids' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.061 2 DEBUG nova.compute.manager [None req-acbd6688-02d9-4129-ad6b-0ca392ab7a37 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.091 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'keypairs' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.166 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 501f8cba-892f-489d-81b5-abb8669f49eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.228 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Successfully created port: 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.236 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] resizing rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.328 2 DEBUG nova.objects.instance [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 501f8cba-892f-489d-81b5-abb8669f49eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.345 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.346 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Ensure instance console log exists: /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.346 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.346 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.347 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:30:58 compute-0 ceph-mon[74477]: pgmap v1502: 305 pgs: 305 active+clean; 121 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 08:30:58 compute-0 nova_compute[260603]: 2025-10-02 08:30:58.774 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully created port: e19eb16b-f042-4e4d-922b-7057ad6ebb1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:30:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 141 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 6.1 MiB/s wr, 177 op/s
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.126 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating config drive at /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.135 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8es79rk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.291 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8es79rk" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.330 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.335 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.595 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.596 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deleting local config drive /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config because it was imported into RBD.
Oct 02 08:30:59 compute-0 kernel: tap4298d267-ed: entered promiscuous mode
Oct 02 08:30:59 compute-0 ovn_controller[152344]: 2025-10-02T08:30:59Z|00539|binding|INFO|Claiming lport 4298d267-ede8-417b-9e26-a2533908497f for this chassis.
Oct 02 08:30:59 compute-0 ovn_controller[152344]: 2025-10-02T08:30:59Z|00540|binding|INFO|4298d267-ede8-417b-9e26-a2533908497f: Claiming fa:16:3e:50:87:db 10.100.0.3
Oct 02 08:30:59 compute-0 NetworkManager[45129]: <info>  [1759393859.6926] manager: (tap4298d267-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.703 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:87:db 10.100.0.3'], port_security=['fa:16:3e:50:87:db 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4298d267-ede8-417b-9e26-a2533908497f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.705 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4298d267-ede8-417b-9e26-a2533908497f in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b bound to our chassis
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.708 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:30:59 compute-0 ovn_controller[152344]: 2025-10-02T08:30:59Z|00541|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f ovn-installed in OVS
Oct 02 08:30:59 compute-0 ovn_controller[152344]: 2025-10-02T08:30:59Z|00542|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f up in Southbound
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.736 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fcac67d9-5c17-487a-913a-c51229c3e24c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.737 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8df0af1-11 in ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.742 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8df0af1-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.743 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e23db3a9-f275-485b-b73f-566970156f7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 systemd-udevd[322666]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[772bb1d2-98fc-4766-8e34-08ab2589630b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:30:59 compute-0 systemd-machined[214636]: New machine qemu-67-instance-0000003b.
Oct 02 08:30:59 compute-0 NetworkManager[45129]: <info>  [1759393859.7638] device (tap4298d267-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:30:59 compute-0 NetworkManager[45129]: <info>  [1759393859.7647] device (tap4298d267-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.767 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[03b84f7a-c2d8-4044-9738-1f0911dda344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.796 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30895c0b-16d6-41e1-bc81-0b65f075516d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 nova_compute[260603]: 2025-10-02 08:30:59.802 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully created port: e8f18c99-1964-43d6-a955-7b5064c53b3a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.843 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3f109b-f5d9-41e7-8e4e-bb6141e3abb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.850 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91392d24-0b16-4165-9a2e-d310adef5460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 NetworkManager[45129]: <info>  [1759393859.8516] manager: (tapf8df0af1-10): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Oct 02 08:30:59 compute-0 systemd-udevd[322669]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.902 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[be34082f-cc4e-4892-9c8c-7507f9f82cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.906 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[551b291f-dcc4-4864-9c47-a57c42dc90fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 NetworkManager[45129]: <info>  [1759393859.9347] device (tapf8df0af1-10): carrier: link connected
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.943 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d05d5267-6216-4f82-ad53-19d817afb547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.964 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5bd29e-2f3f-4321-8c66-9550ef727ec9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473853, 'reachable_time': 22677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322698, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:30:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.981 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9044d75b-3c2f-4e32-8058-332a146b0a00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:6ddc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473853, 'tstamp': 473853}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322699, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.000 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bee5e8b6-7cec-4876-8605-0275b0ebb724]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473853, 'reachable_time': 22677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322700, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[80ac5f32-97c4-4d38-8ebe-e8dd1f6f0a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.104 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[379497a1-0cd5-4cd3-b048-1eb287c6cd2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.106 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.106 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.107 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8df0af1-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:00 compute-0 NetworkManager[45129]: <info>  [1759393860.1105] manager: (tapf8df0af1-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Oct 02 08:31:00 compute-0 kernel: tapf8df0af1-10: entered promiscuous mode
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.114 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8df0af1-10, col_values=(('external_ids', {'iface-id': '1405e724-f2f6-4a95-8848-550131e62910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:00 compute-0 ovn_controller[152344]: 2025-10-02T08:31:00Z|00543|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.116 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.117 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b18eecc0-d590-4e91-9a1e-12ba52842387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.119 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.120 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'env', 'PROCESS_TAG=haproxy-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8df0af1-1767-419a-8500-c28fbf45ae4b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.225 2 DEBUG nova.compute.manager [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.226 2 DEBUG oslo_concurrency.lockutils [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.226 2 DEBUG oslo_concurrency.lockutils [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.226 2 DEBUG oslo_concurrency.lockutils [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.226 2 DEBUG nova.compute.manager [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Processing event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.501 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Successfully updated port: 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.518 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.518 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.519 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.520 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully created port: 35092937-9590-42d1-a022-549b740da3c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:31:00 compute-0 podman[322774]: 2025-10-02 08:31:00.557267802 +0000 UTC m=+0.062211841 container create 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:31:00 compute-0 systemd[1]: Started libpod-conmon-32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d.scope.
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.606 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for f8f36f36-817a-4e64-8c57-c211cfc7b0ba due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.607 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393860.6058617, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.607 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Started (Lifecycle Event)
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.610 2 DEBUG nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.613 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.619 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance spawned successfully.
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.619 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:00 compute-0 podman[322774]: 2025-10-02 08:31:00.525841796 +0000 UTC m=+0.030785865 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10f76ee94a6c0000dab8a51cc1abe21546c7714b833add5e7deed29110e43c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.635 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:00 compute-0 podman[322774]: 2025-10-02 08:31:00.641951369 +0000 UTC m=+0.146895448 container init 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.642 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.646 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.646 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.647 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.647 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.648 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.648 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:00 compute-0 podman[322774]: 2025-10-02 08:31:00.654397216 +0000 UTC m=+0.159341255 container start 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.669 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.670 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393860.6059682, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.670 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Paused (Lifecycle Event)
Oct 02 08:31:00 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [NOTICE]   (322793) : New worker (322795) forked
Oct 02 08:31:00 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [NOTICE]   (322793) : Loading success.
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.692 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.696 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393860.6121714, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.697 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Resumed (Lifecycle Event)
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.700 2 DEBUG nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:00 compute-0 ceph-mon[74477]: pgmap v1503: 305 pgs: 305 active+clean; 141 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 6.1 MiB/s wr, 177 op/s
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.730 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.732 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.749 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.756 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.770 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.771 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.771 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:31:00 compute-0 nova_compute[260603]: 2025-10-02 08:31:00.831 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 141 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 6.1 MiB/s wr, 151 op/s
Oct 02 08:31:01 compute-0 nova_compute[260603]: 2025-10-02 08:31:01.201 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully updated port: e19eb16b-f042-4e4d-922b-7057ad6ebb1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:01 compute-0 nova_compute[260603]: 2025-10-02 08:31:01.326 2 DEBUG nova.compute.manager [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-changed-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:01 compute-0 nova_compute[260603]: 2025-10-02 08:31:01.327 2 DEBUG nova.compute.manager [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing instance network info cache due to event network-changed-e19eb16b-f042-4e4d-922b-7057ad6ebb1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:01 compute-0 nova_compute[260603]: 2025-10-02 08:31:01.328 2 DEBUG oslo_concurrency.lockutils [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:01 compute-0 nova_compute[260603]: 2025-10-02 08:31:01.328 2 DEBUG oslo_concurrency.lockutils [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:01 compute-0 nova_compute[260603]: 2025-10-02 08:31:01.328 2 DEBUG nova.network.neutron [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing network info cache for port e19eb16b-f042-4e4d-922b-7057ad6ebb1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:01 compute-0 nova_compute[260603]: 2025-10-02 08:31:01.675 2 DEBUG nova.network.neutron [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.213 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully updated port: e8f18c99-1964-43d6-a955-7b5064c53b3a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.654 2 DEBUG nova.network.neutron [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.678 2 DEBUG oslo_concurrency.lockutils [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:02 compute-0 ceph-mon[74477]: pgmap v1504: 305 pgs: 305 active+clean; 141 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 6.1 MiB/s wr, 151 op/s
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.742 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.776 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.776 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance network_info: |[{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.780 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start _get_guest_xml network_info=[{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.785 2 WARNING nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.791 2 DEBUG nova.virt.libvirt.host [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.792 2 DEBUG nova.virt.libvirt.host [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.795 2 DEBUG nova.virt.libvirt.host [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.795 2 DEBUG nova.virt.libvirt.host [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.796 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.796 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.796 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.796 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.798 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.798 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.798 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.800 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.910 2 DEBUG nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.910 2 DEBUG nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing instance network info cache due to event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.911 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.911 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:02 compute-0 nova_compute[260603]: 2025-10-02 08:31:02.911 2 DEBUG nova.network.neutron [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 180 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.5 MiB/s wr, 227 op/s
Oct 02 08:31:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335375807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.276 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.307 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.311 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.429 2 DEBUG nova.compute.manager [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-changed-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.429 2 DEBUG nova.compute.manager [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing instance network info cache due to event network-changed-e8f18c99-1964-43d6-a955-7b5064c53b3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.430 2 DEBUG oslo_concurrency.lockutils [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.430 2 DEBUG oslo_concurrency.lockutils [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.430 2 DEBUG nova.network.neutron [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing network info cache for port e8f18c99-1964-43d6-a955-7b5064c53b3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.647 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.650 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.650 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.651 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.651 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.654 2 INFO nova.compute.manager [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Terminating instance
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.656 2 DEBUG nova.compute.manager [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.671 2 DEBUG nova.network.neutron [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:03 compute-0 kernel: tap4298d267-ed (unregistering): left promiscuous mode
Oct 02 08:31:03 compute-0 NetworkManager[45129]: <info>  [1759393863.7062] device (tap4298d267-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:03 compute-0 ovn_controller[152344]: 2025-10-02T08:31:03Z|00544|binding|INFO|Releasing lport 4298d267-ede8-417b-9e26-a2533908497f from this chassis (sb_readonly=0)
Oct 02 08:31:03 compute-0 ovn_controller[152344]: 2025-10-02T08:31:03Z|00545|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f down in Southbound
Oct 02 08:31:03 compute-0 ovn_controller[152344]: 2025-10-02T08:31:03Z|00546|binding|INFO|Removing iface tap4298d267-ed ovn-installed in OVS
Oct 02 08:31:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.723 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:87:db 10.100.0.3'], port_security=['fa:16:3e:50:87:db 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4298d267-ede8-417b-9e26-a2533908497f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.727 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4298d267-ede8-417b-9e26-a2533908497f in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis
Oct 02 08:31:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.730 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8df0af1-1767-419a-8500-c28fbf45ae4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:31:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.732 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3afa9b-ca94-455a-9add-fb0441c8c894]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.733 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace which is not needed anymore
Oct 02 08:31:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856367483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2335375807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:03 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:03 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Consumed 3.816s CPU time.
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.768 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.769 2 DEBUG nova.virt.libvirt.vif [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1525105387',display_name='tempest-ServerActionsTestOtherB-server-1525105387',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1525105387',id=61,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmeM0VXrLDuUgkvEcAKZLUawTz1v6B+3eBASOGcaRFlmF3ztxSFLOGPfQ5nMbtxqx6ZDoxMiinSb16iJLQrCBe+IsSQFaYXfW47SOpCqDkfThajOwmApFonqiBjUHNfHQ==',key_name='tempest-keypair-1019763483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-fypdkou3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9020ed38b31d46f88625374b2a76aef6',uuid=49e7e668-b62c-4e35-a4e2-bba540000961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.770 2 DEBUG nova.network.os_vif_util [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:03 compute-0 systemd-machined[214636]: Machine qemu-67-instance-0000003b terminated.
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.771 2 DEBUG nova.network.os_vif_util [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.773 2 DEBUG nova.objects.instance [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'pci_devices' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.791 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <uuid>49e7e668-b62c-4e35-a4e2-bba540000961</uuid>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <name>instance-0000003d</name>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestOtherB-server-1525105387</nova:name>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:02</nova:creationTime>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <nova:user uuid="9020ed38b31d46f88625374b2a76aef6">tempest-ServerActionsTestOtherB-1644249004-project-member</nova:user>
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <nova:project uuid="eda0caa41e4740148ab99d5ebf9e27ba">tempest-ServerActionsTestOtherB-1644249004</nova:project>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <nova:port uuid="37e9c33f-0ff9-4138-a7b5-989ba3c016a0">
Oct 02 08:31:03 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <entry name="serial">49e7e668-b62c-4e35-a4e2-bba540000961</entry>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <entry name="uuid">49e7e668-b62c-4e35-a4e2-bba540000961</entry>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk">
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk.config">
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:03 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:19:cc:f7"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <target dev="tap37e9c33f-0f"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/console.log" append="off"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:03 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:03 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:03 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:03 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:03 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.792 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Preparing to wait for external event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.792 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.792 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.793 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.794 2 DEBUG nova.virt.libvirt.vif [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1525105387',display_name='tempest-ServerActionsTestOtherB-server-1525105387',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1525105387',id=61,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmeM0VXrLDuUgkvEcAKZLUawTz1v6B+3eBASOGcaRFlmF3ztxSFLOGPfQ5nMbtxqx6ZDoxMiinSb16iJLQrCBe+IsSQFaYXfW47SOpCqDkfThajOwmApFonqiBjUHNfHQ==',key_name='tempest-keypair-1019763483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-fypdkou3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9020ed38b31d46f88625374b2a76aef6',uuid=49e7e668-b62c-4e35-a4e2-bba540000961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.794 2 DEBUG nova.network.os_vif_util [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.795 2 DEBUG nova.network.os_vif_util [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.795 2 DEBUG os_vif [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.797 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.797 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37e9c33f-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37e9c33f-0f, col_values=(('external_ids', {'iface-id': '37e9c33f-0ff9-4138-a7b5-989ba3c016a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:cc:f7', 'vm-uuid': '49e7e668-b62c-4e35-a4e2-bba540000961'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:03 compute-0 NetworkManager[45129]: <info>  [1759393863.8056] manager: (tap37e9c33f-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.821 2 INFO os_vif [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f')
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.865 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully updated port: 35092937-9590-42d1-a022-549b740da3c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.884 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.904 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:03 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [NOTICE]   (322793) : haproxy version is 2.8.14-c23fe91
Oct 02 08:31:03 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [NOTICE]   (322793) : path to executable is /usr/sbin/haproxy
Oct 02 08:31:03 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [WARNING]  (322793) : Exiting Master process...
Oct 02 08:31:03 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [WARNING]  (322793) : Exiting Master process...
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.904 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.907 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No VIF found with MAC fa:16:3e:19:cc:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.908 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Using config drive
Oct 02 08:31:03 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [ALERT]    (322793) : Current worker (322795) exited with code 143 (Terminated)
Oct 02 08:31:03 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [WARNING]  (322793) : All workers exited. Exiting... (0)
Oct 02 08:31:03 compute-0 systemd[1]: libpod-32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d.scope: Deactivated successfully.
Oct 02 08:31:03 compute-0 podman[322893]: 2025-10-02 08:31:03.915958299 +0000 UTC m=+0.057502416 container stop 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.946 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:03 compute-0 podman[322893]: 2025-10-02 08:31:03.948448167 +0000 UTC m=+0.089992274 container died 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.961 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance destroyed successfully.
Oct 02 08:31:03 compute-0 nova_compute[260603]: 2025-10-02 08:31:03.961 2 DEBUG nova.objects.instance [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'resources' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d-userdata-shm.mount: Deactivated successfully.
Oct 02 08:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-d10f76ee94a6c0000dab8a51cc1abe21546c7714b833add5e7deed29110e43c9-merged.mount: Deactivated successfully.
Oct 02 08:31:04 compute-0 podman[322893]: 2025-10-02 08:31:04.001353478 +0000 UTC m=+0.142897665 container cleanup 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:31:04 compute-0 systemd[1]: libpod-conmon-32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d.scope: Deactivated successfully.
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.029 2 DEBUG nova.network.neutron [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.037 2 DEBUG nova.virt.libvirt.vif [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:00Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.038 2 DEBUG nova.network.os_vif_util [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.038 2 DEBUG nova.network.os_vif_util [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.038 2 DEBUG os_vif [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4298d267-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.050 2 INFO os_vif [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed')
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.065 2 DEBUG oslo_concurrency.lockutils [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.066 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquired lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.066 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:31:04 compute-0 podman[322954]: 2025-10-02 08:31:04.100427663 +0000 UTC m=+0.061466819 container remove 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.108 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fcacdd73-2636-4040-95d9-3a9bdf7852c7]: (4, ('Thu Oct  2 08:31:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d)\n32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d\nThu Oct  2 08:31:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d)\n32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.110 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08c74384-2432-4126-9a84-e130c5dc506e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.111 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:04 compute-0 kernel: tapf8df0af1-10: left promiscuous mode
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c19d63f-6020-4aff-8ff4-f812053d48c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.163 2 DEBUG nova.network.neutron [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updated VIF entry in instance network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.164 2 DEBUG nova.network.neutron [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.186 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.187 2 DEBUG nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.187 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.187 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.187 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.188 2 DEBUG nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] No waiting events found dispatching network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.188 2 WARNING nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received unexpected event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f for instance with vm_state active and task_state None.
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.188 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd03c606-6a5c-4980-a75a-d24ad1c36aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.189 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11196ebf-9165-44db-b12a-95e813680119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.217 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da406a0a-59d8-4519-9ed2-72aed500bf5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473843, 'reachable_time': 32128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322992, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.220 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.220 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9deea0ab-fe13-4984-ab39-30256039d530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 systemd[1]: run-netns-ovnmeta\x2df8df0af1\x2d1767\x2d419a\x2d8500\x2dc28fbf45ae4b.mount: Deactivated successfully.
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.324 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.404 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating config drive at /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.410 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnanibgs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.505 2 INFO nova.virt.libvirt.driver [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deleting instance files /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_del
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.506 2 INFO nova.virt.libvirt.driver [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deletion of /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_del complete
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.567 2 INFO nova.compute.manager [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Took 0.91 seconds to destroy the instance on the hypervisor.
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.568 2 DEBUG oslo.service.loopingcall [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.569 2 DEBUG nova.compute.manager [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.569 2 DEBUG nova.network.neutron [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.574 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnanibgs" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.613 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.618 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:04 compute-0 ceph-mon[74477]: pgmap v1505: 305 pgs: 305 active+clean; 180 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.5 MiB/s wr, 227 op/s
Oct 02 08:31:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2856367483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.844 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.845 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deleting local config drive /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config because it was imported into RBD.
Oct 02 08:31:04 compute-0 kernel: tap37e9c33f-0f: entered promiscuous mode
Oct 02 08:31:04 compute-0 systemd-udevd[322870]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:04 compute-0 NetworkManager[45129]: <info>  [1759393864.9344] manager: (tap37e9c33f-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Oct 02 08:31:04 compute-0 nova_compute[260603]: 2025-10-02 08:31:04.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:04 compute-0 ovn_controller[152344]: 2025-10-02T08:31:04Z|00547|binding|INFO|Claiming lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for this chassis.
Oct 02 08:31:04 compute-0 ovn_controller[152344]: 2025-10-02T08:31:04Z|00548|binding|INFO|37e9c33f-0ff9-4138-a7b5-989ba3c016a0: Claiming fa:16:3e:19:cc:f7 10.100.0.9
Oct 02 08:31:04 compute-0 NetworkManager[45129]: <info>  [1759393864.9470] device (tap37e9c33f-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:04 compute-0 NetworkManager[45129]: <info>  [1759393864.9515] device (tap37e9c33f-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.950 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cc:f7 10.100.0.9'], port_security=['fa:16:3e:19:cc:f7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '49e7e668-b62c-4e35-a4e2-bba540000961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd09bd0a7-8be4-487a-8b24-ba3d4c0378f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=37e9c33f-0ff9-4138-a7b5-989ba3c016a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.953 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 bound to our chassis
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.957 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.969 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[024eb167-2ff0-445f-84f7-b645fcbffed4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.970 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef30d863-a1 in ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.974 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef30d863-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.974 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c141cd5-db37-4dca-a171-357dd6771cae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.975 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ef506798-7dd1-4385-af46-35831517ec7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:04 compute-0 systemd-machined[214636]: New machine qemu-68-instance-0000003d.
Oct 02 08:31:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.990 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9318e707-d45c-45d5-984d-329f332e6f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 180 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 200 op/s
Oct 02 08:31:05 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003d.
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.027 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb4927b-1d42-48a9-89e4-1c2e8c193ad8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:05 compute-0 ovn_controller[152344]: 2025-10-02T08:31:05Z|00549|binding|INFO|Setting lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 ovn-installed in OVS
Oct 02 08:31:05 compute-0 ovn_controller[152344]: 2025-10-02T08:31:05Z|00550|binding|INFO|Setting lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 up in Southbound
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.084 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[90baf83a-f980-4fcf-be0c-fa253f630eb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.110 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4607e4-87c8-4503-b590-8ded41214557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 NetworkManager[45129]: <info>  [1759393865.1173] manager: (tapef30d863-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.183 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[45ed2812-3bd9-4da6-970e-dedfcf365ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.191 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8b355f-4640-43e8-81f1-d7e34c760bff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 NetworkManager[45129]: <info>  [1759393865.2214] device (tapef30d863-a0): carrier: link connected
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.225 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f28985d-f4f7-4699-a65c-5df55915a133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.259 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d791496d-bb9e-4ec2-b424-1b7395ee3546]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 21085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323077, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.283 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1db327aa-cb02-4d5f-a1ba-dd43d0f0a464]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:1bde'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474382, 'tstamp': 474382}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323078, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.294 2 DEBUG nova.network.neutron [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.307 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[046a5cc4-72e8-4f5c-9c60-e7831ac3be8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 21085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323079, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.334 2 INFO nova.compute.manager [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Took 0.76 seconds to deallocate network for instance.
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.355 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3115be-04ad-438f-bcd1-a7565ca14680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.403 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.404 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.410 2 DEBUG nova.compute.manager [req-4c6cedc1-8dc7-48ae-8441-14e3ae6370a8 req-dfb5cee7-0a98-4edd-b0f7-bcda60c58876 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-deleted-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.429 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[708f284d-b0ad-4335-9320-224366944935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.432 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef30d863-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:05 compute-0 NetworkManager[45129]: <info>  [1759393865.4362] manager: (tapef30d863-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Oct 02 08:31:05 compute-0 kernel: tapef30d863-a0: entered promiscuous mode
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.440 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef30d863-a0, col_values=(('external_ids', {'iface-id': 'd143de50-fc80-43b6-82e2-6651430a4a42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:05 compute-0 ovn_controller[152344]: 2025-10-02T08:31:05Z|00551|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.470 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef30d863-af60-49d9-b5d2-5e4f20c70d56.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef30d863-af60-49d9-b5d2-5e4f20c70d56.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.471 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d21b3b9f-0b82-40be-b950-09af882c02ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.473 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/ef30d863-af60-49d9-b5d2-5e4f20c70d56.pid.haproxy
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.474 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'env', 'PROCESS_TAG=haproxy-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef30d863-af60-49d9-b5d2-5e4f20c70d56.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.521 2 DEBUG nova.compute.manager [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-changed-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.521 2 DEBUG nova.compute.manager [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing instance network info cache due to event network-changed-35092937-9590-42d1-a022-549b740da3c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.522 2 DEBUG oslo_concurrency.lockutils [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:05 compute-0 nova_compute[260603]: 2025-10-02 08:31:05.575 2 DEBUG oslo_concurrency.processutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:05 compute-0 podman[323173]: 2025-10-02 08:31:05.964782291 +0000 UTC m=+0.080538769 container create 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:31:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1473150419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:06 compute-0 podman[323173]: 2025-10-02 08:31:05.924255074 +0000 UTC m=+0.040011562 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:06 compute-0 systemd[1]: Started libpod-conmon-67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d.scope.
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.030 2 DEBUG oslo_concurrency.processutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.047 2 DEBUG nova.compute.provider_tree [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:06.061 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f999397e7e358a7668d8bd32b5ebbd495e6797cdde869d93f7da4a88468be70b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.085 2 DEBUG nova.scheduler.client.report [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:06 compute-0 podman[323173]: 2025-10-02 08:31:06.100227084 +0000 UTC m=+0.215983632 container init 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 08:31:06 compute-0 podman[323173]: 2025-10-02 08:31:06.107900353 +0000 UTC m=+0.223656830 container start 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.110 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:06 compute-0 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [NOTICE]   (323194) : New worker (323196) forked
Oct 02 08:31:06 compute-0 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [NOTICE]   (323194) : Loading success.
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.159 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393866.1584084, 49e7e668-b62c-4e35-a4e2-bba540000961 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.160 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Started (Lifecycle Event)
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.179 2 INFO nova.scheduler.client.report [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Deleted allocations for instance f8f36f36-817a-4e64-8c57-c211cfc7b0ba
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.203 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.217 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393866.1587245, 49e7e668-b62c-4e35-a4e2-bba540000961 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.217 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Paused (Lifecycle Event)
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.247 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.253 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.270 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.279 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.362 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.362 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.379 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.484 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.485 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.495 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.495 2 INFO nova.compute.claims [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:31:06 compute-0 nova_compute[260603]: 2025-10-02 08:31:06.663 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:06 compute-0 ceph-mon[74477]: pgmap v1506: 305 pgs: 305 active+clean; 180 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 200 op/s
Oct 02 08:31:06 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1473150419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 180 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 182 op/s
Oct 02 08:31:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368568234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.164 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.173 2 DEBUG nova.compute.provider_tree [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.198 2 DEBUG nova.scheduler.client.report [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.217 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [{"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.224 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.225 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.248 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Releasing lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.249 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance network_info: |[{"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.250 2 DEBUG oslo_concurrency.lockutils [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.250 2 DEBUG nova.network.neutron [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing network info cache for port 35092937-9590-42d1-a022-549b740da3c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.258 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start _get_guest_xml network_info=[{"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.265 2 WARNING nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.272 2 DEBUG nova.virt.libvirt.host [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.273 2 DEBUG nova.virt.libvirt.host [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.282 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.283 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.293 2 DEBUG nova.virt.libvirt.host [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.294 2 DEBUG nova.virt.libvirt.host [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.295 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.295 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.296 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.296 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.297 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.297 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.297 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.298 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.298 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.299 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.299 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.299 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.303 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.349 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.371 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.468 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.470 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.471 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating image(s)
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.513 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.563 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.593 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.598 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.639 2 DEBUG nova.policy [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '116b114f14f84e4cbd6cc966e29d82e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.687 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.688 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.688 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.689 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.715 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.718 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2368568234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/271026909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.887 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.943 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.948 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:07 compute-0 nova_compute[260603]: 2025-10-02 08:31:07.998 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.085 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.215 2 DEBUG nova.objects.instance [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.231 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.231 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Ensure instance console log exists: /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.232 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.232 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.232 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:08 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499274081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.410 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.412 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.412 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.413 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.415 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.415 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.416 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.417 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.418 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.419 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.420 2 DEBUG nova.objects.instance [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 501f8cba-892f-489d-81b5-abb8669f49eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.423 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Successfully created port: 9835ad6c-8ea8-4a79-8f07-042186ea7c71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.442 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <uuid>501f8cba-892f-489d-81b5-abb8669f49eb</uuid>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <name>instance-0000003e</name>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestMultiNic-server-1736583523</nova:name>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:07</nova:creationTime>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:user uuid="db9a3b1e6d93495f8c849658ffc4e535">tempest-ServersTestMultiNic-670565182-project-member</nova:user>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:project uuid="62c4ff42369740eebbf14969f4d8d2e5">tempest-ServersTestMultiNic-670565182</nova:project>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:port uuid="e19eb16b-f042-4e4d-922b-7057ad6ebb1c">
Oct 02 08:31:08 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.81" ipVersion="4"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:port uuid="e8f18c99-1964-43d6-a955-7b5064c53b3a">
Oct 02 08:31:08 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.1.99" ipVersion="4"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <nova:port uuid="35092937-9590-42d1-a022-549b740da3c5">
Oct 02 08:31:08 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.194" ipVersion="4"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <entry name="serial">501f8cba-892f-489d-81b5-abb8669f49eb</entry>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <entry name="uuid">501f8cba-892f-489d-81b5-abb8669f49eb</entry>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/501f8cba-892f-489d-81b5-abb8669f49eb_disk">
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/501f8cba-892f-489d-81b5-abb8669f49eb_disk.config">
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:08 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:c1:26:fb"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <target dev="tape19eb16b-f0"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:28:89:45"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <target dev="tape8f18c99-19"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:7b:5d:8f"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <target dev="tap35092937-95"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/console.log" append="off"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:08 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:08 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:08 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:08 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:08 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.443 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Preparing to wait for external event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.444 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.444 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.444 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.445 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Preparing to wait for external event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.445 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.445 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.446 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.446 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Preparing to wait for external event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.446 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.447 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.447 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.448 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.449 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.450 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.450 2 DEBUG os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape19eb16b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape19eb16b-f0, col_values=(('external_ids', {'iface-id': 'e19eb16b-f042-4e4d-922b-7057ad6ebb1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:26:fb', 'vm-uuid': '501f8cba-892f-489d-81b5-abb8669f49eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 NetworkManager[45129]: <info>  [1759393868.4622] manager: (tape19eb16b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.472 2 INFO os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0')
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.473 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.474 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.474 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.475 2 DEBUG os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8f18c99-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8f18c99-19, col_values=(('external_ids', {'iface-id': 'e8f18c99-1964-43d6-a955-7b5064c53b3a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:89:45', 'vm-uuid': '501f8cba-892f-489d-81b5-abb8669f49eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 NetworkManager[45129]: <info>  [1759393868.4851] manager: (tape8f18c99-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.496 2 INFO os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19')
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.496 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.497 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.497 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.498 2 DEBUG os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35092937-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35092937-95, col_values=(('external_ids', {'iface-id': '35092937-9590-42d1-a022-549b740da3c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:5d:8f', 'vm-uuid': '501f8cba-892f-489d-81b5-abb8669f49eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 NetworkManager[45129]: <info>  [1759393868.5049] manager: (tap35092937-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.513 2 INFO os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95')
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.587 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.587 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.587 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No VIF found with MAC fa:16:3e:c1:26:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.587 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No VIF found with MAC fa:16:3e:28:89:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.588 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No VIF found with MAC fa:16:3e:7b:5d:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.588 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Using config drive
Oct 02 08:31:08 compute-0 nova_compute[260603]: 2025-10-02 08:31:08.619 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:08 compute-0 ceph-mon[74477]: pgmap v1507: 305 pgs: 305 active+clean; 180 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 182 op/s
Oct 02 08:31:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/271026909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1499274081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.001 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Creating config drive at /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config
Oct 02 08:31:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 231 op/s
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.011 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ptmyv4j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.058 2 DEBUG nova.network.neutron [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updated VIF entry in instance network info cache for port 35092937-9590-42d1-a022-549b740da3c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.060 2 DEBUG nova.network.neutron [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [{"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.094 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Successfully updated port: 9835ad6c-8ea8-4a79-8f07-042186ea7c71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.099 2 DEBUG oslo_concurrency.lockutils [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.112 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.112 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquired lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.112 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.162 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ptmyv4j" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.207 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.214 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.282 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.372 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.373 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Deleting local config drive /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config because it was imported into RBD.
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.4580] manager: (tape19eb16b-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct 02 08:31:09 compute-0 kernel: tape19eb16b-f0: entered promiscuous mode
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00552|binding|INFO|Claiming lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c for this chassis.
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00553|binding|INFO|e19eb16b-f042-4e4d-922b-7057ad6ebb1c: Claiming fa:16:3e:c1:26:fb 10.100.0.81
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.4866] manager: (tape8f18c99-19): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Oct 02 08:31:09 compute-0 systemd-udevd[323536]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:09 compute-0 systemd-udevd[323538]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.5069] manager: (tap35092937-95): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Oct 02 08:31:09 compute-0 systemd-udevd[323539]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.5260] device (tape19eb16b-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.5282] device (tape19eb16b-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:09 compute-0 systemd-machined[214636]: New machine qemu-69-instance-0000003e.
Oct 02 08:31:09 compute-0 kernel: tape8f18c99-19: entered promiscuous mode
Oct 02 08:31:09 compute-0 kernel: tap35092937-95: entered promiscuous mode
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.5579] device (tape8f18c99-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.5590] device (tap35092937-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.552 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:26:fb 10.100.0.81'], port_security=['fa:16:3e:c1:26:fb 10.100.0.81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.81/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cff19d1e-1358-4314-bab4-67f6fde7eba9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e19eb16b-f042-4e4d-922b-7057ad6ebb1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.555 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e19eb16b-f042-4e4d-922b-7057ad6ebb1c in datapath 54248606-6cdd-4d53-9b28-14d8ac1cf290 bound to our chassis
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.558 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54248606-6cdd-4d53-9b28-14d8ac1cf290
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00554|binding|INFO|Claiming lport e8f18c99-1964-43d6-a955-7b5064c53b3a for this chassis.
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00555|binding|INFO|e8f18c99-1964-43d6-a955-7b5064c53b3a: Claiming fa:16:3e:28:89:45 10.100.1.99
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00556|binding|INFO|Claiming lport 35092937-9590-42d1-a022-549b740da3c5 for this chassis.
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00557|binding|INFO|35092937-9590-42d1-a022-549b740da3c5: Claiming fa:16:3e:7b:5d:8f 10.100.0.194
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.5610] device (tape8f18c99-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.5613] device (tap35092937-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:09 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00558|binding|INFO|Setting lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c ovn-installed in OVS
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00559|binding|INFO|Setting lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c up in Southbound
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.581 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:5d:8f 10.100.0.194'], port_security=['fa:16:3e:7b:5d:8f 10.100.0.194'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.194/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cff19d1e-1358-4314-bab4-67f6fde7eba9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=35092937-9590-42d1-a022-549b740da3c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.585 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:89:45 10.100.1.99'], port_security=['fa:16:3e:28:89:45 10.100.1.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.99/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1454d03-261a-47dd-a4d8-470427544a20, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e8f18c99-1964-43d6-a955-7b5064c53b3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.580 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9785981-53d8-4692-a170-25d4e3904663]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.588 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54248606-61 in ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.594 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54248606-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.594 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84875de8-df16-437d-a48a-5338826caa3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79766b68-cb32-46da-b735-612f095bff81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.616 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0db73085-3809-4144-80c6-974b42ac99d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00560|binding|INFO|Setting lport e8f18c99-1964-43d6-a955-7b5064c53b3a ovn-installed in OVS
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00561|binding|INFO|Setting lport e8f18c99-1964-43d6-a955-7b5064c53b3a up in Southbound
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00562|binding|INFO|Setting lport 35092937-9590-42d1-a022-549b740da3c5 ovn-installed in OVS
Oct 02 08:31:09 compute-0 ovn_controller[152344]: 2025-10-02T08:31:09Z|00563|binding|INFO|Setting lport 35092937-9590-42d1-a022-549b740da3c5 up in Southbound
Oct 02 08:31:09 compute-0 nova_compute[260603]: 2025-10-02 08:31:09.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.644 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c8dea69b-ad4d-4c00-bdbd-3bd30ef7fcc1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.690 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5bad9d-f3ce-42da-8811-59c1d5542569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.7006] manager: (tap54248606-60): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.699 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[887e7741-5395-4a15-9ecb-01f4f9b01c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.753 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[019c3dfe-f08a-4d51-9fd9-60dfdef00a63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.759 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5a4f79-2822-4e0e-aff6-74221e993fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.7992] device (tap54248606-60): carrier: link connected
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.810 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2e14ae67-f6e3-414d-a2f8-0ae6e3e078e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.843 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2446f7e8-48cb-4376-89bb-604332c7fea9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54248606-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:e5:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474840, 'reachable_time': 41953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323578, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.868 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cac456-6643-4e1a-976f-0e4deb1bced3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:e517'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474840, 'tstamp': 474840}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323579, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.891 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7addc0a9-0676-48f2-89c4-4bf4a679c4d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54248606-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:e5:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474840, 'reachable_time': 41953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323580, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.925 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0d4b3a-d37b-4f1c-9e06-72864c41b8a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.992 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50ebc949-74f3-430c-b1d8-40653892074b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54248606-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.995 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54248606-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:09 compute-0 NetworkManager[45129]: <info>  [1759393869.9988] manager: (tap54248606-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Oct 02 08:31:10 compute-0 kernel: tap54248606-60: entered promiscuous mode
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.003 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54248606-60, col_values=(('external_ids', {'iface-id': 'dacfca36-fe1e-4001-8669-9c1cc2cd3f3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:10 compute-0 ovn_controller[152344]: 2025-10-02T08:31:10Z|00564|binding|INFO|Releasing lport dacfca36-fe1e-4001-8669-9c1cc2cd3f3a from this chassis (sb_readonly=0)
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.043 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54248606-6cdd-4d53-9b28-14d8ac1cf290.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54248606-6cdd-4d53-9b28-14d8ac1cf290.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.044 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52cddb3a-917e-4fcd-9ac2-d7e3f8093bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.046 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-54248606-6cdd-4d53-9b28-14d8ac1cf290
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/54248606-6cdd-4d53-9b28-14d8ac1cf290.pid.haproxy
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 54248606-6cdd-4d53-9b28-14d8ac1cf290
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.049 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'env', 'PROCESS_TAG=haproxy-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54248606-6cdd-4d53-9b28-14d8ac1cf290.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:10 compute-0 podman[323642]: 2025-10-02 08:31:10.4963252 +0000 UTC m=+0.063037407 container create 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:31:10 compute-0 systemd[1]: Started libpod-conmon-595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723.scope.
Oct 02 08:31:10 compute-0 podman[323642]: 2025-10-02 08:31:10.465498322 +0000 UTC m=+0.032210589 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b45d26d074aac14704b1e922d565a27ad4c485c96934af760c5e755ad0d98b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:10 compute-0 podman[323642]: 2025-10-02 08:31:10.598403467 +0000 UTC m=+0.165115714 container init 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:31:10 compute-0 podman[323642]: 2025-10-02 08:31:10.604410074 +0000 UTC m=+0.171122291 container start 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:31:10 compute-0 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [NOTICE]   (323674) : New worker (323676) forked
Oct 02 08:31:10 compute-0 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [NOTICE]   (323674) : Loading success.
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.662 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 35092937-9590-42d1-a022-549b740da3c5 in datapath 54248606-6cdd-4d53-9b28-14d8ac1cf290 unbound from our chassis
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.668 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54248606-6cdd-4d53-9b28-14d8ac1cf290
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.687 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff50d5d2-d59a-4d7f-bad2-6bb20882d3de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.740 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[695a121c-5ea7-4c5a-8218-9396f2cd7380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.746 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ec8e24-e9d2-4da1-bfb2-fc9b40d978fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.782 2 DEBUG nova.compute.manager [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-changed-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.782 2 DEBUG nova.compute.manager [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Refreshing instance network info cache due to event network-changed-9835ad6c-8ea8-4a79-8f07-042186ea7c71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.782 2 DEBUG oslo_concurrency.lockutils [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:10 compute-0 ceph-mon[74477]: pgmap v1508: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 231 op/s
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.803 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[68a9a901-14d5-46a9-ac94-7e602eea7e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52c4ace8-261e-4398-8bc5-bf67c126d21b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54248606-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:e5:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474840, 'reachable_time': 41953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323690, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.864 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff163c40-6132-46d9-a63b-3257c2c31d1d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap54248606-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474855, 'tstamp': 474855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323691, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap54248606-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474858, 'tstamp': 474858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323691, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.868 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54248606-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:10 compute-0 nova_compute[260603]: 2025-10-02 08:31:10.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.874 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54248606-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.874 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.875 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54248606-60, col_values=(('external_ids', {'iface-id': 'dacfca36-fe1e-4001-8669-9c1cc2cd3f3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.876 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.878 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e8f18c99-1964-43d6-a955-7b5064c53b3a in datapath 47336fa6-fcc3-40f8-ae02-9bae73a94c41 unbound from our chassis
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.882 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 47336fa6-fcc3-40f8-ae02-9bae73a94c41
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.902 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8529b04-95ab-49d9-8857-301031b0e220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.904 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap47336fa6-f1 in ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.907 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap47336fa6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.907 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aac9da2c-6993-4ecd-a79d-e54813786bf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.909 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3a010a-faa6-453b-bf83-446b71a2174c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.934 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d496d465-2a0b-4be9-9c25-6398d94428c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c549d6-fff7-42dc-9c6f-9657dcf8dc53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 143 op/s
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.021 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e25f55-32e8-4162-9722-0148d48a4432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 NetworkManager[45129]: <info>  [1759393871.0304] manager: (tap47336fa6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.029 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62eebb07-1fb9-4c31-a6f9-63d2e9aabfe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.064 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0710aad2-ace0-4bb7-aa85-78b82ffb38b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.068 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[df2b97cd-a8da-4664-8353-3e620eec6660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 NetworkManager[45129]: <info>  [1759393871.0992] device (tap47336fa6-f0): carrier: link connected
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.108 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6be0a0e3-40d5-4d3d-a394-db72fd93d80d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.121 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393871.1210468, 501f8cba-892f-489d-81b5-abb8669f49eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.122 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] VM Started (Lifecycle Event)
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[350155ee-d679-4eb3-8860-900205c99aeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47336fa6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:1c:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474970, 'reachable_time': 28655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323702, 'error': None, 'target': 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.145 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.153 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393871.1212702, 501f8cba-892f-489d-81b5-abb8669f49eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.154 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] VM Paused (Lifecycle Event)
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.161 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6a12ce5d-0494-468f-9253-d3927754fd7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8d:1c72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474970, 'tstamp': 474970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323703, 'error': None, 'target': 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.173 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.178 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.191 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[623c41ad-f106-4698-b5f8-09464bc671eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47336fa6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:1c:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474970, 'reachable_time': 28655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323704, 'error': None, 'target': 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.214 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.237 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aad1545e-1ab7-4e51-8ab8-8347882c4023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.337 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebc3217-2c45-4526-b288-b96111529d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.339 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47336fa6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.339 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.340 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47336fa6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:11 compute-0 kernel: tap47336fa6-f0: entered promiscuous mode
Oct 02 08:31:11 compute-0 NetworkManager[45129]: <info>  [1759393871.3921] manager: (tap47336fa6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.398 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap47336fa6-f0, col_values=(('external_ids', {'iface-id': 'a5ae140a-c6f5-41f2-b9e0-fa0f8fcd807d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:11 compute-0 ovn_controller[152344]: 2025-10-02T08:31:11Z|00565|binding|INFO|Releasing lport a5ae140a-c6f5-41f2-b9e0-fa0f8fcd807d from this chassis (sb_readonly=0)
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.401 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/47336fa6-fcc3-40f8-ae02-9bae73a94c41.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/47336fa6-fcc3-40f8-ae02-9bae73a94c41.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.403 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e948143c-8667-4cf6-9ef6-44b33ad2202d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.404 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-47336fa6-fcc3-40f8-ae02-9bae73a94c41
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/47336fa6-fcc3-40f8-ae02-9bae73a94c41.pid.haproxy
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 47336fa6-fcc3-40f8-ae02-9bae73a94c41
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.405 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'env', 'PROCESS_TAG=haproxy-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/47336fa6-fcc3-40f8-ae02-9bae73a94c41.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:11 compute-0 nova_compute[260603]: 2025-10-02 08:31:11.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:11 compute-0 podman[323737]: 2025-10-02 08:31:11.900827309 +0000 UTC m=+0.075966917 container create 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:31:11 compute-0 systemd[1]: Started libpod-conmon-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f.scope.
Oct 02 08:31:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927c4f8d9e26b4f276f849fe227297392dc8511c85e75a6e69adb62038a5f909/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:11 compute-0 podman[323737]: 2025-10-02 08:31:11.87117825 +0000 UTC m=+0.046317868 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:11 compute-0 podman[323737]: 2025-10-02 08:31:11.977802898 +0000 UTC m=+0.152942486 container init 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:31:11 compute-0 podman[323737]: 2025-10-02 08:31:11.988507001 +0000 UTC m=+0.163646569 container start 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:12 compute-0 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [NOTICE]   (323756) : New worker (323758) forked
Oct 02 08:31:12 compute-0 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [NOTICE]   (323756) : Loading success.
Oct 02 08:31:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.403 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Updating instance_info_cache with network_info: [{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.447 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Releasing lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.448 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance network_info: |[{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.448 2 DEBUG oslo_concurrency.lockutils [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.448 2 DEBUG nova.network.neutron [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Refreshing network info cache for port 9835ad6c-8ea8-4a79-8f07-042186ea7c71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.452 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start _get_guest_xml network_info=[{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.458 2 WARNING nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.465 2 DEBUG nova.virt.libvirt.host [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.466 2 DEBUG nova.virt.libvirt.host [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.471 2 DEBUG nova.virt.libvirt.host [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.472 2 DEBUG nova.virt.libvirt.host [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.472 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.473 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.473 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.474 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.474 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.474 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.474 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.475 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.475 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.475 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.476 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.476 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.479 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:12 compute-0 ceph-mon[74477]: pgmap v1509: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 143 op/s
Oct 02 08:31:12 compute-0 nova_compute[260603]: 2025-10-02 08:31:12.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.003 2 DEBUG nova.compute.manager [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.004 2 DEBUG oslo_concurrency.lockutils [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.005 2 DEBUG oslo_concurrency.lockutils [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.006 2 DEBUG oslo_concurrency.lockutils [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.006 2 DEBUG nova.compute.manager [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Processing event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171617591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.007 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 181 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 164 op/s
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.014 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.016 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393873.0143142, 49e7e668-b62c-4e35-a4e2-bba540000961 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.016 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Resumed (Lifecycle Event)
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.024 2 INFO nova.virt.libvirt.driver [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance spawned successfully.
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.025 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.030 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.066 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.074 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.136 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.140 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.141 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.141 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.142 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.142 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Processing event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.142 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.143 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.143 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.143 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.143 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No event matching network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c in dict_keys([('network-vif-plugged', 'e8f18c99-1964-43d6-a955-7b5064c53b3a'), ('network-vif-plugged', '35092937-9590-42d1-a022-549b740da3c5')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.144 2 WARNING nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c for instance with vm_state building and task_state spawning.
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.144 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.144 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.145 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.145 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.145 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Processing event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.145 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.146 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.146 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.146 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.147 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No event matching network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a in dict_keys([('network-vif-plugged', '35092937-9590-42d1-a022-549b740da3c5')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.147 2 WARNING nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a for instance with vm_state building and task_state spawning.
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.152 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.157 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.157 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.158 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.158 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.159 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.159 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.196 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.246 2 INFO nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 16.35 seconds to spawn the instance on the hypervisor.
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.246 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.345 2 INFO nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 17.68 seconds to build instance.
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.394 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:31:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80577841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.605 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.607 2 DEBUG nova.virt.libvirt.vif [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:07Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.608 2 DEBUG nova.network.os_vif_util [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.609 2 DEBUG nova.network.os_vif_util [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.611 2 DEBUG nova.objects.instance [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.627 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <uuid>07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</uuid>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <name>instance-0000003f</name>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-361192799</nova:name>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:12</nova:creationTime>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <nova:user uuid="116b114f14f84e4cbd6cc966e29d82e7">tempest-ServerDiskConfigTestJSON-1277806880-project-member</nova:user>
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <nova:project uuid="bce7493292bb47cfb7168bca89f78f4a">tempest-ServerDiskConfigTestJSON-1277806880</nova:project>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <nova:port uuid="9835ad6c-8ea8-4a79-8f07-042186ea7c71">
Oct 02 08:31:13 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <entry name="serial">07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</entry>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <entry name="uuid">07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</entry>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk">
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config">
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:13 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:5b:e9:fe"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <target dev="tap9835ad6c-8e"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/console.log" append="off"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:13 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:13 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:13 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:13 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:13 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.632 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Preparing to wait for external event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.632 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.633 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.633 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.634 2 DEBUG nova.virt.libvirt.vif [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:07Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.634 2 DEBUG nova.network.os_vif_util [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.635 2 DEBUG nova.network.os_vif_util [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.635 2 DEBUG os_vif [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9835ad6c-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9835ad6c-8e, col_values=(('external_ids', {'iface-id': '9835ad6c-8ea8-4a79-8f07-042186ea7c71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:e9:fe', 'vm-uuid': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:13 compute-0 NetworkManager[45129]: <info>  [1759393873.6797] manager: (tap9835ad6c-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.694 2 INFO os_vif [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e')
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.762 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.763 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.773 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.774 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.774 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No VIF found with MAC fa:16:3e:5b:e9:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.775 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Using config drive
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.800 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.809 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:31:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4171617591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/80577841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.917 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.919 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.929 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:31:13 compute-0 nova_compute[260603]: 2025-10-02 08:31:13.930 2 INFO nova.compute.claims [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.137 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.198 2 DEBUG nova.network.neutron [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Updated VIF entry in instance network info cache for port 9835ad6c-8ea8-4a79-8f07-042186ea7c71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.199 2 DEBUG nova.network.neutron [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Updating instance_info_cache with network_info: [{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.219 2 DEBUG oslo_concurrency.lockutils [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.424 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating config drive at /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.431 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp37e4p8wm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544996907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.579 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp37e4p8wm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.618 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.624 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.662 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.672 2 DEBUG nova.compute.provider_tree [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.700 2 DEBUG nova.scheduler.client.report [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.741 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.743 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.799 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.800 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deleting local config drive /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config because it was imported into RBD.
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.807 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.807 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:31:14 compute-0 ceph-mon[74477]: pgmap v1510: 305 pgs: 305 active+clean; 181 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 164 op/s
Oct 02 08:31:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2544996907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.832 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.854 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:31:14 compute-0 kernel: tap9835ad6c-8e: entered promiscuous mode
Oct 02 08:31:14 compute-0 NetworkManager[45129]: <info>  [1759393874.8908] manager: (tap9835ad6c-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Oct 02 08:31:14 compute-0 ovn_controller[152344]: 2025-10-02T08:31:14Z|00566|binding|INFO|Claiming lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 for this chassis.
Oct 02 08:31:14 compute-0 ovn_controller[152344]: 2025-10-02T08:31:14Z|00567|binding|INFO|9835ad6c-8ea8-4a79-8f07-042186ea7c71: Claiming fa:16:3e:5b:e9:fe 10.100.0.10
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.920 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:e9:fe 10.100.0.10'], port_security=['fa:16:3e:5b:e9:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9835ad6c-8ea8-4a79-8f07-042186ea7c71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.922 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9835ad6c-8ea8-4a79-8f07-042186ea7c71 in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b bound to our chassis
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.923 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:14 compute-0 systemd-udevd[323924]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.945 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df91c45b-0265-4e74-9ff1-37b235649bb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.946 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8df0af1-11 in ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:14 compute-0 NetworkManager[45129]: <info>  [1759393874.9497] device (tap9835ad6c-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:14 compute-0 NetworkManager[45129]: <info>  [1759393874.9510] device (tap9835ad6c-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:14 compute-0 systemd-machined[214636]: New machine qemu-70-instance-0000003f.
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.951 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8df0af1-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.951 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c5dbd8-c2dc-490a-b8dc-153b5f71d04a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50d6054f-e08e-4bd8-a10b-23dbea561264]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:14 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003f.
Oct 02 08:31:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.972 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[48b4e526-65bd-4c86-9ea0-840b8d53f60b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.977 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.979 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:31:14 compute-0 nova_compute[260603]: 2025-10-02 08:31:14.979 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Creating image(s)
Oct 02 08:31:14 compute-0 ovn_controller[152344]: 2025-10-02T08:31:14Z|00568|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 ovn-installed in OVS
Oct 02 08:31:14 compute-0 ovn_controller[152344]: 2025-10-02T08:31:14Z|00569|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 up in Southbound
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.005 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df0e8256-7fdb-4121-9396-a774066d99ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 181 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 485 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.015 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.039 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c80ac2-0b2b-4235-af0b-80eb64053571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.049 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[297f9e83-8297-4d79-9c77-9c62392cc143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 systemd-udevd[323928]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:15 compute-0 NetworkManager[45129]: <info>  [1759393875.0506] manager: (tapf8df0af1-10): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.100 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4378227-1644-4378-be18-af5c36a0dd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.100 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.104 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f00b2915-370e-4d7c-b593-4781ab456e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 NetworkManager[45129]: <info>  [1759393875.1292] device (tapf8df0af1-10): carrier: link connected
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.135 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6151db3-7e98-40ce-8c29-067306f7ce67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.143 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.153 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e4f3f2-d804-4ffa-94d0-7730744b5ab0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475373, 'reachable_time': 32653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324012, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.176 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[048a21e0-d5e2-4a25-b723-50013ce1794b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:6ddc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475373, 'tstamp': 475373}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324013, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.194 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7943369-bdd3-4371-9989-1124adab16ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475373, 'reachable_time': 32653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324015, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.207 2 DEBUG nova.policy [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.228 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aaac8e0e-6e4b-4257-8b76-68493bf94b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.264 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.265 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.265 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.266 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.266 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] No waiting events found dispatching network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.266 2 WARNING nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received unexpected event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for instance with vm_state active and task_state None.
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.266 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.267 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.267 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.267 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.267 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Processing event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.269 2 WARNING nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 for instance with vm_state building and task_state spawning.
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.269 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.270 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.270 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.271 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.271 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5165a57-7027-41bb-8c32-233d20b41055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.306 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.306 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.307 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8df0af1-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.307 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:15 compute-0 NetworkManager[45129]: <info>  [1759393875.3087] manager: (tapf8df0af1-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Oct 02 08:31:15 compute-0 kernel: tapf8df0af1-10: entered promiscuous mode
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.311 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8df0af1-10, col_values=(('external_ids', {'iface-id': '1405e724-f2f6-4a95-8848-550131e62910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:15 compute-0 ovn_controller[152344]: 2025-10-02T08:31:15Z|00570|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.314 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.315 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a1a5d-c157-4f6d-adb4-e9c14e661d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.316 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.316 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'env', 'PROCESS_TAG=haproxy-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8df0af1-1767-419a-8500-c28fbf45ae4b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.333 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7ac34b0c-8ced-417d-9442-8fda77804a34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.381 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393875.2741497, 501f8cba-892f-489d-81b5-abb8669f49eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.381 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] VM Resumed (Lifecycle Event)
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.389 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.392 2 DEBUG nova.compute.manager [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.393 2 DEBUG oslo_concurrency.lockutils [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.393 2 DEBUG oslo_concurrency.lockutils [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.394 2 DEBUG oslo_concurrency.lockutils [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.394 2 DEBUG nova.compute.manager [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Processing event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.410 2 INFO nova.virt.libvirt.driver [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance spawned successfully.
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.411 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.419 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.426 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.442 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.443 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.443 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.444 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.444 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.445 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.454 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.510 2 INFO nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Took 17.87 seconds to spawn the instance on the hypervisor.
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.511 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.692 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7ac34b0c-8ced-417d-9442-8fda77804a34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:15 compute-0 podman[324124]: 2025-10-02 08:31:15.755939591 +0000 UTC m=+0.052687986 container create f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 08:31:15 compute-0 systemd[1]: Started libpod-conmon-f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0.scope.
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.816 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:31:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:15 compute-0 podman[324124]: 2025-10-02 08:31:15.729340106 +0000 UTC m=+0.026088521 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b436514243685187aa6cdcdd36d5f1493da17b7b370783efb1b916d0e9b66972/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:15 compute-0 podman[324124]: 2025-10-02 08:31:15.850523906 +0000 UTC m=+0.147272321 container init f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:31:15 compute-0 podman[324124]: 2025-10-02 08:31:15.856522532 +0000 UTC m=+0.153270927 container start f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 08:31:15 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [NOTICE]   (324199) : New worker (324201) forked
Oct 02 08:31:15 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [NOTICE]   (324199) : Loading success.
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.954 2 INFO nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Took 20.29 seconds to build instance.
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.966 2 DEBUG nova.objects.instance [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 7ac34b0c-8ced-417d-9442-8fda77804a34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.993 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.994 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.995 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Ensure instance console log exists: /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.995 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.995 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:15 compute-0 nova_compute[260603]: 2025-10-02 08:31:15.996 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.215 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.216 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393876.2144597, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.216 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Started (Lifecycle Event)
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.222 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.225 2 INFO nova.virt.libvirt.driver [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance spawned successfully.
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.225 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.251 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.255 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.260 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.260 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.261 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.261 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.261 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.261 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.280 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.280 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393876.2179103, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.281 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Paused (Lifecycle Event)
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.310 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.316 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393876.221003, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.316 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Resumed (Lifecycle Event)
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.324 2 INFO nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Took 8.85 seconds to spawn the instance on the hypervisor.
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.325 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.340 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.346 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.391 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.414 2 INFO nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Took 9.97 seconds to build instance.
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.430 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:16 compute-0 ceph-mon[74477]: pgmap v1511: 305 pgs: 305 active+clean; 181 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 485 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 02 08:31:16 compute-0 nova_compute[260603]: 2025-10-02 08:31:16.965 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Successfully created port: 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:31:16 compute-0 podman[324229]: 2025-10-02 08:31:16.985569426 +0000 UTC m=+0.054348878 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 08:31:17 compute-0 podman[324228]: 2025-10-02 08:31:17.010002354 +0000 UTC m=+0.079069145 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:31:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 181 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 08:31:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:17 compute-0 NetworkManager[45129]: <info>  [1759393877.1289] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:17 compute-0 NetworkManager[45129]: <info>  [1759393877.1296] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:17 compute-0 ovn_controller[152344]: 2025-10-02T08:31:17Z|00571|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:31:17 compute-0 ovn_controller[152344]: 2025-10-02T08:31:17Z|00572|binding|INFO|Releasing lport dacfca36-fe1e-4001-8669-9c1cc2cd3f3a from this chassis (sb_readonly=0)
Oct 02 08:31:17 compute-0 ovn_controller[152344]: 2025-10-02T08:31:17Z|00573|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:31:17 compute-0 ovn_controller[152344]: 2025-10-02T08:31:17Z|00574|binding|INFO|Releasing lport a5ae140a-c6f5-41f2-b9e0-fa0f8fcd807d from this chassis (sb_readonly=0)
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.532 2 DEBUG nova.compute.manager [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.533 2 DEBUG oslo_concurrency.lockutils [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.533 2 DEBUG oslo_concurrency.lockutils [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.533 2 DEBUG oslo_concurrency.lockutils [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.533 2 DEBUG nova.compute.manager [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.534 2 WARNING nova.compute.manager [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received unexpected event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with vm_state active and task_state None.
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:17 compute-0 nova_compute[260603]: 2025-10-02 08:31:17.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:18 compute-0 nova_compute[260603]: 2025-10-02 08:31:18.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:18 compute-0 nova_compute[260603]: 2025-10-02 08:31:18.727 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Successfully updated port: 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:18 compute-0 nova_compute[260603]: 2025-10-02 08:31:18.742 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:18 compute-0 nova_compute[260603]: 2025-10-02 08:31:18.742 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:18 compute-0 nova_compute[260603]: 2025-10-02 08:31:18.742 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:31:18 compute-0 ceph-mon[74477]: pgmap v1512: 305 pgs: 305 active+clean; 181 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 08:31:18 compute-0 nova_compute[260603]: 2025-10-02 08:31:18.954 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393863.9093144, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:18 compute-0 nova_compute[260603]: 2025-10-02 08:31:18.954 2 INFO nova.compute.manager [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Stopped (Lifecycle Event)
Oct 02 08:31:18 compute-0 nova_compute[260603]: 2025-10-02 08:31:18.976 2 DEBUG nova.compute.manager [None req-7524d67c-749c-445b-819f-9cefbd7fb6cd - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 227 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 293 op/s
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.655 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.841 2 DEBUG nova.compute.manager [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-changed-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.842 2 DEBUG nova.compute.manager [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Refreshing instance network info cache due to event network-changed-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.842 2 DEBUG oslo_concurrency.lockutils [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.929 2 DEBUG nova.compute.manager [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.930 2 DEBUG nova.compute.manager [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing instance network info cache due to event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.930 2 DEBUG oslo_concurrency.lockutils [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.930 2 DEBUG oslo_concurrency.lockutils [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:19 compute-0 nova_compute[260603]: 2025-10-02 08:31:19.930 2 DEBUG nova.network.neutron [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:20 compute-0 nova_compute[260603]: 2025-10-02 08:31:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:20 compute-0 nova_compute[260603]: 2025-10-02 08:31:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:20 compute-0 ceph-mon[74477]: pgmap v1513: 305 pgs: 305 active+clean; 227 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 293 op/s
Oct 02 08:31:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 227 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.4 MiB/s wr, 244 op/s
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.063 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.064 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.064 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.064 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.064 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.065 2 INFO nova.compute.manager [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Terminating instance
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.066 2 DEBUG nova.compute.manager [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:31:21 compute-0 kernel: tape19eb16b-f0 (unregistering): left promiscuous mode
Oct 02 08:31:21 compute-0 NetworkManager[45129]: <info>  [1759393881.1323] device (tape19eb16b-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00575|binding|INFO|Releasing lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c from this chassis (sb_readonly=0)
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00576|binding|INFO|Setting lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c down in Southbound
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00577|binding|INFO|Removing iface tape19eb16b-f0 ovn-installed in OVS
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.149 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:26:fb 10.100.0.81'], port_security=['fa:16:3e:c1:26:fb 10.100.0.81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.81/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cff19d1e-1358-4314-bab4-67f6fde7eba9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e19eb16b-f042-4e4d-922b-7057ad6ebb1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.151 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e19eb16b-f042-4e4d-922b-7057ad6ebb1c in datapath 54248606-6cdd-4d53-9b28-14d8ac1cf290 unbound from our chassis
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.152 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54248606-6cdd-4d53-9b28-14d8ac1cf290
Oct 02 08:31:21 compute-0 kernel: tape8f18c99-19 (unregistering): left promiscuous mode
Oct 02 08:31:21 compute-0 NetworkManager[45129]: <info>  [1759393881.1670] device (tape8f18c99-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00578|binding|INFO|Releasing lport e8f18c99-1964-43d6-a955-7b5064c53b3a from this chassis (sb_readonly=0)
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00579|binding|INFO|Setting lport e8f18c99-1964-43d6-a955-7b5064c53b3a down in Southbound
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00580|binding|INFO|Removing iface tape8f18c99-19 ovn-installed in OVS
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.183 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:89:45 10.100.1.99'], port_security=['fa:16:3e:28:89:45 10.100.1.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.99/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1454d03-261a-47dd-a4d8-470427544a20, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e8f18c99-1964-43d6-a955-7b5064c53b3a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[793eec77-9522-45d8-8a03-54a8cd612764]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 kernel: tap35092937-95 (unregistering): left promiscuous mode
Oct 02 08:31:21 compute-0 NetworkManager[45129]: <info>  [1759393881.1918] device (tap35092937-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.215 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[86240cf4-7e30-4299-bb1f-bc83ae3721e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00581|binding|INFO|Releasing lport 35092937-9590-42d1-a022-549b740da3c5 from this chassis (sb_readonly=0)
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00582|binding|INFO|Setting lport 35092937-9590-42d1-a022-549b740da3c5 down in Southbound
Oct 02 08:31:21 compute-0 ovn_controller[152344]: 2025-10-02T08:31:21Z|00583|binding|INFO|Removing iface tap35092937-95 ovn-installed in OVS
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.220 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[06867620-f8d0-4f45-be82-69d726f25652]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.225 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:5d:8f 10.100.0.194'], port_security=['fa:16:3e:7b:5d:8f 10.100.0.194'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.194/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cff19d1e-1358-4314-bab4-67f6fde7eba9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=35092937-9590-42d1-a022-549b740da3c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2b78b8-d6eb-4385-8c4b-08faa42d4675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct 02 08:31:21 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 7.255s CPU time.
Oct 02 08:31:21 compute-0 systemd-machined[214636]: Machine qemu-69-instance-0000003e terminated.
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.273 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c845db1d-95ff-4326-979d-548bdcd8d663]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54248606-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:e5:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474840, 'reachable_time': 41953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324294, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 NetworkManager[45129]: <info>  [1759393881.2936] manager: (tape8f18c99-19): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.298 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e68c287-4272-4e0f-8e70-d09545e35a44]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap54248606-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474855, 'tstamp': 474855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324299, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap54248606-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474858, 'tstamp': 474858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324299, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.300 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54248606-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 NetworkManager[45129]: <info>  [1759393881.3044] manager: (tap35092937-95): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.322 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54248606-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.322 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.322 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54248606-60, col_values=(('external_ids', {'iface-id': 'dacfca36-fe1e-4001-8669-9c1cc2cd3f3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.323 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.325 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e8f18c99-1964-43d6-a955-7b5064c53b3a in datapath 47336fa6-fcc3-40f8-ae02-9bae73a94c41 unbound from our chassis
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.330 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 47336fa6-fcc3-40f8-ae02-9bae73a94c41, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.331 2 INFO nova.virt.libvirt.driver [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance destroyed successfully.
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.333 2 DEBUG nova.objects.instance [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'resources' on Instance uuid 501f8cba-892f-489d-81b5-abb8669f49eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.331 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e25339-faf6-4232-a0f4-3210b807165f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.336 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 namespace which is not needed anymore
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.351 2 DEBUG nova.virt.libvirt.vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:15Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.352 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.352 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.353 2 DEBUG os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19eb16b-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.367 2 INFO os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0')
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.368 2 DEBUG nova.virt.libvirt.vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:15Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.369 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.369 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.370 2 DEBUG os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8f18c99-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.382 2 INFO os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19')
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.383 2 DEBUG nova.virt.libvirt.vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:15Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.384 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.386 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.386 2 DEBUG os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35092937-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.393 2 INFO os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95')
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [NOTICE]   (323756) : haproxy version is 2.8.14-c23fe91
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [NOTICE]   (323756) : path to executable is /usr/sbin/haproxy
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [WARNING]  (323756) : Exiting Master process...
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [ALERT]    (323756) : Current worker (323758) exited with code 143 (Terminated)
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [WARNING]  (323756) : All workers exited. Exiting... (0)
Oct 02 08:31:21 compute-0 systemd[1]: libpod-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f.scope: Deactivated successfully.
Oct 02 08:31:21 compute-0 conmon[323752]: conmon 2366353275d0610a8c46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f.scope/container/memory.events
Oct 02 08:31:21 compute-0 podman[324365]: 2025-10-02 08:31:21.486250608 +0000 UTC m=+0.052274624 container died 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f-userdata-shm.mount: Deactivated successfully.
Oct 02 08:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-927c4f8d9e26b4f276f849fe227297392dc8511c85e75a6e69adb62038a5f909-merged.mount: Deactivated successfully.
Oct 02 08:31:21 compute-0 podman[324365]: 2025-10-02 08:31:21.540867432 +0000 UTC m=+0.106891488 container cleanup 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:31:21 compute-0 systemd[1]: libpod-conmon-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f.scope: Deactivated successfully.
Oct 02 08:31:21 compute-0 podman[324396]: 2025-10-02 08:31:21.606344284 +0000 UTC m=+0.040778626 container remove 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c48069-016e-45a8-8af1-98bf2b592f93]: (4, ('Thu Oct  2 08:31:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 (2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f)\n2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f\nThu Oct  2 08:31:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 (2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f)\n2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.618 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2c905e70-ca2b-45c0-bf30-e8b53679f623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.620 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47336fa6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 kernel: tap47336fa6-f0: left promiscuous mode
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[040caf88-7692-45e3-8d04-39b907b4a3e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.668 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9391dca-cb24-41a2-bece-70508a452f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.669 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94e7037c-3b53-45b4-9df5-c8eb34e2be4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.688 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b60d4c08-6f8b-409f-a555-fab9174e1608]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474961, 'reachable_time': 32642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324412, 'error': None, 'target': 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d47336fa6\x2dfcc3\x2d40f8\x2dae02\x2d9bae73a94c41.mount: Deactivated successfully.
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.694 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.694 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d7a0a6-c4cb-4cba-b8ae-7832e3eb072d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.695 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 35092937-9590-42d1-a022-549b740da3c5 in datapath 54248606-6cdd-4d53-9b28-14d8ac1cf290 unbound from our chassis
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.698 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54248606-6cdd-4d53-9b28-14d8ac1cf290, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.699 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[967ed653-9ed2-43ba-9205-ea190ea65599]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.700 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 namespace which is not needed anymore
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.743 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updating instance_info_cache with network_info: [{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.781 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.782 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance network_info: |[{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.783 2 DEBUG oslo_concurrency.lockutils [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.784 2 DEBUG nova.network.neutron [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Refreshing network info cache for port 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.789 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start _get_guest_xml network_info=[{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.798 2 WARNING nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.807 2 INFO nova.virt.libvirt.driver [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Deleting instance files /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb_del
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.807 2 INFO nova.virt.libvirt.driver [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Deletion of /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb_del complete
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.816 2 DEBUG nova.virt.libvirt.host [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.817 2 DEBUG nova.virt.libvirt.host [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.823 2 DEBUG nova.virt.libvirt.host [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.824 2 DEBUG nova.virt.libvirt.host [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.825 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.825 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.826 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.826 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.826 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.827 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.827 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.828 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.829 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.829 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.829 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.830 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [NOTICE]   (323674) : haproxy version is 2.8.14-c23fe91
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [NOTICE]   (323674) : path to executable is /usr/sbin/haproxy
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [WARNING]  (323674) : Exiting Master process...
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [ALERT]    (323674) : Current worker (323676) exited with code 143 (Terminated)
Oct 02 08:31:21 compute-0 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [WARNING]  (323674) : All workers exited. Exiting... (0)
Oct 02 08:31:21 compute-0 systemd[1]: libpod-595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723.scope: Deactivated successfully.
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.834 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:21 compute-0 podman[324429]: 2025-10-02 08:31:21.84212804 +0000 UTC m=+0.045485672 container died 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723-userdata-shm.mount: Deactivated successfully.
Oct 02 08:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8b45d26d074aac14704b1e922d565a27ad4c485c96934af760c5e755ad0d98b-merged.mount: Deactivated successfully.
Oct 02 08:31:21 compute-0 podman[324429]: 2025-10-02 08:31:21.874207365 +0000 UTC m=+0.077565007 container cleanup 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:31:21 compute-0 systemd[1]: libpod-conmon-595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723.scope: Deactivated successfully.
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.902 2 INFO nova.compute.manager [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.903 2 DEBUG oslo.service.loopingcall [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.903 2 DEBUG nova.compute.manager [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.904 2 DEBUG nova.network.neutron [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:31:21 compute-0 podman[324457]: 2025-10-02 08:31:21.93685335 +0000 UTC m=+0.043242033 container remove 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.943 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[392cb454-988e-4af7-a9f7-8a27e8b862e4]: (4, ('Thu Oct  2 08:31:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 (595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723)\n595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723\nThu Oct  2 08:31:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 (595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723)\n595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.946 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a5af82-5257-42c0-963c-b0a824ea0b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.947 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54248606-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 kernel: tap54248606-60: left promiscuous mode
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[704b96af-268e-482a-93d7-4936d2ef526f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 nova_compute[260603]: 2025-10-02 08:31:21.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.985 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39788b8a-1f7f-48c2-91f1-1b1dc2366711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.990 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1696df6b-2582-47b1-b1df-f7f21367f681]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:22.013 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e680a89-9ce1-47c5-b4c2-1a46337a3292]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474828, 'reachable_time': 27722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324491, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:22.016 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:31:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:22.016 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d93aee32-7b0b-4e40-b63b-16477c57df91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:31:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936054651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:31:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:31:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936054651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:31:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925441020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.295 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.343 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.355 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.485 2 DEBUG nova.compute.manager [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.486 2 DEBUG oslo_concurrency.lockutils [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.486 2 DEBUG oslo_concurrency.lockutils [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.487 2 DEBUG oslo_concurrency.lockutils [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.487 2 DEBUG nova.compute.manager [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-unplugged-35092937-9590-42d1-a022-549b740da3c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.488 2 DEBUG nova.compute.manager [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-35092937-9590-42d1-a022-549b740da3c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.517 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d54248606\x2d6cdd\x2d4d53\x2d9b28\x2d14d8ac1cf290.mount: Deactivated successfully.
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.523 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.524 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.524 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.525 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.525 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-unplugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.528 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.530 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.530 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.532 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.533 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.534 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.535 2 WARNING nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c for instance with vm_state active and task_state deleting.
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.536 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.536 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.537 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.538 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.538 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-unplugged-e8f18c99-1964-43d6-a955-7b5064c53b3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.538 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-e8f18c99-1964-43d6-a955-7b5064c53b3a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.539 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.539 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.540 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.540 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.541 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.541 2 WARNING nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a for instance with vm_state active and task_state deleting.
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.543 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:22 compute-0 podman[324513]: 2025-10-02 08:31:22.545343031 +0000 UTC m=+0.095765373 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.584 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.585 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.586 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.586 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.587 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.666 2 DEBUG nova.network.neutron [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updated VIF entry in instance network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.667 2 DEBUG nova.network.neutron [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.690 2 DEBUG oslo_concurrency.lockutils [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:22 compute-0 ceph-mon[74477]: pgmap v1514: 305 pgs: 305 active+clean; 227 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.4 MiB/s wr, 244 op/s
Oct 02 08:31:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/936054651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:31:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/936054651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:31:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1925441020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943899668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.886 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.889 2 DEBUG nova.virt.libvirt.vif [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1230108710',display_name='tempest-₡-1230108710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1230108710',id=64,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-djhvanl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,
updated_at=2025-10-02T08:31:14Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=7ac34b0c-8ced-417d-9442-8fda77804a34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.890 2 DEBUG nova.network.os_vif_util [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.891 2 DEBUG nova.network.os_vif_util [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.894 2 DEBUG nova.objects.instance [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ac34b0c-8ced-417d-9442-8fda77804a34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.912 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <uuid>7ac34b0c-8ced-417d-9442-8fda77804a34</uuid>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <name>instance-00000040</name>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <nova:name>tempest-₡-1230108710</nova:name>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:21</nova:creationTime>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <nova:port uuid="2fbf8f14-9d1f-4042-9f0b-1abcc448ea97">
Oct 02 08:31:22 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <entry name="serial">7ac34b0c-8ced-417d-9442-8fda77804a34</entry>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <entry name="uuid">7ac34b0c-8ced-417d-9442-8fda77804a34</entry>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7ac34b0c-8ced-417d-9442-8fda77804a34_disk">
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config">
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:22 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:23:fa:94"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <target dev="tap2fbf8f14-9d"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/console.log" append="off"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:22 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:22 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:22 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:22 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:22 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.924 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Preparing to wait for external event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.925 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.925 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.926 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.927 2 DEBUG nova.virt.libvirt.vif [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1230108710',display_name='tempest-₡-1230108710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1230108710',id=64,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-djhvanl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:14Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=7ac34b0c-8ced-417d-9442-8fda77804a34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.928 2 DEBUG nova.network.os_vif_util [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.930 2 DEBUG nova.network.os_vif_util [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.931 2 DEBUG os_vif [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.933 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fbf8f14-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.942 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fbf8f14-9d, col_values=(('external_ids', {'iface-id': '2fbf8f14-9d1f-4042-9f0b-1abcc448ea97', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:fa:94', 'vm-uuid': '7ac34b0c-8ced-417d-9442-8fda77804a34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:22 compute-0 NetworkManager[45129]: <info>  [1759393882.9450] manager: (tap2fbf8f14-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:22 compute-0 nova_compute[260603]: 2025-10-02 08:31:22.955 2 INFO os_vif [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d')
Oct 02 08:31:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 197 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 267 op/s
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.028 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.029 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.030 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:23:fa:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.031 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Using config drive
Oct 02 08:31:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055812350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.070 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.081 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.157 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.158 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.171 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.172 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.181 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.182 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.419 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.421 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3790MB free_disk=59.90481185913086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.421 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.421 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.499 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Creating config drive at /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.505 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp13cwefln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.565 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 49e7e668-b62c-4e35-a4e2-bba540000961 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 501f8cba-892f-489d-81b5-abb8669f49eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7ac34b0c-8ced-417d-9442-8fda77804a34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.646 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp13cwefln" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.669 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.678 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.760 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.843 2 DEBUG nova.network.neutron [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updated VIF entry in instance network info cache for port 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.845 2 DEBUG nova.network.neutron [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updating instance_info_cache with network_info: [{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.849 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.850 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Deleting local config drive /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config because it was imported into RBD.
Oct 02 08:31:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/943899668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1055812350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.876 2 DEBUG oslo_concurrency.lockutils [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:23 compute-0 kernel: tap2fbf8f14-9d: entered promiscuous mode
Oct 02 08:31:23 compute-0 NetworkManager[45129]: <info>  [1759393883.9115] manager: (tap2fbf8f14-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Oct 02 08:31:23 compute-0 systemd-udevd[324277]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:23 compute-0 ovn_controller[152344]: 2025-10-02T08:31:23Z|00584|binding|INFO|Claiming lport 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 for this chassis.
Oct 02 08:31:23 compute-0 ovn_controller[152344]: 2025-10-02T08:31:23Z|00585|binding|INFO|2fbf8f14-9d1f-4042-9f0b-1abcc448ea97: Claiming fa:16:3e:23:fa:94 10.100.0.3
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.923 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:fa:94 10.100.0.3'], port_security=['fa:16:3e:23:fa:94 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7ac34b0c-8ced-417d-9442-8fda77804a34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:23 compute-0 NetworkManager[45129]: <info>  [1759393883.9265] device (tap2fbf8f14-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.926 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 bound to our chassis
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.928 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:31:23 compute-0 NetworkManager[45129]: <info>  [1759393883.9316] device (tap2fbf8f14-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.942 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f673837e-8ec7-47fe-b540-2c853ed8c4b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.944 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e3507cf-e1 in ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:23 compute-0 ovn_controller[152344]: 2025-10-02T08:31:23Z|00586|binding|INFO|Setting lport 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 ovn-installed in OVS
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.946 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e3507cf-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:23 compute-0 ovn_controller[152344]: 2025-10-02T08:31:23Z|00587|binding|INFO|Setting lport 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 up in Southbound
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.946 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55abab2d-1188-4d19-ad5c-2a23e1b9cd1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.952 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7dd2e2-a542-46eb-aacb-ae38826fbd3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:23 compute-0 nova_compute[260603]: 2025-10-02 08:31:23.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:23 compute-0 systemd-machined[214636]: New machine qemu-71-instance-00000040.
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.969 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b8194ffa-d5c7-4b82-8c78-8d9e6e7332fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:23 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000040.
Oct 02 08:31:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f5bb126b-e735-4883-bbff-2459dc6c6d3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.022 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[58984ce4-29cf-4675-85ee-7f38827678b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 NetworkManager[45129]: <info>  [1759393884.0279] manager: (tap1e3507cf-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.028 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63fa85d9-fbfb-4162-91ac-8d86ba8e2a40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.064 2 INFO nova.compute.manager [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Rebuilding instance
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.073 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ce4f94-0530-4104-a2a1-2f73a732990e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.076 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[95da95d7-a967-4146-915d-b65e949ca934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 02 08:31:24 compute-0 NetworkManager[45129]: <info>  [1759393884.1012] device (tap1e3507cf-e0): carrier: link connected
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.109 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b96316-d9c4-467f-b626-321f7833b789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.127 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b989e7a4-dd72-42d1-827d-85eef2492aa0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324700, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.146 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f41a0940-08d3-488e-bd46-90377685a3b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:88a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476270, 'tstamp': 476270}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324701, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.171 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bf16af-4fe4-4873-a5b7-903d287d0522]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324702, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.222 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ee12306c-da42-4b13-95fd-c16ade7d97ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051485142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.274 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.283 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.302 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[997b47a6-dfa4-46f6-9863-27980477b169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.307 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.307 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.308 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:24 compute-0 NetworkManager[45129]: <info>  [1759393884.3106] manager: (tap1e3507cf-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:24 compute-0 kernel: tap1e3507cf-e0: entered promiscuous mode
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.317 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:24 compute-0 ovn_controller[152344]: 2025-10-02T08:31:24Z|00588|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.322 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e3507cf-e1b2-456e-8cff-b075c2a55621.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e3507cf-e1b2-456e-8cff-b075c2a55621.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.324 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[35d5538e-f9e2-463e-8d3b-6492e33ae80e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.326 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/1e3507cf-e1b2-456e-8cff-b075c2a55621.pid.haproxy
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.327 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'env', 'PROCESS_TAG=haproxy-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e3507cf-e1b2-456e-8cff-b075c2a55621.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.335 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.338 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.338 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.349 2 DEBUG nova.compute.manager [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.396 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_requests' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.406 2 DEBUG nova.network.neutron [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.408 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.422 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'resources' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.424 2 INFO nova.compute.manager [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Took 2.52 seconds to deallocate network for instance.
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.436 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.445 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.449 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.470 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.471 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.578 2 DEBUG oslo_concurrency.processutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.631 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.632 2 DEBUG oslo_concurrency.lockutils [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.632 2 DEBUG oslo_concurrency.lockutils [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.632 2 DEBUG oslo_concurrency.lockutils [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.632 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.632 2 WARNING nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 for instance with vm_state deleted and task_state None.
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.633 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-deleted-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.633 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-deleted-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.633 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-deleted-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.662 2 DEBUG nova.compute.manager [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.662 2 DEBUG oslo_concurrency.lockutils [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.662 2 DEBUG oslo_concurrency.lockutils [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.662 2 DEBUG oslo_concurrency.lockutils [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.663 2 DEBUG nova.compute.manager [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Processing event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:24 compute-0 podman[324779]: 2025-10-02 08:31:24.787373399 +0000 UTC m=+0.064277266 container create 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:31:24 compute-0 podman[324779]: 2025-10-02 08:31:24.755129238 +0000 UTC m=+0.032033125 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:24 compute-0 systemd[1]: Started libpod-conmon-1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34.scope.
Oct 02 08:31:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61431590530a1e10271e0cde98940632ec3d957e216da4e4fc4080e4734f710/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:24 compute-0 ceph-mon[74477]: pgmap v1515: 305 pgs: 305 active+clean; 197 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 267 op/s
Oct 02 08:31:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1051485142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:24 compute-0 podman[324779]: 2025-10-02 08:31:24.890063135 +0000 UTC m=+0.166967042 container init 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 08:31:24 compute-0 podman[324779]: 2025-10-02 08:31:24.894993288 +0000 UTC m=+0.171897175 container start 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 08:31:24 compute-0 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [NOTICE]   (324817) : New worker (324819) forked
Oct 02 08:31:24 compute-0 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [NOTICE]   (324817) : Loading success.
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.928 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393884.9278488, 7ac34b0c-8ced-417d-9442-8fda77804a34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.928 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] VM Started (Lifecycle Event)
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.931 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.934 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.937 2 INFO nova.virt.libvirt.driver [-] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance spawned successfully.
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.937 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.950 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.955 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.958 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.958 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.959 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.959 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.959 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.960 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.987 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.988 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393884.9280937, 7ac34b0c-8ced-417d-9442-8fda77804a34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:24 compute-0 nova_compute[260603]: 2025-10-02 08:31:24.988 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] VM Paused (Lifecycle Event)
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.009 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.015 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393884.9335034, 7ac34b0c-8ced-417d-9442-8fda77804a34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.015 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] VM Resumed (Lifecycle Event)
Oct 02 08:31:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 181 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 258 op/s
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.021 2 INFO nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Took 10.04 seconds to spawn the instance on the hypervisor.
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.021 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.034 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.037 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.059 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835659606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.093 2 INFO nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Took 11.21 seconds to build instance.
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.096 2 DEBUG oslo_concurrency.processutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.103 2 DEBUG nova.compute.provider_tree [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.116 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.134 2 DEBUG nova.scheduler.client.report [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.164 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.198 2 INFO nova.scheduler.client.report [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Deleted allocations for instance 501f8cba-892f-489d-81b5-abb8669f49eb
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.267 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:25 compute-0 nova_compute[260603]: 2025-10-02 08:31:25.314 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:25 compute-0 ovn_controller[152344]: 2025-10-02T08:31:25Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:cc:f7 10.100.0.9
Oct 02 08:31:25 compute-0 ovn_controller[152344]: 2025-10-02T08:31:25Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:cc:f7 10.100.0.9
Oct 02 08:31:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2835659606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:26 compute-0 nova_compute[260603]: 2025-10-02 08:31:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:31:26 compute-0 ceph-mon[74477]: pgmap v1516: 305 pgs: 305 active+clean; 181 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 258 op/s
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 181 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 254 op/s
Oct 02 08:31:27 compute-0 podman[324830]: 2025-10-02 08:31:27.046084594 +0000 UTC m=+0.096345150 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:31:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.387 2 DEBUG nova.compute.manager [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.388 2 DEBUG oslo_concurrency.lockutils [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.388 2 DEBUG oslo_concurrency.lockutils [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.388 2 DEBUG oslo_concurrency.lockutils [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.388 2 DEBUG nova.compute.manager [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] No waiting events found dispatching network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.389 2 WARNING nova.compute.manager [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received unexpected event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 for instance with vm_state active and task_state None.
Oct 02 08:31:27 compute-0 ovn_controller[152344]: 2025-10-02T08:31:27Z|00589|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:31:27 compute-0 ovn_controller[152344]: 2025-10-02T08:31:27Z|00590|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:31:27 compute-0 ovn_controller[152344]: 2025-10-02T08:31:27Z|00591|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:31:27 compute-0 nova_compute[260603]: 2025-10-02 08:31:27.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:31:27
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'images']
Oct 02 08:31:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:31:28 compute-0 ovn_controller[152344]: 2025-10-02T08:31:28Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:e9:fe 10.100.0.10
Oct 02 08:31:28 compute-0 ovn_controller[152344]: 2025-10-02T08:31:28Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:e9:fe 10.100.0.10
Oct 02 08:31:28 compute-0 ceph-mon[74477]: pgmap v1517: 305 pgs: 305 active+clean; 181 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 254 op/s
Oct 02 08:31:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 236 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 6.0 MiB/s wr, 434 op/s
Oct 02 08:31:30 compute-0 ceph-mon[74477]: pgmap v1518: 305 pgs: 305 active+clean; 236 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 6.0 MiB/s wr, 434 op/s
Oct 02 08:31:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 236 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Oct 02 08:31:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.138 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.138 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.156 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.246 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.247 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.252 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.253 2 INFO nova.compute.claims [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.412 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846241726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.892 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.899 2 DEBUG nova.compute.provider_tree [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:32 compute-0 ceph-mon[74477]: pgmap v1519: 305 pgs: 305 active+clean; 236 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Oct 02 08:31:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/846241726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.923 2 DEBUG nova.scheduler.client.report [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.947 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:32 compute-0 nova_compute[260603]: 2025-10-02 08:31:32.948 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.004 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.004 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:31:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 244 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 234 op/s
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.035 2 INFO nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.091 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.209 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.213 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.214 2 INFO nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Creating image(s)
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.252 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.291 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.329 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.334 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.436 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.438 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.440 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.440 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.472 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.476 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.704 2 DEBUG nova.policy [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.713 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.763 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.850 2 DEBUG nova.objects.instance [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 24339ad4-fec2-43f8-8da3-5e433206a1cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.868 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.868 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Ensure instance console log exists: /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.868 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.869 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:33 compute-0 nova_compute[260603]: 2025-10-02 08:31:33.869 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:34 compute-0 nova_compute[260603]: 2025-10-02 08:31:34.512 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:31:34 compute-0 nova_compute[260603]: 2025-10-02 08:31:34.552 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Successfully created port: a246f438-6334-440d-931b-f177dc6cadd6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:31:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:34.817 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:34.818 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:34.819 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:34 compute-0 ceph-mon[74477]: pgmap v1520: 305 pgs: 305 active+clean; 244 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 234 op/s
Oct 02 08:31:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Oct 02 08:31:35 compute-0 nova_compute[260603]: 2025-10-02 08:31:35.866 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Successfully updated port: a246f438-6334-440d-931b-f177dc6cadd6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:35 compute-0 nova_compute[260603]: 2025-10-02 08:31:35.898 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:35 compute-0 nova_compute[260603]: 2025-10-02 08:31:35.899 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:35 compute-0 nova_compute[260603]: 2025-10-02 08:31:35.900 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:31:35 compute-0 ceph-mon[74477]: pgmap v1521: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.111 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.320 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393881.3189375, 501f8cba-892f-489d-81b5-abb8669f49eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.321 2 INFO nova.compute.manager [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] VM Stopped (Lifecycle Event)
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.345 2 DEBUG nova.compute.manager [None req-e3181f8c-8480-464a-8e20-1e364f78e91f - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.811 2 DEBUG nova.compute.manager [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received event network-changed-a246f438-6334-440d-931b-f177dc6cadd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.811 2 DEBUG nova.compute.manager [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Refreshing instance network info cache due to event network-changed-a246f438-6334-440d-931b-f177dc6cadd6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.812 2 DEBUG oslo_concurrency.lockutils [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:36 compute-0 kernel: tap9835ad6c-8e (unregistering): left promiscuous mode
Oct 02 08:31:36 compute-0 NetworkManager[45129]: <info>  [1759393896.8362] device (tap9835ad6c-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:36 compute-0 ovn_controller[152344]: 2025-10-02T08:31:36Z|00592|binding|INFO|Releasing lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 from this chassis (sb_readonly=0)
Oct 02 08:31:36 compute-0 ovn_controller[152344]: 2025-10-02T08:31:36Z|00593|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 down in Southbound
Oct 02 08:31:36 compute-0 ovn_controller[152344]: 2025-10-02T08:31:36Z|00594|binding|INFO|Removing iface tap9835ad6c-8e ovn-installed in OVS
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.856 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:e9:fe 10.100.0.10'], port_security=['fa:16:3e:5b:e9:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9835ad6c-8ea8-4a79-8f07-042186ea7c71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.857 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9835ad6c-8ea8-4a79-8f07-042186ea7c71 in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis
Oct 02 08:31:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.858 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8df0af1-1767-419a-8500-c28fbf45ae4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:31:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.860 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b132e394-c7ca-416f-97a7-bf7a1976a5b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.861 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace which is not needed anymore
Oct 02 08:31:36 compute-0 nova_compute[260603]: 2025-10-02 08:31:36.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:36 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 02 08:31:36 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Consumed 13.036s CPU time.
Oct 02 08:31:36 compute-0 systemd-machined[214636]: Machine qemu-70-instance-0000003f terminated.
Oct 02 08:31:36 compute-0 ovn_controller[152344]: 2025-10-02T08:31:36Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:fa:94 10.100.0.3
Oct 02 08:31:36 compute-0 ovn_controller[152344]: 2025-10-02T08:31:36Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:fa:94 10.100.0.3
Oct 02 08:31:36 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [NOTICE]   (324199) : haproxy version is 2.8.14-c23fe91
Oct 02 08:31:36 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [NOTICE]   (324199) : path to executable is /usr/sbin/haproxy
Oct 02 08:31:36 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [WARNING]  (324199) : Exiting Master process...
Oct 02 08:31:36 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [ALERT]    (324199) : Current worker (324201) exited with code 143 (Terminated)
Oct 02 08:31:36 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [WARNING]  (324199) : All workers exited. Exiting... (0)
Oct 02 08:31:36 compute-0 systemd[1]: libpod-f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0.scope: Deactivated successfully.
Oct 02 08:31:37 compute-0 podman[325060]: 2025-10-02 08:31:37.004431724 +0000 UTC m=+0.047208227 container died f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.015 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.016 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Oct 02 08:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0-userdata-shm.mount: Deactivated successfully.
Oct 02 08:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b436514243685187aa6cdcdd36d5f1493da17b7b370783efb1b916d0e9b66972-merged.mount: Deactivated successfully.
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.038 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:31:37 compute-0 podman[325060]: 2025-10-02 08:31:37.048933725 +0000 UTC m=+0.091710228 container cleanup f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:31:37 compute-0 systemd[1]: libpod-conmon-f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0.scope: Deactivated successfully.
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.116 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.117 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.123 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.124 2 INFO nova.compute.claims [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:31:37 compute-0 podman[325091]: 2025-10-02 08:31:37.134170918 +0000 UTC m=+0.057648139 container remove f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.141 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f46476-c3d6-4a9b-8fc3-5db64f4828ed]: (4, ('Thu Oct  2 08:31:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0)\nf67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0\nThu Oct  2 08:31:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0)\nf67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.143 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42d1409d-f2ea-44ef-a6d0-285ccbd4ac45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.144 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:37 compute-0 kernel: tapf8df0af1-10: left promiscuous mode
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.169 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea363ee-cefa-44d8-b663-5665f90bba8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.193 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2460229f-fea2-4034-8e8e-a3d6ff8674fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[34301e52-6a75-443f-adad-e0aded512e51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.218 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[74cf0b9c-afb7-43f5-be8e-7ba707e14d60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475363, 'reachable_time': 34562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325116, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:37 compute-0 systemd[1]: run-netns-ovnmeta\x2df8df0af1\x2d1767\x2d419a\x2d8500\x2dc28fbf45ae4b.mount: Deactivated successfully.
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.222 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:31:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.222 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ecf796-322e-4e51-8926-89755515adb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.247 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Updating instance_info_cache with network_info: [{"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.263 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.263 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Instance network_info: |[{"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.264 2 DEBUG oslo_concurrency.lockutils [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.264 2 DEBUG nova.network.neutron [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Refreshing network info cache for port a246f438-6334-440d-931b-f177dc6cadd6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.266 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start _get_guest_xml network_info=[{"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.272 2 WARNING nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.278 2 DEBUG nova.virt.libvirt.host [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.279 2 DEBUG nova.virt.libvirt.host [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.284 2 DEBUG nova.virt.libvirt.host [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.285 2 DEBUG nova.virt.libvirt.host [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.285 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.285 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.287 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.287 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.287 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.288 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.288 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.290 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.316 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.372 2 DEBUG nova.compute.manager [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.425 2 DEBUG nova.compute.manager [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.425 2 DEBUG oslo_concurrency.lockutils [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.425 2 DEBUG oslo_concurrency.lockutils [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.426 2 DEBUG oslo_concurrency.lockutils [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.426 2 DEBUG nova.compute.manager [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.426 2 WARNING nova.compute.manager [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received unexpected event network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with vm_state active and task_state rebuilding.
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.428 2 INFO nova.compute.manager [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] instance snapshotting
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.429 2 DEBUG nova.objects.instance [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'flavor' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.529 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance shutdown successfully after 13 seconds.
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.536 2 INFO nova.virt.libvirt.driver [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance destroyed successfully.
Oct 02 08:31:37 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.573 2 INFO nova.virt.libvirt.driver [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance destroyed successfully.
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.574 2 DEBUG nova.virt.libvirt.vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:23Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.575 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.576 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.576 2 DEBUG os_vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9835ad6c-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.585 2 INFO os_vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e')
Oct 02 08:31:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435144527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.739 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.766 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.770 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753295579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.804 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.808 2 INFO nova.virt.libvirt.driver [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Beginning live snapshot process
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.814 2 DEBUG nova.compute.provider_tree [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.855 2 DEBUG nova.scheduler.client.report [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.883 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.883 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.945 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deleting instance files /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_del
Oct 02 08:31:37 compute-0 nova_compute[260603]: 2025-10-02 08:31:37.946 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deletion of /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_del complete
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.002 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.002 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.012 2 DEBUG nova.virt.libvirt.imagebackend [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.049 2 INFO nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.078 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:31:38 compute-0 ceph-mon[74477]: pgmap v1522: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Oct 02 08:31:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2435144527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2753295579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.163 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.164 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating image(s)
Oct 02 08:31:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3293481748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.193 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.218 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.241 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.245 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.274 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.275 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.276 2 INFO nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Creating image(s)
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.295 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.318 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.350 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.357 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.397 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(823eb30c50ed45c796f53778c84fb7ca) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.449 2 DEBUG nova.policy [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'db9a3b1e6d93495f8c849658ffc4e535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.458 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.462 2 DEBUG nova.virt.libvirt.vif [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-26328652',display_name='tempest-ServersTestJSON-server-26328652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-26328652',id=65,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-j0hil4jt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:33Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=24339ad4-fec2-43f8-8da3-5e433206a1cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.463 2 DEBUG nova.network.os_vif_util [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.465 2 DEBUG nova.network.os_vif_util [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.467 2 DEBUG nova.objects.instance [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24339ad4-fec2-43f8-8da3-5e433206a1cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.469 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.470 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.472 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.473 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.474 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.500 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.504 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.542 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.544 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.545 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018637800555121444 of space, bias 1.0, pg target 0.5591340166536433 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:31:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.572 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.576 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49564059-b2ef-4053-bedd-56a9afb53d2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.631 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <uuid>24339ad4-fec2-43f8-8da3-5e433206a1cc</uuid>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <name>instance-00000041</name>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestJSON-server-26328652</nova:name>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:37</nova:creationTime>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <nova:port uuid="a246f438-6334-440d-931b-f177dc6cadd6">
Oct 02 08:31:38 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <entry name="serial">24339ad4-fec2-43f8-8da3-5e433206a1cc</entry>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <entry name="uuid">24339ad4-fec2-43f8-8da3-5e433206a1cc</entry>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/24339ad4-fec2-43f8-8da3-5e433206a1cc_disk">
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config">
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:16:eb:75"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <target dev="tapa246f438-63"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/console.log" append="off"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:38 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:38 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:38 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:38 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:38 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.632 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Preparing to wait for external event network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.632 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.633 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.633 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.634 2 DEBUG nova.virt.libvirt.vif [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-26328652',display_name='tempest-ServersTestJSON-server-26328652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-26328652',id=65,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-j0hil4jt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:33Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=24339ad4-fec2-43f8-8da3-5e433206a1cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.635 2 DEBUG nova.network.os_vif_util [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.636 2 DEBUG nova.network.os_vif_util [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.636 2 DEBUG os_vif [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa246f438-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa246f438-63, col_values=(('external_ids', {'iface-id': 'a246f438-6334-440d-931b-f177dc6cadd6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:eb:75', 'vm-uuid': '24339ad4-fec2-43f8-8da3-5e433206a1cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:38 compute-0 NetworkManager[45129]: <info>  [1759393898.6486] manager: (tapa246f438-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.661 2 INFO os_vif [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63')
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.729 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.729 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.729 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:16:eb:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.730 2 INFO nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Using config drive
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.754 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.816 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.901 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:31:38 compute-0 nova_compute[260603]: 2025-10-02 08:31:38.935 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49564059-b2ef-4053-bedd-56a9afb53d2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.016 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] resizing rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:31:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 301 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.2 MiB/s wr, 313 op/s
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.047 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.047 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Ensure instance console log exists: /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.048 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.048 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.048 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.050 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start _get_guest_xml network_info=[{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.053 2 WARNING nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.065 2 DEBUG nova.virt.libvirt.host [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.066 2 DEBUG nova.virt.libvirt.host [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.068 2 DEBUG nova.virt.libvirt.host [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.069 2 DEBUG nova.virt.libvirt.host [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.069 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.069 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.069 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.071 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.071 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.071 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.071 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 02 08:31:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3293481748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 02 08:31:39 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.111 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.148 2 DEBUG nova.objects.instance [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 49564059-b2ef-4053-bedd-56a9afb53d2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.167 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.168 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Ensure instance console log exists: /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.168 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.169 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.169 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.176 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] cloning vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk@823eb30c50ed45c796f53778c84fb7ca to images/14d99014-bbf1-4f02-9b3b-cd6971c554c9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.279 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] flattening images/14d99014-bbf1-4f02-9b3b-cd6971c554c9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:31:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926281068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.622 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.653 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.658 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.736 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] removing snapshot(823eb30c50ed45c796f53778c84fb7ca) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.902 2 DEBUG nova.compute.manager [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.904 2 DEBUG oslo_concurrency.lockutils [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.904 2 DEBUG oslo_concurrency.lockutils [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.905 2 DEBUG oslo_concurrency.lockutils [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.905 2 DEBUG nova.compute.manager [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:39 compute-0 nova_compute[260603]: 2025-10-02 08:31:39.906 2 WARNING nova.compute.manager [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received unexpected event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:31:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 02 08:31:40 compute-0 ceph-mon[74477]: pgmap v1523: 305 pgs: 305 active+clean; 301 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.2 MiB/s wr, 313 op/s
Oct 02 08:31:40 compute-0 ceph-mon[74477]: osdmap e208: 3 total, 3 up, 3 in
Oct 02 08:31:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/926281068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 02 08:31:40 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.141 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(snap) on rbd image(14d99014-bbf1-4f02-9b3b-cd6971c554c9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:31:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092238596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.193 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.195 2 DEBUG nova.virt.libvirt.vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:38Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.196 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.197 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.201 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <uuid>07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</uuid>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <name>instance-0000003f</name>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-361192799</nova:name>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:39</nova:creationTime>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <nova:user uuid="116b114f14f84e4cbd6cc966e29d82e7">tempest-ServerDiskConfigTestJSON-1277806880-project-member</nova:user>
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <nova:project uuid="bce7493292bb47cfb7168bca89f78f4a">tempest-ServerDiskConfigTestJSON-1277806880</nova:project>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <nova:port uuid="9835ad6c-8ea8-4a79-8f07-042186ea7c71">
Oct 02 08:31:40 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <entry name="serial">07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</entry>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <entry name="uuid">07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</entry>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk">
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config">
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:40 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:5b:e9:fe"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <target dev="tap9835ad6c-8e"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/console.log" append="off"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:40 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:40 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:40 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:40 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:40 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.202 2 DEBUG nova.compute.manager [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Preparing to wait for external event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.203 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.204 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.204 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.206 2 DEBUG nova.virt.libvirt.vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:38Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.207 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.208 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.208 2 DEBUG os_vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.210 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9835ad6c-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.215 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9835ad6c-8e, col_values=(('external_ids', {'iface-id': '9835ad6c-8ea8-4a79-8f07-042186ea7c71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:e9:fe', 'vm-uuid': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:40 compute-0 NetworkManager[45129]: <info>  [1759393900.2186] manager: (tap9835ad6c-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.226 2 INFO os_vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e')
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.278 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.278 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.279 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No VIF found with MAC fa:16:3e:5b:e9:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.279 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Using config drive
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.311 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.349 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.378 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'keypairs' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:40 compute-0 rsyslogd[1004]: imjournal from <np0005465604:nova_compute>: begin to drop messages due to rate-limiting
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.727 2 INFO nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Creating config drive at /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/disk.config
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.735 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxw7i1kn9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.892 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxw7i1kn9" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.925 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:40 compute-0 nova_compute[260603]: 2025-10-02 08:31:40.931 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/disk.config 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 301 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 5.9 MiB/s wr, 171 op/s
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.035 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating config drive at /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.054 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplj7yj88u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.122 2 DEBUG nova.network.neutron [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Updated VIF entry in instance network info cache for port a246f438-6334-440d-931b-f177dc6cadd6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.124 2 DEBUG nova.network.neutron [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Updating instance_info_cache with network_info: [{"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:41 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.127 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Successfully created port: 079652d3-bf75-4bfc-9a4d-f208f0313a7d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:31:41 compute-0 ceph-mon[74477]: osdmap e209: 3 total, 3 up, 3 in
Oct 02 08:31:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2092238596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.141 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/disk.config 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.142 2 INFO nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Deleting local config drive /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/disk.config because it was imported into RBD.
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.151 2 DEBUG oslo_concurrency.lockutils [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.2044] manager: (tapa246f438-63): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Oct 02 08:31:41 compute-0 kernel: tapa246f438-63: entered promiscuous mode
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00595|binding|INFO|Claiming lport a246f438-6334-440d-931b-f177dc6cadd6 for this chassis.
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00596|binding|INFO|a246f438-6334-440d-931b-f177dc6cadd6: Claiming fa:16:3e:16:eb:75 10.100.0.10
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.226 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:eb:75 10.100.0.10'], port_security=['fa:16:3e:16:eb:75 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '24339ad4-fec2-43f8-8da3-5e433206a1cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a246f438-6334-440d-931b-f177dc6cadd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00597|binding|INFO|Setting lport a246f438-6334-440d-931b-f177dc6cadd6 ovn-installed in OVS
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00598|binding|INFO|Setting lport a246f438-6334-440d-931b-f177dc6cadd6 up in Southbound
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.227 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a246f438-6334-440d-931b-f177dc6cadd6 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 bound to our chassis
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.229 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.232 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplj7yj88u" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.246 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11ed12df-fcfd-4ab0-8b0f-eb936ccc24b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 systemd-machined[214636]: New machine qemu-72-instance-00000041.
Oct 02 08:31:41 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-00000041.
Oct 02 08:31:41 compute-0 systemd-udevd[325874]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.280 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecd3312-896f-4e05-9163-f08ed7df5978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.285 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[28e5abae-900c-48ca-9441-cc9fc5ada361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.287 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.2952] device (tapa246f438-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.2974] device (tapa246f438-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.304 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.318 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b3072b-f97a-4fd3-9703-9732302fefdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.339 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6c0e83-6f63-4dff-a921-51ad9c747b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325887, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.357 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e88a7df1-19dd-4076-9918-439d25c4c4ce]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325888, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325888, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.359 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.366 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.366 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.367 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.367 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.434 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.435 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deleting local config drive /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config because it was imported into RBD.
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.4812] manager: (tap9835ad6c-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Oct 02 08:31:41 compute-0 kernel: tap9835ad6c-8e: entered promiscuous mode
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.484 2 DEBUG nova.compute.manager [req-768e8246-caee-453a-8a9b-084ccdc0e003 req-49059f53-70e2-4897-b9fb-c3aa55d80d01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received event network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.485 2 DEBUG oslo_concurrency.lockutils [req-768e8246-caee-453a-8a9b-084ccdc0e003 req-49059f53-70e2-4897-b9fb-c3aa55d80d01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.485 2 DEBUG oslo_concurrency.lockutils [req-768e8246-caee-453a-8a9b-084ccdc0e003 req-49059f53-70e2-4897-b9fb-c3aa55d80d01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.485 2 DEBUG oslo_concurrency.lockutils [req-768e8246-caee-453a-8a9b-084ccdc0e003 req-49059f53-70e2-4897-b9fb-c3aa55d80d01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.486 2 DEBUG nova.compute.manager [req-768e8246-caee-453a-8a9b-084ccdc0e003 req-49059f53-70e2-4897-b9fb-c3aa55d80d01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Processing event network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.5165] device (tap9835ad6c-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.5172] device (tap9835ad6c-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00599|binding|INFO|Claiming lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 for this chassis.
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00600|binding|INFO|9835ad6c-8ea8-4a79-8f07-042186ea7c71: Claiming fa:16:3e:5b:e9:fe 10.100.0.10
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.527 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:e9:fe 10.100.0.10'], port_security=['fa:16:3e:5b:e9:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9835ad6c-8ea8-4a79-8f07-042186ea7c71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.529 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9835ad6c-8ea8-4a79-8f07-042186ea7c71 in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b bound to our chassis
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.530 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00601|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 ovn-installed in OVS
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00602|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 up in Southbound
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 systemd-machined[214636]: New machine qemu-73-instance-0000003f.
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.543 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a069c7-0054-4097-93a3-63f5d3453cf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.547 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8df0af1-11 in ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.549 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8df0af1-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.549 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84d98751-813f-4d12-8d0f-a35acb558ff7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99e9ff5a-285f-4199-8408-9321778ce371]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-0000003f.
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.565 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e22a9f-a254-4ff0-83fc-68ae10de0a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.592 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2f24a45e-0263-4745-9f92-1c8ae30e57f9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.624 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[41acc096-5d22-432d-ade1-4e258e5c4051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.6304] manager: (tapf8df0af1-10): new Veth device (/org/freedesktop/NetworkManager/Devices/259)
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.629 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c672d1e0-7020-4cdb-8baa-714da2631737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.660 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5394da31-3fac-4415-b68f-de4077ea433a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.664 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[67362700-7863-424e-9971-e2a9fe9b8c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.6889] device (tapf8df0af1-10): carrier: link connected
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.696 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fddcf26-3447-4f4c-aa08-833352bc746c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4a5fe0-1984-406c-8def-c174ce6db5cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478029, 'reachable_time': 19243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325994, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.740 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba591fff-8f0e-4348-9cbe-8eba735f26b9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:6ddc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478029, 'tstamp': 478029}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325995, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.759 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a34750-8241-4d8f-ad07-db066427179c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478029, 'reachable_time': 19243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325996, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.793 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[188c6018-2bfd-414f-8b78-f924f3151611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.850 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[40e9df1a-ccf0-4671-93e8-c6de9ad890f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.851 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.852 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.853 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8df0af1-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 NetworkManager[45129]: <info>  [1759393901.8568] manager: (tapf8df0af1-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Oct 02 08:31:41 compute-0 kernel: tapf8df0af1-10: entered promiscuous mode
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.860 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8df0af1-10, col_values=(('external_ids', {'iface-id': '1405e724-f2f6-4a95-8848-550131e62910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:41 compute-0 ovn_controller[152344]: 2025-10-02T08:31:41Z|00603|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.891 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.892 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25ae2d40-f239-40b3-8782-0f4c3b126b7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.893 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:41.894 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'env', 'PROCESS_TAG=haproxy-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8df0af1-1767-419a-8500-c28fbf45ae4b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:41 compute-0 nova_compute[260603]: 2025-10-02 08:31:41.916 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Successfully created port: 11588b0e-a5d2-490e-b72a-3d31e1d091e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:31:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:42 compute-0 ceph-mon[74477]: pgmap v1526: 305 pgs: 305 active+clean; 301 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 5.9 MiB/s wr, 171 op/s
Oct 02 08:31:42 compute-0 ceph-mon[74477]: osdmap e210: 3 total, 3 up, 3 in
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.142 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393902.1418715, 24339ad4-fec2-43f8-8da3-5e433206a1cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.142 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] VM Started (Lifecycle Event)
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.144 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.148 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.151 2 INFO nova.virt.libvirt.driver [-] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Instance spawned successfully.
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.151 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.178 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.182 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.182 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.182 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.183 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.183 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.184 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.198 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.234 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.235 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393902.142082, 24339ad4-fec2-43f8-8da3-5e433206a1cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.235 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] VM Paused (Lifecycle Event)
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.257 2 INFO nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Took 9.05 seconds to spawn the instance on the hypervisor.
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.257 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.259 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.266 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393902.1469524, 24339ad4-fec2-43f8-8da3-5e433206a1cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.266 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] VM Resumed (Lifecycle Event)
Oct 02 08:31:42 compute-0 podman[326070]: 2025-10-02 08:31:42.277205573 +0000 UTC m=+0.046116162 container create 36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.308 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.311 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:42 compute-0 systemd[1]: Started libpod-conmon-36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2.scope.
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.323 2 INFO nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Took 10.10 seconds to build instance.
Oct 02 08:31:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:42 compute-0 podman[326070]: 2025-10-02 08:31:42.252071463 +0000 UTC m=+0.020982082 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ee2fd29e24499fcf75214ee1c28d07816e18d4cd96133033eaa3492f94e022/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.354 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:42 compute-0 podman[326070]: 2025-10-02 08:31:42.372379656 +0000 UTC m=+0.141290295 container init 36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:31:42 compute-0 podman[326070]: 2025-10-02 08:31:42.377386001 +0000 UTC m=+0.146296630 container start 36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:31:42 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[326085]: [NOTICE]   (326089) : New worker (326091) forked
Oct 02 08:31:42 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[326085]: [NOTICE]   (326089) : Loading success.
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.425 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.426 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393902.425281, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.426 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Started (Lifecycle Event)
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.447 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.450 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393902.4261875, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.450 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Paused (Lifecycle Event)
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.466 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.468 2 DEBUG nova.compute.manager [req-aee55332-5013-4ed2-9aa9-fa37759a119c req-43982a83-f1d2-452a-87c4-67062e9c01f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.468 2 DEBUG oslo_concurrency.lockutils [req-aee55332-5013-4ed2-9aa9-fa37759a119c req-43982a83-f1d2-452a-87c4-67062e9c01f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.468 2 DEBUG oslo_concurrency.lockutils [req-aee55332-5013-4ed2-9aa9-fa37759a119c req-43982a83-f1d2-452a-87c4-67062e9c01f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.469 2 DEBUG oslo_concurrency.lockutils [req-aee55332-5013-4ed2-9aa9-fa37759a119c req-43982a83-f1d2-452a-87c4-67062e9c01f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.469 2 DEBUG nova.compute.manager [req-aee55332-5013-4ed2-9aa9-fa37759a119c req-43982a83-f1d2-452a-87c4-67062e9c01f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Processing event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.471 2 DEBUG nova.compute.manager [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.473 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.473 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393902.4728496, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.474 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Resumed (Lifecycle Event)
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.480 2 INFO nova.virt.libvirt.driver [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance spawned successfully.
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.480 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.510 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.515 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.519 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.520 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.520 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.521 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.521 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.522 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.561 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.599 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Successfully updated port: 079652d3-bf75-4bfc-9a4d-f208f0313a7d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.602 2 DEBUG nova.compute.manager [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.672 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.673 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.673 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.742 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.850 2 INFO nova.virt.libvirt.driver [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Snapshot image upload complete
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.850 2 INFO nova.compute.manager [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 5.38 seconds to snapshot the instance on the hypervisor.
Oct 02 08:31:42 compute-0 nova_compute[260603]: 2025-10-02 08:31:42.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 363 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 18 MiB/s wr, 448 op/s
Oct 02 08:31:43 compute-0 nova_compute[260603]: 2025-10-02 08:31:43.174 2 DEBUG nova.compute.manager [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 02 08:31:43 compute-0 nova_compute[260603]: 2025-10-02 08:31:43.906 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Successfully updated port: 11588b0e-a5d2-490e-b72a-3d31e1d091e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:43 compute-0 nova_compute[260603]: 2025-10-02 08:31:43.931 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:43 compute-0 nova_compute[260603]: 2025-10-02 08:31:43.931 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquired lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:43 compute-0 nova_compute[260603]: 2025-10-02 08:31:43.932 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.028 2 DEBUG nova.compute.manager [req-b01e30d0-ebe5-4bbe-8f41-e60e97e53fcb req-dac9e0e3-4fb5-43b9-8315-a464dfda8a6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received event network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.029 2 DEBUG oslo_concurrency.lockutils [req-b01e30d0-ebe5-4bbe-8f41-e60e97e53fcb req-dac9e0e3-4fb5-43b9-8315-a464dfda8a6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.030 2 DEBUG oslo_concurrency.lockutils [req-b01e30d0-ebe5-4bbe-8f41-e60e97e53fcb req-dac9e0e3-4fb5-43b9-8315-a464dfda8a6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.031 2 DEBUG oslo_concurrency.lockutils [req-b01e30d0-ebe5-4bbe-8f41-e60e97e53fcb req-dac9e0e3-4fb5-43b9-8315-a464dfda8a6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.031 2 DEBUG nova.compute.manager [req-b01e30d0-ebe5-4bbe-8f41-e60e97e53fcb req-dac9e0e3-4fb5-43b9-8315-a464dfda8a6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] No waiting events found dispatching network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.032 2 WARNING nova.compute.manager [req-b01e30d0-ebe5-4bbe-8f41-e60e97e53fcb req-dac9e0e3-4fb5-43b9-8315-a464dfda8a6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received unexpected event network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 for instance with vm_state active and task_state None.
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.099 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:44 compute-0 ceph-mon[74477]: pgmap v1528: 305 pgs: 305 active+clean; 363 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 18 MiB/s wr, 448 op/s
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.890 2 DEBUG nova.compute.manager [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:44 compute-0 sudo[326100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:44 compute-0 sudo[326100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:44 compute-0 sudo[326100]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.936 2 INFO nova.compute.manager [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] instance snapshotting
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.937 2 DEBUG nova.objects.instance [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'flavor' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.969 2 DEBUG nova.compute.manager [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.969 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.969 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.970 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.970 2 DEBUG nova.compute.manager [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.970 2 WARNING nova.compute.manager [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received unexpected event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with vm_state active and task_state None.
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.971 2 DEBUG nova.compute.manager [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-changed-079652d3-bf75-4bfc-9a4d-f208f0313a7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.971 2 DEBUG nova.compute.manager [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Refreshing instance network info cache due to event network-changed-079652d3-bf75-4bfc-9a4d-f208f0313a7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:44 compute-0 nova_compute[260603]: 2025-10-02 08:31:44.972 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:45 compute-0 sudo[326125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:31:45 compute-0 sudo[326125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:45 compute-0 sudo[326125]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 418 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 15 MiB/s wr, 396 op/s
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.103 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.103 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.104 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.104 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.104 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.105 2 INFO nova.compute.manager [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Terminating instance
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.106 2 DEBUG nova.compute.manager [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:31:45 compute-0 sudo[326150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:45 compute-0 sudo[326150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:45 compute-0 sudo[326150]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:45 compute-0 kernel: tapa246f438-63 (unregistering): left promiscuous mode
Oct 02 08:31:45 compute-0 NetworkManager[45129]: <info>  [1759393905.1902] device (tapa246f438-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.207 2 INFO nova.virt.libvirt.driver [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Beginning live snapshot process
Oct 02 08:31:45 compute-0 ovn_controller[152344]: 2025-10-02T08:31:45Z|00604|binding|INFO|Releasing lport a246f438-6334-440d-931b-f177dc6cadd6 from this chassis (sb_readonly=0)
Oct 02 08:31:45 compute-0 ovn_controller[152344]: 2025-10-02T08:31:45Z|00605|binding|INFO|Setting lport a246f438-6334-440d-931b-f177dc6cadd6 down in Southbound
Oct 02 08:31:45 compute-0 ovn_controller[152344]: 2025-10-02T08:31:45Z|00606|binding|INFO|Removing iface tapa246f438-63 ovn-installed in OVS
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 sudo[326175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.218 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:eb:75 10.100.0.10'], port_security=['fa:16:3e:16:eb:75 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '24339ad4-fec2-43f8-8da3-5e433206a1cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a246f438-6334-440d-931b-f177dc6cadd6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:45 compute-0 sudo[326175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.220 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a246f438-6334-440d-931b-f177dc6cadd6 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.222 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.245 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b296b54-ded2-48d0-a3f4-8693b458b2a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000041.scope: Deactivated successfully.
Oct 02 08:31:45 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000041.scope: Consumed 3.643s CPU time.
Oct 02 08:31:45 compute-0 systemd-machined[214636]: Machine qemu-72-instance-00000041 terminated.
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.286 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fb40c03f-6688-41a2-ba5d-d33a3208dea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.290 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dd448f44-838b-41af-a83f-71b2a69d40c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 kernel: tapa246f438-63: entered promiscuous mode
Oct 02 08:31:45 compute-0 NetworkManager[45129]: <info>  [1759393905.3292] manager: (tapa246f438-63): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Oct 02 08:31:45 compute-0 kernel: tapa246f438-63 (unregistering): left promiscuous mode
Oct 02 08:31:45 compute-0 ovn_controller[152344]: 2025-10-02T08:31:45Z|00607|binding|INFO|Claiming lport a246f438-6334-440d-931b-f177dc6cadd6 for this chassis.
Oct 02 08:31:45 compute-0 ovn_controller[152344]: 2025-10-02T08:31:45Z|00608|binding|INFO|a246f438-6334-440d-931b-f177dc6cadd6: Claiming fa:16:3e:16:eb:75 10.100.0.10
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.333 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5decd1e1-4aa4-4ea7-99a8-2f0081e3aa47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.341 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:eb:75 10.100.0.10'], port_security=['fa:16:3e:16:eb:75 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '24339ad4-fec2-43f8-8da3-5e433206a1cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a246f438-6334-440d-931b-f177dc6cadd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:45 compute-0 ovn_controller[152344]: 2025-10-02T08:31:45Z|00609|binding|INFO|Releasing lport a246f438-6334-440d-931b-f177dc6cadd6 from this chassis (sb_readonly=0)
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.369 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:eb:75 10.100.0.10'], port_security=['fa:16:3e:16:eb:75 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '24339ad4-fec2-43f8-8da3-5e433206a1cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a246f438-6334-440d-931b-f177dc6cadd6) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.375 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5de395-42a1-4cbd-9ced-4d4d05b007f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326248, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.384 2 DEBUG nova.virt.libvirt.imagebackend [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.398 2 INFO nova.virt.libvirt.driver [-] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Instance destroyed successfully.
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.398 2 DEBUG nova.objects.instance [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'resources' on Instance uuid 24339ad4-fec2-43f8-8da3-5e433206a1cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.406 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[678fc42b-356a-4493-8a6b-de1769c54a0d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326259, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326259, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.408 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.417 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.418 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.418 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.418 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.419 2 DEBUG nova.virt.libvirt.vif [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-26328652',display_name='tempest-ServersTestJSON-server-26328652',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-26328652',id=65,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-j0hil4jt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:42Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=24339ad4-fec2-43f8-8da3-5e433206a1cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.419 2 DEBUG nova.network.os_vif_util [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.420 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a246f438-6334-440d-931b-f177dc6cadd6 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.421 2 DEBUG nova.network.os_vif_util [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.421 2 DEBUG os_vif [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.422 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa246f438-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.432 2 INFO os_vif [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63')
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.437 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[782d5d07-59b8-45b5-8421-92f89085afeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.479 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[561077bc-8b57-47f1-9787-9e5d2e8baac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.481 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaf7f09-0e1b-4d08-a172-ab182c203ea9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.525 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8e589a-2da6-40e2-a45e-d8250fcdce3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.554 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[934d7a20-dc24-4716-8721-b19c5c253ac9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326287, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.572 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[33438cf0-0934-40a1-bdbc-e006c5380f00]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326290, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326290, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.574 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.580 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.581 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.581 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.582 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.583 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a246f438-6334-440d-931b-f177dc6cadd6 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.585 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.609 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[951933ef-5ca0-49de-be62-e86aea858153]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.662 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[446183ea-dfb9-4bb4-8abe-97eabeb8a3ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.668 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e425102e-03da-4269-8a09-b1b7550f507c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.678 2 DEBUG nova.storage.rbd_utils [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(c0017c35dd5d4721828d5589cdb51aba) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.710 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7988af56-cb25-4d0a-96f6-7ec9135e8f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.730 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50428fec-6b69-4110-8314-65d3e98e525a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326317, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.753 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[faa1277a-4f13-477d-a007-699f07fcc7e3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326326, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326326, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.757 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 nova_compute[260603]: 2025-10-02 08:31:45.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:45.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:45 compute-0 sudo[326175]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:31:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:31:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:31:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:31:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:31:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:31:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 706023ef-f19b-4846-9f24-2ec5ebf86d29 does not exist
Oct 02 08:31:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f15ebb79-d66c-425e-8af7-20341d5387d2 does not exist
Oct 02 08:31:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ea761122-6f01-44da-9a8f-3b44fffd4deb does not exist
Oct 02 08:31:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:31:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:31:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:31:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:31:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:31:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:31:45 compute-0 sudo[326331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:45 compute-0 sudo[326331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:45 compute-0 sudo[326331]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.019 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Updating instance_info_cache with network_info: [{"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.053 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Releasing lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.054 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Instance network_info: |[{"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.056 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.057 2 DEBUG nova.network.neutron [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Refreshing network info cache for port 079652d3-bf75-4bfc-9a4d-f208f0313a7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:46 compute-0 sudo[326356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:31:46 compute-0 sudo[326356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.065 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Start _get_guest_xml network_info=[{"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:46 compute-0 sudo[326356]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.078 2 WARNING nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.087 2 DEBUG nova.virt.libvirt.host [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.088 2 DEBUG nova.virt.libvirt.host [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.100 2 INFO nova.virt.libvirt.driver [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Deleting instance files /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc_del
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.101 2 INFO nova.virt.libvirt.driver [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Deletion of /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc_del complete
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.106 2 DEBUG nova.virt.libvirt.host [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.107 2 DEBUG nova.virt.libvirt.host [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.107 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.108 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.108 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.109 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.109 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.109 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.110 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.110 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.110 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.111 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.111 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.111 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.115 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:46 compute-0 sudo[326381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:46 compute-0 sudo[326381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:46 compute-0 sudo[326381]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.156 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.157 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.157 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.157 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.157 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.159 2 INFO nova.compute.manager [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Terminating instance
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.160 2 DEBUG nova.compute.manager [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:31:46 compute-0 ceph-mon[74477]: pgmap v1529: 305 pgs: 305 active+clean; 418 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 15 MiB/s wr, 396 op/s
Oct 02 08:31:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:31:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:31:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:31:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:31:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:31:46 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.169 2 INFO nova.compute.manager [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Took 1.06 seconds to destroy the instance on the hypervisor.
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.169 2 DEBUG oslo.service.loopingcall [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.170 2 DEBUG nova.compute.manager [-] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.170 2 DEBUG nova.network.neutron [-] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:31:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 02 08:31:46 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.186 2 DEBUG nova.compute.manager [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received event network-vif-unplugged-a246f438-6334-440d-931b-f177dc6cadd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.186 2 DEBUG oslo_concurrency.lockutils [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.187 2 DEBUG oslo_concurrency.lockutils [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.187 2 DEBUG oslo_concurrency.lockutils [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.187 2 DEBUG nova.compute.manager [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] No waiting events found dispatching network-vif-unplugged-a246f438-6334-440d-931b-f177dc6cadd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.188 2 DEBUG nova.compute.manager [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received event network-vif-unplugged-a246f438-6334-440d-931b-f177dc6cadd6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.188 2 DEBUG nova.compute.manager [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received event network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.188 2 DEBUG oslo_concurrency.lockutils [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.189 2 DEBUG oslo_concurrency.lockutils [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.190 2 DEBUG oslo_concurrency.lockutils [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.190 2 DEBUG nova.compute.manager [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] No waiting events found dispatching network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.190 2 WARNING nova.compute.manager [req-a8fdeb99-d045-496b-b716-b13446137628 req-678209d5-dcb0-4569-ae64-2827ce734564 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received unexpected event network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 for instance with vm_state active and task_state deleting.
Oct 02 08:31:46 compute-0 kernel: tap9835ad6c-8e (unregistering): left promiscuous mode
Oct 02 08:31:46 compute-0 NetworkManager[45129]: <info>  [1759393906.2161] device (tap9835ad6c-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:46 compute-0 sudo[326407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:31:46 compute-0 sudo[326407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:46 compute-0 ovn_controller[152344]: 2025-10-02T08:31:46Z|00610|binding|INFO|Releasing lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 from this chassis (sb_readonly=0)
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:46 compute-0 ovn_controller[152344]: 2025-10-02T08:31:46Z|00611|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 down in Southbound
Oct 02 08:31:46 compute-0 ovn_controller[152344]: 2025-10-02T08:31:46Z|00612|binding|INFO|Removing iface tap9835ad6c-8e ovn-installed in OVS
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.271 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:e9:fe 10.100.0.10'], port_security=['fa:16:3e:5b:e9:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9835ad6c-8ea8-4a79-8f07-042186ea7c71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.273 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9835ad6c-8ea8-4a79-8f07-042186ea7c71 in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.274 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8df0af1-1767-419a-8500-c28fbf45ae4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.274 2 DEBUG nova.storage.rbd_utils [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] cloning vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk@c0017c35dd5d4721828d5589cdb51aba to images/2f3c6928-2844-418f-b682-99c3bbcc831b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.275 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e76321-298b-44cd-b61c-1ed2390550f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.278 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace which is not needed anymore
Oct 02 08:31:46 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 02 08:31:46 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003f.scope: Consumed 4.367s CPU time.
Oct 02 08:31:46 compute-0 systemd-machined[214636]: Machine qemu-73-instance-0000003f terminated.
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.397 2 INFO nova.virt.libvirt.driver [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance destroyed successfully.
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.398 2 DEBUG nova.objects.instance [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'resources' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.428 2 DEBUG nova.storage.rbd_utils [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] flattening images/2f3c6928-2844-418f-b682-99c3bbcc831b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:31:46 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[326085]: [NOTICE]   (326089) : haproxy version is 2.8.14-c23fe91
Oct 02 08:31:46 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[326085]: [NOTICE]   (326089) : path to executable is /usr/sbin/haproxy
Oct 02 08:31:46 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[326085]: [WARNING]  (326089) : Exiting Master process...
Oct 02 08:31:46 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[326085]: [ALERT]    (326089) : Current worker (326091) exited with code 143 (Terminated)
Oct 02 08:31:46 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[326085]: [WARNING]  (326089) : All workers exited. Exiting... (0)
Oct 02 08:31:46 compute-0 systemd[1]: libpod-36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2.scope: Deactivated successfully.
Oct 02 08:31:46 compute-0 podman[326512]: 2025-10-02 08:31:46.451352113 +0000 UTC m=+0.068838717 container died 36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.477 2 DEBUG nova.virt.libvirt.vif [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:42Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.477 2 DEBUG nova.network.os_vif_util [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2-userdata-shm.mount: Deactivated successfully.
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.478 2 DEBUG nova.network.os_vif_util [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.479 2 DEBUG os_vif [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4ee2fd29e24499fcf75214ee1c28d07816e18d4cd96133033eaa3492f94e022-merged.mount: Deactivated successfully.
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9835ad6c-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.485 2 INFO os_vif [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e')
Oct 02 08:31:46 compute-0 podman[326512]: 2025-10-02 08:31:46.501429547 +0000 UTC m=+0.118916141 container cleanup 36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:31:46 compute-0 systemd[1]: libpod-conmon-36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2.scope: Deactivated successfully.
Oct 02 08:31:46 compute-0 podman[326614]: 2025-10-02 08:31:46.633582357 +0000 UTC m=+0.102133460 container remove 36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[06932ecb-729f-4a57-b17e-1c0e188d835a]: (4, ('Thu Oct  2 08:31:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2)\n36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2\nThu Oct  2 08:31:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2)\n36c2c5c70ccc6af42cd9759a096f1a0745f350599a7b4493cceb143f1c9ad8d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.644 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af66a23a-765a-4d2a-bf94-7631b3d05a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.645 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:46 compute-0 podman[326617]: 2025-10-02 08:31:46.659256254 +0000 UTC m=+0.112583975 container create 6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 08:31:46 compute-0 kernel: tapf8df0af1-10: left promiscuous mode
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:46 compute-0 podman[326617]: 2025-10-02 08:31:46.578946512 +0000 UTC m=+0.032274243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.677 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[525b644a-fac2-4edf-9243-ba7b43269d5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752799595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.705 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9631f1b5-c14c-4e31-8897-50ad8f974847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.707 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5b17b3-53ad-48d0-9a9b-6c31d9d855df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.711 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:46 compute-0 systemd[1]: Started libpod-conmon-6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3.scope.
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.733 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc568252-eccb-4d92-af7c-1a01d5915e93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478021, 'reachable_time': 25761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326647, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:46 compute-0 systemd[1]: run-netns-ovnmeta\x2df8df0af1\x2d1767\x2d419a\x2d8500\x2dc28fbf45ae4b.mount: Deactivated successfully.
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.736 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:31:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:46.737 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd27a64-a5ee-429e-bf2b-2bbda715d271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.743 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.750 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:46 compute-0 podman[326617]: 2025-10-02 08:31:46.787198534 +0000 UTC m=+0.240526305 container init 6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:31:46 compute-0 podman[326617]: 2025-10-02 08:31:46.794446479 +0000 UTC m=+0.247774200 container start 6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:31:46 compute-0 podman[326617]: 2025-10-02 08:31:46.797484513 +0000 UTC m=+0.250812254 container attach 6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:31:46 compute-0 nervous_pare[326662]: 167 167
Oct 02 08:31:46 compute-0 systemd[1]: libpod-6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3.scope: Deactivated successfully.
Oct 02 08:31:46 compute-0 podman[326617]: 2025-10-02 08:31:46.802306333 +0000 UTC m=+0.255634054 container died 6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea73b1f4eab54f2aeb43439c60701e7808dd641d16b6bf8cdb3ed9a276f3866-merged.mount: Deactivated successfully.
Oct 02 08:31:46 compute-0 podman[326617]: 2025-10-02 08:31:46.845705249 +0000 UTC m=+0.299032980 container remove 6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 08:31:46 compute-0 systemd[1]: libpod-conmon-6667e347d38a964e9cb76b7cb34398a0719047aed54ea533a2f1fc7d416db1d3.scope: Deactivated successfully.
Oct 02 08:31:46 compute-0 nova_compute[260603]: 2025-10-02 08:31:46.878 2 DEBUG nova.storage.rbd_utils [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] removing snapshot(c0017c35dd5d4721828d5589cdb51aba) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:31:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 418 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 13 MiB/s wr, 343 op/s
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.051 2 INFO nova.virt.libvirt.driver [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deleting instance files /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_del
Oct 02 08:31:47 compute-0 podman[326730]: 2025-10-02 08:31:47.052920459 +0000 UTC m=+0.047120173 container create 4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.054 2 INFO nova.virt.libvirt.driver [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deletion of /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_del complete
Oct 02 08:31:47 compute-0 systemd[1]: Started libpod-conmon-4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3.scope.
Oct 02 08:31:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f9c6661ee6a64c22d55dd3b282c8e0dcf5f70222d47a78e93839707bc2819d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f9c6661ee6a64c22d55dd3b282c8e0dcf5f70222d47a78e93839707bc2819d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f9c6661ee6a64c22d55dd3b282c8e0dcf5f70222d47a78e93839707bc2819d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f9c6661ee6a64c22d55dd3b282c8e0dcf5f70222d47a78e93839707bc2819d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f9c6661ee6a64c22d55dd3b282c8e0dcf5f70222d47a78e93839707bc2819d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 02 08:31:47 compute-0 podman[326730]: 2025-10-02 08:31:47.031276237 +0000 UTC m=+0.025475981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.132 2 INFO nova.compute.manager [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Took 0.97 seconds to destroy the instance on the hypervisor.
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.132 2 DEBUG oslo.service.loopingcall [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.133 2 DEBUG nova.compute.manager [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.133 2 DEBUG nova.network.neutron [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:31:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 02 08:31:47 compute-0 podman[326730]: 2025-10-02 08:31:47.14318956 +0000 UTC m=+0.137389314 container init 4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:31:47 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 02 08:31:47 compute-0 podman[326730]: 2025-10-02 08:31:47.152129078 +0000 UTC m=+0.146328802 container start 4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:31:47 compute-0 podman[326730]: 2025-10-02 08:31:47.159220838 +0000 UTC m=+0.153420562 container attach 4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:31:47 compute-0 podman[326745]: 2025-10-02 08:31:47.163104298 +0000 UTC m=+0.082921084 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 08:31:47 compute-0 ceph-mon[74477]: osdmap e211: 3 total, 3 up, 3 in
Oct 02 08:31:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1752799595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:47 compute-0 ceph-mon[74477]: osdmap e212: 3 total, 3 up, 3 in
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.184 2 DEBUG nova.storage.rbd_utils [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(snap) on rbd image(2f3c6928-2844-418f-b682-99c3bbcc831b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:31:47 compute-0 podman[326744]: 2025-10-02 08:31:47.209594991 +0000 UTC m=+0.130514211 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:31:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723022582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.232 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.234 2 DEBUG nova.virt.libvirt.vif [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-785143250',display_name='tempest-ServersTestMultiNic-server-785143250',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-785143250',id=66,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-cbfgptpc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:38Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=49564059-b2ef-4053-bedd-56a9afb53d2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.234 2 DEBUG nova.network.os_vif_util [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.235 2 DEBUG nova.network.os_vif_util [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:c5:05,bridge_name='br-int',has_traffic_filtering=True,id=079652d3-bf75-4bfc-9a4d-f208f0313a7d,network=Network(6be132cd-1736-40b4-94e9-fab0bc5bb76a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079652d3-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.237 2 DEBUG nova.virt.libvirt.vif [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-785143250',display_name='tempest-ServersTestMultiNic-server-785143250',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-785143250',id=66,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-cbfgptpc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:38Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=49564059-b2ef-4053-bedd-56a9afb53d2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.237 2 DEBUG nova.network.os_vif_util [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.238 2 DEBUG nova.network.os_vif_util [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:21:b6,bridge_name='br-int',has_traffic_filtering=True,id=11588b0e-a5d2-490e-b72a-3d31e1d091e4,network=Network(43544beb-5350-4aeb-8684-3bce1489b358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11588b0e-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.240 2 DEBUG nova.objects.instance [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49564059-b2ef-4053-bedd-56a9afb53d2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.260 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <uuid>49564059-b2ef-4053-bedd-56a9afb53d2c</uuid>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <name>instance-00000042</name>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestMultiNic-server-785143250</nova:name>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:46</nova:creationTime>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:user uuid="db9a3b1e6d93495f8c849658ffc4e535">tempest-ServersTestMultiNic-670565182-project-member</nova:user>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:project uuid="62c4ff42369740eebbf14969f4d8d2e5">tempest-ServersTestMultiNic-670565182</nova:project>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:port uuid="079652d3-bf75-4bfc-9a4d-f208f0313a7d">
Oct 02 08:31:47 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.80" ipVersion="4"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <nova:port uuid="11588b0e-a5d2-490e-b72a-3d31e1d091e4">
Oct 02 08:31:47 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.1.99" ipVersion="4"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <entry name="serial">49564059-b2ef-4053-bedd-56a9afb53d2c</entry>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <entry name="uuid">49564059-b2ef-4053-bedd-56a9afb53d2c</entry>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49564059-b2ef-4053-bedd-56a9afb53d2c_disk">
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49564059-b2ef-4053-bedd-56a9afb53d2c_disk.config">
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:47 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:62:c5:05"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <target dev="tap079652d3-bf"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:62:21:b6"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <target dev="tap11588b0e-a5"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/console.log" append="off"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:47 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:47 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:47 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:47 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:47 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.268 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Preparing to wait for external event network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.268 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.268 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.269 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.269 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Preparing to wait for external event network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.269 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.269 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.270 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.270 2 DEBUG nova.virt.libvirt.vif [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-785143250',display_name='tempest-ServersTestMultiNic-server-785143250',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-785143250',id=66,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-cbfgptpc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:38Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=49564059-b2ef-4053-bedd-56a9afb53d2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.270 2 DEBUG nova.network.os_vif_util [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.271 2 DEBUG nova.network.os_vif_util [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:c5:05,bridge_name='br-int',has_traffic_filtering=True,id=079652d3-bf75-4bfc-9a4d-f208f0313a7d,network=Network(6be132cd-1736-40b4-94e9-fab0bc5bb76a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079652d3-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.271 2 DEBUG os_vif [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:c5:05,bridge_name='br-int',has_traffic_filtering=True,id=079652d3-bf75-4bfc-9a4d-f208f0313a7d,network=Network(6be132cd-1736-40b4-94e9-fab0bc5bb76a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079652d3-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.277 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap079652d3-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.277 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap079652d3-bf, col_values=(('external_ids', {'iface-id': '079652d3-bf75-4bfc-9a4d-f208f0313a7d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:c5:05', 'vm-uuid': '49564059-b2ef-4053-bedd-56a9afb53d2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:47 compute-0 NetworkManager[45129]: <info>  [1759393907.2794] manager: (tap079652d3-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.285 2 INFO os_vif [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:c5:05,bridge_name='br-int',has_traffic_filtering=True,id=079652d3-bf75-4bfc-9a4d-f208f0313a7d,network=Network(6be132cd-1736-40b4-94e9-fab0bc5bb76a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079652d3-bf')
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.286 2 DEBUG nova.virt.libvirt.vif [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-785143250',display_name='tempest-ServersTestMultiNic-server-785143250',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-785143250',id=66,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-cbfgptpc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:38Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=49564059-b2ef-4053-bedd-56a9afb53d2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.286 2 DEBUG nova.network.os_vif_util [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.287 2 DEBUG nova.network.os_vif_util [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:21:b6,bridge_name='br-int',has_traffic_filtering=True,id=11588b0e-a5d2-490e-b72a-3d31e1d091e4,network=Network(43544beb-5350-4aeb-8684-3bce1489b358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11588b0e-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.287 2 DEBUG os_vif [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:21:b6,bridge_name='br-int',has_traffic_filtering=True,id=11588b0e-a5d2-490e-b72a-3d31e1d091e4,network=Network(43544beb-5350-4aeb-8684-3bce1489b358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11588b0e-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.289 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11588b0e-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11588b0e-a5, col_values=(('external_ids', {'iface-id': '11588b0e-a5d2-490e-b72a-3d31e1d091e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:21:b6', 'vm-uuid': '49564059-b2ef-4053-bedd-56a9afb53d2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:47 compute-0 NetworkManager[45129]: <info>  [1759393907.2914] manager: (tap11588b0e-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.297 2 INFO os_vif [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:21:b6,bridge_name='br-int',has_traffic_filtering=True,id=11588b0e-a5d2-490e-b72a-3d31e1d091e4,network=Network(43544beb-5350-4aeb-8684-3bce1489b358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11588b0e-a5')
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.352 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.353 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.353 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No VIF found with MAC fa:16:3e:62:c5:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.353 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No VIF found with MAC fa:16:3e:62:21:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.354 2 INFO nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Using config drive
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.377 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.655 2 DEBUG nova.network.neutron [-] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.675 2 INFO nova.compute.manager [-] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Took 1.51 seconds to deallocate network for instance.
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.732 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.732 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.879 2 DEBUG oslo_concurrency.processutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.918 2 DEBUG nova.compute.manager [req-200353a2-afcf-4dce-86c6-b2bcf2980922 req-4b9c43fc-a86d-4bb0-a898-51cacbe8cdb5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.919 2 DEBUG oslo_concurrency.lockutils [req-200353a2-afcf-4dce-86c6-b2bcf2980922 req-4b9c43fc-a86d-4bb0-a898-51cacbe8cdb5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.920 2 DEBUG oslo_concurrency.lockutils [req-200353a2-afcf-4dce-86c6-b2bcf2980922 req-4b9c43fc-a86d-4bb0-a898-51cacbe8cdb5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.920 2 DEBUG oslo_concurrency.lockutils [req-200353a2-afcf-4dce-86c6-b2bcf2980922 req-4b9c43fc-a86d-4bb0-a898-51cacbe8cdb5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.920 2 DEBUG nova.compute.manager [req-200353a2-afcf-4dce-86c6-b2bcf2980922 req-4b9c43fc-a86d-4bb0-a898-51cacbe8cdb5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.920 2 DEBUG nova.compute.manager [req-200353a2-afcf-4dce-86c6-b2bcf2980922 req-4b9c43fc-a86d-4bb0-a898-51cacbe8cdb5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.993 2 INFO nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Creating config drive at /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/disk.config
Oct 02 08:31:47 compute-0 nova_compute[260603]: 2025-10-02 08:31:47.998 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo3tljfze execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.159 2 DEBUG nova.network.neutron [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Updated VIF entry in instance network info cache for port 079652d3-bf75-4bfc-9a4d-f208f0313a7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.160 2 DEBUG nova.network.neutron [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Updating instance_info_cache with network_info: [{"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.162 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo3tljfze" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 02 08:31:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 02 08:31:48 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.188 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:48 compute-0 ceph-mon[74477]: pgmap v1531: 305 pgs: 305 active+clean; 418 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 13 MiB/s wr, 343 op/s
Oct 02 08:31:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1723022582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.194 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/disk.config 49564059-b2ef-4053-bedd-56a9afb53d2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.237 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.239 2 DEBUG nova.compute.manager [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-changed-11588b0e-a5d2-490e-b72a-3d31e1d091e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.239 2 DEBUG nova.compute.manager [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Refreshing instance network info cache due to event network-changed-11588b0e-a5d2-490e-b72a-3d31e1d091e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.239 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.240 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.240 2 DEBUG nova.network.neutron [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Refreshing network info cache for port 11588b0e-a5d2-490e-b72a-3d31e1d091e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:48 compute-0 busy_nash[326761]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:31:48 compute-0 busy_nash[326761]: --> relative data size: 1.0
Oct 02 08:31:48 compute-0 busy_nash[326761]: --> All data devices are unavailable
Oct 02 08:31:48 compute-0 systemd[1]: libpod-4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3.scope: Deactivated successfully.
Oct 02 08:31:48 compute-0 podman[326730]: 2025-10-02 08:31:48.307381633 +0000 UTC m=+1.301581357 container died 4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:31:48 compute-0 systemd[1]: libpod-4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3.scope: Consumed 1.067s CPU time.
Oct 02 08:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-32f9c6661ee6a64c22d55dd3b282c8e0dcf5f70222d47a78e93839707bc2819d-merged.mount: Deactivated successfully.
Oct 02 08:31:48 compute-0 podman[326730]: 2025-10-02 08:31:48.357786998 +0000 UTC m=+1.351986732 container remove 4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nash, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.381 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/disk.config 49564059-b2ef-4053-bedd-56a9afb53d2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.382 2 INFO nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Deleting local config drive /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/disk.config because it was imported into RBD.
Oct 02 08:31:48 compute-0 sudo[326407]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:48 compute-0 systemd[1]: libpod-conmon-4ddc441067050d84d0864cd40f059b38291a9720f9a8dced36a8f11f30b370e3.scope: Deactivated successfully.
Oct 02 08:31:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489341053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.432 2 DEBUG oslo_concurrency.processutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.438 2 DEBUG nova.compute.provider_tree [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.4413] manager: (tap079652d3-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Oct 02 08:31:48 compute-0 kernel: tap079652d3-bf: entered promiscuous mode
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00613|binding|INFO|Claiming lport 079652d3-bf75-4bfc-9a4d-f208f0313a7d for this chassis.
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00614|binding|INFO|079652d3-bf75-4bfc-9a4d-f208f0313a7d: Claiming fa:16:3e:62:c5:05 10.100.0.80
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.454 2 DEBUG nova.scheduler.client.report [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.455 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:c5:05 10.100.0.80'], port_security=['fa:16:3e:62:c5:05 10.100.0.80'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.80/24', 'neutron:device_id': '49564059-b2ef-4053-bedd-56a9afb53d2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6be132cd-1736-40b4-94e9-fab0bc5bb76a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67e12d7b-180a-4743-8871-1b93526461a2, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=079652d3-bf75-4bfc-9a4d-f208f0313a7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.456 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 079652d3-bf75-4bfc-9a4d-f208f0313a7d in datapath 6be132cd-1736-40b4-94e9-fab0bc5bb76a bound to our chassis
Oct 02 08:31:48 compute-0 sudo[326936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.457 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6be132cd-1736-40b4-94e9-fab0bc5bb76a
Oct 02 08:31:48 compute-0 sudo[326936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:48 compute-0 sudo[326936]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.4699] manager: (tap11588b0e-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.469 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eae6e19c-e787-4084-892c-9e34415ef7d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.470 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6be132cd-11 in ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:48 compute-0 systemd-udevd[326978]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:48 compute-0 systemd-udevd[326979]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.473 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6be132cd-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e187c478-6c0f-4169-afc4-e22def7766f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.474 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7711314b-35ea-4a87-84ec-67ec8169a045]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.476 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.4921] device (tap079652d3-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.4926] device (tap079652d3-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.490 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[69561787-2510-42c3-ba1d-63fb79d3bae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.501 2 INFO nova.scheduler.client.report [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Deleted allocations for instance 24339ad4-fec2-43f8-8da3-5e433206a1cc
Oct 02 08:31:48 compute-0 kernel: tap11588b0e-a5: entered promiscuous mode
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.5045] device (tap11588b0e-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.5055] device (tap11588b0e-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00615|binding|INFO|Claiming lport 11588b0e-a5d2-490e-b72a-3d31e1d091e4 for this chassis.
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00616|binding|INFO|11588b0e-a5d2-490e-b72a-3d31e1d091e4: Claiming fa:16:3e:62:21:b6 10.100.1.99
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00617|binding|INFO|Setting lport 079652d3-bf75-4bfc-9a4d-f208f0313a7d ovn-installed in OVS
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00618|binding|INFO|Setting lport 079652d3-bf75-4bfc-9a4d-f208f0313a7d up in Southbound
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.517 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:21:b6 10.100.1.99'], port_security=['fa:16:3e:62:21:b6 10.100.1.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.99/24', 'neutron:device_id': '49564059-b2ef-4053-bedd-56a9afb53d2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43544beb-5350-4aeb-8684-3bce1489b358', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bf824d6-f862-4c8b-b3e3-334590c7fa1c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=11588b0e-a5d2-490e-b72a-3d31e1d091e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.517 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d431b145-da37-4a1a-98dc-27640eacc7c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 systemd-machined[214636]: New machine qemu-74-instance-00000042.
Oct 02 08:31:48 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000042.
Oct 02 08:31:48 compute-0 sudo[326984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:31:48 compute-0 sudo[326984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:48 compute-0 sudo[326984]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00619|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00620|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.565 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9daa5bb6-3cd7-448e-8176-a16466d5fe20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00621|binding|INFO|Setting lport 11588b0e-a5d2-490e-b72a-3d31e1d091e4 up in Southbound
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00622|binding|INFO|Setting lport 11588b0e-a5d2-490e-b72a-3d31e1d091e4 ovn-installed in OVS
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.5850] manager: (tap6be132cd-10): new Veth device (/org/freedesktop/NetworkManager/Devices/266)
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[69d6076c-998e-4ca1-9d31-8a78924beea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.582 2 DEBUG oslo_concurrency.lockutils [None req-07732ef4-dfc7-46e8-996d-c16728cb73d6 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:48 compute-0 sudo[327017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:48 compute-0 sudo[327017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:48 compute-0 sudo[327017]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.624 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ba83b326-a4cf-4bc2-85ff-712c9a6d3d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.628 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9419671e-3138-42ee-8cd6-211113fbf196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.637 2 DEBUG nova.network.neutron [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.6568] device (tap6be132cd-10): carrier: link connected
Oct 02 08:31:48 compute-0 sudo[327063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:31:48 compute-0 sudo[327063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.663 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d13eb126-9d99-467e-b3b6-91bf8d5ce015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.683 2 INFO nova.compute.manager [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Took 1.55 seconds to deallocate network for instance.
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.686 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f5631733-5f2a-4df5-ad5a-312b6d4dbd00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6be132cd-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:d6:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478725, 'reachable_time': 42078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327090, 'error': None, 'target': 'ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.704 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[930fee56-b1ee-4122-beb9-de34f02e0b49]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe07:d6a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478725, 'tstamp': 478725}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327091, 'error': None, 'target': 'ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.716 2 DEBUG nova.compute.manager [req-d973809a-57c8-4fec-9096-ac8876222375 req-e8891321-0428-444e-b7d4-fcb182768c8e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-deleted-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.722 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4becfddd-be1d-41b5-a782-c9c7fa9c8a54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6be132cd-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:d6:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478725, 'reachable_time': 42078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327092, 'error': None, 'target': 'ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.735 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.735 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.760 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[21ef5abe-d2e3-4a9a-91e6-bffd3b425d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.827 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d67afd86-ec7d-4a1f-a71a-b5cc0d4d9532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6be132cd-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6be132cd-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:48 compute-0 kernel: tap6be132cd-10: entered promiscuous mode
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:48 compute-0 NetworkManager[45129]: <info>  [1759393908.8320] manager: (tap6be132cd-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.832 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6be132cd-10, col_values=(('external_ids', {'iface-id': '4c8ccf97-5dfe-4744-b0b7-2a410ea07c13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:48 compute-0 ovn_controller[152344]: 2025-10-02T08:31:48Z|00623|binding|INFO|Releasing lport 4c8ccf97-5dfe-4744-b0b7-2a410ea07c13 from this chassis (sb_readonly=0)
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.851 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6be132cd-1736-40b4-94e9-fab0bc5bb76a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6be132cd-1736-40b4-94e9-fab0bc5bb76a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.852 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[40924bba-dbb7-442d-b4ad-d39a298750cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.853 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-6be132cd-1736-40b4-94e9-fab0bc5bb76a
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/6be132cd-1736-40b4-94e9-fab0bc5bb76a.pid.haproxy
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 6be132cd-1736-40b4-94e9-fab0bc5bb76a
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:48.853 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a', 'env', 'PROCESS_TAG=haproxy-6be132cd-1736-40b4-94e9-fab0bc5bb76a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6be132cd-1736-40b4-94e9-fab0bc5bb76a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:48 compute-0 nova_compute[260603]: 2025-10-02 08:31:48.875 2 DEBUG oslo_concurrency.processutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:49 compute-0 podman[327184]: 2025-10-02 08:31:49.014885447 +0000 UTC m=+0.053553913 container create 38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.025 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 405 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 19 MiB/s rd, 12 MiB/s wr, 628 op/s
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.026 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.045 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:31:49 compute-0 systemd[1]: Started libpod-conmon-38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d.scope.
Oct 02 08:31:49 compute-0 podman[327184]: 2025-10-02 08:31:48.98857507 +0000 UTC m=+0.027243566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:31:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:49 compute-0 podman[327184]: 2025-10-02 08:31:49.122029961 +0000 UTC m=+0.160698467 container init 38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ishizaka, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.126 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:49 compute-0 podman[327184]: 2025-10-02 08:31:49.127642106 +0000 UTC m=+0.166310562 container start 38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:31:49 compute-0 podman[327184]: 2025-10-02 08:31:49.13133888 +0000 UTC m=+0.170007386 container attach 38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ishizaka, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:49 compute-0 pensive_ishizaka[327220]: 167 167
Oct 02 08:31:49 compute-0 systemd[1]: libpod-38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d.scope: Deactivated successfully.
Oct 02 08:31:49 compute-0 podman[327184]: 2025-10-02 08:31:49.13488902 +0000 UTC m=+0.173557486 container died 38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-64878aed067ea987ec79c1b1dc9263b30a42a8a033c1a708f443a624162aff12-merged.mount: Deactivated successfully.
Oct 02 08:31:49 compute-0 podman[327184]: 2025-10-02 08:31:49.188896906 +0000 UTC m=+0.227565362 container remove 38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ishizaka, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:49 compute-0 systemd[1]: libpod-conmon-38205e84b2050062e343cb75a09d777578e8f3939d77a7733e064b72162a5d3d.scope: Deactivated successfully.
Oct 02 08:31:49 compute-0 ceph-mon[74477]: osdmap e213: 3 total, 3 up, 3 in
Oct 02 08:31:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2489341053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:49 compute-0 podman[327256]: 2025-10-02 08:31:49.260692894 +0000 UTC m=+0.059835207 container create c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:49 compute-0 systemd[1]: Started libpod-conmon-c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324.scope.
Oct 02 08:31:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3782023396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:49 compute-0 podman[327256]: 2025-10-02 08:31:49.225436631 +0000 UTC m=+0.024578954 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.330 2 DEBUG oslo_concurrency.processutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cca918a7498189e7a4563b38616ac1806a129c65f561bc1b9c93d39b19f12ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.344 2 DEBUG nova.compute.provider_tree [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:49 compute-0 podman[327256]: 2025-10-02 08:31:49.358887191 +0000 UTC m=+0.158029504 container init c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.361 2 DEBUG nova.scheduler.client.report [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:49 compute-0 podman[327256]: 2025-10-02 08:31:49.366092455 +0000 UTC m=+0.165234768 container start c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.388 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.390 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:49 compute-0 neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a[327275]: [NOTICE]   (327288) : New worker (327297) forked
Oct 02 08:31:49 compute-0 neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a[327275]: [NOTICE]   (327288) : Loading success.
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.397 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.398 2 INFO nova.compute.claims [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:31:49 compute-0 podman[327286]: 2025-10-02 08:31:49.411439221 +0000 UTC m=+0.038657230 container create 98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ptolemy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.413 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 11588b0e-a5d2-490e-b72a-3d31e1d091e4 in datapath 43544beb-5350-4aeb-8684-3bce1489b358 unbound from our chassis
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.414 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43544beb-5350-4aeb-8684-3bce1489b358
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.429 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f2f98ae-b080-495c-a145-ad1f5a6e0ed9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.430 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43544beb-51 in ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.432 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43544beb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13f75817-23eb-428c-ac32-4e980e914382]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.432 2 INFO nova.scheduler.client.report [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Deleted allocations for instance 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.434 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[223506c8-f3f5-41d0-9c05-6c1831044931]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 systemd[1]: Started libpod-conmon-98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849.scope.
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.447 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[506b60fe-f22e-43a9-8368-81f63e3bf560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.460 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393909.4584663, 49564059-b2ef-4053-bedd-56a9afb53d2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.460 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] VM Started (Lifecycle Event)
Oct 02 08:31:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ab423d39207edea2f5fcc3361678b10bf677fc7c1aa3a519c7f9102458ea16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.475 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb384994-7d89-47f3-925c-dbb24146c2a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ab423d39207edea2f5fcc3361678b10bf677fc7c1aa3a519c7f9102458ea16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ab423d39207edea2f5fcc3361678b10bf677fc7c1aa3a519c7f9102458ea16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ab423d39207edea2f5fcc3361678b10bf677fc7c1aa3a519c7f9102458ea16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:49 compute-0 podman[327286]: 2025-10-02 08:31:49.39624116 +0000 UTC m=+0.023459189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:31:49 compute-0 podman[327286]: 2025-10-02 08:31:49.49518598 +0000 UTC m=+0.122403999 container init 98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ptolemy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.496 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.500 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393909.4618642, 49564059-b2ef-4053-bedd-56a9afb53d2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.501 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] VM Paused (Lifecycle Event)
Oct 02 08:31:49 compute-0 podman[327286]: 2025-10-02 08:31:49.502436576 +0000 UTC m=+0.129654585 container start 98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:31:49 compute-0 podman[327286]: 2025-10-02 08:31:49.505510491 +0000 UTC m=+0.132728590 container attach 98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.504 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1aa8a8-c532-416e-b799-7748f81ed463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 NetworkManager[45129]: <info>  [1759393909.5114] manager: (tap43544beb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/268)
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.510 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[507d7489-5b42-4846-9490-dceb929f68b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 systemd-udevd[327059]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.530 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.532 2 DEBUG oslo_concurrency.lockutils [None req-6507cc98-2e58-4d28-8429-5a1d4aaff010 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.539 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.547 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b058d9be-cb4e-42de-8b0f-836ec30c3b85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.555 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[98fe9d98-e140-405f-b7b3-46f3d9127c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.559 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:49 compute-0 NetworkManager[45129]: <info>  [1759393909.5807] device (tap43544beb-50): carrier: link connected
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.585 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6fdb623d-f2f0-44b9-87c0-9ef0c7c7c90b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.604 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.606 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bac7db-7196-4334-915a-c5fa865ea470]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43544beb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:e8:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478818, 'reachable_time': 20018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327329, 'error': None, 'target': 'ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.623 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e79e9079-3630-406a-a7be-f5a22ef6ff4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe48:e855'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478818, 'tstamp': 478818}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327330, 'error': None, 'target': 'ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.651 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[87e5f800-e05f-446c-b024-0b4f1dd5ee41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43544beb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:e8:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478818, 'reachable_time': 20018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327332, 'error': None, 'target': 'ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a22d8e2-479d-44a7-85bf-133852773979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.798 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ad8b1f-6c0f-4885-96d6-b4a3c0549ff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.803 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43544beb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.803 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.804 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43544beb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:49 compute-0 kernel: tap43544beb-50: entered promiscuous mode
Oct 02 08:31:49 compute-0 NetworkManager[45129]: <info>  [1759393909.8098] manager: (tap43544beb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.815 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43544beb-50, col_values=(('external_ids', {'iface-id': '2a232d39-1ce6-44d4-b570-401f4d7e01e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:49 compute-0 ovn_controller[152344]: 2025-10-02T08:31:49Z|00624|binding|INFO|Releasing lport 2a232d39-1ce6-44d4-b570-401f4d7e01e5 from this chassis (sb_readonly=0)
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.820 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43544beb-5350-4aeb-8684-3bce1489b358.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43544beb-5350-4aeb-8684-3bce1489b358.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.821 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cc6606-cf95-44c0-a1d0-ff616261463d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.822 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-43544beb-5350-4aeb-8684-3bce1489b358
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/43544beb-5350-4aeb-8684-3bce1489b358.pid.haproxy
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 43544beb-5350-4aeb-8684-3bce1489b358
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:49.823 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358', 'env', 'PROCESS_TAG=haproxy-43544beb-5350-4aeb-8684-3bce1489b358', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43544beb-5350-4aeb-8684-3bce1489b358.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.840 2 INFO nova.virt.libvirt.driver [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Snapshot image upload complete
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.841 2 INFO nova.compute.manager [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 4.88 seconds to snapshot the instance on the hypervisor.
Oct 02 08:31:49 compute-0 nova_compute[260603]: 2025-10-02 08:31:49.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903202659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.095 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.100 2 DEBUG nova.compute.provider_tree [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.115 2 DEBUG nova.scheduler.client.report [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.124 2 DEBUG nova.network.neutron [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Updated VIF entry in instance network info cache for port 11588b0e-a5d2-490e-b72a-3d31e1d091e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.124 2 DEBUG nova.network.neutron [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Updating instance_info_cache with network_info: [{"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.140 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.141 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.143 2 DEBUG oslo_concurrency.lockutils [req-78481581-1fd3-4486-8bcb-44797b77715d req-2287ab98-04b6-42a8-b190-aedd5287832a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49564059-b2ef-4053-bedd-56a9afb53d2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.174 2 DEBUG nova.compute.manager [None req-84e92702-a525-43e9-bdcb-c0d187aa114c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 02 08:31:50 compute-0 ceph-mon[74477]: pgmap v1534: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 405 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 19 MiB/s rd, 12 MiB/s wr, 628 op/s
Oct 02 08:31:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3782023396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3903202659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.217 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.217 2 DEBUG nova.network.neutron [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.240 2 INFO nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:31:50 compute-0 podman[327385]: 2025-10-02 08:31:50.241613032 +0000 UTC m=+0.049336682 container create 20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]: {
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:     "0": [
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:         {
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "devices": [
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "/dev/loop3"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             ],
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_name": "ceph_lv0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_size": "21470642176",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "name": "ceph_lv0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "tags": {
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cluster_name": "ceph",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.crush_device_class": "",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.encrypted": "0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osd_id": "0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.type": "block",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.vdo": "0"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             },
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "type": "block",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "vg_name": "ceph_vg0"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:         }
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:     ],
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:     "1": [
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:         {
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "devices": [
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "/dev/loop4"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             ],
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_name": "ceph_lv1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_size": "21470642176",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "name": "ceph_lv1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "tags": {
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cluster_name": "ceph",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.crush_device_class": "",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.encrypted": "0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osd_id": "1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.type": "block",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.vdo": "0"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             },
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "type": "block",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "vg_name": "ceph_vg1"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:         }
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:     ],
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:     "2": [
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:         {
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "devices": [
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "/dev/loop5"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             ],
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_name": "ceph_lv2",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_size": "21470642176",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "name": "ceph_lv2",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "tags": {
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.cluster_name": "ceph",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.crush_device_class": "",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.encrypted": "0",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osd_id": "2",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.type": "block",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:                 "ceph.vdo": "0"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             },
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "type": "block",
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:             "vg_name": "ceph_vg2"
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:         }
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]:     ]
Oct 02 08:31:50 compute-0 heuristic_ptolemy[327316]: }
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.259 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:31:50 compute-0 systemd[1]: Started libpod-conmon-20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359.scope.
Oct 02 08:31:50 compute-0 systemd[1]: libpod-98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849.scope: Deactivated successfully.
Oct 02 08:31:50 compute-0 podman[327286]: 2025-10-02 08:31:50.278595639 +0000 UTC m=+0.905813698 container died 98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ptolemy, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e24951565c7b76b808bcaa4a1e1b8709afd980f1410db799d0627fefb727daaa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ab423d39207edea2f5fcc3361678b10bf677fc7c1aa3a519c7f9102458ea16-merged.mount: Deactivated successfully.
Oct 02 08:31:50 compute-0 podman[327385]: 2025-10-02 08:31:50.213694795 +0000 UTC m=+0.021418505 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:50 compute-0 podman[327385]: 2025-10-02 08:31:50.32729785 +0000 UTC m=+0.135021530 container init 20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:31:50 compute-0 podman[327385]: 2025-10-02 08:31:50.337903939 +0000 UTC m=+0.145627629 container start 20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:31:50 compute-0 podman[327286]: 2025-10-02 08:31:50.340623133 +0000 UTC m=+0.967841142 container remove 98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ptolemy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:31:50 compute-0 systemd[1]: libpod-conmon-98d0a834c05ed2319c33249ad2b0e63ba80fd4ddb43b4661ef563f2c678a8849.scope: Deactivated successfully.
Oct 02 08:31:50 compute-0 sudo[327063]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:50 compute-0 neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358[327401]: [NOTICE]   (327417) : New worker (327419) forked
Oct 02 08:31:50 compute-0 neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358[327401]: [NOTICE]   (327417) : Loading success.
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.408 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.409 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.410 2 INFO nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Creating image(s)
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.442 2 DEBUG nova.storage.rbd_utils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:50 compute-0 sudo[327428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:50 compute-0 sudo[327428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:50 compute-0 sudo[327428]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.476 2 DEBUG nova.storage.rbd_utils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.504 2 DEBUG nova.storage.rbd_utils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.508 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:50 compute-0 sudo[327486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:31:50 compute-0 sudo[327486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:50 compute-0 sudo[327486]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.559 2 DEBUG nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.560 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.560 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.560 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.561 2 DEBUG nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.561 2 WARNING nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received unexpected event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with vm_state deleted and task_state None.
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.561 2 DEBUG nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received event network-vif-deleted-a246f438-6334-440d-931b-f177dc6cadd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.561 2 DEBUG nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.561 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.562 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.562 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.562 2 DEBUG nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Processing event network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.562 2 DEBUG nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.563 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.563 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.563 2 DEBUG oslo_concurrency.lockutils [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.563 2 DEBUG nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] No event matching network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 in dict_keys([('network-vif-plugged', '079652d3-bf75-4bfc-9a4d-f208f0313a7d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.563 2 WARNING nova.compute.manager [req-8ac59381-0620-4a08-bbd2-fe9c6f4b6433 req-ceb8f0a1-ebb8-46fe-b248-1f67369bc83a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received unexpected event network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 for instance with vm_state building and task_state spawning.
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.612 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.613 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.613 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.614 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:50 compute-0 sudo[327533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:50 compute-0 sudo[327533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:50 compute-0 sudo[327533]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.646 2 DEBUG nova.storage.rbd_utils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.652 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.713 2 DEBUG nova.policy [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '116b114f14f84e4cbd6cc966e29d82e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:31:50 compute-0 sudo[327578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:31:50 compute-0 sudo[327578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:50 compute-0 nova_compute[260603]: 2025-10-02 08:31:50.985 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 405 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 7.8 MiB/s wr, 453 op/s
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.085 2 DEBUG nova.storage.rbd_utils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:31:51 compute-0 podman[327711]: 2025-10-02 08:31:51.178140171 +0000 UTC m=+0.058931280 container create e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.197 2 DEBUG nova.objects.instance [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid ccaeb1d7-f1d2-43fb-b36a-793776c713cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.214 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.214 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Ensure instance console log exists: /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.215 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.215 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.215 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:51 compute-0 systemd[1]: Started libpod-conmon-e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c.scope.
Oct 02 08:31:51 compute-0 podman[327711]: 2025-10-02 08:31:51.148028896 +0000 UTC m=+0.028820095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:31:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:51 compute-0 podman[327711]: 2025-10-02 08:31:51.270792075 +0000 UTC m=+0.151583264 container init e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:31:51 compute-0 podman[327711]: 2025-10-02 08:31:51.283141449 +0000 UTC m=+0.163932588 container start e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:31:51 compute-0 podman[327711]: 2025-10-02 08:31:51.287094601 +0000 UTC m=+0.167885740 container attach e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 08:31:51 compute-0 mystifying_sammet[327749]: 167 167
Oct 02 08:31:51 compute-0 systemd[1]: libpod-e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c.scope: Deactivated successfully.
Oct 02 08:31:51 compute-0 podman[327711]: 2025-10-02 08:31:51.292139378 +0000 UTC m=+0.172930487 container died e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 08:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a36d0fa3aa7ca4f4587fc12f5a24e3abf187b8225134b4640847707091ea1c72-merged.mount: Deactivated successfully.
Oct 02 08:31:51 compute-0 podman[327711]: 2025-10-02 08:31:51.338436034 +0000 UTC m=+0.219227143 container remove e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 08:31:51 compute-0 systemd[1]: libpod-conmon-e23a8c849635f105ff82a02cf25ab0d5ce74912b45525fe19a0bbf9db7f2942c.scope: Deactivated successfully.
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.498 2 DEBUG nova.compute.manager [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.499 2 DEBUG oslo_concurrency.lockutils [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.499 2 DEBUG oslo_concurrency.lockutils [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.499 2 DEBUG oslo_concurrency.lockutils [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.500 2 DEBUG nova.compute.manager [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Processing event network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.500 2 DEBUG nova.compute.manager [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.501 2 DEBUG oslo_concurrency.lockutils [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.501 2 DEBUG oslo_concurrency.lockutils [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.501 2 DEBUG oslo_concurrency.lockutils [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.502 2 DEBUG nova.compute.manager [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] No waiting events found dispatching network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.502 2 WARNING nova.compute.manager [req-3650a9b4-2b5b-4ec7-b577-f0384f292ab4 req-4e94b6f0-1c8d-4086-9a65-9d20fffc5689 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received unexpected event network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d for instance with vm_state building and task_state spawning.
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.503 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.511 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.512 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393911.5120592, 49564059-b2ef-4053-bedd-56a9afb53d2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.512 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] VM Resumed (Lifecycle Event)
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.519 2 INFO nova.virt.libvirt.driver [-] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Instance spawned successfully.
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.519 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.547 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.559 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.565 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.566 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.567 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.568 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.568 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.569 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.601 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.621 2 DEBUG nova.network.neutron [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Successfully created port: c265ae6a-2342-4b12-a28b-b08b728b8356 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.639 2 INFO nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Took 13.36 seconds to spawn the instance on the hypervisor.
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.640 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:51 compute-0 podman[327772]: 2025-10-02 08:31:51.650479207 +0000 UTC m=+0.079040993 container create be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:31:51 compute-0 podman[327772]: 2025-10-02 08:31:51.615996837 +0000 UTC m=+0.044558633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:31:51 compute-0 systemd[1]: Started libpod-conmon-be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d.scope.
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.710 2 INFO nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Took 14.61 seconds to build instance.
Oct 02 08:31:51 compute-0 nova_compute[260603]: 2025-10-02 08:31:51.728 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa09ce51146e31f8503b6da096329a4692daa75744d02daa475ae37934c55f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa09ce51146e31f8503b6da096329a4692daa75744d02daa475ae37934c55f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa09ce51146e31f8503b6da096329a4692daa75744d02daa475ae37934c55f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa09ce51146e31f8503b6da096329a4692daa75744d02daa475ae37934c55f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:51 compute-0 podman[327772]: 2025-10-02 08:31:51.80073853 +0000 UTC m=+0.229300346 container init be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 08:31:51 compute-0 podman[327772]: 2025-10-02 08:31:51.813423574 +0000 UTC m=+0.241985340 container start be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:31:51 compute-0 podman[327772]: 2025-10-02 08:31:51.817288063 +0000 UTC m=+0.245849839 container attach be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:31:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:52 compute-0 ceph-mon[74477]: pgmap v1535: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 405 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 7.8 MiB/s wr, 453 op/s
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.237 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "751dd598-d3c9-4e21-90ab-98962fac748d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.237 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.258 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.370 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.370 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.379 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.379 2 INFO nova.compute.claims [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.559 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.653 2 DEBUG nova.compute.manager [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.709 2 INFO nova.compute.manager [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] instance snapshotting
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.711 2 DEBUG nova.objects.instance [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'flavor' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.766 2 DEBUG nova.network.neutron [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Successfully updated port: c265ae6a-2342-4b12-a28b-b08b728b8356 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.784 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "refresh_cache-ccaeb1d7-f1d2-43fb-b36a-793776c713cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.785 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquired lock "refresh_cache-ccaeb1d7-f1d2-43fb-b36a-793776c713cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.785 2 DEBUG nova.network.neutron [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.962 2 INFO nova.virt.libvirt.driver [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Beginning live snapshot process
Oct 02 08:31:52 compute-0 eager_tharp[327788]: {
Oct 02 08:31:52 compute-0 eager_tharp[327788]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "osd_id": 2,
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "type": "bluestore"
Oct 02 08:31:52 compute-0 eager_tharp[327788]:     },
Oct 02 08:31:52 compute-0 eager_tharp[327788]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "osd_id": 1,
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "type": "bluestore"
Oct 02 08:31:52 compute-0 eager_tharp[327788]:     },
Oct 02 08:31:52 compute-0 eager_tharp[327788]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "osd_id": 0,
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:31:52 compute-0 eager_tharp[327788]:         "type": "bluestore"
Oct 02 08:31:52 compute-0 eager_tharp[327788]:     }
Oct 02 08:31:52 compute-0 eager_tharp[327788]: }
Oct 02 08:31:52 compute-0 nova_compute[260603]: 2025-10-02 08:31:52.985 2 DEBUG nova.network.neutron [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:53 compute-0 systemd[1]: libpod-be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d.scope: Deactivated successfully.
Oct 02 08:31:53 compute-0 systemd[1]: libpod-be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d.scope: Consumed 1.140s CPU time.
Oct 02 08:31:53 compute-0 podman[327772]: 2025-10-02 08:31:53.00673812 +0000 UTC m=+1.435299876 container died be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:31:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 442 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 9.6 MiB/s wr, 506 op/s
Oct 02 08:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaa09ce51146e31f8503b6da096329a4692daa75744d02daa475ae37934c55f6-merged.mount: Deactivated successfully.
Oct 02 08:31:53 compute-0 podman[327772]: 2025-10-02 08:31:53.068674372 +0000 UTC m=+1.497236128 container remove be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:31:53 compute-0 podman[327839]: 2025-10-02 08:31:53.071078267 +0000 UTC m=+0.108794127 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 08:31:53 compute-0 systemd[1]: libpod-conmon-be780996c83785a5758c6a65af55f25c35a0d87c319067629eeec30f68f4d60d.scope: Deactivated successfully.
Oct 02 08:31:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1653577163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:53 compute-0 sudo[327578]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:31:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:31:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:31:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:31:53 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3244c3d5-3460-4fbb-8323-9bb6d7fb475a does not exist
Oct 02 08:31:53 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f750ba5e-d3e1-4c9b-ab01-937f5e008273 does not exist
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.151 2 DEBUG nova.virt.libvirt.imagebackend [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.155 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.161 2 DEBUG nova.compute.provider_tree [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.180 2 DEBUG nova.scheduler.client.report [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.207 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.208 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:31:53 compute-0 sudo[327906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:31:53 compute-0 sudo[327906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:53 compute-0 sudo[327906]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1653577163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:53 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:31:53 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:31:53 compute-0 sudo[327931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:31:53 compute-0 sudo[327931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.285 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:31:53 compute-0 sudo[327931]: pam_unix(sudo:session): session closed for user root
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.286 2 DEBUG nova.network.neutron [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.314 2 INFO nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.336 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.435 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.437 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.437 2 INFO nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Creating image(s)
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.462 2 DEBUG nova.storage.rbd_utils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 751dd598-d3c9-4e21-90ab-98962fac748d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.498 2 DEBUG nova.storage.rbd_utils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 751dd598-d3c9-4e21-90ab-98962fac748d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.528 2 DEBUG nova.storage.rbd_utils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 751dd598-d3c9-4e21-90ab-98962fac748d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.534 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.573 2 DEBUG nova.storage.rbd_utils [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(4cb22976425a45ac8ea44390da03a9c9) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.628 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.630 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.630 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.631 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.676 2 DEBUG nova.storage.rbd_utils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 751dd598-d3c9-4e21-90ab-98962fac748d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.687 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 751dd598-d3c9-4e21-90ab-98962fac748d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.783 2 DEBUG nova.policy [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:31:53 compute-0 nova_compute[260603]: 2025-10-02 08:31:53.991 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 751dd598-d3c9-4e21-90ab-98962fac748d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.074 2 DEBUG nova.storage.rbd_utils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image 751dd598-d3c9-4e21-90ab-98962fac748d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.174 2 DEBUG nova.objects.instance [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 751dd598-d3c9-4e21-90ab-98962fac748d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.189 2 DEBUG nova.compute.manager [req-0a90ca6f-18a7-4e7a-84e3-a700e6ca551a req-cb7f0d9d-6e6e-414a-b068-2dd627104df7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received event network-changed-c265ae6a-2342-4b12-a28b-b08b728b8356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.190 2 DEBUG nova.compute.manager [req-0a90ca6f-18a7-4e7a-84e3-a700e6ca551a req-cb7f0d9d-6e6e-414a-b068-2dd627104df7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Refreshing instance network info cache due to event network-changed-c265ae6a-2342-4b12-a28b-b08b728b8356. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.190 2 DEBUG oslo_concurrency.lockutils [req-0a90ca6f-18a7-4e7a-84e3-a700e6ca551a req-cb7f0d9d-6e6e-414a-b068-2dd627104df7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ccaeb1d7-f1d2-43fb-b36a-793776c713cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.199 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.199 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Ensure instance console log exists: /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.199 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.200 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.200 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 02 08:31:54 compute-0 ceph-mon[74477]: pgmap v1536: 305 pgs: 305 active+clean; 442 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 9.6 MiB/s wr, 506 op/s
Oct 02 08:31:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 02 08:31:54 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.305 2 DEBUG nova.network.neutron [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Updating instance_info_cache with network_info: [{"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.324 2 DEBUG nova.storage.rbd_utils [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] cloning vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk@4cb22976425a45ac8ea44390da03a9c9 to images/96cbedbb-d6de-403c-86d9-6a14bf3d5624 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.362 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Releasing lock "refresh_cache-ccaeb1d7-f1d2-43fb-b36a-793776c713cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.363 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Instance network_info: |[{"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.363 2 DEBUG oslo_concurrency.lockutils [req-0a90ca6f-18a7-4e7a-84e3-a700e6ca551a req-cb7f0d9d-6e6e-414a-b068-2dd627104df7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ccaeb1d7-f1d2-43fb-b36a-793776c713cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.364 2 DEBUG nova.network.neutron [req-0a90ca6f-18a7-4e7a-84e3-a700e6ca551a req-cb7f0d9d-6e6e-414a-b068-2dd627104df7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Refreshing network info cache for port c265ae6a-2342-4b12-a28b-b08b728b8356 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.367 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Start _get_guest_xml network_info=[{"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.374 2 WARNING nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.388 2 DEBUG nova.virt.libvirt.host [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.389 2 DEBUG nova.virt.libvirt.host [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.393 2 DEBUG nova.virt.libvirt.host [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.394 2 DEBUG nova.virt.libvirt.host [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.394 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.395 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.395 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.396 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.396 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.396 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.397 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.397 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.397 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.397 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.398 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.398 2 DEBUG nova.virt.hardware [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.402 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.474 2 DEBUG nova.storage.rbd_utils [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] flattening images/96cbedbb-d6de-403c-86d9-6a14bf3d5624 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.523 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.524 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.524 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.524 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.525 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.527 2 INFO nova.compute.manager [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Terminating instance
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.528 2 DEBUG nova.compute.manager [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:31:54 compute-0 kernel: tap079652d3-bf (unregistering): left promiscuous mode
Oct 02 08:31:54 compute-0 NetworkManager[45129]: <info>  [1759393914.5910] device (tap079652d3-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:54 compute-0 ovn_controller[152344]: 2025-10-02T08:31:54Z|00625|binding|INFO|Releasing lport 079652d3-bf75-4bfc-9a4d-f208f0313a7d from this chassis (sb_readonly=0)
Oct 02 08:31:54 compute-0 ovn_controller[152344]: 2025-10-02T08:31:54Z|00626|binding|INFO|Setting lport 079652d3-bf75-4bfc-9a4d-f208f0313a7d down in Southbound
Oct 02 08:31:54 compute-0 ovn_controller[152344]: 2025-10-02T08:31:54Z|00627|binding|INFO|Removing iface tap079652d3-bf ovn-installed in OVS
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.609 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:c5:05 10.100.0.80'], port_security=['fa:16:3e:62:c5:05 10.100.0.80'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.80/24', 'neutron:device_id': '49564059-b2ef-4053-bedd-56a9afb53d2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6be132cd-1736-40b4-94e9-fab0bc5bb76a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67e12d7b-180a-4743-8871-1b93526461a2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=079652d3-bf75-4bfc-9a4d-f208f0313a7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:54 compute-0 kernel: tap11588b0e-a5 (unregistering): left promiscuous mode
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.612 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 079652d3-bf75-4bfc-9a4d-f208f0313a7d in datapath 6be132cd-1736-40b4-94e9-fab0bc5bb76a unbound from our chassis
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.614 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6be132cd-1736-40b4-94e9-fab0bc5bb76a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:31:54 compute-0 NetworkManager[45129]: <info>  [1759393914.6162] device (tap11588b0e-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.615 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[acac4550-9c6d-488c-875f-e2b1c0808e19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.617 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a namespace which is not needed anymore
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 ovn_controller[152344]: 2025-10-02T08:31:54Z|00628|binding|INFO|Releasing lport 11588b0e-a5d2-490e-b72a-3d31e1d091e4 from this chassis (sb_readonly=0)
Oct 02 08:31:54 compute-0 ovn_controller[152344]: 2025-10-02T08:31:54Z|00629|binding|INFO|Setting lport 11588b0e-a5d2-490e-b72a-3d31e1d091e4 down in Southbound
Oct 02 08:31:54 compute-0 ovn_controller[152344]: 2025-10-02T08:31:54Z|00630|binding|INFO|Removing iface tap11588b0e-a5 ovn-installed in OVS
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.654 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:21:b6 10.100.1.99'], port_security=['fa:16:3e:62:21:b6 10.100.1.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.99/24', 'neutron:device_id': '49564059-b2ef-4053-bedd-56a9afb53d2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43544beb-5350-4aeb-8684-3bce1489b358', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bf824d6-f862-4c8b-b3e3-334590c7fa1c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=11588b0e-a5d2-490e-b72a-3d31e1d091e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:54 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 02 08:31:54 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000042.scope: Consumed 3.920s CPU time.
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 systemd-machined[214636]: Machine qemu-74-instance-00000042 terminated.
Oct 02 08:31:54 compute-0 NetworkManager[45129]: <info>  [1759393914.7613] manager: (tap11588b0e-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.779 2 INFO nova.virt.libvirt.driver [-] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Instance destroyed successfully.
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.780 2 DEBUG nova.objects.instance [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'resources' on Instance uuid 49564059-b2ef-4053-bedd-56a9afb53d2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:54 compute-0 neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a[327275]: [NOTICE]   (327288) : haproxy version is 2.8.14-c23fe91
Oct 02 08:31:54 compute-0 neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a[327275]: [NOTICE]   (327288) : path to executable is /usr/sbin/haproxy
Oct 02 08:31:54 compute-0 neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a[327275]: [ALERT]    (327288) : Current worker (327297) exited with code 143 (Terminated)
Oct 02 08:31:54 compute-0 neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a[327275]: [WARNING]  (327288) : All workers exited. Exiting... (0)
Oct 02 08:31:54 compute-0 systemd[1]: libpod-c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324.scope: Deactivated successfully.
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.797 2 DEBUG nova.virt.libvirt.vif [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-785143250',display_name='tempest-ServersTestMultiNic-server-785143250',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-785143250',id=66,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-cbfgptpc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:51Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=49564059-b2ef-4053-bedd-56a9afb53d2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.798 2 DEBUG nova.network.os_vif_util [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "address": "fa:16:3e:62:c5:05", "network": {"id": "6be132cd-1736-40b4-94e9-fab0bc5bb76a", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-477300430", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079652d3-bf", "ovs_interfaceid": "079652d3-bf75-4bfc-9a4d-f208f0313a7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:54 compute-0 podman[328241]: 2025-10-02 08:31:54.801092808 +0000 UTC m=+0.059961521 container died c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.799 2 DEBUG nova.network.os_vif_util [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:c5:05,bridge_name='br-int',has_traffic_filtering=True,id=079652d3-bf75-4bfc-9a4d-f208f0313a7d,network=Network(6be132cd-1736-40b4-94e9-fab0bc5bb76a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079652d3-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.799 2 DEBUG os_vif [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:c5:05,bridge_name='br-int',has_traffic_filtering=True,id=079652d3-bf75-4bfc-9a4d-f208f0313a7d,network=Network(6be132cd-1736-40b4-94e9-fab0bc5bb76a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079652d3-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap079652d3-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.812 2 INFO os_vif [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:c5:05,bridge_name='br-int',has_traffic_filtering=True,id=079652d3-bf75-4bfc-9a4d-f208f0313a7d,network=Network(6be132cd-1736-40b4-94e9-fab0bc5bb76a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079652d3-bf')
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.812 2 DEBUG nova.virt.libvirt.vif [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-785143250',display_name='tempest-ServersTestMultiNic-server-785143250',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-785143250',id=66,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-cbfgptpc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:51Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=49564059-b2ef-4053-bedd-56a9afb53d2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.813 2 DEBUG nova.network.os_vif_util [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "address": "fa:16:3e:62:21:b6", "network": {"id": "43544beb-5350-4aeb-8684-3bce1489b358", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2007163273", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11588b0e-a5", "ovs_interfaceid": "11588b0e-a5d2-490e-b72a-3d31e1d091e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.813 2 DEBUG nova.network.os_vif_util [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:21:b6,bridge_name='br-int',has_traffic_filtering=True,id=11588b0e-a5d2-490e-b72a-3d31e1d091e4,network=Network(43544beb-5350-4aeb-8684-3bce1489b358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11588b0e-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.814 2 DEBUG os_vif [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:21:b6,bridge_name='br-int',has_traffic_filtering=True,id=11588b0e-a5d2-490e-b72a-3d31e1d091e4,network=Network(43544beb-5350-4aeb-8684-3bce1489b358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11588b0e-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11588b0e-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.819 2 DEBUG nova.network.neutron [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Successfully created port: 85f814fe-3a2f-40e4-a15b-e8b0c466670c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.824 2 INFO os_vif [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:21:b6,bridge_name='br-int',has_traffic_filtering=True,id=11588b0e-a5d2-490e-b72a-3d31e1d091e4,network=Network(43544beb-5350-4aeb-8684-3bce1489b358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11588b0e-a5')
Oct 02 08:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324-userdata-shm.mount: Deactivated successfully.
Oct 02 08:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cca918a7498189e7a4563b38616ac1806a129c65f561bc1b9c93d39b19f12ba-merged.mount: Deactivated successfully.
Oct 02 08:31:54 compute-0 podman[328241]: 2025-10-02 08:31:54.840175591 +0000 UTC m=+0.099044304 container cleanup c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 08:31:54 compute-0 systemd[1]: libpod-conmon-c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324.scope: Deactivated successfully.
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.886 2 DEBUG nova.storage.rbd_utils [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] removing snapshot(4cb22976425a45ac8ea44390da03a9c9) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:31:54 compute-0 podman[328304]: 2025-10-02 08:31:54.92847343 +0000 UTC m=+0.065050099 container remove c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.936 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2dee2ea6-03c4-4519-993b-4b4a96a4bcfc]: (4, ('Thu Oct  2 08:31:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a (c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324)\nc9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324\nThu Oct  2 08:31:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a (c9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324)\nc9513a4fd046a18da9ceb16661ee457bba0abee477b975a8b110d6916ad6a324\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3078baa-181a-4f85-bfc2-a707eba4756b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.942 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6be132cd-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:54 compute-0 kernel: tap6be132cd-10: left promiscuous mode
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593300172' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:54.965 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ae88eeb8-b5f2-4112-95aa-22d75206cf00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:54 compute-0 nova_compute[260603]: 2025-10-02 08:31:54.979 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.005 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1941ceae-530b-4b2c-b57f-0e8a9ae092f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.007 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79c2fbf4-a9c0-40ee-adf4-a5301a120cf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.010 2 DEBUG nova.storage.rbd_utils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.021 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5537dd50-316f-46a3-9f56-e1f7b0ab8885]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478715, 'reachable_time': 18575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328361, 'error': None, 'target': 'ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.022 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.024 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6be132cd-1736-40b4-94e9-fab0bc5bb76a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.024 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2c1075-e437-452b-ba95-83f983ca8ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d6be132cd\x2d1736\x2d40b4\x2d94e9\x2dfab0bc5bb76a.mount: Deactivated successfully.
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.026 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 11588b0e-a5d2-490e-b72a-3d31e1d091e4 in datapath 43544beb-5350-4aeb-8684-3bce1489b358 unbound from our chassis
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.027 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43544beb-5350-4aeb-8684-3bce1489b358, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:31:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 451 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 8.6 MiB/s wr, 479 op/s
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.028 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1a5922-036a-492a-a04b-3f8a8133a632]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.029 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358 namespace which is not needed anymore
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.084 2 DEBUG nova.compute.manager [req-55cf4927-a81b-4d91-936b-99f08e3b725b req-96d00416-55f7-4115-8eed-5c35e2814dc6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-unplugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.087 2 DEBUG oslo_concurrency.lockutils [req-55cf4927-a81b-4d91-936b-99f08e3b725b req-96d00416-55f7-4115-8eed-5c35e2814dc6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.088 2 DEBUG oslo_concurrency.lockutils [req-55cf4927-a81b-4d91-936b-99f08e3b725b req-96d00416-55f7-4115-8eed-5c35e2814dc6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.088 2 DEBUG oslo_concurrency.lockutils [req-55cf4927-a81b-4d91-936b-99f08e3b725b req-96d00416-55f7-4115-8eed-5c35e2814dc6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.089 2 DEBUG nova.compute.manager [req-55cf4927-a81b-4d91-936b-99f08e3b725b req-96d00416-55f7-4115-8eed-5c35e2814dc6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] No waiting events found dispatching network-vif-unplugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.089 2 DEBUG nova.compute.manager [req-55cf4927-a81b-4d91-936b-99f08e3b725b req-96d00416-55f7-4115-8eed-5c35e2814dc6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-unplugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:31:55 compute-0 neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358[327401]: [NOTICE]   (327417) : haproxy version is 2.8.14-c23fe91
Oct 02 08:31:55 compute-0 neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358[327401]: [NOTICE]   (327417) : path to executable is /usr/sbin/haproxy
Oct 02 08:31:55 compute-0 neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358[327401]: [WARNING]  (327417) : Exiting Master process...
Oct 02 08:31:55 compute-0 neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358[327401]: [WARNING]  (327417) : Exiting Master process...
Oct 02 08:31:55 compute-0 neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358[327401]: [ALERT]    (327417) : Current worker (327419) exited with code 143 (Terminated)
Oct 02 08:31:55 compute-0 neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358[327401]: [WARNING]  (327417) : All workers exited. Exiting... (0)
Oct 02 08:31:55 compute-0 systemd[1]: libpod-20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359.scope: Deactivated successfully.
Oct 02 08:31:55 compute-0 podman[328380]: 2025-10-02 08:31:55.161049678 +0000 UTC m=+0.046189906 container died 20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359-userdata-shm.mount: Deactivated successfully.
Oct 02 08:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e24951565c7b76b808bcaa4a1e1b8709afd980f1410db799d0627fefb727daaa-merged.mount: Deactivated successfully.
Oct 02 08:31:55 compute-0 podman[328380]: 2025-10-02 08:31:55.197657203 +0000 UTC m=+0.082797431 container cleanup 20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:31:55 compute-0 systemd[1]: libpod-conmon-20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359.scope: Deactivated successfully.
Oct 02 08:31:55 compute-0 podman[328430]: 2025-10-02 08:31:55.263190327 +0000 UTC m=+0.042270903 container remove 20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 08:31:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 02 08:31:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.272 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcec655-1ef3-4df0-9ca1-4f77339c9ad2]: (4, ('Thu Oct  2 08:31:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358 (20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359)\n20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359\nThu Oct  2 08:31:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358 (20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359)\n20966af7415dfd5cf2c0be6708b1cd428f0f274974b9d5ec5bb3dc2392564359\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.273 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a63ede9b-4e59-40b9-8827-5185cd70570b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.275 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43544beb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:55 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 02 08:31:55 compute-0 kernel: tap43544beb-50: left promiscuous mode
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:55 compute-0 ceph-mon[74477]: osdmap e214: 3 total, 3 up, 3 in
Oct 02 08:31:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3593300172' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.347 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef1bd6e-c8dc-480a-8b07-81e6d77aaea6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.378 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08f0ea0a-0026-41a6-96b9-024fb1e301d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.380 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5401ffc7-3d34-4139-8a13-bf1391fe7c40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.378 2 DEBUG nova.storage.rbd_utils [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(snap) on rbd image(96cbedbb-d6de-403c-86d9-6a14bf3d5624) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a03a1ec-a0ee-481e-90c6-3fa182420aa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478810, 'reachable_time': 33220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328445, 'error': None, 'target': 'ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.412 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43544beb-5350-4aeb-8684-3bce1489b358 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:31:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:55.413 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[827b78f6-a9e3-4568-ae2c-cf379f8aed5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.455 2 INFO nova.virt.libvirt.driver [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Deleting instance files /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c_del
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.456 2 INFO nova.virt.libvirt.driver [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Deletion of /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c_del complete
Oct 02 08:31:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584169051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.535 2 INFO nova.compute.manager [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Took 1.01 seconds to destroy the instance on the hypervisor.
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.536 2 DEBUG oslo.service.loopingcall [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.536 2 DEBUG nova.compute.manager [-] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.537 2 DEBUG nova.network.neutron [-] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.549 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.551 2 DEBUG nova.virt.libvirt.vif [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-314573417',display_name='tempest-ServerDiskConfigTestJSON-server-314573417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-314573417',id=67,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-qn6v65fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTe
stJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:50Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=ccaeb1d7-f1d2-43fb-b36a-793776c713cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.551 2 DEBUG nova.network.os_vif_util [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.553 2 DEBUG nova.network.os_vif_util [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:cf:5b,bridge_name='br-int',has_traffic_filtering=True,id=c265ae6a-2342-4b12-a28b-b08b728b8356,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc265ae6a-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.554 2 DEBUG nova.objects.instance [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid ccaeb1d7-f1d2-43fb-b36a-793776c713cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.576 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <uuid>ccaeb1d7-f1d2-43fb-b36a-793776c713cd</uuid>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <name>instance-00000043</name>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-314573417</nova:name>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:54</nova:creationTime>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <nova:user uuid="116b114f14f84e4cbd6cc966e29d82e7">tempest-ServerDiskConfigTestJSON-1277806880-project-member</nova:user>
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <nova:project uuid="bce7493292bb47cfb7168bca89f78f4a">tempest-ServerDiskConfigTestJSON-1277806880</nova:project>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <nova:port uuid="c265ae6a-2342-4b12-a28b-b08b728b8356">
Oct 02 08:31:55 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <entry name="serial">ccaeb1d7-f1d2-43fb-b36a-793776c713cd</entry>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <entry name="uuid">ccaeb1d7-f1d2-43fb-b36a-793776c713cd</entry>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk">
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk.config">
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:55 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:a6:cf:5b"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <target dev="tapc265ae6a-23"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd/console.log" append="off"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:55 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:55 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:55 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:55 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:55 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.577 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Preparing to wait for external event network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.578 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.578 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.579 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.580 2 DEBUG nova.virt.libvirt.vif [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-314573417',display_name='tempest-ServerDiskConfigTestJSON-server-314573417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-314573417',id=67,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-qn6v65fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:50Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=ccaeb1d7-f1d2-43fb-b36a-793776c713cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.581 2 DEBUG nova.network.os_vif_util [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.582 2 DEBUG nova.network.os_vif_util [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:cf:5b,bridge_name='br-int',has_traffic_filtering=True,id=c265ae6a-2342-4b12-a28b-b08b728b8356,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc265ae6a-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.583 2 DEBUG os_vif [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:cf:5b,bridge_name='br-int',has_traffic_filtering=True,id=c265ae6a-2342-4b12-a28b-b08b728b8356,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc265ae6a-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.591 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc265ae6a-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc265ae6a-23, col_values=(('external_ids', {'iface-id': 'c265ae6a-2342-4b12-a28b-b08b728b8356', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:cf:5b', 'vm-uuid': 'ccaeb1d7-f1d2-43fb-b36a-793776c713cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:55 compute-0 NetworkManager[45129]: <info>  [1759393915.5962] manager: (tapc265ae6a-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.604 2 INFO os_vif [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:cf:5b,bridge_name='br-int',has_traffic_filtering=True,id=c265ae6a-2342-4b12-a28b-b08b728b8356,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc265ae6a-23')
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.690 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.690 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.691 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No VIF found with MAC fa:16:3e:a6:cf:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.692 2 INFO nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Using config drive
Oct 02 08:31:55 compute-0 nova_compute[260603]: 2025-10-02 08:31:55.727 2 DEBUG nova.storage.rbd_utils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d43544beb\x2d5350\x2d4aeb\x2d8684\x2d3bce1489b358.mount: Deactivated successfully.
Oct 02 08:31:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 02 08:31:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 02 08:31:56 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 02 08:31:56 compute-0 ceph-mon[74477]: pgmap v1538: 305 pgs: 305 active+clean; 451 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 8.6 MiB/s wr, 479 op/s
Oct 02 08:31:56 compute-0 ceph-mon[74477]: osdmap e215: 3 total, 3 up, 3 in
Oct 02 08:31:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3584169051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:56 compute-0 nova_compute[260603]: 2025-10-02 08:31:56.888 2 INFO nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Creating config drive at /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd/disk.config
Oct 02 08:31:56 compute-0 nova_compute[260603]: 2025-10-02 08:31:56.894 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphod8yr3b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 451 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 175 op/s
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.066 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphod8yr3b" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.104 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.106 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.111 2 DEBUG nova.storage.rbd_utils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.117 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd/disk.config ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 02 08:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 02 08:31:57 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.232 2 DEBUG nova.network.neutron [req-0a90ca6f-18a7-4e7a-84e3-a700e6ca551a req-cb7f0d9d-6e6e-414a-b068-2dd627104df7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Updated VIF entry in instance network info cache for port c265ae6a-2342-4b12-a28b-b08b728b8356. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.233 2 DEBUG nova.network.neutron [req-0a90ca6f-18a7-4e7a-84e3-a700e6ca551a req-cb7f0d9d-6e6e-414a-b068-2dd627104df7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Updating instance_info_cache with network_info: [{"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.237 2 DEBUG nova.network.neutron [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Successfully updated port: 85f814fe-3a2f-40e4-a15b-e8b0c466670c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.256 2 DEBUG oslo_concurrency.lockutils [req-0a90ca6f-18a7-4e7a-84e3-a700e6ca551a req-cb7f0d9d-6e6e-414a-b068-2dd627104df7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ccaeb1d7-f1d2-43fb-b36a-793776c713cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.259 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-751dd598-d3c9-4e21-90ab-98962fac748d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.260 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-751dd598-d3c9-4e21-90ab-98962fac748d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.260 2 DEBUG nova.network.neutron [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.323 2 DEBUG nova.compute.manager [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-unplugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.324 2 DEBUG oslo_concurrency.lockutils [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.324 2 DEBUG oslo_concurrency.lockutils [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.324 2 DEBUG oslo_concurrency.lockutils [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.325 2 DEBUG nova.compute.manager [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] No waiting events found dispatching network-vif-unplugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.325 2 DEBUG nova.compute.manager [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-unplugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.325 2 DEBUG nova.compute.manager [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.325 2 DEBUG oslo_concurrency.lockutils [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.326 2 DEBUG oslo_concurrency.lockutils [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.326 2 DEBUG oslo_concurrency.lockutils [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.326 2 DEBUG nova.compute.manager [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] No waiting events found dispatching network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.326 2 WARNING nova.compute.manager [req-b808e07e-b8d7-49d2-a3dd-585054b96454 req-01d186b5-6869-4ae7-9fe9-cb2c6190c02a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received unexpected event network-vif-plugged-079652d3-bf75-4bfc-9a4d-f208f0313a7d for instance with vm_state active and task_state deleting.
Oct 02 08:31:57 compute-0 ceph-mon[74477]: osdmap e216: 3 total, 3 up, 3 in
Oct 02 08:31:57 compute-0 ceph-mon[74477]: osdmap e217: 3 total, 3 up, 3 in
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.370 2 DEBUG oslo_concurrency.processutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd/disk.config ccaeb1d7-f1d2-43fb-b36a-793776c713cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.371 2 INFO nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Deleting local config drive /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd/disk.config because it was imported into RBD.
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.402 2 DEBUG nova.compute.manager [req-b727a21c-254c-4e52-95fc-842a0cd90275 req-d8e98042-02aa-49b0-8b82-c1e2428fd5a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.403 2 DEBUG oslo_concurrency.lockutils [req-b727a21c-254c-4e52-95fc-842a0cd90275 req-d8e98042-02aa-49b0-8b82-c1e2428fd5a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.403 2 DEBUG oslo_concurrency.lockutils [req-b727a21c-254c-4e52-95fc-842a0cd90275 req-d8e98042-02aa-49b0-8b82-c1e2428fd5a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.404 2 DEBUG oslo_concurrency.lockutils [req-b727a21c-254c-4e52-95fc-842a0cd90275 req-d8e98042-02aa-49b0-8b82-c1e2428fd5a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.404 2 DEBUG nova.compute.manager [req-b727a21c-254c-4e52-95fc-842a0cd90275 req-d8e98042-02aa-49b0-8b82-c1e2428fd5a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] No waiting events found dispatching network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.405 2 WARNING nova.compute.manager [req-b727a21c-254c-4e52-95fc-842a0cd90275 req-d8e98042-02aa-49b0-8b82-c1e2428fd5a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received unexpected event network-vif-plugged-11588b0e-a5d2-490e-b72a-3d31e1d091e4 for instance with vm_state active and task_state deleting.
Oct 02 08:31:57 compute-0 kernel: tapc265ae6a-23: entered promiscuous mode
Oct 02 08:31:57 compute-0 NetworkManager[45129]: <info>  [1759393917.4658] manager: (tapc265ae6a-23): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:57 compute-0 ovn_controller[152344]: 2025-10-02T08:31:57Z|00631|binding|INFO|Claiming lport c265ae6a-2342-4b12-a28b-b08b728b8356 for this chassis.
Oct 02 08:31:57 compute-0 ovn_controller[152344]: 2025-10-02T08:31:57Z|00632|binding|INFO|c265ae6a-2342-4b12-a28b-b08b728b8356: Claiming fa:16:3e:a6:cf:5b 10.100.0.11
Oct 02 08:31:57 compute-0 systemd-udevd[328219]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.468 2 DEBUG nova.network.neutron [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.473 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:cf:5b 10.100.0.11'], port_security=['fa:16:3e:a6:cf:5b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ccaeb1d7-f1d2-43fb-b36a-793776c713cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c265ae6a-2342-4b12-a28b-b08b728b8356) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.476 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c265ae6a-2342-4b12-a28b-b08b728b8356 in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b bound to our chassis
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.479 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:57 compute-0 ovn_controller[152344]: 2025-10-02T08:31:57Z|00633|binding|INFO|Setting lport c265ae6a-2342-4b12-a28b-b08b728b8356 ovn-installed in OVS
Oct 02 08:31:57 compute-0 ovn_controller[152344]: 2025-10-02T08:31:57Z|00634|binding|INFO|Setting lport c265ae6a-2342-4b12-a28b-b08b728b8356 up in Southbound
Oct 02 08:31:57 compute-0 NetworkManager[45129]: <info>  [1759393917.4926] device (tapc265ae6a-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:57 compute-0 NetworkManager[45129]: <info>  [1759393917.4998] device (tapc265ae6a-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.503 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d48f51-b576-43cf-9596-969b48ea1062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.504 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8df0af1-11 in ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.507 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8df0af1-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.508 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3464054d-9a6d-4df4-aa5b-a6a95dba62c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78a675b1-c999-4eff-8195-ee7d844745aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.533 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e1fef5f5-7cc5-45f5-8f49-21716f65f675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 systemd-machined[214636]: New machine qemu-75-instance-00000043.
Oct 02 08:31:57 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000043.
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.571 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09b5c50b-e98c-4fea-8470-68e76d72660e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.623 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[33cb245a-1b6e-4c96-a2b4-36b8dec52286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 NetworkManager[45129]: <info>  [1759393917.6355] manager: (tapf8df0af1-10): new Veth device (/org/freedesktop/NetworkManager/Devices/273)
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.634 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b6271b88-8242-47f2-90e2-3db0e5d45aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 podman[328538]: 2025-10-02 08:31:57.668247954 +0000 UTC m=+0.136759114 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:31:57 compute-0 systemd-udevd[328572]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.684 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb45d3a9-add2-4bd9-a721-01e2ce02b03a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.693 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[57248e67-5077-4650-9f47-bda744323fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 NetworkManager[45129]: <info>  [1759393917.7199] device (tapf8df0af1-10): carrier: link connected
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.727 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c56651d-7bdd-45ae-ace1-afffeef87f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.753 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ae1281-6b1c-4862-a9d5-a0cbca16ae9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479632, 'reachable_time': 22441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328591, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.771 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dea444e4-8809-4258-b91a-4b1fc7e50875]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:6ddc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479632, 'tstamp': 479632}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328592, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.795 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9e936b-feb3-4c88-902b-8ef580aeb1cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479632, 'reachable_time': 22441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328593, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.829 2 DEBUG nova.network.neutron [-] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.835 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7323785-cc1f-4d32-b5b1-cf872d37a724]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.853 2 INFO nova.compute.manager [-] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Took 2.32 seconds to deallocate network for instance.
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.903 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.904 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.911 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[82cdf92d-eecb-48c3-90c7-44c7ff91d127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.912 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.913 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.913 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8df0af1-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:31:57 compute-0 kernel: tapf8df0af1-10: entered promiscuous mode
Oct 02 08:31:57 compute-0 NetworkManager[45129]: <info>  [1759393917.9495] manager: (tapf8df0af1-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.952 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8df0af1-10, col_values=(('external_ids', {'iface-id': '1405e724-f2f6-4a95-8848-550131e62910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:57 compute-0 ovn_controller[152344]: 2025-10-02T08:31:57Z|00635|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.981 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:31:57 compute-0 nova_compute[260603]: 2025-10-02 08:31:57.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.982 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[234871bc-ed39-430a-9272-5defbea33897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.983 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:31:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:31:57.984 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'env', 'PROCESS_TAG=haproxy-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8df0af1-1767-419a-8500-c28fbf45ae4b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.087 2 DEBUG oslo_concurrency.processutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.178 2 INFO nova.virt.libvirt.driver [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Snapshot image upload complete
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.179 2 INFO nova.compute.manager [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 5.44 seconds to snapshot the instance on the hypervisor.
Oct 02 08:31:58 compute-0 ceph-mon[74477]: pgmap v1541: 305 pgs: 305 active+clean; 451 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 175 op/s
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.372 2 DEBUG nova.network.neutron [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Updating instance_info_cache with network_info: [{"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.396 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-751dd598-d3c9-4e21-90ab-98962fac748d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.397 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Instance network_info: |[{"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.401 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Start _get_guest_xml network_info=[{"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.412 2 WARNING nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.418 2 DEBUG nova.virt.libvirt.host [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.419 2 DEBUG nova.virt.libvirt.host [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.422 2 DEBUG nova.virt.libvirt.host [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.423 2 DEBUG nova.virt.libvirt.host [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.423 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.424 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.425 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.425 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.425 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.426 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.426 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.426 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.427 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.427 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.427 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.428 2 DEBUG nova.virt.hardware [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.436 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:58 compute-0 podman[328642]: 2025-10-02 08:31:58.446480322 +0000 UTC m=+0.082140240 container create 1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.486 2 DEBUG nova.compute.manager [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.486 2 DEBUG nova.compute.manager [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.487 2 DEBUG nova.compute.manager [None req-1384105a-5999-4d67-9b7c-8ee534955af6 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deleting image 14d99014-bbf1-4f02-9b3b-cd6971c554c9 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Oct 02 08:31:58 compute-0 podman[328642]: 2025-10-02 08:31:58.407803212 +0000 UTC m=+0.043463140 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:31:58 compute-0 systemd[1]: Started libpod-conmon-1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c.scope.
Oct 02 08:31:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:31:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660dd8a1af004e50dd70d9999bc618b2bd1b5d1f0376a79e24fd1281af0a402b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:31:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005791950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:58 compute-0 podman[328642]: 2025-10-02 08:31:58.551108099 +0000 UTC m=+0.186768017 container init 1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.563 2 DEBUG oslo_concurrency.processutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:58 compute-0 podman[328642]: 2025-10-02 08:31:58.564518754 +0000 UTC m=+0.200178652 container start 1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.569 2 DEBUG nova.compute.provider_tree [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.589 2 DEBUG nova.scheduler.client.report [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:31:58 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[328684]: [NOTICE]   (328707) : New worker (328726) forked
Oct 02 08:31:58 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[328684]: [NOTICE]   (328707) : Loading success.
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.613 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.644 2 INFO nova.scheduler.client.report [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Deleted allocations for instance 49564059-b2ef-4053-bedd-56a9afb53d2c
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.711 2 DEBUG oslo_concurrency.lockutils [None req-16b504ec-43bc-4287-a016-1c301eb15497 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181980581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.902 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.922 2 DEBUG nova.storage.rbd_utils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 751dd598-d3c9-4e21-90ab-98962fac748d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:58 compute-0 nova_compute[260603]: 2025-10-02 08:31:58.926 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:31:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 530 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 408 op/s
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.047 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393919.046048, ccaeb1d7-f1d2-43fb-b36a-793776c713cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.048 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] VM Started (Lifecycle Event)
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.068 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.072 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393919.0465653, ccaeb1d7-f1d2-43fb-b36a-793776c713cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.072 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] VM Paused (Lifecycle Event)
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.090 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.092 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.108 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:31:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051866115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.356 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.358 2 DEBUG nova.virt.libvirt.vif [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-297675523',display_name='tempest-ServersTestJSON-server-297675523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-297675523',id=68,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6EsSTY7yfqwWqV2wONjEiOBCPbEgYNMv++CFHq/eVVMD1864AeAvLESrcEA+x/FOzRYDGvMAsww4qqCZZeNF9FJf7N7IJrCEpt7jqBXBJvYAAWW7+VlmnaT3mQyhl2PA==',key_name='tempest-key-1208335161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-s656zu26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:53Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=751dd598-d3c9-4e21-90ab-98962fac748d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.359 2 DEBUG nova.network.os_vif_util [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.360 2 DEBUG nova.network.os_vif_util [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:ad:e1,bridge_name='br-int',has_traffic_filtering=True,id=85f814fe-3a2f-40e4-a15b-e8b0c466670c,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85f814fe-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.363 2 DEBUG nova.objects.instance [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 751dd598-d3c9-4e21-90ab-98962fac748d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:31:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 02 08:31:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3005791950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:31:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2181980581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1051866115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.380 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <uuid>751dd598-d3c9-4e21-90ab-98962fac748d</uuid>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <name>instance-00000044</name>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestJSON-server-297675523</nova:name>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:31:58</nova:creationTime>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <nova:port uuid="85f814fe-3a2f-40e4-a15b-e8b0c466670c">
Oct 02 08:31:59 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <system>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <entry name="serial">751dd598-d3c9-4e21-90ab-98962fac748d</entry>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <entry name="uuid">751dd598-d3c9-4e21-90ab-98962fac748d</entry>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </system>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <os>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   </os>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <features>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   </features>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/751dd598-d3c9-4e21-90ab-98962fac748d_disk">
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/751dd598-d3c9-4e21-90ab-98962fac748d_disk.config">
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       </source>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:31:59 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:44:ad:e1"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <target dev="tap85f814fe-3a"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d/console.log" append="off"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <video>
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </video>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:31:59 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:31:59 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:31:59 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:31:59 compute-0 nova_compute[260603]: </domain>
Oct 02 08:31:59 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.382 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Preparing to wait for external event network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.383 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.383 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.384 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.385 2 DEBUG nova.virt.libvirt.vif [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-297675523',display_name='tempest-ServersTestJSON-server-297675523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-297675523',id=68,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6EsSTY7yfqwWqV2wONjEiOBCPbEgYNMv++CFHq/eVVMD1864AeAvLESrcEA+x/FOzRYDGvMAsww4qqCZZeNF9FJf7N7IJrCEpt7jqBXBJvYAAWW7+VlmnaT3mQyhl2PA==',key_name='tempest-key-1208335161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-s656zu26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:53Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=751dd598-d3c9-4e21-90ab-98962fac748d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.385 2 DEBUG nova.network.os_vif_util [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.386 2 DEBUG nova.network.os_vif_util [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:ad:e1,bridge_name='br-int',has_traffic_filtering=True,id=85f814fe-3a2f-40e4-a15b-e8b0c466670c,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85f814fe-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.387 2 DEBUG os_vif [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:ad:e1,bridge_name='br-int',has_traffic_filtering=True,id=85f814fe-3a2f-40e4-a15b-e8b0c466670c,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85f814fe-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:31:59 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85f814fe-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85f814fe-3a, col_values=(('external_ids', {'iface-id': '85f814fe-3a2f-40e4-a15b-e8b0c466670c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:ad:e1', 'vm-uuid': '751dd598-d3c9-4e21-90ab-98962fac748d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:31:59 compute-0 NetworkManager[45129]: <info>  [1759393919.4007] manager: (tap85f814fe-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.413 2 INFO os_vif [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:ad:e1,bridge_name='br-int',has_traffic_filtering=True,id=85f814fe-3a2f-40e4-a15b-e8b0c466670c,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85f814fe-3a')
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.501 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.502 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.503 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:44:ad:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.504 2 INFO nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Using config drive
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.539 2 DEBUG nova.storage.rbd_utils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 751dd598-d3c9-4e21-90ab-98962fac748d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.792 2 DEBUG nova.compute.manager [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received event network-changed-85f814fe-3a2f-40e4-a15b-e8b0c466670c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.793 2 DEBUG nova.compute.manager [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Refreshing instance network info cache due to event network-changed-85f814fe-3a2f-40e4-a15b-e8b0c466670c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.794 2 DEBUG oslo_concurrency.lockutils [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-751dd598-d3c9-4e21-90ab-98962fac748d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.794 2 DEBUG oslo_concurrency.lockutils [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-751dd598-d3c9-4e21-90ab-98962fac748d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.795 2 DEBUG nova.network.neutron [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Refreshing network info cache for port 85f814fe-3a2f-40e4-a15b-e8b0c466670c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.833 2 DEBUG nova.compute.manager [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received event network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.833 2 DEBUG oslo_concurrency.lockutils [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.834 2 DEBUG oslo_concurrency.lockutils [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.834 2 DEBUG oslo_concurrency.lockutils [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.835 2 DEBUG nova.compute.manager [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Processing event network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.835 2 DEBUG nova.compute.manager [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received event network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.836 2 DEBUG oslo_concurrency.lockutils [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.836 2 DEBUG oslo_concurrency.lockutils [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.837 2 DEBUG oslo_concurrency.lockutils [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.837 2 DEBUG nova.compute.manager [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] No waiting events found dispatching network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.838 2 WARNING nova.compute.manager [req-d766142d-689a-4168-804c-8f3cdc94e7ef req-ce1de6c2-94b4-4b8e-8510-0699d41aff3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received unexpected event network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 for instance with vm_state building and task_state spawning.
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.839 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.845 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393919.8450527, ccaeb1d7-f1d2-43fb-b36a-793776c713cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.846 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] VM Resumed (Lifecycle Event)
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.849 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.853 2 INFO nova.virt.libvirt.driver [-] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Instance spawned successfully.
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.854 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.868 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.876 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.882 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.883 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.884 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.884 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.884 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.884 2 DEBUG nova.virt.libvirt.driver [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.904 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.982 2 INFO nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Creating config drive at /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d/disk.config
Oct 02 08:31:59 compute-0 nova_compute[260603]: 2025-10-02 08:31:59.991 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgt11qibi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.069 2 INFO nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Took 9.66 seconds to spawn the instance on the hypervisor.
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.071 2 DEBUG nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.137 2 INFO nova.compute.manager [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Took 11.04 seconds to build instance.
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.155 2 DEBUG oslo_concurrency.lockutils [None req-bb06dded-a217-4096-84ae-bae2a3143c20 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.166 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgt11qibi" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.189 2 DEBUG nova.storage.rbd_utils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 751dd598-d3c9-4e21-90ab-98962fac748d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.193 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d/disk.config 751dd598-d3c9-4e21-90ab-98962fac748d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.367 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393905.3500617, 24339ad4-fec2-43f8-8da3-5e433206a1cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.368 2 INFO nova.compute.manager [-] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] VM Stopped (Lifecycle Event)
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.376 2 DEBUG oslo_concurrency.processutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d/disk.config 751dd598-d3c9-4e21-90ab-98962fac748d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.376 2 INFO nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Deleting local config drive /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d/disk.config because it was imported into RBD.
Oct 02 08:32:00 compute-0 ceph-mon[74477]: pgmap v1543: 305 pgs: 305 active+clean; 530 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 408 op/s
Oct 02 08:32:00 compute-0 ceph-mon[74477]: osdmap e218: 3 total, 3 up, 3 in
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.386 2 DEBUG nova.compute.manager [None req-511b27b7-cd9f-4f44-97d8-41cd79375c69 - - - - - -] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:00 compute-0 kernel: tap85f814fe-3a: entered promiscuous mode
Oct 02 08:32:00 compute-0 NetworkManager[45129]: <info>  [1759393920.4387] manager: (tap85f814fe-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Oct 02 08:32:00 compute-0 systemd-udevd[328585]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:00 compute-0 ovn_controller[152344]: 2025-10-02T08:32:00Z|00636|binding|INFO|Claiming lport 85f814fe-3a2f-40e4-a15b-e8b0c466670c for this chassis.
Oct 02 08:32:00 compute-0 ovn_controller[152344]: 2025-10-02T08:32:00Z|00637|binding|INFO|85f814fe-3a2f-40e4-a15b-e8b0c466670c: Claiming fa:16:3e:44:ad:e1 10.100.0.12
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.450 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:ad:e1 10.100.0.12'], port_security=['fa:16:3e:44:ad:e1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '751dd598-d3c9-4e21-90ab-98962fac748d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=85f814fe-3a2f-40e4-a15b-e8b0c466670c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.451 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 85f814fe-3a2f-40e4-a15b-e8b0c466670c in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 bound to our chassis
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.452 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:00 compute-0 NetworkManager[45129]: <info>  [1759393920.4623] device (tap85f814fe-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:32:00 compute-0 NetworkManager[45129]: <info>  [1759393920.4645] device (tap85f814fe-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:32:00 compute-0 ovn_controller[152344]: 2025-10-02T08:32:00Z|00638|binding|INFO|Setting lport 85f814fe-3a2f-40e4-a15b-e8b0c466670c ovn-installed in OVS
Oct 02 08:32:00 compute-0 ovn_controller[152344]: 2025-10-02T08:32:00Z|00639|binding|INFO|Setting lport 85f814fe-3a2f-40e4-a15b-e8b0c466670c up in Southbound
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c6d68d-6cf0-44c9-ad6f-4a12aa1cf676]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:00 compute-0 systemd-machined[214636]: New machine qemu-76-instance-00000044.
Oct 02 08:32:00 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000044.
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.518 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[817102d8-3d31-45d3-ab37-f42bbe260ab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.522 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[979cdae0-528f-4a6d-afa1-d771b3aaeb7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.569 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8e01f7d8-f5dd-48f4-9f87-728eb228676c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[032d99d6-d499-487b-9738-2ed668ecb0ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328863, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.622 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b380596-d768-403d-9dbb-36f2bc38cd41]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328865, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328865, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.624 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:00 compute-0 nova_compute[260603]: 2025-10-02 08:32:00.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.627 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.627 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.628 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:00.628 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 530 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 338 op/s
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.395 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393906.39478, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.396 2 INFO nova.compute.manager [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Stopped (Lifecycle Event)
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.402 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393921.4015853, 751dd598-d3c9-4e21-90ab-98962fac748d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.402 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] VM Started (Lifecycle Event)
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.426 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.429 2 DEBUG nova.compute.manager [None req-54bc5b51-b79a-4bdb-b474-e5868b0bb13e - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.431 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393921.4018915, 751dd598-d3c9-4e21-90ab-98962fac748d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.431 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] VM Paused (Lifecycle Event)
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.462 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.482 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.897 2 DEBUG nova.network.neutron [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Updated VIF entry in instance network info cache for port 85f814fe-3a2f-40e4-a15b-e8b0c466670c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.898 2 DEBUG nova.network.neutron [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Updating instance_info_cache with network_info: [{"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.933 2 DEBUG oslo_concurrency.lockutils [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-751dd598-d3c9-4e21-90ab-98962fac748d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.934 2 DEBUG nova.compute.manager [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-deleted-079652d3-bf75-4bfc-9a4d-f208f0313a7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:01 compute-0 nova_compute[260603]: 2025-10-02 08:32:01.935 2 DEBUG nova.compute.manager [req-bab5af06-c124-44ce-85c7-4864e6ed1238 req-0ec55d63-62cb-47ae-90a1-d42e4631e9cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Received event network-vif-deleted-11588b0e-a5d2-490e-b72a-3d31e1d091e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.017 2 DEBUG nova.compute.manager [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received event network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.018 2 DEBUG oslo_concurrency.lockutils [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.019 2 DEBUG oslo_concurrency.lockutils [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.019 2 DEBUG oslo_concurrency.lockutils [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.020 2 DEBUG nova.compute.manager [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Processing event network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.020 2 DEBUG nova.compute.manager [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received event network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.021 2 DEBUG oslo_concurrency.lockutils [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.021 2 DEBUG oslo_concurrency.lockutils [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.022 2 DEBUG oslo_concurrency.lockutils [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.022 2 DEBUG nova.compute.manager [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] No waiting events found dispatching network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.023 2 WARNING nova.compute.manager [req-da1acb41-f129-4c64-8c69-3878ddae3efb req-1161fe10-990f-4b2d-9dc9-899e4d64f5ee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received unexpected event network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c for instance with vm_state building and task_state spawning.
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.024 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.040 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393922.0284655, 751dd598-d3c9-4e21-90ab-98962fac748d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.041 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] VM Resumed (Lifecycle Event)
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.044 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.050 2 INFO nova.virt.libvirt.driver [-] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Instance spawned successfully.
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.051 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.085 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.091 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.094 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.094 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.094 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.095 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.095 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.095 2 DEBUG nova.virt.libvirt.driver [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:02.108 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.121 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 02 08:32:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 02 08:32:02 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.165 2 INFO nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Took 8.73 seconds to spawn the instance on the hypervisor.
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.165 2 DEBUG nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.228 2 INFO nova.compute.manager [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Took 9.90 seconds to build instance.
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.244 2 DEBUG oslo_concurrency.lockutils [None req-bc9d2bb9-1fc2-4d87-9a2e-0741173bf7bf 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:02 compute-0 ceph-mon[74477]: pgmap v1545: 305 pgs: 305 active+clean; 530 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 338 op/s
Oct 02 08:32:02 compute-0 ceph-mon[74477]: osdmap e219: 3 total, 3 up, 3 in
Oct 02 08:32:02 compute-0 ovn_controller[152344]: 2025-10-02T08:32:02Z|00640|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:32:02 compute-0 ovn_controller[152344]: 2025-10-02T08:32:02Z|00641|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct 02 08:32:02 compute-0 ovn_controller[152344]: 2025-10-02T08:32:02Z|00642|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:02 compute-0 nova_compute[260603]: 2025-10-02 08:32:02.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 469 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 11 MiB/s wr, 541 op/s
Oct 02 08:32:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 02 08:32:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 02 08:32:03 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 02 08:32:04 compute-0 ceph-mon[74477]: pgmap v1547: 305 pgs: 305 active+clean; 469 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 11 MiB/s wr, 541 op/s
Oct 02 08:32:04 compute-0 ceph-mon[74477]: osdmap e220: 3 total, 3 up, 3 in
Oct 02 08:32:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 02 08:32:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:04 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.703 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "751dd598-d3c9-4e21-90ab-98962fac748d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.704 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.704 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.705 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.705 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.707 2 INFO nova.compute.manager [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Terminating instance
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.709 2 DEBUG nova.compute.manager [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:32:04 compute-0 kernel: tap85f814fe-3a (unregistering): left promiscuous mode
Oct 02 08:32:04 compute-0 NetworkManager[45129]: <info>  [1759393924.7490] device (tap85f814fe-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:32:04 compute-0 ovn_controller[152344]: 2025-10-02T08:32:04Z|00643|binding|INFO|Releasing lport 85f814fe-3a2f-40e4-a15b-e8b0c466670c from this chassis (sb_readonly=0)
Oct 02 08:32:04 compute-0 ovn_controller[152344]: 2025-10-02T08:32:04Z|00644|binding|INFO|Setting lport 85f814fe-3a2f-40e4-a15b-e8b0c466670c down in Southbound
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:04 compute-0 ovn_controller[152344]: 2025-10-02T08:32:04Z|00645|binding|INFO|Removing iface tap85f814fe-3a ovn-installed in OVS
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.767 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:ad:e1 10.100.0.12'], port_security=['fa:16:3e:44:ad:e1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '751dd598-d3c9-4e21-90ab-98962fac748d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=85f814fe-3a2f-40e4-a15b-e8b0c466670c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.768 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 85f814fe-3a2f-40e4-a15b-e8b0c466670c in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.770 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:04 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Deactivated successfully.
Oct 02 08:32:04 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Consumed 3.419s CPU time.
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bed79619-1a7c-4902-893e-4301ab29e0ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:04 compute-0 systemd-machined[214636]: Machine qemu-76-instance-00000044 terminated.
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.821 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5377f3-8722-428e-b99f-fbb0cc0864f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.827 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb26092f-427f-4e52-aaaa-2829ca62cb9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.855 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8399f7c1-0143-4848-b198-070f1ecadb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcb6c03-99f7-4b62-98eb-396227e80992]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328922, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.896 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[57bb350d-d4ed-40d3-b053-eb14976034e8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328923, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328923, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.908 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.908 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.909 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:04.909 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.944 2 INFO nova.virt.libvirt.driver [-] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Instance destroyed successfully.
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.945 2 DEBUG nova.objects.instance [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'resources' on Instance uuid 751dd598-d3c9-4e21-90ab-98962fac748d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.963 2 DEBUG nova.virt.libvirt.vif [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-297675523',display_name='tempest-ServersTestJSON-server-297675523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-297675523',id=68,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6EsSTY7yfqwWqV2wONjEiOBCPbEgYNMv++CFHq/eVVMD1864AeAvLESrcEA+x/FOzRYDGvMAsww4qqCZZeNF9FJf7N7IJrCEpt7jqBXBJvYAAWW7+VlmnaT3mQyhl2PA==',key_name='tempest-key-1208335161',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-s656zu26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:02Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=751dd598-d3c9-4e21-90ab-98962fac748d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.964 2 DEBUG nova.network.os_vif_util [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "address": "fa:16:3e:44:ad:e1", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85f814fe-3a", "ovs_interfaceid": "85f814fe-3a2f-40e4-a15b-e8b0c466670c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.965 2 DEBUG nova.network.os_vif_util [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:ad:e1,bridge_name='br-int',has_traffic_filtering=True,id=85f814fe-3a2f-40e4-a15b-e8b0c466670c,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85f814fe-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.966 2 DEBUG os_vif [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:ad:e1,bridge_name='br-int',has_traffic_filtering=True,id=85f814fe-3a2f-40e4-a15b-e8b0c466670c,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85f814fe-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.973 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85f814fe-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:04 compute-0 nova_compute[260603]: 2025-10-02 08:32:04.980 2 INFO os_vif [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:ad:e1,bridge_name='br-int',has_traffic_filtering=True,id=85f814fe-3a2f-40e4-a15b-e8b0c466670c,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85f814fe-3a')
Oct 02 08:32:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 451 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 32 KiB/s wr, 306 op/s
Oct 02 08:32:05 compute-0 nova_compute[260603]: 2025-10-02 08:32:05.310 2 INFO nova.virt.libvirt.driver [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Deleting instance files /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d_del
Oct 02 08:32:05 compute-0 nova_compute[260603]: 2025-10-02 08:32:05.312 2 INFO nova.virt.libvirt.driver [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Deletion of /var/lib/nova/instances/751dd598-d3c9-4e21-90ab-98962fac748d_del complete
Oct 02 08:32:05 compute-0 nova_compute[260603]: 2025-10-02 08:32:05.395 2 INFO nova.compute.manager [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 02 08:32:05 compute-0 nova_compute[260603]: 2025-10-02 08:32:05.396 2 DEBUG oslo.service.loopingcall [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:32:05 compute-0 nova_compute[260603]: 2025-10-02 08:32:05.397 2 DEBUG nova.compute.manager [-] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:32:05 compute-0 nova_compute[260603]: 2025-10-02 08:32:05.397 2 DEBUG nova.network.neutron [-] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:32:05 compute-0 ceph-mon[74477]: osdmap e221: 3 total, 3 up, 3 in
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.140 2 DEBUG nova.compute.manager [req-3f302b2a-6dee-4265-96a1-823ec43709ca req-646df397-331b-485d-a782-866db0faf48e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received event network-vif-unplugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.141 2 DEBUG oslo_concurrency.lockutils [req-3f302b2a-6dee-4265-96a1-823ec43709ca req-646df397-331b-485d-a782-866db0faf48e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.142 2 DEBUG oslo_concurrency.lockutils [req-3f302b2a-6dee-4265-96a1-823ec43709ca req-646df397-331b-485d-a782-866db0faf48e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.142 2 DEBUG oslo_concurrency.lockutils [req-3f302b2a-6dee-4265-96a1-823ec43709ca req-646df397-331b-485d-a782-866db0faf48e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.142 2 DEBUG nova.compute.manager [req-3f302b2a-6dee-4265-96a1-823ec43709ca req-646df397-331b-485d-a782-866db0faf48e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] No waiting events found dispatching network-vif-unplugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.143 2 DEBUG nova.compute.manager [req-3f302b2a-6dee-4265-96a1-823ec43709ca req-646df397-331b-485d-a782-866db0faf48e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received event network-vif-unplugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.279 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.280 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.281 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.281 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.282 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.283 2 INFO nova.compute.manager [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Terminating instance
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.285 2 DEBUG nova.compute.manager [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:32:06 compute-0 kernel: tapc265ae6a-23 (unregistering): left promiscuous mode
Oct 02 08:32:06 compute-0 NetworkManager[45129]: <info>  [1759393926.3391] device (tapc265ae6a-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:06 compute-0 ovn_controller[152344]: 2025-10-02T08:32:06Z|00646|binding|INFO|Releasing lport c265ae6a-2342-4b12-a28b-b08b728b8356 from this chassis (sb_readonly=0)
Oct 02 08:32:06 compute-0 ovn_controller[152344]: 2025-10-02T08:32:06Z|00647|binding|INFO|Setting lport c265ae6a-2342-4b12-a28b-b08b728b8356 down in Southbound
Oct 02 08:32:06 compute-0 ovn_controller[152344]: 2025-10-02T08:32:06Z|00648|binding|INFO|Removing iface tapc265ae6a-23 ovn-installed in OVS
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.360 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:cf:5b 10.100.0.11'], port_security=['fa:16:3e:a6:cf:5b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ccaeb1d7-f1d2-43fb-b36a-793776c713cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c265ae6a-2342-4b12-a28b-b08b728b8356) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.362 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c265ae6a-2342-4b12-a28b-b08b728b8356 in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.364 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8df0af1-1767-419a-8500-c28fbf45ae4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.366 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[73eebc30-9a77-492e-982e-045706ae3a92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.367 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace which is not needed anymore
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:06 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 02 08:32:06 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Consumed 8.025s CPU time.
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.411 2 DEBUG nova.network.neutron [-] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:06 compute-0 systemd-machined[214636]: Machine qemu-75-instance-00000043 terminated.
Oct 02 08:32:06 compute-0 ceph-mon[74477]: pgmap v1550: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 451 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 32 KiB/s wr, 306 op/s
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.432 2 INFO nova.compute.manager [-] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Took 1.03 seconds to deallocate network for instance.
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.470 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.471 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:06 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[328684]: [NOTICE]   (328707) : haproxy version is 2.8.14-c23fe91
Oct 02 08:32:06 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[328684]: [NOTICE]   (328707) : path to executable is /usr/sbin/haproxy
Oct 02 08:32:06 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[328684]: [WARNING]  (328707) : Exiting Master process...
Oct 02 08:32:06 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[328684]: [ALERT]    (328707) : Current worker (328726) exited with code 143 (Terminated)
Oct 02 08:32:06 compute-0 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[328684]: [WARNING]  (328707) : All workers exited. Exiting... (0)
Oct 02 08:32:06 compute-0 systemd[1]: libpod-1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c.scope: Deactivated successfully.
Oct 02 08:32:06 compute-0 podman[328976]: 2025-10-02 08:32:06.53653483 +0000 UTC m=+0.062125210 container died 1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.545 2 INFO nova.virt.libvirt.driver [-] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Instance destroyed successfully.
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.545 2 DEBUG nova.objects.instance [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'resources' on Instance uuid ccaeb1d7-f1d2-43fb-b36a-793776c713cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.560 2 DEBUG nova.virt.libvirt.vif [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:31:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-314573417',display_name='tempest-ServerDiskConfigTestJSON-server-314573417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-314573417',id=67,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-qn6v65fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:04Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=ccaeb1d7-f1d2-43fb-b36a-793776c713cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.561 2 DEBUG nova.network.os_vif_util [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "c265ae6a-2342-4b12-a28b-b08b728b8356", "address": "fa:16:3e:a6:cf:5b", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc265ae6a-23", "ovs_interfaceid": "c265ae6a-2342-4b12-a28b-b08b728b8356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.562 2 DEBUG nova.network.os_vif_util [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:cf:5b,bridge_name='br-int',has_traffic_filtering=True,id=c265ae6a-2342-4b12-a28b-b08b728b8356,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc265ae6a-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.562 2 DEBUG os_vif [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:cf:5b,bridge_name='br-int',has_traffic_filtering=True,id=c265ae6a-2342-4b12-a28b-b08b728b8356,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc265ae6a-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.564 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc265ae6a-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.618 2 INFO os_vif [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:cf:5b,bridge_name='br-int',has_traffic_filtering=True,id=c265ae6a-2342-4b12-a28b-b08b728b8356,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc265ae6a-23')
Oct 02 08:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c-userdata-shm.mount: Deactivated successfully.
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.637 2 DEBUG oslo_concurrency.processutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-660dd8a1af004e50dd70d9999bc618b2bd1b5d1f0376a79e24fd1281af0a402b-merged.mount: Deactivated successfully.
Oct 02 08:32:06 compute-0 podman[328976]: 2025-10-02 08:32:06.647156812 +0000 UTC m=+0.172747142 container cleanup 1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:32:06 compute-0 systemd[1]: libpod-conmon-1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c.scope: Deactivated successfully.
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.679 2 DEBUG nova.compute.manager [req-cfc7cf83-0de7-48ca-bf54-6e83f40d3c8f req-f144356c-7b44-441a-abbd-72a7029f6dc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received event network-vif-unplugged-c265ae6a-2342-4b12-a28b-b08b728b8356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.679 2 DEBUG oslo_concurrency.lockutils [req-cfc7cf83-0de7-48ca-bf54-6e83f40d3c8f req-f144356c-7b44-441a-abbd-72a7029f6dc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.680 2 DEBUG oslo_concurrency.lockutils [req-cfc7cf83-0de7-48ca-bf54-6e83f40d3c8f req-f144356c-7b44-441a-abbd-72a7029f6dc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.680 2 DEBUG oslo_concurrency.lockutils [req-cfc7cf83-0de7-48ca-bf54-6e83f40d3c8f req-f144356c-7b44-441a-abbd-72a7029f6dc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.681 2 DEBUG nova.compute.manager [req-cfc7cf83-0de7-48ca-bf54-6e83f40d3c8f req-f144356c-7b44-441a-abbd-72a7029f6dc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] No waiting events found dispatching network-vif-unplugged-c265ae6a-2342-4b12-a28b-b08b728b8356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.681 2 DEBUG nova.compute.manager [req-cfc7cf83-0de7-48ca-bf54-6e83f40d3c8f req-f144356c-7b44-441a-abbd-72a7029f6dc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received event network-vif-unplugged-c265ae6a-2342-4b12-a28b-b08b728b8356 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:32:06 compute-0 podman[329032]: 2025-10-02 08:32:06.721763167 +0000 UTC m=+0.053096399 container remove 1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.727 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[64f20132-5488-4a40-ae7d-0c9cfd4494bd]: (4, ('Thu Oct  2 08:32:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c)\n1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c\nThu Oct  2 08:32:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c)\n1f7bab63882e44abfefe8a5eeec73c1922f5e99d50d3b194a6bdd490e1999f4c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.730 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17294f8c-13bf-44c6-a995-1fc05c3cea28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:06 compute-0 kernel: tapf8df0af1-10: left promiscuous mode
Oct 02 08:32:06 compute-0 nova_compute[260603]: 2025-10-02 08:32:06.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.758 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e1103c-d2e0-483d-aed7-5d53b2e61403]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.786 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7044ff7e-d9cd-4fcc-ab12-d2af08e48ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.787 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59cb15cf-9830-4e11-ba42-93631ed1f822]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4679f6c0-b9a6-48bc-8d82-daecc8849dbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479621, 'reachable_time': 41847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329061, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:06 compute-0 systemd[1]: run-netns-ovnmeta\x2df8df0af1\x2d1767\x2d419a\x2d8500\x2dc28fbf45ae4b.mount: Deactivated successfully.
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.815 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:32:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:06.816 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b04336c7-ac22-4360-999b-e36a2ed986bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.021 2 INFO nova.virt.libvirt.driver [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Deleting instance files /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd_del
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.023 2 INFO nova.virt.libvirt.driver [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Deletion of /var/lib/nova/instances/ccaeb1d7-f1d2-43fb-b36a-793776c713cd_del complete
Oct 02 08:32:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 451 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 30 KiB/s wr, 288 op/s
Oct 02 08:32:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1194516534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.108 2 INFO nova.compute.manager [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.109 2 DEBUG oslo.service.loopingcall [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.110 2 DEBUG nova.compute.manager [-] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.110 2 DEBUG nova.network.neutron [-] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.115 2 DEBUG oslo_concurrency.processutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.123 2 DEBUG nova.compute.provider_tree [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.141 2 DEBUG nova.scheduler.client.report [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 02 08:32:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 02 08:32:07 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.166 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.194 2 INFO nova.scheduler.client.report [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Deleted allocations for instance 751dd598-d3c9-4e21-90ab-98962fac748d
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.273 2 DEBUG oslo_concurrency.lockutils [None req-bc7f2ca7-7d43-4059-9ef3-3e44d2748155 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1194516534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:07 compute-0 ceph-mon[74477]: osdmap e222: 3 total, 3 up, 3 in
Oct 02 08:32:07 compute-0 nova_compute[260603]: 2025-10-02 08:32:07.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.336 2 DEBUG nova.network.neutron [-] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.353 2 INFO nova.compute.manager [-] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Took 1.24 seconds to deallocate network for instance.
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.407 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.407 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:08 compute-0 ceph-mon[74477]: pgmap v1551: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 451 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 30 KiB/s wr, 288 op/s
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.495 2 DEBUG oslo_concurrency.processutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.847 2 DEBUG nova.compute.manager [req-4254319f-6c8f-45f5-b580-505d04b35dba req-c7064ba3-3da4-4050-b746-5b20a839ba76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received event network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.849 2 DEBUG oslo_concurrency.lockutils [req-4254319f-6c8f-45f5-b580-505d04b35dba req-c7064ba3-3da4-4050-b746-5b20a839ba76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.849 2 DEBUG oslo_concurrency.lockutils [req-4254319f-6c8f-45f5-b580-505d04b35dba req-c7064ba3-3da4-4050-b746-5b20a839ba76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.850 2 DEBUG oslo_concurrency.lockutils [req-4254319f-6c8f-45f5-b580-505d04b35dba req-c7064ba3-3da4-4050-b746-5b20a839ba76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "751dd598-d3c9-4e21-90ab-98962fac748d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.850 2 DEBUG nova.compute.manager [req-4254319f-6c8f-45f5-b580-505d04b35dba req-c7064ba3-3da4-4050-b746-5b20a839ba76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] No waiting events found dispatching network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.851 2 WARNING nova.compute.manager [req-4254319f-6c8f-45f5-b580-505d04b35dba req-c7064ba3-3da4-4050-b746-5b20a839ba76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received unexpected event network-vif-plugged-85f814fe-3a2f-40e4-a15b-e8b0c466670c for instance with vm_state deleted and task_state None.
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.852 2 DEBUG nova.compute.manager [req-4254319f-6c8f-45f5-b580-505d04b35dba req-c7064ba3-3da4-4050-b746-5b20a839ba76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Received event network-vif-deleted-85f814fe-3a2f-40e4-a15b-e8b0c466670c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:08 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787708047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.971 2 DEBUG oslo_concurrency.processutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:08 compute-0 nova_compute[260603]: 2025-10-02 08:32:08.977 2 DEBUG nova.compute.provider_tree [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.000 2 DEBUG nova.scheduler.client.report [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.009 2 DEBUG nova.compute.manager [req-a323986a-2b64-412f-9c2d-b3272861ce60 req-ebd519a3-de26-4a77-bfb9-e672d81cbf3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received event network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.009 2 DEBUG oslo_concurrency.lockutils [req-a323986a-2b64-412f-9c2d-b3272861ce60 req-ebd519a3-de26-4a77-bfb9-e672d81cbf3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.009 2 DEBUG oslo_concurrency.lockutils [req-a323986a-2b64-412f-9c2d-b3272861ce60 req-ebd519a3-de26-4a77-bfb9-e672d81cbf3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.010 2 DEBUG oslo_concurrency.lockutils [req-a323986a-2b64-412f-9c2d-b3272861ce60 req-ebd519a3-de26-4a77-bfb9-e672d81cbf3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.010 2 DEBUG nova.compute.manager [req-a323986a-2b64-412f-9c2d-b3272861ce60 req-ebd519a3-de26-4a77-bfb9-e672d81cbf3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] No waiting events found dispatching network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.010 2 WARNING nova.compute.manager [req-a323986a-2b64-412f-9c2d-b3272861ce60 req-ebd519a3-de26-4a77-bfb9-e672d81cbf3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received unexpected event network-vif-plugged-c265ae6a-2342-4b12-a28b-b08b728b8356 for instance with vm_state deleted and task_state None.
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.022 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 200 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 12 KiB/s wr, 335 op/s
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.064 2 INFO nova.scheduler.client.report [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Deleted allocations for instance ccaeb1d7-f1d2-43fb-b36a-793776c713cd
Oct 02 08:32:09 compute-0 ovn_controller[152344]: 2025-10-02T08:32:09Z|00649|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:32:09 compute-0 ovn_controller[152344]: 2025-10-02T08:32:09Z|00650|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.165 2 DEBUG oslo_concurrency.lockutils [None req-71fca6e8-e607-4485-b413-7e70368c73cf 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "ccaeb1d7-f1d2-43fb-b36a-793776c713cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/787708047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.777 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393914.775981, 49564059-b2ef-4053-bedd-56a9afb53d2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.778 2 INFO nova.compute.manager [-] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] VM Stopped (Lifecycle Event)
Oct 02 08:32:09 compute-0 nova_compute[260603]: 2025-10-02 08:32:09.808 2 DEBUG nova.compute.manager [None req-73d4ea5b-eb85-4c6f-a617-060a7a7f921d - - - - - -] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:10 compute-0 ceph-mon[74477]: pgmap v1553: 305 pgs: 305 active+clean; 200 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 12 KiB/s wr, 335 op/s
Oct 02 08:32:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 200 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 9.0 KiB/s wr, 255 op/s
Oct 02 08:32:11 compute-0 nova_compute[260603]: 2025-10-02 08:32:11.322 2 DEBUG nova.compute.manager [req-c439f4ce-0ab8-4c45-b137-3b4e6d757d82 req-bd7783c3-dcf4-4aec-ae9a-e46e5066fe34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Received event network-vif-deleted-c265ae6a-2342-4b12-a28b-b08b728b8356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:11 compute-0 nova_compute[260603]: 2025-10-02 08:32:11.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 02 08:32:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 02 08:32:12 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 02 08:32:12 compute-0 ceph-mon[74477]: pgmap v1554: 305 pgs: 305 active+clean; 200 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 9.0 KiB/s wr, 255 op/s
Oct 02 08:32:12 compute-0 ceph-mon[74477]: osdmap e223: 3 total, 3 up, 3 in
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.677 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "50c9773b-004f-491e-abcf-6698fbd8ca3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.678 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.705 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.793 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.794 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.800 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.800 2 INFO nova.compute.claims [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.945 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:12 compute-0 nova_compute[260603]: 2025-10-02 08:32:12.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 200 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.4 KiB/s wr, 199 op/s
Oct 02 08:32:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248758229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.416 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.423 2 DEBUG nova.compute.provider_tree [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.439 2 DEBUG nova.scheduler.client.report [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1248758229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.462 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.463 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.516 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.517 2 DEBUG nova.network.neutron [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.528 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.528 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.546 2 INFO nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.571 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.683 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.685 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.686 2 INFO nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Creating image(s)
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.722 2 DEBUG nova.storage.rbd_utils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.748 2 DEBUG nova.storage.rbd_utils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.772 2 DEBUG nova.storage.rbd_utils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.777 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.873 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.874 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.874 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.875 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.896 2 DEBUG nova.storage.rbd_utils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:13 compute-0 nova_compute[260603]: 2025-10-02 08:32:13.899 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.034 2 DEBUG nova.policy [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.168 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.242 2 DEBUG nova.storage.rbd_utils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.308 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.309 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.346 2 DEBUG nova.objects.instance [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 50c9773b-004f-491e-abcf-6698fbd8ca3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.358 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.366 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.367 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Ensure instance console log exists: /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.367 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.368 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.368 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:14 compute-0 ceph-mon[74477]: pgmap v1556: 305 pgs: 305 active+clean; 200 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.4 KiB/s wr, 199 op/s
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.491 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.492 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.502 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.502 2 INFO nova.compute.claims [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:32:14 compute-0 nova_compute[260603]: 2025-10-02 08:32:14.675 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 200 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.4 KiB/s wr, 199 op/s
Oct 02 08:32:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219537803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.151 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.160 2 DEBUG nova.compute.provider_tree [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.184 2 DEBUG nova.scheduler.client.report [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.220 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.221 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.287 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.287 2 DEBUG nova.network.neutron [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.312 2 INFO nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.332 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.463 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.464 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.465 2 INFO nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Creating image(s)
Oct 02 08:32:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4219537803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.495 2 DEBUG nova.storage.rbd_utils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.521 2 DEBUG nova.storage.rbd_utils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.546 2 DEBUG nova.storage.rbd_utils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.551 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.600 2 DEBUG nova.policy [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9020ed38b31d46f88625374b2a76aef6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.652 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.652 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.653 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.654 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.680 2 DEBUG nova.storage.rbd_utils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.684 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.798 2 DEBUG nova.network.neutron [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Successfully created port: 099dc779-5949-4fbd-969a-1200ae071364 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:32:15 compute-0 nova_compute[260603]: 2025-10-02 08:32:15.964 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.040 2 DEBUG nova.storage.rbd_utils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] resizing rbd image 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.141 2 DEBUG nova.objects.instance [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'migration_context' on Instance uuid 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.171 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.171 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Ensure instance console log exists: /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.172 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.172 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.172 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.347 2 DEBUG nova.network.neutron [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Successfully created port: c55dac75-c247-4672-885b-8b1adb241591 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:32:16 compute-0 ceph-mon[74477]: pgmap v1557: 305 pgs: 305 active+clean; 200 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.4 KiB/s wr, 199 op/s
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:16 compute-0 ovn_controller[152344]: 2025-10-02T08:32:16Z|00651|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:32:16 compute-0 ovn_controller[152344]: 2025-10-02T08:32:16Z|00652|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct 02 08:32:16 compute-0 nova_compute[260603]: 2025-10-02 08:32:16.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 200 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.6 KiB/s wr, 161 op/s
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.069 2 DEBUG nova.network.neutron [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Successfully updated port: c55dac75-c247-4672-885b-8b1adb241591 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.085 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "refresh_cache-19dd1983-6b14-4ed7-bcb1-f620e7426cc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.085 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquired lock "refresh_cache-19dd1983-6b14-4ed7-bcb1-f620e7426cc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.085 2 DEBUG nova.network.neutron [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:32:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.545 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.654 2 DEBUG nova.network.neutron [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.675 2 DEBUG nova.compute.manager [req-5525be61-a49e-4f8d-bbe9-c9c6497f6d44 req-e184d7fb-a03e-43ef-b907-2b25fb9d6f33 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received event network-changed-c55dac75-c247-4672-885b-8b1adb241591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.676 2 DEBUG nova.compute.manager [req-5525be61-a49e-4f8d-bbe9-c9c6497f6d44 req-e184d7fb-a03e-43ef-b907-2b25fb9d6f33 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Refreshing instance network info cache due to event network-changed-c55dac75-c247-4672-885b-8b1adb241591. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.676 2 DEBUG oslo_concurrency.lockutils [req-5525be61-a49e-4f8d-bbe9-c9c6497f6d44 req-e184d7fb-a03e-43ef-b907-2b25fb9d6f33 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-19dd1983-6b14-4ed7-bcb1-f620e7426cc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.854 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.855 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.855 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.855 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:17 compute-0 nova_compute[260603]: 2025-10-02 08:32:17.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:18 compute-0 podman[329473]: 2025-10-02 08:32:18.029730263 +0000 UTC m=+0.080737276 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:32:18 compute-0 podman[329472]: 2025-10-02 08:32:18.044805131 +0000 UTC m=+0.111465250 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.060 2 DEBUG nova.network.neutron [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Successfully updated port: 099dc779-5949-4fbd-969a-1200ae071364 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.084 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-50c9773b-004f-491e-abcf-6698fbd8ca3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.085 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-50c9773b-004f-491e-abcf-6698fbd8ca3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.085 2 DEBUG nova.network.neutron [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.309 2 DEBUG nova.compute.manager [req-59ba9726-70ce-4f25-81d3-ae5b74364090 req-80805d02-fe40-49ac-aac8-e145b8506e20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received event network-changed-099dc779-5949-4fbd-969a-1200ae071364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.309 2 DEBUG nova.compute.manager [req-59ba9726-70ce-4f25-81d3-ae5b74364090 req-80805d02-fe40-49ac-aac8-e145b8506e20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Refreshing instance network info cache due to event network-changed-099dc779-5949-4fbd-969a-1200ae071364. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.309 2 DEBUG oslo_concurrency.lockutils [req-59ba9726-70ce-4f25-81d3-ae5b74364090 req-80805d02-fe40-49ac-aac8-e145b8506e20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-50c9773b-004f-491e-abcf-6698fbd8ca3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.455 2 DEBUG nova.network.neutron [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Updating instance_info_cache with network_info: [{"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.468 2 DEBUG nova.network.neutron [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.476 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Releasing lock "refresh_cache-19dd1983-6b14-4ed7-bcb1-f620e7426cc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.477 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Instance network_info: |[{"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.478 2 DEBUG oslo_concurrency.lockutils [req-5525be61-a49e-4f8d-bbe9-c9c6497f6d44 req-e184d7fb-a03e-43ef-b907-2b25fb9d6f33 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-19dd1983-6b14-4ed7-bcb1-f620e7426cc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.479 2 DEBUG nova.network.neutron [req-5525be61-a49e-4f8d-bbe9-c9c6497f6d44 req-e184d7fb-a03e-43ef-b907-2b25fb9d6f33 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Refreshing network info cache for port c55dac75-c247-4672-885b-8b1adb241591 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.484 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Start _get_guest_xml network_info=[{"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.493 2 WARNING nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:32:18 compute-0 ceph-mon[74477]: pgmap v1558: 305 pgs: 305 active+clean; 200 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.6 KiB/s wr, 161 op/s
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.503 2 DEBUG nova.virt.libvirt.host [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.504 2 DEBUG nova.virt.libvirt.host [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.508 2 DEBUG nova.virt.libvirt.host [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.509 2 DEBUG nova.virt.libvirt.host [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.509 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.510 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.511 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.511 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.512 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.512 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.512 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.513 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.513 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.513 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.514 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.514 2 DEBUG nova.virt.hardware [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.519 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325067560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.973 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.994 2 DEBUG nova.storage.rbd_utils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:18 compute-0 nova_compute[260603]: 2025-10-02 08:32:18.998 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 4.3 MiB/s wr, 65 op/s
Oct 02 08:32:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145141501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.481 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.482 2 DEBUG nova.virt.libvirt.vif [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-326848172',display_name='tempest-ServerActionsTestOtherB-server-326848172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-326848172',id=70,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-ku6xzkx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:15Z,user_data=None,user_id='9020ed38b31d46f88625374b2a76aef6',uuid=19dd1983-6b14-4ed7-bcb1-f620e7426cc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.483 2 DEBUG nova.network.os_vif_util [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.483 2 DEBUG nova.network.os_vif_util [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:a0:e4,bridge_name='br-int',has_traffic_filtering=True,id=c55dac75-c247-4672-885b-8b1adb241591,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc55dac75-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.484 2 DEBUG nova.objects.instance [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'pci_devices' on Instance uuid 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/325067560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1145141501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.505 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <uuid>19dd1983-6b14-4ed7-bcb1-f620e7426cc6</uuid>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <name>instance-00000046</name>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestOtherB-server-326848172</nova:name>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:32:18</nova:creationTime>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <nova:user uuid="9020ed38b31d46f88625374b2a76aef6">tempest-ServerActionsTestOtherB-1644249004-project-member</nova:user>
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <nova:project uuid="eda0caa41e4740148ab99d5ebf9e27ba">tempest-ServerActionsTestOtherB-1644249004</nova:project>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <nova:port uuid="c55dac75-c247-4672-885b-8b1adb241591">
Oct 02 08:32:19 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <system>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <entry name="serial">19dd1983-6b14-4ed7-bcb1-f620e7426cc6</entry>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <entry name="uuid">19dd1983-6b14-4ed7-bcb1-f620e7426cc6</entry>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </system>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <os>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   </os>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <features>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   </features>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk">
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk.config">
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:08:a0:e4"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <target dev="tapc55dac75-c2"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6/console.log" append="off"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <video>
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </video>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:32:19 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:32:19 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:32:19 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:32:19 compute-0 nova_compute[260603]: </domain>
Oct 02 08:32:19 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.506 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Preparing to wait for external event network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.506 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.506 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.506 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.507 2 DEBUG nova.virt.libvirt.vif [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-326848172',display_name='tempest-ServerActionsTestOtherB-server-326848172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-326848172',id=70,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-ku6xzkx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:15Z,user_data=None,user_id='9020ed38b31d46f88625374b2a76aef6',uuid=19dd1983-6b14-4ed7-bcb1-f620e7426cc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.507 2 DEBUG nova.network.os_vif_util [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.508 2 DEBUG nova.network.os_vif_util [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:a0:e4,bridge_name='br-int',has_traffic_filtering=True,id=c55dac75-c247-4672-885b-8b1adb241591,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc55dac75-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.508 2 DEBUG os_vif [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:a0:e4,bridge_name='br-int',has_traffic_filtering=True,id=c55dac75-c247-4672-885b-8b1adb241591,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc55dac75-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc55dac75-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc55dac75-c2, col_values=(('external_ids', {'iface-id': 'c55dac75-c247-4672-885b-8b1adb241591', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:a0:e4', 'vm-uuid': '19dd1983-6b14-4ed7-bcb1-f620e7426cc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:19 compute-0 NetworkManager[45129]: <info>  [1759393939.5139] manager: (tapc55dac75-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.519 2 INFO os_vif [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:a0:e4,bridge_name='br-int',has_traffic_filtering=True,id=c55dac75-c247-4672-885b-8b1adb241591,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc55dac75-c2')
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.598 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.600 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.600 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No VIF found with MAC fa:16:3e:08:a0:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.601 2 INFO nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Using config drive
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.635 2 DEBUG nova.storage.rbd_utils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.941 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393924.9409263, 751dd598-d3c9-4e21-90ab-98962fac748d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.942 2 INFO nova.compute.manager [-] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] VM Stopped (Lifecycle Event)
Oct 02 08:32:19 compute-0 nova_compute[260603]: 2025-10-02 08:32:19.982 2 DEBUG nova.compute.manager [None req-972970f8-591c-458e-ba1e-3a5f32dc977c - - - - - -] [instance: 751dd598-d3c9-4e21-90ab-98962fac748d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.244 2 DEBUG nova.network.neutron [req-5525be61-a49e-4f8d-bbe9-c9c6497f6d44 req-e184d7fb-a03e-43ef-b907-2b25fb9d6f33 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Updated VIF entry in instance network info cache for port c55dac75-c247-4672-885b-8b1adb241591. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.245 2 DEBUG nova.network.neutron [req-5525be61-a49e-4f8d-bbe9-c9c6497f6d44 req-e184d7fb-a03e-43ef-b907-2b25fb9d6f33 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Updating instance_info_cache with network_info: [{"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.274 2 DEBUG nova.network.neutron [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Updating instance_info_cache with network_info: [{"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.278 2 DEBUG oslo_concurrency.lockutils [req-5525be61-a49e-4f8d-bbe9-c9c6497f6d44 req-e184d7fb-a03e-43ef-b907-2b25fb9d6f33 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-19dd1983-6b14-4ed7-bcb1-f620e7426cc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.300 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-50c9773b-004f-491e-abcf-6698fbd8ca3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.301 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Instance network_info: |[{"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.301 2 DEBUG oslo_concurrency.lockutils [req-59ba9726-70ce-4f25-81d3-ae5b74364090 req-80805d02-fe40-49ac-aac8-e145b8506e20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-50c9773b-004f-491e-abcf-6698fbd8ca3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.302 2 DEBUG nova.network.neutron [req-59ba9726-70ce-4f25-81d3-ae5b74364090 req-80805d02-fe40-49ac-aac8-e145b8506e20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Refreshing network info cache for port 099dc779-5949-4fbd-969a-1200ae071364 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.307 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Start _get_guest_xml network_info=[{"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.314 2 WARNING nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.321 2 DEBUG nova.virt.libvirt.host [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.322 2 DEBUG nova.virt.libvirt.host [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.330 2 DEBUG nova.virt.libvirt.host [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.331 2 DEBUG nova.virt.libvirt.host [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.331 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.332 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.333 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.333 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.333 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.334 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.334 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.334 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.335 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.335 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.336 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.336 2 DEBUG nova.virt.hardware [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.341 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:20 compute-0 ceph-mon[74477]: pgmap v1559: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 4.3 MiB/s wr, 65 op/s
Oct 02 08:32:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765598787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.870 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.901 2 DEBUG nova.storage.rbd_utils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:20 compute-0 nova_compute[260603]: 2025-10-02 08:32:20.905 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 4.3 MiB/s wr, 65 op/s
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.224 2 INFO nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Creating config drive at /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6/disk.config
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.231 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_khxizf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.282 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.304 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.304 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.305 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.305 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105301229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.352 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.354 2 DEBUG nova.virt.libvirt.vif [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1154721187',display_name='tempest-ServersTestJSON-server-1154721187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1154721187',id=69,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-k01r9hb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:13Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=50c9773b-004f-491e-abcf-6698fbd8ca3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.354 2 DEBUG nova.network.os_vif_util [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.356 2 DEBUG nova.network.os_vif_util [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:31:42,bridge_name='br-int',has_traffic_filtering=True,id=099dc779-5949-4fbd-969a-1200ae071364,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap099dc779-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.357 2 DEBUG nova.objects.instance [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50c9773b-004f-491e-abcf-6698fbd8ca3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.370 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_khxizf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.399 2 DEBUG nova.storage.rbd_utils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.402 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6/disk.config 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.439 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <uuid>50c9773b-004f-491e-abcf-6698fbd8ca3d</uuid>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <name>instance-00000045</name>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestJSON-server-1154721187</nova:name>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:32:20</nova:creationTime>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <nova:port uuid="099dc779-5949-4fbd-969a-1200ae071364">
Oct 02 08:32:21 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <system>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <entry name="serial">50c9773b-004f-491e-abcf-6698fbd8ca3d</entry>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <entry name="uuid">50c9773b-004f-491e-abcf-6698fbd8ca3d</entry>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </system>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <os>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   </os>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <features>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   </features>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/50c9773b-004f-491e-abcf-6698fbd8ca3d_disk">
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/50c9773b-004f-491e-abcf-6698fbd8ca3d_disk.config">
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:21 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:c5:31:42"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <target dev="tap099dc779-59"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d/console.log" append="off"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <video>
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </video>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:32:21 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:32:21 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:32:21 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:32:21 compute-0 nova_compute[260603]: </domain>
Oct 02 08:32:21 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.445 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Preparing to wait for external event network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.445 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.446 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.446 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.447 2 DEBUG nova.virt.libvirt.vif [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1154721187',display_name='tempest-ServersTestJSON-server-1154721187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1154721187',id=69,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-k01r9hb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:13Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=50c9773b-004f-491e-abcf-6698fbd8ca3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.448 2 DEBUG nova.network.os_vif_util [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.449 2 DEBUG nova.network.os_vif_util [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:31:42,bridge_name='br-int',has_traffic_filtering=True,id=099dc779-5949-4fbd-969a-1200ae071364,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap099dc779-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.449 2 DEBUG os_vif [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:31:42,bridge_name='br-int',has_traffic_filtering=True,id=099dc779-5949-4fbd-969a-1200ae071364,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap099dc779-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap099dc779-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap099dc779-59, col_values=(('external_ids', {'iface-id': '099dc779-5949-4fbd-969a-1200ae071364', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:31:42', 'vm-uuid': '50c9773b-004f-491e-abcf-6698fbd8ca3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:21 compute-0 NetworkManager[45129]: <info>  [1759393941.4587] manager: (tap099dc779-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.467 2 INFO os_vif [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:31:42,bridge_name='br-int',has_traffic_filtering=True,id=099dc779-5949-4fbd-969a-1200ae071364,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap099dc779-59')
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3765598787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2105301229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.526 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.527 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.527 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:c5:31:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.527 2 INFO nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Using config drive
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.560 2 DEBUG nova.storage.rbd_utils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.566 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393926.5285072, ccaeb1d7-f1d2-43fb-b36a-793776c713cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.566 2 INFO nova.compute.manager [-] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] VM Stopped (Lifecycle Event)
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.568 2 DEBUG oslo_concurrency.processutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6/disk.config 19dd1983-6b14-4ed7-bcb1-f620e7426cc6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.568 2 INFO nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Deleting local config drive /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6/disk.config because it was imported into RBD.
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.597 2 DEBUG nova.compute.manager [None req-0c75655c-2d58-40db-b26c-9e789877efa7 - - - - - -] [instance: ccaeb1d7-f1d2-43fb-b36a-793776c713cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:21 compute-0 kernel: tapc55dac75-c2: entered promiscuous mode
Oct 02 08:32:21 compute-0 NetworkManager[45129]: <info>  [1759393941.6293] manager: (tapc55dac75-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Oct 02 08:32:21 compute-0 ovn_controller[152344]: 2025-10-02T08:32:21Z|00653|binding|INFO|Claiming lport c55dac75-c247-4672-885b-8b1adb241591 for this chassis.
Oct 02 08:32:21 compute-0 ovn_controller[152344]: 2025-10-02T08:32:21Z|00654|binding|INFO|c55dac75-c247-4672-885b-8b1adb241591: Claiming fa:16:3e:08:a0:e4 10.100.0.3
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 systemd-udevd[329733]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.674 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:a0:e4 10.100.0.3'], port_security=['fa:16:3e:08:a0:e4 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '19dd1983-6b14-4ed7-bcb1-f620e7426cc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df44854a-80b4-49ce-898d-50927f9b482f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c55dac75-c247-4672-885b-8b1adb241591) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.676 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c55dac75-c247-4672-885b-8b1adb241591 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 bound to our chassis
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.677 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:32:21 compute-0 NetworkManager[45129]: <info>  [1759393941.6841] device (tapc55dac75-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:32:21 compute-0 NetworkManager[45129]: <info>  [1759393941.6854] device (tapc55dac75-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:32:21 compute-0 ovn_controller[152344]: 2025-10-02T08:32:21Z|00655|binding|INFO|Setting lport c55dac75-c247-4672-885b-8b1adb241591 ovn-installed in OVS
Oct 02 08:32:21 compute-0 ovn_controller[152344]: 2025-10-02T08:32:21Z|00656|binding|INFO|Setting lport c55dac75-c247-4672-885b-8b1adb241591 up in Southbound
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[acaedaab-732b-4b30-957e-687323019951]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:21 compute-0 systemd-machined[214636]: New machine qemu-77-instance-00000046.
Oct 02 08:32:21 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000046.
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.737 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fd299a47-8b15-4e8e-8b2d-cb6d6c8a2642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.741 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[26d052a3-daa2-416c-a963-b50e6e911026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.773 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c7230691-2c42-45f8-9d70-6e69f7693ce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.795 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[716d12cf-4aea-4705-a22e-3538d2a3b3d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 21085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329748, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.816 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[76f7eb61-a1e6-4816-b93d-2cc097b21dc6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474399, 'tstamp': 474399}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329751, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474402, 'tstamp': 474402}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329751, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.818 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef30d863-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.829 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef30d863-a0, col_values=(('external_ids', {'iface-id': 'd143de50-fc80-43b6-82e2-6651430a4a42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:21.829 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:21 compute-0 nova_compute[260603]: 2025-10-02 08:32:21.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:32:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920791514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:32:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:32:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920791514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:32:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.199 2 DEBUG nova.compute.manager [req-b9e25765-3d0e-4953-ba5a-56adc3020401 req-7e289c31-8610-45d9-8e00-ddb896671ac1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received event network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.199 2 DEBUG oslo_concurrency.lockutils [req-b9e25765-3d0e-4953-ba5a-56adc3020401 req-7e289c31-8610-45d9-8e00-ddb896671ac1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.200 2 DEBUG oslo_concurrency.lockutils [req-b9e25765-3d0e-4953-ba5a-56adc3020401 req-7e289c31-8610-45d9-8e00-ddb896671ac1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.200 2 DEBUG oslo_concurrency.lockutils [req-b9e25765-3d0e-4953-ba5a-56adc3020401 req-7e289c31-8610-45d9-8e00-ddb896671ac1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.201 2 DEBUG nova.compute.manager [req-b9e25765-3d0e-4953-ba5a-56adc3020401 req-7e289c31-8610-45d9-8e00-ddb896671ac1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Processing event network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.222 2 INFO nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Creating config drive at /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d/disk.config
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.227 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10nl8g_l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.365 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10nl8g_l" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.388 2 DEBUG nova.storage.rbd_utils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.393 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d/disk.config 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.517 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:22 compute-0 ceph-mon[74477]: pgmap v1560: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 4.3 MiB/s wr, 65 op/s
Oct 02 08:32:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/920791514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:32:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/920791514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.574 2 DEBUG oslo_concurrency.processutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d/disk.config 50c9773b-004f-491e-abcf-6698fbd8ca3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.575 2 INFO nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Deleting local config drive /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d/disk.config because it was imported into RBD.
Oct 02 08:32:22 compute-0 systemd-udevd[329737]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:32:22 compute-0 kernel: tap099dc779-59: entered promiscuous mode
Oct 02 08:32:22 compute-0 NetworkManager[45129]: <info>  [1759393942.6289] manager: (tap099dc779-59): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:22 compute-0 ovn_controller[152344]: 2025-10-02T08:32:22Z|00657|binding|INFO|Claiming lport 099dc779-5949-4fbd-969a-1200ae071364 for this chassis.
Oct 02 08:32:22 compute-0 ovn_controller[152344]: 2025-10-02T08:32:22Z|00658|binding|INFO|099dc779-5949-4fbd-969a-1200ae071364: Claiming fa:16:3e:c5:31:42 10.100.0.5
Oct 02 08:32:22 compute-0 NetworkManager[45129]: <info>  [1759393942.6417] device (tap099dc779-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.643 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:31:42 10.100.0.5'], port_security=['fa:16:3e:c5:31:42 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50c9773b-004f-491e-abcf-6698fbd8ca3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=099dc779-5949-4fbd-969a-1200ae071364) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:22 compute-0 NetworkManager[45129]: <info>  [1759393942.6451] device (tap099dc779-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.644 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 099dc779-5949-4fbd-969a-1200ae071364 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 bound to our chassis
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.646 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:32:22 compute-0 ovn_controller[152344]: 2025-10-02T08:32:22Z|00659|binding|INFO|Setting lport 099dc779-5949-4fbd-969a-1200ae071364 ovn-installed in OVS
Oct 02 08:32:22 compute-0 ovn_controller[152344]: 2025-10-02T08:32:22Z|00660|binding|INFO|Setting lport 099dc779-5949-4fbd-969a-1200ae071364 up in Southbound
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:22 compute-0 systemd-machined[214636]: New machine qemu-78-instance-00000045.
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d377a26f-0b1a-4d59-8e5b-308339102907]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:22 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000045.
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.706 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f85469de-4f4f-47b9-92ad-1cb0ed884188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.708 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf7e6fa-1e81-4bdc-b9b9-fdfe7dc1d4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.732 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0001e0d-18d4-4353-af06-3d5797d873aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[567703c1-feea-465e-9098-e3152b09d359]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 916, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 916, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329862, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[567ca1ec-3868-47fa-a41f-948e697ff1ea]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329864, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329864, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.764 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.807 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.807 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.808 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:22.809 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.976 2 DEBUG nova.network.neutron [req-59ba9726-70ce-4f25-81d3-ae5b74364090 req-80805d02-fe40-49ac-aac8-e145b8506e20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Updated VIF entry in instance network info cache for port 099dc779-5949-4fbd-969a-1200ae071364. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.977 2 DEBUG nova.network.neutron [req-59ba9726-70ce-4f25-81d3-ae5b74364090 req-80805d02-fe40-49ac-aac8-e145b8506e20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Updating instance_info_cache with network_info: [{"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:22 compute-0 nova_compute[260603]: 2025-10-02 08:32:22.997 2 DEBUG oslo_concurrency.lockutils [req-59ba9726-70ce-4f25-81d3-ae5b74364090 req-80805d02-fe40-49ac-aac8-e145b8506e20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-50c9773b-004f-491e-abcf-6698fbd8ca3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.9 MiB/s wr, 62 op/s
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.158 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.159 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393943.1581864, 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.159 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] VM Started (Lifecycle Event)
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.165 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.169 2 INFO nova.virt.libvirt.driver [-] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Instance spawned successfully.
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.169 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.197 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.199 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.200 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.200 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.200 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.201 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.201 2 DEBUG nova.virt.libvirt.driver [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.206 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.237 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.237 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393943.1589231, 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.237 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] VM Paused (Lifecycle Event)
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.262 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.265 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393943.163175, 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.265 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] VM Resumed (Lifecycle Event)
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.285 2 INFO nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Took 7.82 seconds to spawn the instance on the hypervisor.
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.286 2 DEBUG nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.287 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.293 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.349 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.393 2 INFO nova.compute.manager [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Took 8.98 seconds to build instance.
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.419 2 DEBUG oslo_concurrency.lockutils [None req-dad9a4ed-9a02-44f8-bbfe-daaa7449481f 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.559 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393943.5589135, 50c9773b-004f-491e-abcf-6698fbd8ca3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.560 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] VM Started (Lifecycle Event)
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.583 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.588 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393943.559092, 50c9773b-004f-491e-abcf-6698fbd8ca3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.588 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] VM Paused (Lifecycle Event)
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.607 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.611 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:23 compute-0 nova_compute[260603]: 2025-10-02 08:32:23.634 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:24 compute-0 podman[329907]: 2025-10-02 08:32:24.012646967 +0000 UTC m=+0.075015028 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.365 2 DEBUG nova.compute.manager [req-4a053f4d-c7c4-4c74-94ff-0f38a81ddbd1 req-4d405a53-f7f5-4f6b-9555-82bae53a1c1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received event network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.366 2 DEBUG oslo_concurrency.lockutils [req-4a053f4d-c7c4-4c74-94ff-0f38a81ddbd1 req-4d405a53-f7f5-4f6b-9555-82bae53a1c1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.366 2 DEBUG oslo_concurrency.lockutils [req-4a053f4d-c7c4-4c74-94ff-0f38a81ddbd1 req-4d405a53-f7f5-4f6b-9555-82bae53a1c1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.366 2 DEBUG oslo_concurrency.lockutils [req-4a053f4d-c7c4-4c74-94ff-0f38a81ddbd1 req-4d405a53-f7f5-4f6b-9555-82bae53a1c1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.366 2 DEBUG nova.compute.manager [req-4a053f4d-c7c4-4c74-94ff-0f38a81ddbd1 req-4d405a53-f7f5-4f6b-9555-82bae53a1c1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] No waiting events found dispatching network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.366 2 WARNING nova.compute.manager [req-4a053f4d-c7c4-4c74-94ff-0f38a81ddbd1 req-4d405a53-f7f5-4f6b-9555-82bae53a1c1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received unexpected event network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 for instance with vm_state active and task_state None.
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.507 2 INFO nova.compute.manager [None req-1a99bb36-3a94-48cf-936d-371a3d56721d 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Get console output
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.514 2 INFO oslo.privsep.daemon [None req-1a99bb36-3a94-48cf-936d-371a3d56721d 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpidi2crq_/privsep.sock']
Oct 02 08:32:24 compute-0 ceph-mon[74477]: pgmap v1561: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.9 MiB/s wr, 62 op/s
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.571 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.571 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.571 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.572 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.572 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.958 2 DEBUG nova.compute.manager [req-a9b2f13e-938c-4ff9-9e98-a169bed65674 req-429c92ce-b871-462e-a9b8-b4ea73b6b02c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received event network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.960 2 DEBUG oslo_concurrency.lockutils [req-a9b2f13e-938c-4ff9-9e98-a169bed65674 req-429c92ce-b871-462e-a9b8-b4ea73b6b02c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.962 2 DEBUG oslo_concurrency.lockutils [req-a9b2f13e-938c-4ff9-9e98-a169bed65674 req-429c92ce-b871-462e-a9b8-b4ea73b6b02c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.963 2 DEBUG oslo_concurrency.lockutils [req-a9b2f13e-938c-4ff9-9e98-a169bed65674 req-429c92ce-b871-462e-a9b8-b4ea73b6b02c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.964 2 DEBUG nova.compute.manager [req-a9b2f13e-938c-4ff9-9e98-a169bed65674 req-429c92ce-b871-462e-a9b8-b4ea73b6b02c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Processing event network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.967 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.979 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393944.9780667, 50c9773b-004f-491e-abcf-6698fbd8ca3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.979 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] VM Resumed (Lifecycle Event)
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.982 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:32:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2482458836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.989 2 INFO nova.virt.libvirt.driver [-] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Instance spawned successfully.
Oct 02 08:32:24 compute-0 nova_compute[260603]: 2025-10-02 08:32:24.989 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.013 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.021 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.025 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.025 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.026 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.026 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.027 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.027 2 DEBUG nova.virt.libvirt.driver [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.037 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.074 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.177 2 INFO nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Took 11.49 seconds to spawn the instance on the hypervisor.
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.178 2 DEBUG nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.190 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.190 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.198 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.198 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.202 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.202 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.205 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.205 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.262 2 INFO nova.compute.manager [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Took 12.49 seconds to build instance.
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.283 2 DEBUG oslo_concurrency.lockutils [None req-60c08ea0-165d-4dfe-ae69-075d247ed55e 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.451 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.453 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3438MB free_disk=59.85553741455078GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.453 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.453 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.520 2 INFO oslo.privsep.daemon [None req-1a99bb36-3a94-48cf-936d-371a3d56721d 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Spawned new privsep daemon via rootwrap
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.305 29746 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.309 29746 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.311 29746 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.312 29746 INFO oslo.privsep.daemon [-] privsep daemon running as pid 29746
Oct 02 08:32:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2482458836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.576 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 49e7e668-b62c-4e35-a4e2-bba540000961 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.576 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7ac34b0c-8ced-417d-9442-8fda77804a34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.577 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 50c9773b-004f-491e-abcf-6698fbd8ca3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.577 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.577 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.578 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.639 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:32:25 compute-0 nova_compute[260603]: 2025-10-02 08:32:25.717 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16855085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:26 compute-0 nova_compute[260603]: 2025-10-02 08:32:26.223 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:26 compute-0 nova_compute[260603]: 2025-10-02 08:32:26.228 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:26 compute-0 nova_compute[260603]: 2025-10-02 08:32:26.252 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:26 compute-0 nova_compute[260603]: 2025-10-02 08:32:26.311 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:32:26 compute-0 nova_compute[260603]: 2025-10-02 08:32:26.311 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:26 compute-0 nova_compute[260603]: 2025-10-02 08:32:26.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:26 compute-0 ceph-mon[74477]: pgmap v1562: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Oct 02 08:32:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/16855085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.561199) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393946561266, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1945, "num_deletes": 258, "total_data_size": 2806199, "memory_usage": 2855184, "flush_reason": "Manual Compaction"}
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393946575443, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2751182, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30962, "largest_seqno": 32906, "table_properties": {"data_size": 2742452, "index_size": 5288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19378, "raw_average_key_size": 20, "raw_value_size": 2724463, "raw_average_value_size": 2923, "num_data_blocks": 233, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393782, "oldest_key_time": 1759393782, "file_creation_time": 1759393946, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 14294 microseconds, and 8500 cpu microseconds.
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.575504) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2751182 bytes OK
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.575535) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.577105) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.577125) EVENT_LOG_v1 {"time_micros": 1759393946577117, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.577149) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2797774, prev total WAL file size 2797774, number of live WAL files 2.
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.578165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2686KB)], [68(7160KB)]
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393946578222, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10083190, "oldest_snapshot_seqno": -1}
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5766 keys, 8361886 bytes, temperature: kUnknown
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393946621221, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8361886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8323021, "index_size": 23331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 144796, "raw_average_key_size": 25, "raw_value_size": 8219155, "raw_average_value_size": 1425, "num_data_blocks": 948, "num_entries": 5766, "num_filter_entries": 5766, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393946, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.621873) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8361886 bytes
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.623086) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.7 rd, 193.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.0 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 6293, records dropped: 527 output_compression: NoCompression
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.623116) EVENT_LOG_v1 {"time_micros": 1759393946623101, "job": 38, "event": "compaction_finished", "compaction_time_micros": 43144, "compaction_time_cpu_micros": 24863, "output_level": 6, "num_output_files": 1, "total_output_size": 8361886, "num_input_records": 6293, "num_output_records": 5766, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393946624070, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393946626605, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.578098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.626707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.626721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.626724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.626726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:32:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:32:26.626729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Oct 02 08:32:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.288 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.906 2 DEBUG nova.compute.manager [req-a4e92339-575a-4b51-bc08-41df1687456f req-11bd5213-2aa7-402c-ab90-4482415b232a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received event network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.906 2 DEBUG oslo_concurrency.lockutils [req-a4e92339-575a-4b51-bc08-41df1687456f req-11bd5213-2aa7-402c-ab90-4482415b232a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.906 2 DEBUG oslo_concurrency.lockutils [req-a4e92339-575a-4b51-bc08-41df1687456f req-11bd5213-2aa7-402c-ab90-4482415b232a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.907 2 DEBUG oslo_concurrency.lockutils [req-a4e92339-575a-4b51-bc08-41df1687456f req-11bd5213-2aa7-402c-ab90-4482415b232a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.907 2 DEBUG nova.compute.manager [req-a4e92339-575a-4b51-bc08-41df1687456f req-11bd5213-2aa7-402c-ab90-4482415b232a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] No waiting events found dispatching network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.907 2 WARNING nova.compute.manager [req-a4e92339-575a-4b51-bc08-41df1687456f req-11bd5213-2aa7-402c-ab90-4482415b232a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received unexpected event network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 for instance with vm_state active and task_state None.
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:32:27
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', 'cephfs.cephfs.meta']
Oct 02 08:32:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:32:27 compute-0 nova_compute[260603]: 2025-10-02 08:32:27.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:28 compute-0 podman[329978]: 2025-10-02 08:32:28.039195498 +0000 UTC m=+0.094227855 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:32:28 compute-0 nova_compute[260603]: 2025-10-02 08:32:28.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:28 compute-0 ceph-mon[74477]: pgmap v1563: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Oct 02 08:32:28 compute-0 ceph-mgr[74774]: client.0 ms_handle_reset on v2:192.168.122.100:6800/860957497
Oct 02 08:32:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.105 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.105 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.138 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.214 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.215 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.254 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.255 2 INFO nova.compute.claims [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.520 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2984761659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:29 compute-0 nova_compute[260603]: 2025-10-02 08:32:29.995 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.002 2 DEBUG nova.compute.provider_tree [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.028 2 DEBUG nova.scheduler.client.report [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.059 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.060 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.137 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.138 2 DEBUG nova.network.neutron [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.162 2 INFO nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.180 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.280 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.282 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.282 2 INFO nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Creating image(s)
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.317 2 DEBUG nova.storage.rbd_utils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] rbd image 6fc23d37-19fe-44e7-8525-17c199801726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.353 2 DEBUG nova.storage.rbd_utils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] rbd image 6fc23d37-19fe-44e7-8525-17c199801726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.375 2 DEBUG nova.storage.rbd_utils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] rbd image 6fc23d37-19fe-44e7-8525-17c199801726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.378 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.408 2 DEBUG nova.policy [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '46c1104e877d4c59a7947f5750b06496', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0c01c9c7e49d4e79942959efaa1b294a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.453 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.454 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.454 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.455 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.480 2 DEBUG nova.storage.rbd_utils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] rbd image 6fc23d37-19fe-44e7-8525-17c199801726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.484 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6fc23d37-19fe-44e7-8525-17c199801726_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:32:30 compute-0 ceph-mon[74477]: pgmap v1564: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 02 08:32:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2984761659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.721 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.722 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.737 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.760 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6fc23d37-19fe-44e7-8525-17c199801726_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.825 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.825 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.830 2 DEBUG nova.storage.rbd_utils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] resizing rbd image 6fc23d37-19fe-44e7-8525-17c199801726_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.864 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.865 2 INFO nova.compute.claims [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.923 2 DEBUG nova.objects.instance [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lazy-loading 'migration_context' on Instance uuid 6fc23d37-19fe-44e7-8525-17c199801726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.948 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.949 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Ensure instance console log exists: /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.949 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.949 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:30 compute-0 nova_compute[260603]: 2025-10-02 08:32:30.949 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 34 KiB/s wr, 148 op/s
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.069 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471515554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.520 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.526 2 DEBUG nova.compute.provider_tree [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.544 2 DEBUG nova.scheduler.client.report [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.572 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.573 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:32:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3471515554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.653 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.654 2 DEBUG nova.network.neutron [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.681 2 INFO nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.705 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.796 2 DEBUG nova.network.neutron [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Successfully created port: bd85afb3-c20d-4a10-838a-6c3e2248fb09 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.801 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.802 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.802 2 INFO nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Creating image(s)
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.824 2 DEBUG nova.storage.rbd_utils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.853 2 DEBUG nova.storage.rbd_utils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.880 2 DEBUG nova.storage.rbd_utils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.884 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.952 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.954 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.955 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.955 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.978 2 DEBUG nova.storage.rbd_utils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:31 compute-0 nova_compute[260603]: 2025-10-02 08:32:31.982 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.020 2 DEBUG nova.policy [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:32:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.232 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.287 2 DEBUG nova.storage.rbd_utils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.385 2 DEBUG nova.objects.instance [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.394 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "edb2feae-9638-44f1-83f5-0713116e913f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.394 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.400 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.401 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Ensure instance console log exists: /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.401 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.402 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.402 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.413 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.489 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.490 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.498 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.498 2 INFO nova.compute.claims [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.524 2 DEBUG nova.network.neutron [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Successfully updated port: bd85afb3-c20d-4a10-838a-6c3e2248fb09 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.546 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.547 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquired lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.547 2 DEBUG nova.network.neutron [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:32:32 compute-0 ceph-mon[74477]: pgmap v1565: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 34 KiB/s wr, 148 op/s
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.638 2 DEBUG nova.compute.manager [req-476ab732-61d6-4fd2-9075-a53bfc350769 req-5a3f88c8-b2e8-48ab-9b68-6774fe0c78f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-changed-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.638 2 DEBUG nova.compute.manager [req-476ab732-61d6-4fd2-9075-a53bfc350769 req-5a3f88c8-b2e8-48ab-9b68-6774fe0c78f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Refreshing instance network info cache due to event network-changed-bd85afb3-c20d-4a10-838a-6c3e2248fb09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.639 2 DEBUG oslo_concurrency.lockutils [req-476ab732-61d6-4fd2-9075-a53bfc350769 req-5a3f88c8-b2e8-48ab-9b68-6774fe0c78f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.707 2 DEBUG nova.network.neutron [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.775 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:32 compute-0 nova_compute[260603]: 2025-10-02 08:32:32.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 343 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.175 2 DEBUG nova.network.neutron [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Successfully created port: 210f4c33-5b93-4fa0-a28c-9562754313b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:32:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1313599617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.224 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.231 2 DEBUG nova.compute.provider_tree [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.248 2 DEBUG nova.scheduler.client.report [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.270 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.271 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.320 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.321 2 DEBUG nova.network.neutron [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.341 2 INFO nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.357 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.396 2 DEBUG nova.network.neutron [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Updating instance_info_cache with network_info: [{"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.430 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Releasing lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.431 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Instance network_info: |[{"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.431 2 DEBUG oslo_concurrency.lockutils [req-476ab732-61d6-4fd2-9075-a53bfc350769 req-5a3f88c8-b2e8-48ab-9b68-6774fe0c78f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.431 2 DEBUG nova.network.neutron [req-476ab732-61d6-4fd2-9075-a53bfc350769 req-5a3f88c8-b2e8-48ab-9b68-6774fe0c78f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Refreshing network info cache for port bd85afb3-c20d-4a10-838a-6c3e2248fb09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.434 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Start _get_guest_xml network_info=[{"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.438 2 WARNING nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.442 2 DEBUG nova.virt.libvirt.host [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.443 2 DEBUG nova.virt.libvirt.host [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.444 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.445 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.446 2 INFO nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Creating image(s)
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.468 2 DEBUG nova.storage.rbd_utils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image edb2feae-9638-44f1-83f5-0713116e913f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.490 2 DEBUG nova.storage.rbd_utils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image edb2feae-9638-44f1-83f5-0713116e913f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.513 2 DEBUG nova.storage.rbd_utils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image edb2feae-9638-44f1-83f5-0713116e913f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.517 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.567 2 DEBUG nova.virt.libvirt.host [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.568 2 DEBUG nova.virt.libvirt.host [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.568 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.569 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.569 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.570 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.570 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.570 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.571 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.571 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.571 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.571 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.572 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.572 2 DEBUG nova.virt.hardware [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.575 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1313599617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.611 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.613 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.613 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.614 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.637 2 DEBUG nova.storage.rbd_utils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image edb2feae-9638-44f1-83f5-0713116e913f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.641 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 edb2feae-9638-44f1-83f5-0713116e913f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.790 2 DEBUG nova.policy [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9020ed38b31d46f88625374b2a76aef6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:32:33 compute-0 nova_compute[260603]: 2025-10-02 08:32:33.948 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 edb2feae-9638-44f1-83f5-0713116e913f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.033 2 DEBUG nova.storage.rbd_utils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] resizing rbd image edb2feae-9638-44f1-83f5-0713116e913f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:32:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581977480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.072 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.096 2 DEBUG nova.storage.rbd_utils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] rbd image 6fc23d37-19fe-44e7-8525-17c199801726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.102 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.223 2 DEBUG nova.objects.instance [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'migration_context' on Instance uuid edb2feae-9638-44f1-83f5-0713116e913f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.237 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.237 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Ensure instance console log exists: /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.238 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.238 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.238 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043408607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:34 compute-0 ovn_controller[152344]: 2025-10-02T08:32:34Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:a0:e4 10.100.0.3
Oct 02 08:32:34 compute-0 ovn_controller[152344]: 2025-10-02T08:32:34Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:a0:e4 10.100.0.3
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.596 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.600 2 DEBUG nova.virt.libvirt.vif [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1788060948',display_name='tempest-InstanceActionsTestJSON-server-1788060948',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1788060948',id=71,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c01c9c7e49d4e79942959efaa1b294a',ramdisk_id='',reservation_id='r-zv0whomp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-559595974',owner_user_name='tempest-InstanceActionsTestJSON-559595974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:30Z,user_data=None,user_id='46c1104e877d4c59a7947f5750b06496',uuid=6fc23d37-19fe-44e7-8525-17c199801726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.601 2 DEBUG nova.network.os_vif_util [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converting VIF {"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.603 2 DEBUG nova.network.os_vif_util [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:34 compute-0 ceph-mon[74477]: pgmap v1566: 305 pgs: 305 active+clean; 343 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Oct 02 08:32:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/581977480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3043408607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.607 2 DEBUG nova.objects.instance [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fc23d37-19fe-44e7-8525-17c199801726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.634 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <uuid>6fc23d37-19fe-44e7-8525-17c199801726</uuid>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <name>instance-00000047</name>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <nova:name>tempest-InstanceActionsTestJSON-server-1788060948</nova:name>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:32:33</nova:creationTime>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <nova:user uuid="46c1104e877d4c59a7947f5750b06496">tempest-InstanceActionsTestJSON-559595974-project-member</nova:user>
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <nova:project uuid="0c01c9c7e49d4e79942959efaa1b294a">tempest-InstanceActionsTestJSON-559595974</nova:project>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <nova:port uuid="bd85afb3-c20d-4a10-838a-6c3e2248fb09">
Oct 02 08:32:34 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <system>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <entry name="serial">6fc23d37-19fe-44e7-8525-17c199801726</entry>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <entry name="uuid">6fc23d37-19fe-44e7-8525-17c199801726</entry>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </system>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <os>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   </os>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <features>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   </features>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6fc23d37-19fe-44e7-8525-17c199801726_disk">
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6fc23d37-19fe-44e7-8525-17c199801726_disk.config">
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:34 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:d8:55:99"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <target dev="tapbd85afb3-c2"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/console.log" append="off"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <video>
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </video>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:32:34 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:32:34 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:32:34 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:32:34 compute-0 nova_compute[260603]: </domain>
Oct 02 08:32:34 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.649 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Preparing to wait for external event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.650 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.650 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.651 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.652 2 DEBUG nova.virt.libvirt.vif [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1788060948',display_name='tempest-InstanceActionsTestJSON-server-1788060948',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1788060948',id=71,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c01c9c7e49d4e79942959efaa1b294a',ramdisk_id='',reservation_id='r-zv0whomp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-559595974',owner_user_name='tempest-InstanceActionsTestJSON-559595974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:30Z,user_data=None,user_id='46c1104e877d4c59a7947f5750b06496',uuid=6fc23d37-19fe-44e7-8525-17c199801726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.653 2 DEBUG nova.network.os_vif_util [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converting VIF {"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.654 2 DEBUG nova.network.os_vif_util [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.655 2 DEBUG os_vif [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd85afb3-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd85afb3-c2, col_values=(('external_ids', {'iface-id': 'bd85afb3-c20d-4a10-838a-6c3e2248fb09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:55:99', 'vm-uuid': '6fc23d37-19fe-44e7-8525-17c199801726'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:34 compute-0 NetworkManager[45129]: <info>  [1759393954.6664] manager: (tapbd85afb3-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.672 2 INFO os_vif [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2')
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.721 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.721 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.722 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] No VIF found with MAC fa:16:3e:d8:55:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.722 2 INFO nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Using config drive
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.746 2 DEBUG nova.storage.rbd_utils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] rbd image 6fc23d37-19fe-44e7-8525-17c199801726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.753 2 DEBUG nova.network.neutron [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Successfully created port: dc2f1a95-f7f3-4da2-a25e-82d30b142db4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:32:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:34.818 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:34.819 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:34.819 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.841 2 DEBUG nova.network.neutron [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Successfully updated port: 210f4c33-5b93-4fa0-a28c-9562754313b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.866 2 DEBUG nova.network.neutron [req-476ab732-61d6-4fd2-9075-a53bfc350769 req-5a3f88c8-b2e8-48ab-9b68-6774fe0c78f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Updated VIF entry in instance network info cache for port bd85afb3-c20d-4a10-838a-6c3e2248fb09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.866 2 DEBUG nova.network.neutron [req-476ab732-61d6-4fd2-9075-a53bfc350769 req-5a3f88c8-b2e8-48ab-9b68-6774fe0c78f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Updating instance_info_cache with network_info: [{"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.868 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.869 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.869 2 DEBUG nova.network.neutron [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.902 2 DEBUG oslo_concurrency.lockutils [req-476ab732-61d6-4fd2-9075-a53bfc350769 req-5a3f88c8-b2e8-48ab-9b68-6774fe0c78f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.936 2 DEBUG nova.compute.manager [req-3672308c-8395-45da-80f8-2048e50cbd51 req-29cbfa78-cad5-4b88-abcf-e9640e6f6c87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received event network-changed-210f4c33-5b93-4fa0-a28c-9562754313b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.936 2 DEBUG nova.compute.manager [req-3672308c-8395-45da-80f8-2048e50cbd51 req-29cbfa78-cad5-4b88-abcf-e9640e6f6c87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Refreshing instance network info cache due to event network-changed-210f4c33-5b93-4fa0-a28c-9562754313b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:32:34 compute-0 nova_compute[260603]: 2025-10-02 08:32:34.937 2 DEBUG oslo_concurrency.lockutils [req-3672308c-8395-45da-80f8-2048e50cbd51 req-29cbfa78-cad5-4b88-abcf-e9640e6f6c87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.025 2 DEBUG nova.network.neutron [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:32:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 385 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 191 op/s
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.134 2 INFO nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Creating config drive at /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/disk.config
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.143 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_os_r3yu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.297 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_os_r3yu" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.331 2 DEBUG nova.storage.rbd_utils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] rbd image 6fc23d37-19fe-44e7-8525-17c199801726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.336 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/disk.config 6fc23d37-19fe-44e7-8525-17c199801726_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.420 2 DEBUG nova.network.neutron [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Successfully updated port: dc2f1a95-f7f3-4da2-a25e-82d30b142db4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.440 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.440 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquired lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.441 2 DEBUG nova.network.neutron [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.532 2 DEBUG oslo_concurrency.processutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/disk.config 6fc23d37-19fe-44e7-8525-17c199801726_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.533 2 INFO nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Deleting local config drive /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/disk.config because it was imported into RBD.
Oct 02 08:32:35 compute-0 kernel: tapbd85afb3-c2: entered promiscuous mode
Oct 02 08:32:35 compute-0 NetworkManager[45129]: <info>  [1759393955.5894] manager: (tapbd85afb3-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Oct 02 08:32:35 compute-0 ovn_controller[152344]: 2025-10-02T08:32:35Z|00661|binding|INFO|Claiming lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 for this chassis.
Oct 02 08:32:35 compute-0 ovn_controller[152344]: 2025-10-02T08:32:35Z|00662|binding|INFO|bd85afb3-c20d-4a10-838a-6c3e2248fb09: Claiming fa:16:3e:d8:55:99 10.100.0.13
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.606 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:55:99 10.100.0.13'], port_security=['fa:16:3e:d8:55:99 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6fc23d37-19fe-44e7-8525-17c199801726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51be74fd-6730-41ad-8944-c77ca7c989b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c01c9c7e49d4e79942959efaa1b294a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd5785b6c-b9bb-41cf-ba74-c0d70260202c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=686f3722-f2fd-47a9-be1c-ef29b7ace186, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bd85afb3-c20d-4a10-838a-6c3e2248fb09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.607 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bd85afb3-c20d-4a10-838a-6c3e2248fb09 in datapath 51be74fd-6730-41ad-8944-c77ca7c989b7 bound to our chassis
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.608 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51be74fd-6730-41ad-8944-c77ca7c989b7
Oct 02 08:32:35 compute-0 ovn_controller[152344]: 2025-10-02T08:32:35Z|00663|binding|INFO|Setting lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 ovn-installed in OVS
Oct 02 08:32:35 compute-0 ovn_controller[152344]: 2025-10-02T08:32:35Z|00664|binding|INFO|Setting lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 up in Southbound
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.621 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6e0feb-a0df-43b0-88e1-159f375dd0be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.622 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap51be74fd-61 in ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.624 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap51be74fd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.624 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[205f5602-cd07-4ddf-a01c-8365f64c0fc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.625 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a18b9a15-3164-4296-9016-a758bfce3a21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:35 compute-0 systemd-machined[214636]: New machine qemu-79-instance-00000047.
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.637 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e01ed954-3f76-4c26-8393-d34cfc438047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000047.
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.651 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6713df5c-8497-4b76-80b6-0d330febe6d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 systemd-udevd[330703]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:32:35 compute-0 NetworkManager[45129]: <info>  [1759393955.6857] device (tapbd85afb3-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:32:35 compute-0 NetworkManager[45129]: <info>  [1759393955.6867] device (tapbd85afb3-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.692 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[89e42374-4bca-4d10-b216-86e1fb736c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 NetworkManager[45129]: <info>  [1759393955.7013] manager: (tap51be74fd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/283)
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.688 2 DEBUG nova.network.neutron [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.699 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7f888d6d-ee27-433f-b830-3b4e4a21ec66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.740 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b02ae7-57ee-4e3f-97fe-753c9cd4d065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.743 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e865853c-b479-43f8-9cc3-c5c42de646f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 NetworkManager[45129]: <info>  [1759393955.7718] device (tap51be74fd-60): carrier: link connected
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.778 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[48be2a4d-a952-4f2e-83f9-54d862f1e1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.798 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83a3662e-54a0-4dfd-8466-653e30a3cece]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51be74fd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:e5:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483437, 'reachable_time': 17935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330731, 'error': None, 'target': 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.818 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f66cc78e-a863-45c3-baec-054ffd13f826]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:e504'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483437, 'tstamp': 483437}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330732, 'error': None, 'target': 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.840 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[797dc433-03ae-4a0f-84d4-86c403cc47a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51be74fd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:e5:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483437, 'reachable_time': 17935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330733, 'error': None, 'target': 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.876 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19230167-e26a-476c-b966-114ddd949b4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf26a11-f852-4e14-ab1a-7b7bd66b8100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.963 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51be74fd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.963 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.964 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51be74fd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:35 compute-0 kernel: tap51be74fd-60: entered promiscuous mode
Oct 02 08:32:35 compute-0 NetworkManager[45129]: <info>  [1759393955.9668] manager: (tap51be74fd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.970 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51be74fd-60, col_values=(('external_ids', {'iface-id': '018506de-118e-4ad6-ae4a-0f6a98b235ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:35 compute-0 ovn_controller[152344]: 2025-10-02T08:32:35Z|00665|binding|INFO|Releasing lport 018506de-118e-4ad6-ae4a-0f6a98b235ca from this chassis (sb_readonly=0)
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:35 compute-0 nova_compute[260603]: 2025-10-02 08:32:35.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:35.999 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/51be74fd-6730-41ad-8944-c77ca7c989b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/51be74fd-6730-41ad-8944-c77ca7c989b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:36.000 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0adf8ddb-0426-4296-b402-ad575b4d91ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:36.001 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-51be74fd-6730-41ad-8944-c77ca7c989b7
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/51be74fd-6730-41ad-8944-c77ca7c989b7.pid.haproxy
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 51be74fd-6730-41ad-8944-c77ca7c989b7
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:32:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:36.003 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'env', 'PROCESS_TAG=haproxy-51be74fd-6730-41ad-8944-c77ca7c989b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/51be74fd-6730-41ad-8944-c77ca7c989b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.044 2 DEBUG nova.network.neutron [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Updating instance_info_cache with network_info: [{"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.064 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.065 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Instance network_info: |[{"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.065 2 DEBUG oslo_concurrency.lockutils [req-3672308c-8395-45da-80f8-2048e50cbd51 req-29cbfa78-cad5-4b88-abcf-e9640e6f6c87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.065 2 DEBUG nova.network.neutron [req-3672308c-8395-45da-80f8-2048e50cbd51 req-29cbfa78-cad5-4b88-abcf-e9640e6f6c87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Refreshing network info cache for port 210f4c33-5b93-4fa0-a28c-9562754313b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.068 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Start _get_guest_xml network_info=[{"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.072 2 WARNING nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.078 2 DEBUG nova.virt.libvirt.host [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.078 2 DEBUG nova.virt.libvirt.host [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.081 2 DEBUG nova.virt.libvirt.host [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.082 2 DEBUG nova.virt.libvirt.host [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.082 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.082 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.083 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.083 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.083 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.084 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.084 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.084 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.084 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.085 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.085 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.085 2 DEBUG nova.virt.hardware [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.088 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:36 compute-0 podman[330826]: 2025-10-02 08:32:36.408868851 +0000 UTC m=+0.098551039 container create a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:32:36 compute-0 podman[330826]: 2025-10-02 08:32:36.339586821 +0000 UTC m=+0.029269029 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:32:36 compute-0 systemd[1]: Started libpod-conmon-a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a.scope.
Oct 02 08:32:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9624f0d3dab56072fad3e34f4a8ec15e62826f5f029130c08cc7f31b494d047/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:36 compute-0 podman[330826]: 2025-10-02 08:32:36.527187943 +0000 UTC m=+0.216870131 container init a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.527 2 DEBUG nova.network.neutron [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Updating instance_info_cache with network_info: [{"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:36 compute-0 podman[330826]: 2025-10-02 08:32:36.534276952 +0000 UTC m=+0.223959110 container start a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.559 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Releasing lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.559 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Instance network_info: |[{"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:32:36 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[330841]: [NOTICE]   (330845) : New worker (330847) forked
Oct 02 08:32:36 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[330841]: [NOTICE]   (330845) : Loading success.
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.563 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Start _get_guest_xml network_info=[{"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.572 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393956.5718977, 6fc23d37-19fe-44e7-8525-17c199801726 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.573 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] VM Started (Lifecycle Event)
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.579 2 WARNING nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.585 2 DEBUG nova.virt.libvirt.host [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.586 2 DEBUG nova.virt.libvirt.host [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.590 2 DEBUG nova.virt.libvirt.host [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.590 2 DEBUG nova.virt.libvirt.host [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.590 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.591 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.591 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.591 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.591 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.592 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.592 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.592 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.592 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.592 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.592 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.593 2 DEBUG nova.virt.hardware [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.596 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440384328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:36 compute-0 ceph-mon[74477]: pgmap v1567: 305 pgs: 305 active+clean; 385 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 191 op/s
Oct 02 08:32:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1440384328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.632 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.633 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.656 2 DEBUG nova.storage.rbd_utils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.660 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.705 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393956.5720465, 6fc23d37-19fe-44e7-8525-17c199801726 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.705 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] VM Paused (Lifecycle Event)
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.725 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.728 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:36 compute-0 nova_compute[260603]: 2025-10-02 08:32:36.751 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145986861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.032 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 385 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.063 2 DEBUG nova.storage.rbd_utils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image edb2feae-9638-44f1-83f5-0713116e913f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.068 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3822340146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.121 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.123 2 DEBUG nova.virt.libvirt.vif [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1154721187',display_name='tempest-ServersTestJSON-server-1154721187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1154721187',id=72,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-ar1pakt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:31Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=56d0280c-1cff-4fc1-aaec-57f3dbad7ba5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.123 2 DEBUG nova.network.os_vif_util [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.124 2 DEBUG nova.network.os_vif_util [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:db:6b,bridge_name='br-int',has_traffic_filtering=True,id=210f4c33-5b93-4fa0-a28c-9562754313b3,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap210f4c33-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.125 2 DEBUG nova.objects.instance [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.131 2 DEBUG nova.compute.manager [req-98dad913-e970-4c24-8164-3bccfc06612e req-09c8536a-d903-4a24-a9cf-1c1bd0cca28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received event network-changed-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.131 2 DEBUG nova.compute.manager [req-98dad913-e970-4c24-8164-3bccfc06612e req-09c8536a-d903-4a24-a9cf-1c1bd0cca28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Refreshing instance network info cache due to event network-changed-dc2f1a95-f7f3-4da2-a25e-82d30b142db4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.132 2 DEBUG oslo_concurrency.lockutils [req-98dad913-e970-4c24-8164-3bccfc06612e req-09c8536a-d903-4a24-a9cf-1c1bd0cca28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.132 2 DEBUG oslo_concurrency.lockutils [req-98dad913-e970-4c24-8164-3bccfc06612e req-09c8536a-d903-4a24-a9cf-1c1bd0cca28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.132 2 DEBUG nova.network.neutron [req-98dad913-e970-4c24-8164-3bccfc06612e req-09c8536a-d903-4a24-a9cf-1c1bd0cca28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Refreshing network info cache for port dc2f1a95-f7f3-4da2-a25e-82d30b142db4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.140 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <uuid>56d0280c-1cff-4fc1-aaec-57f3dbad7ba5</uuid>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <name>instance-00000048</name>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestJSON-server-1154721187</nova:name>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:32:36</nova:creationTime>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:port uuid="210f4c33-5b93-4fa0-a28c-9562754313b3">
Oct 02 08:32:37 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <system>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="serial">56d0280c-1cff-4fc1-aaec-57f3dbad7ba5</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="uuid">56d0280c-1cff-4fc1-aaec-57f3dbad7ba5</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </system>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <os>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </os>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <features>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </features>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk.config">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:45:db:6b"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <target dev="tap210f4c33-5b"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5/console.log" append="off"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <video>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </video>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:32:37 compute-0 nova_compute[260603]: </domain>
Oct 02 08:32:37 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.141 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Preparing to wait for external event network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.141 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.141 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.142 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.142 2 DEBUG nova.virt.libvirt.vif [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1154721187',display_name='tempest-ServersTestJSON-server-1154721187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1154721187',id=72,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-ar1pakt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:31Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=56d0280c-1cff-4fc1-aaec-57f3dbad7ba5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.143 2 DEBUG nova.network.os_vif_util [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.143 2 DEBUG nova.network.os_vif_util [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:db:6b,bridge_name='br-int',has_traffic_filtering=True,id=210f4c33-5b93-4fa0-a28c-9562754313b3,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap210f4c33-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.144 2 DEBUG os_vif [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:db:6b,bridge_name='br-int',has_traffic_filtering=True,id=210f4c33-5b93-4fa0-a28c-9562754313b3,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap210f4c33-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.150 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap210f4c33-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.151 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap210f4c33-5b, col_values=(('external_ids', {'iface-id': '210f4c33-5b93-4fa0-a28c-9562754313b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:db:6b', 'vm-uuid': '56d0280c-1cff-4fc1-aaec-57f3dbad7ba5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:37 compute-0 NetworkManager[45129]: <info>  [1759393957.1541] manager: (tap210f4c33-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.164 2 INFO os_vif [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:db:6b,bridge_name='br-int',has_traffic_filtering=True,id=210f4c33-5b93-4fa0-a28c-9562754313b3,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap210f4c33-5b')
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.233 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.234 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.234 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:45:db:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.235 2 INFO nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Using config drive
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.259 2 DEBUG nova.storage.rbd_utils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.529 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "d107c637-880a-47fa-ac2d-c762781f296c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.530 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.544 2 DEBUG nova.network.neutron [req-3672308c-8395-45da-80f8-2048e50cbd51 req-29cbfa78-cad5-4b88-abcf-e9640e6f6c87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Updated VIF entry in instance network info cache for port 210f4c33-5b93-4fa0-a28c-9562754313b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.544 2 DEBUG nova.network.neutron [req-3672308c-8395-45da-80f8-2048e50cbd51 req-29cbfa78-cad5-4b88-abcf-e9640e6f6c87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Updating instance_info_cache with network_info: [{"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.549 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.558 2 DEBUG oslo_concurrency.lockutils [req-3672308c-8395-45da-80f8-2048e50cbd51 req-29cbfa78-cad5-4b88-abcf-e9640e6f6c87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096979315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.582 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.583 2 DEBUG nova.virt.libvirt.vif [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022045083',display_name='tempest-ServerActionsTestOtherB-server-2022045083',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022045083',id=73,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-rzit6p9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:33Z,user_data=None,user_id='9020ed38b31d46f88625374b2a76aef6',uuid=edb2feae-9638-44f1-83f5-0713116e913f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.583 2 DEBUG nova.network.os_vif_util [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.584 2 DEBUG nova.network.os_vif_util [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0f:f7,bridge_name='br-int',has_traffic_filtering=True,id=dc2f1a95-f7f3-4da2-a25e-82d30b142db4,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc2f1a95-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.585 2 DEBUG nova.objects.instance [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'pci_devices' on Instance uuid edb2feae-9638-44f1-83f5-0713116e913f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.604 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <uuid>edb2feae-9638-44f1-83f5-0713116e913f</uuid>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <name>instance-00000049</name>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestOtherB-server-2022045083</nova:name>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:32:36</nova:creationTime>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:user uuid="9020ed38b31d46f88625374b2a76aef6">tempest-ServerActionsTestOtherB-1644249004-project-member</nova:user>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:project uuid="eda0caa41e4740148ab99d5ebf9e27ba">tempest-ServerActionsTestOtherB-1644249004</nova:project>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <nova:port uuid="dc2f1a95-f7f3-4da2-a25e-82d30b142db4">
Oct 02 08:32:37 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <system>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="serial">edb2feae-9638-44f1-83f5-0713116e913f</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="uuid">edb2feae-9638-44f1-83f5-0713116e913f</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </system>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <os>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </os>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <features>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </features>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/edb2feae-9638-44f1-83f5-0713116e913f_disk">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/edb2feae-9638-44f1-83f5-0713116e913f_disk.config">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:37 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:b0:0f:f7"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <target dev="tapdc2f1a95-f7"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f/console.log" append="off"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <video>
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </video>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:32:37 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:32:37 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:32:37 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:32:37 compute-0 nova_compute[260603]: </domain>
Oct 02 08:32:37 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.605 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Preparing to wait for external event network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.605 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "edb2feae-9638-44f1-83f5-0713116e913f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.605 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.605 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.606 2 DEBUG nova.virt.libvirt.vif [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022045083',display_name='tempest-ServerActionsTestOtherB-server-2022045083',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022045083',id=73,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-rzit6p9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:33Z,user_data=None,user_id='9020ed38b31d46f88625374b2a76aef6',uuid=edb2feae-9638-44f1-83f5-0713116e913f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.607 2 DEBUG nova.network.os_vif_util [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.607 2 DEBUG nova.network.os_vif_util [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0f:f7,bridge_name='br-int',has_traffic_filtering=True,id=dc2f1a95-f7f3-4da2-a25e-82d30b142db4,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc2f1a95-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.608 2 DEBUG os_vif [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0f:f7,bridge_name='br-int',has_traffic_filtering=True,id=dc2f1a95-f7f3-4da2-a25e-82d30b142db4,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc2f1a95-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc2f1a95-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc2f1a95-f7, col_values=(('external_ids', {'iface-id': 'dc2f1a95-f7f3-4da2-a25e-82d30b142db4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:0f:f7', 'vm-uuid': 'edb2feae-9638-44f1-83f5-0713116e913f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.640 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:37 compute-0 NetworkManager[45129]: <info>  [1759393957.6413] manager: (tapdc2f1a95-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.640 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3145986861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3822340146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2096979315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.649 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.650 2 INFO nova.compute.claims [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.653 2 INFO os_vif [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0f:f7,bridge_name='br-int',has_traffic_filtering=True,id=dc2f1a95-f7f3-4da2-a25e-82d30b142db4,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc2f1a95-f7')
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.681 2 INFO nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Creating config drive at /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5/disk.config
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.687 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpauzkz9e4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.766 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.767 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.767 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No VIF found with MAC fa:16:3e:b0:0f:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.768 2 INFO nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Using config drive
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.795 2 DEBUG nova.storage.rbd_utils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image edb2feae-9638-44f1-83f5-0713116e913f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.837 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpauzkz9e4" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.867 2 DEBUG nova.storage.rbd_utils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:37 compute-0 ovn_controller[152344]: 2025-10-02T08:32:37Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:31:42 10.100.0.5
Oct 02 08:32:37 compute-0 ovn_controller[152344]: 2025-10-02T08:32:37Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:31:42 10.100.0.5
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.872 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5/disk.config 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:37 compute-0 nova_compute[260603]: 2025-10-02 08:32:37.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.031 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.079 2 DEBUG oslo_concurrency.processutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5/disk.config 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.080 2 INFO nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Deleting local config drive /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5/disk.config because it was imported into RBD.
Oct 02 08:32:38 compute-0 NetworkManager[45129]: <info>  [1759393958.1304] manager: (tap210f4c33-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/287)
Oct 02 08:32:38 compute-0 kernel: tap210f4c33-5b: entered promiscuous mode
Oct 02 08:32:38 compute-0 systemd-udevd[330720]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:32:38 compute-0 ovn_controller[152344]: 2025-10-02T08:32:38Z|00666|binding|INFO|Claiming lport 210f4c33-5b93-4fa0-a28c-9562754313b3 for this chassis.
Oct 02 08:32:38 compute-0 ovn_controller[152344]: 2025-10-02T08:32:38Z|00667|binding|INFO|210f4c33-5b93-4fa0-a28c-9562754313b3: Claiming fa:16:3e:45:db:6b 10.100.0.6
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.155 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:db:6b 10.100.0.6'], port_security=['fa:16:3e:45:db:6b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56d0280c-1cff-4fc1-aaec-57f3dbad7ba5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=210f4c33-5b93-4fa0-a28c-9562754313b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.159 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 210f4c33-5b93-4fa0-a28c-9562754313b3 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 bound to our chassis
Oct 02 08:32:38 compute-0 NetworkManager[45129]: <info>  [1759393958.1639] device (tap210f4c33-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:32:38 compute-0 NetworkManager[45129]: <info>  [1759393958.1655] device (tap210f4c33-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.164 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:32:38 compute-0 systemd-machined[214636]: New machine qemu-80-instance-00000048.
Oct 02 08:32:38 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000048.
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.185 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1524dc76-c8fa-4c8b-9518-efe1ee491a1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_controller[152344]: 2025-10-02T08:32:38Z|00668|binding|INFO|Setting lport 210f4c33-5b93-4fa0-a28c-9562754313b3 ovn-installed in OVS
Oct 02 08:32:38 compute-0 ovn_controller[152344]: 2025-10-02T08:32:38Z|00669|binding|INFO|Setting lport 210f4c33-5b93-4fa0-a28c-9562754313b3 up in Southbound
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.215 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[65123ee3-00b0-4174-b33f-e27e5fda0789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.218 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[79b672d9-597a-4a1b-a55d-ce8799e51b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[390e940b-f9d4-45a2-a045-b6378a9408f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.270 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[80dd28f7-1a51-4945-861c-7943ff96d9fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 916, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 916, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331086, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.291 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[913027f9-3f2c-4d38-a437-394ef453e669]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331088, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331088, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.292 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.299 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.299 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.299 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.299 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.377 2 INFO nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Creating config drive at /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f/disk.config
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.381 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp62tzopns execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888409169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.511 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.521 2 DEBUG nova.compute.provider_tree [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.524 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp62tzopns" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.562 2 DEBUG nova.storage.rbd_utils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image edb2feae-9638-44f1-83f5-0713116e913f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002907112798723709 of space, bias 1.0, pg target 0.8721338396171127 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:32:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.572 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f/disk.config edb2feae-9638-44f1-83f5-0713116e913f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.610 2 DEBUG nova.scheduler.client.report [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:38 compute-0 ceph-mon[74477]: pgmap v1568: 305 pgs: 305 active+clean; 385 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Oct 02 08:32:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/888409169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.656 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.658 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.705 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.706 2 DEBUG nova.network.neutron [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.725 2 INFO nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.728 2 DEBUG oslo_concurrency.processutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f/disk.config edb2feae-9638-44f1-83f5-0713116e913f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.729 2 INFO nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Deleting local config drive /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f/disk.config because it was imported into RBD.
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.741 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:32:38 compute-0 kernel: tapdc2f1a95-f7: entered promiscuous mode
Oct 02 08:32:38 compute-0 NetworkManager[45129]: <info>  [1759393958.8119] manager: (tapdc2f1a95-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.821 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.823 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.823 2 INFO nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Creating image(s)
Oct 02 08:32:38 compute-0 ovn_controller[152344]: 2025-10-02T08:32:38Z|00670|binding|INFO|Claiming lport dc2f1a95-f7f3-4da2-a25e-82d30b142db4 for this chassis.
Oct 02 08:32:38 compute-0 ovn_controller[152344]: 2025-10-02T08:32:38Z|00671|binding|INFO|dc2f1a95-f7f3-4da2-a25e-82d30b142db4: Claiming fa:16:3e:b0:0f:f7 10.100.0.12
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.853 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:0f:f7 10.100.0.12'], port_security=['fa:16:3e:b0:0f:f7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'edb2feae-9638-44f1-83f5-0713116e913f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df44854a-80b4-49ce-898d-50927f9b482f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dc2f1a95-f7f3-4da2-a25e-82d30b142db4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.854 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dc2f1a95-f7f3-4da2-a25e-82d30b142db4 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 bound to our chassis
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.857 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:32:38 compute-0 ovn_controller[152344]: 2025-10-02T08:32:38Z|00672|binding|INFO|Setting lport dc2f1a95-f7f3-4da2-a25e-82d30b142db4 ovn-installed in OVS
Oct 02 08:32:38 compute-0 ovn_controller[152344]: 2025-10-02T08:32:38Z|00673|binding|INFO|Setting lport dc2f1a95-f7f3-4da2-a25e-82d30b142db4 up in Southbound
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.870 2 DEBUG nova.storage.rbd_utils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] rbd image d107c637-880a-47fa-ac2d-c762781f296c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:38 compute-0 systemd-udevd[331207]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:32:38 compute-0 systemd-machined[214636]: New machine qemu-81-instance-00000049.
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.881 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[35d73fce-1482-4642-ae71-3822ebabf4be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 NetworkManager[45129]: <info>  [1759393958.8930] device (tapdc2f1a95-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:32:38 compute-0 NetworkManager[45129]: <info>  [1759393958.8938] device (tapdc2f1a95-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:32:38 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000049.
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.911 2 DEBUG nova.storage.rbd_utils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] rbd image d107c637-880a-47fa-ac2d-c762781f296c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.920 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe80dea-ffcc-4947-99ad-f47cbfd9162f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.923 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6b02bdb9-d8da-49bd-8b14-bbe0e0bb5384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.953 2 DEBUG nova.storage.rbd_utils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] rbd image d107c637-880a-47fa-ac2d-c762781f296c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.953 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[96fc22ce-47bd-42e1-9afd-daec64baccb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.958 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.972 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f06bd4a2-f9ac-4700-b65d-fa2bbac03364]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 21085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331256, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.989 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c18e54-1911-412a-bb72-779643d795a5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474399, 'tstamp': 474399}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331259, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474402, 'tstamp': 474402}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331259, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.990 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.992 2 DEBUG nova.network.neutron [req-98dad913-e970-4c24-8164-3bccfc06612e req-09c8536a-d903-4a24-a9cf-1c1bd0cca28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Updated VIF entry in instance network info cache for port dc2f1a95-f7f3-4da2-a25e-82d30b142db4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.993 2 DEBUG nova.network.neutron [req-98dad913-e970-4c24-8164-3bccfc06612e req-09c8536a-d903-4a24-a9cf-1c1bd0cca28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Updating instance_info_cache with network_info: [{"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.993 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef30d863-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.993 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef30d863-a0, col_values=(('external_ids', {'iface-id': 'd143de50-fc80-43b6-82e2-6651430a4a42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:38.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:38 compute-0 nova_compute[260603]: 2025-10-02 08:32:38.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.021 2 DEBUG oslo_concurrency.lockutils [req-98dad913-e970-4c24-8164-3bccfc06612e req-09c8536a-d903-4a24-a9cf-1c1bd0cca28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.028 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.028 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.029 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.029 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 493 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 354 op/s
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.045 2 DEBUG nova.storage.rbd_utils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] rbd image d107c637-880a-47fa-ac2d-c762781f296c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.048 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d107c637-880a-47fa-ac2d-c762781f296c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.140 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393959.1400852, 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.141 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] VM Started (Lifecycle Event)
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.163 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.168 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393959.1459558, 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.168 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] VM Paused (Lifecycle Event)
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.184 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.187 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.209 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.299 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d107c637-880a-47fa-ac2d-c762781f296c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.357 2 DEBUG nova.storage.rbd_utils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] resizing rbd image d107c637-880a-47fa-ac2d-c762781f296c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.474 2 DEBUG nova.objects.instance [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lazy-loading 'migration_context' on Instance uuid d107c637-880a-47fa-ac2d-c762781f296c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.488 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.489 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Ensure instance console log exists: /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.490 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.491 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.491 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.786 2 DEBUG nova.policy [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fd9d3515f65243c595470300ea9c96d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cff05b28ad7949e3b6b334ee46e02341', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.908 2 DEBUG nova.compute.manager [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.909 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.909 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.909 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.910 2 DEBUG nova.compute.manager [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Processing event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.910 2 DEBUG nova.compute.manager [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.911 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.911 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.912 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.912 2 DEBUG nova.compute.manager [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] No waiting events found dispatching network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.913 2 WARNING nova.compute.manager [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received unexpected event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 for instance with vm_state building and task_state spawning.
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.913 2 DEBUG nova.compute.manager [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received event network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.913 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.914 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.914 2 DEBUG oslo_concurrency.lockutils [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.914 2 DEBUG nova.compute.manager [req-1cb605d2-cbaf-40c2-aedb-5f1631c9a05f req-a9baa072-3aec-48a9-a111-08436148c9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Processing event network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.916 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.916 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.924 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393959.9239495, 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.925 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] VM Resumed (Lifecycle Event)
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.928 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.930 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.936 2 INFO nova.virt.libvirt.driver [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Instance spawned successfully.
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.937 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.940 2 INFO nova.virt.libvirt.driver [-] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Instance spawned successfully.
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.941 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.965 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.976 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.983 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.984 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.985 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.986 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.986 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.987 2 DEBUG nova.virt.libvirt.driver [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.995 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.996 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.997 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.997 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.998 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:39 compute-0 nova_compute[260603]: 2025-10-02 08:32:39.999 2 DEBUG nova.virt.libvirt.driver [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.030 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.031 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393959.924172, 6fc23d37-19fe-44e7-8525-17c199801726 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.031 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] VM Resumed (Lifecycle Event)
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.082 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.085 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.092 2 INFO nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Took 8.29 seconds to spawn the instance on the hypervisor.
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.092 2 DEBUG nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.095 2 INFO nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Took 9.81 seconds to spawn the instance on the hypervisor.
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.096 2 DEBUG nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.105 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.191 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393960.190724, edb2feae-9638-44f1-83f5-0713116e913f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.191 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] VM Started (Lifecycle Event)
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.234 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.238 2 INFO nova.compute.manager [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Took 9.45 seconds to build instance.
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.243 2 INFO nova.compute.manager [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Took 11.06 seconds to build instance.
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.244 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393960.193148, edb2feae-9638-44f1-83f5-0713116e913f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.244 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] VM Paused (Lifecycle Event)
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.264 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.265 2 DEBUG oslo_concurrency.lockutils [None req-28a9540b-4799-4d20-8dd2-823cd901c35f 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.266 2 DEBUG oslo_concurrency.lockutils [None req-4b6ebc7c-7d36-46f1-ac83-8885bed1dcc9 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.268 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:40 compute-0 nova_compute[260603]: 2025-10-02 08:32:40.287 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:40 compute-0 ceph-mon[74477]: pgmap v1569: 305 pgs: 305 active+clean; 493 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 354 op/s
Oct 02 08:32:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 493 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 690 KiB/s rd, 9.6 MiB/s wr, 213 op/s
Oct 02 08:32:41 compute-0 nova_compute[260603]: 2025-10-02 08:32:41.389 2 DEBUG nova.network.neutron [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Successfully created port: b101d57e-4c57-4e8a-9a68-a3dded764a52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:32:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.601 2 DEBUG nova.network.neutron [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Successfully updated port: b101d57e-4c57-4e8a-9a68-a3dded764a52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.628 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "refresh_cache-d107c637-880a-47fa-ac2d-c762781f296c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.629 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquired lock "refresh_cache-d107c637-880a-47fa-ac2d-c762781f296c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.629 2 DEBUG nova.network.neutron [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:32:42 compute-0 ceph-mon[74477]: pgmap v1570: 305 pgs: 305 active+clean; 493 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 690 KiB/s rd, 9.6 MiB/s wr, 213 op/s
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.717 2 DEBUG nova.compute.manager [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received event network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.718 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.718 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.719 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.719 2 DEBUG nova.compute.manager [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] No waiting events found dispatching network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.719 2 WARNING nova.compute.manager [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received unexpected event network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 for instance with vm_state active and task_state None.
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.720 2 DEBUG nova.compute.manager [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received event network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.720 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "edb2feae-9638-44f1-83f5-0713116e913f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.720 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.721 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.721 2 DEBUG nova.compute.manager [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Processing event network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.721 2 DEBUG nova.compute.manager [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received event network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.721 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "edb2feae-9638-44f1-83f5-0713116e913f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.722 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.722 2 DEBUG oslo_concurrency.lockutils [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.722 2 DEBUG nova.compute.manager [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] No waiting events found dispatching network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.723 2 WARNING nova.compute.manager [req-a95cfc6e-4c70-4b25-a027-3e14555993a9 req-91d33bd4-30cf-461c-9f1d-363e8e5facc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received unexpected event network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 for instance with vm_state building and task_state spawning.
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.723 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.727 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393962.7273858, edb2feae-9638-44f1-83f5-0713116e913f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.728 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] VM Resumed (Lifecycle Event)
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.732 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.735 2 INFO nova.virt.libvirt.driver [-] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Instance spawned successfully.
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.735 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.764 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.772 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.775 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.776 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.776 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.776 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.777 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.777 2 DEBUG nova.virt.libvirt.driver [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.828 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.830 2 DEBUG nova.network.neutron [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.862 2 INFO nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Took 9.42 seconds to spawn the instance on the hypervisor.
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.863 2 DEBUG nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.928 2 INFO nova.compute.manager [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Took 10.46 seconds to build instance.
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.945 2 DEBUG oslo_concurrency.lockutils [None req-eccf1eec-2efd-4544-985b-cab053887334 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:42 compute-0 nova_compute[260603]: 2025-10-02 08:32:42.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 527 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 11 MiB/s wr, 327 op/s
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.068 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.069 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.069 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.069 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.069 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.070 2 INFO nova.compute.manager [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Terminating instance
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.071 2 DEBUG nova.compute.manager [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:32:44 compute-0 kernel: tap210f4c33-5b (unregistering): left promiscuous mode
Oct 02 08:32:44 compute-0 NetworkManager[45129]: <info>  [1759393964.1071] device (tap210f4c33-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:44 compute-0 ovn_controller[152344]: 2025-10-02T08:32:44Z|00674|binding|INFO|Releasing lport 210f4c33-5b93-4fa0-a28c-9562754313b3 from this chassis (sb_readonly=0)
Oct 02 08:32:44 compute-0 ovn_controller[152344]: 2025-10-02T08:32:44Z|00675|binding|INFO|Setting lport 210f4c33-5b93-4fa0-a28c-9562754313b3 down in Southbound
Oct 02 08:32:44 compute-0 ovn_controller[152344]: 2025-10-02T08:32:44Z|00676|binding|INFO|Removing iface tap210f4c33-5b ovn-installed in OVS
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.126 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:db:6b 10.100.0.6'], port_security=['fa:16:3e:45:db:6b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56d0280c-1cff-4fc1-aaec-57f3dbad7ba5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=210f4c33-5b93-4fa0-a28c-9562754313b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.135 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 210f4c33-5b93-4fa0-a28c-9562754313b3 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.137 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.161 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[153cd659-db2e-4cc2-ac28-0254a20b1f6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:44 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct 02 08:32:44 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000048.scope: Consumed 4.927s CPU time.
Oct 02 08:32:44 compute-0 systemd-machined[214636]: Machine qemu-80-instance-00000048 terminated.
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.193 2 DEBUG oslo_concurrency.lockutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.194 2 DEBUG oslo_concurrency.lockutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.194 2 INFO nova.compute.manager [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Rebooting instance
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.195 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f0764ed2-9a93-4366-a200-50d726c25ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.198 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c113b05d-5fdd-4581-b3dc-716c7d9a85ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.218 2 DEBUG oslo_concurrency.lockutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.219 2 DEBUG oslo_concurrency.lockutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquired lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.219 2 DEBUG nova.network.neutron [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.230 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d44836f8-3d77-47c3-873c-3d42a2738a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.249 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8aea03-1bf8-426b-8742-4e98c68e3398]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 21, 'rx_bytes': 958, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 21, 'rx_bytes': 958, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331424, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.272 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef60558-4c8e-490f-87c7-b11b115fb406]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331425, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331425, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.274 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.280 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.280 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.280 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:44.281 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.306 2 INFO nova.virt.libvirt.driver [-] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Instance destroyed successfully.
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.307 2 DEBUG nova.objects.instance [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'resources' on Instance uuid 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.326 2 DEBUG nova.virt.libvirt.vif [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1154721187',display_name='tempest-ServersTestJSON-server-1154721187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1154721187',id=72,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-ar1pakt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:40Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=56d0280c-1cff-4fc1-aaec-57f3dbad7ba5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.326 2 DEBUG nova.network.os_vif_util [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "210f4c33-5b93-4fa0-a28c-9562754313b3", "address": "fa:16:3e:45:db:6b", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap210f4c33-5b", "ovs_interfaceid": "210f4c33-5b93-4fa0-a28c-9562754313b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.327 2 DEBUG nova.network.os_vif_util [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:db:6b,bridge_name='br-int',has_traffic_filtering=True,id=210f4c33-5b93-4fa0-a28c-9562754313b3,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap210f4c33-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.327 2 DEBUG os_vif [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:db:6b,bridge_name='br-int',has_traffic_filtering=True,id=210f4c33-5b93-4fa0-a28c-9562754313b3,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap210f4c33-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap210f4c33-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.336 2 INFO os_vif [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:db:6b,bridge_name='br-int',has_traffic_filtering=True,id=210f4c33-5b93-4fa0-a28c-9562754313b3,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap210f4c33-5b')
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.392 2 DEBUG nova.network.neutron [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Updating instance_info_cache with network_info: [{"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.412 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Releasing lock "refresh_cache-d107c637-880a-47fa-ac2d-c762781f296c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.413 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Instance network_info: |[{"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.417 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Start _get_guest_xml network_info=[{"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.425 2 WARNING nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.432 2 DEBUG nova.virt.libvirt.host [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.433 2 DEBUG nova.virt.libvirt.host [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.437 2 DEBUG nova.virt.libvirt.host [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.437 2 DEBUG nova.virt.libvirt.host [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.438 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.439 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.440 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.440 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.440 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.440 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.441 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.441 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.441 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.441 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.442 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.442 2 DEBUG nova.virt.hardware [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.445 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:44 compute-0 ceph-mon[74477]: pgmap v1571: 305 pgs: 305 active+clean; 527 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 11 MiB/s wr, 327 op/s
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.706 2 INFO nova.virt.libvirt.driver [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Deleting instance files /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_del
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.709 2 INFO nova.virt.libvirt.driver [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Deletion of /var/lib/nova/instances/56d0280c-1cff-4fc1-aaec-57f3dbad7ba5_del complete
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.800 2 INFO nova.compute.manager [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.801 2 DEBUG oslo.service.loopingcall [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.802 2 DEBUG nova.compute.manager [-] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.802 2 DEBUG nova.network.neutron [-] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.812 2 DEBUG nova.compute.manager [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received event network-changed-b101d57e-4c57-4e8a-9a68-a3dded764a52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.813 2 DEBUG nova.compute.manager [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Refreshing instance network info cache due to event network-changed-b101d57e-4c57-4e8a-9a68-a3dded764a52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.814 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d107c637-880a-47fa-ac2d-c762781f296c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.814 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d107c637-880a-47fa-ac2d-c762781f296c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.815 2 DEBUG nova.network.neutron [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Refreshing network info cache for port b101d57e-4c57-4e8a-9a68-a3dded764a52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:32:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348178313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.934 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.971 2 DEBUG nova.storage.rbd_utils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] rbd image d107c637-880a-47fa-ac2d-c762781f296c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:44 compute-0 nova_compute[260603]: 2025-10-02 08:32:44.976 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 544 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 354 op/s
Oct 02 08:32:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150145554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.470 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.472 2 DEBUG nova.virt.libvirt.vif [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-416001303',display_name='tempest-ServerPasswordTestJSON-server-416001303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-416001303',id=74,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cff05b28ad7949e3b6b334ee46e02341',ramdisk_id='',reservation_id='r-5t7lh5vi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1565935053',owner_user_name='tempest-ServerPasswordTestJSON-1565935053-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:38Z,user_data=None,user_id='fd9d3515f65243c595470300ea9c96d8',uuid=d107c637-880a-47fa-ac2d-c762781f296c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.473 2 DEBUG nova.network.os_vif_util [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Converting VIF {"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.474 2 DEBUG nova.network.os_vif_util [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:b9:5d,bridge_name='br-int',has_traffic_filtering=True,id=b101d57e-4c57-4e8a-9a68-a3dded764a52,network=Network(7b2a1aa3-018f-4ad6-a26f-fd806332f44c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb101d57e-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.475 2 DEBUG nova.objects.instance [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lazy-loading 'pci_devices' on Instance uuid d107c637-880a-47fa-ac2d-c762781f296c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.493 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <uuid>d107c637-880a-47fa-ac2d-c762781f296c</uuid>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <name>instance-0000004a</name>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerPasswordTestJSON-server-416001303</nova:name>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:32:44</nova:creationTime>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <nova:user uuid="fd9d3515f65243c595470300ea9c96d8">tempest-ServerPasswordTestJSON-1565935053-project-member</nova:user>
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <nova:project uuid="cff05b28ad7949e3b6b334ee46e02341">tempest-ServerPasswordTestJSON-1565935053</nova:project>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <nova:port uuid="b101d57e-4c57-4e8a-9a68-a3dded764a52">
Oct 02 08:32:45 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <system>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <entry name="serial">d107c637-880a-47fa-ac2d-c762781f296c</entry>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <entry name="uuid">d107c637-880a-47fa-ac2d-c762781f296c</entry>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </system>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <os>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   </os>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <features>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   </features>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d107c637-880a-47fa-ac2d-c762781f296c_disk">
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d107c637-880a-47fa-ac2d-c762781f296c_disk.config">
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:45 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:82:b9:5d"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <target dev="tapb101d57e-4c"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c/console.log" append="off"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <video>
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </video>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:32:45 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:32:45 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:32:45 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:32:45 compute-0 nova_compute[260603]: </domain>
Oct 02 08:32:45 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.500 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Preparing to wait for external event network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.500 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "d107c637-880a-47fa-ac2d-c762781f296c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.500 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.500 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.501 2 DEBUG nova.virt.libvirt.vif [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-416001303',display_name='tempest-ServerPasswordTestJSON-server-416001303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-416001303',id=74,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cff05b28ad7949e3b6b334ee46e02341',ramdisk_id='',reservation_id='r-5t7lh5vi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1565935053',owner_user_name='tempest-ServerPasswordTestJSON-1565935053-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:38Z,user_data=None,user_id='fd9d3515f65243c595470300ea9c96d8',uuid=d107c637-880a-47fa-ac2d-c762781f296c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.502 2 DEBUG nova.network.os_vif_util [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Converting VIF {"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.503 2 DEBUG nova.network.os_vif_util [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:b9:5d,bridge_name='br-int',has_traffic_filtering=True,id=b101d57e-4c57-4e8a-9a68-a3dded764a52,network=Network(7b2a1aa3-018f-4ad6-a26f-fd806332f44c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb101d57e-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.503 2 DEBUG os_vif [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:b9:5d,bridge_name='br-int',has_traffic_filtering=True,id=b101d57e-4c57-4e8a-9a68-a3dded764a52,network=Network(7b2a1aa3-018f-4ad6-a26f-fd806332f44c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb101d57e-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.508 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb101d57e-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb101d57e-4c, col_values=(('external_ids', {'iface-id': 'b101d57e-4c57-4e8a-9a68-a3dded764a52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:b9:5d', 'vm-uuid': 'd107c637-880a-47fa-ac2d-c762781f296c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:45 compute-0 NetworkManager[45129]: <info>  [1759393965.5116] manager: (tapb101d57e-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.517 2 INFO os_vif [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:b9:5d,bridge_name='br-int',has_traffic_filtering=True,id=b101d57e-4c57-4e8a-9a68-a3dded764a52,network=Network(7b2a1aa3-018f-4ad6-a26f-fd806332f44c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb101d57e-4c')
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.591 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.591 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.592 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] No VIF found with MAC fa:16:3e:82:b9:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.593 2 INFO nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Using config drive
Oct 02 08:32:45 compute-0 nova_compute[260603]: 2025-10-02 08:32:45.617 2 DEBUG nova.storage.rbd_utils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] rbd image d107c637-880a-47fa-ac2d-c762781f296c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3348178313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2150145554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:46 compute-0 ceph-mon[74477]: pgmap v1572: 305 pgs: 305 active+clean; 544 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 354 op/s
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.699 2 DEBUG nova.network.neutron [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Updating instance_info_cache with network_info: [{"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.727 2 DEBUG nova.network.neutron [-] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.729 2 DEBUG nova.network.neutron [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Updated VIF entry in instance network info cache for port b101d57e-4c57-4e8a-9a68-a3dded764a52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.729 2 DEBUG nova.network.neutron [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Updating instance_info_cache with network_info: [{"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.730 2 DEBUG oslo_concurrency.lockutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Releasing lock "refresh_cache-6fc23d37-19fe-44e7-8525-17c199801726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.731 2 DEBUG nova.compute.manager [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.748 2 INFO nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Creating config drive at /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c/disk.config
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.753 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao3aji4z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.801 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d107c637-880a-47fa-ac2d-c762781f296c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.802 2 DEBUG nova.compute.manager [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received event network-vif-unplugged-210f4c33-5b93-4fa0-a28c-9562754313b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.802 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.803 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.803 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.803 2 DEBUG nova.compute.manager [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] No waiting events found dispatching network-vif-unplugged-210f4c33-5b93-4fa0-a28c-9562754313b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.803 2 DEBUG nova.compute.manager [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received event network-vif-unplugged-210f4c33-5b93-4fa0-a28c-9562754313b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.804 2 DEBUG nova.compute.manager [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received event network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.804 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.804 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.804 2 DEBUG oslo_concurrency.lockutils [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.804 2 DEBUG nova.compute.manager [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] No waiting events found dispatching network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.805 2 WARNING nova.compute.manager [req-b497a6df-9432-4875-a1b3-8c287310236d req-423bf4ed-54e3-41a4-b30a-21f868877572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received unexpected event network-vif-plugged-210f4c33-5b93-4fa0-a28c-9562754313b3 for instance with vm_state active and task_state deleting.
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.805 2 INFO nova.compute.manager [-] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Took 2.00 seconds to deallocate network for instance.
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.900 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.901 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.901 2 INFO nova.compute.manager [None req-c805139c-ced3-4eec-9a3a-2580483bafe8 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Pausing
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.902 2 DEBUG nova.objects.instance [None req-c805139c-ced3-4eec-9a3a-2580483bafe8 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'flavor' on Instance uuid edb2feae-9638-44f1-83f5-0713116e913f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.914 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao3aji4z" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:46 compute-0 kernel: tapbd85afb3-c2 (unregistering): left promiscuous mode
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.943 2 DEBUG nova.storage.rbd_utils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] rbd image d107c637-880a-47fa-ac2d-c762781f296c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:46 compute-0 NetworkManager[45129]: <info>  [1759393966.9497] device (tapbd85afb3-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:32:46 compute-0 nova_compute[260603]: 2025-10-02 08:32:46.968 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c/disk.config d107c637-880a-47fa-ac2d-c762781f296c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:46 compute-0 ovn_controller[152344]: 2025-10-02T08:32:46Z|00677|binding|INFO|Releasing lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 from this chassis (sb_readonly=0)
Oct 02 08:32:46 compute-0 ovn_controller[152344]: 2025-10-02T08:32:46Z|00678|binding|INFO|Setting lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 down in Southbound
Oct 02 08:32:46 compute-0 ovn_controller[152344]: 2025-10-02T08:32:46Z|00679|binding|INFO|Removing iface tapbd85afb3-c2 ovn-installed in OVS
Oct 02 08:32:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:46.979 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:55:99 10.100.0.13'], port_security=['fa:16:3e:d8:55:99 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6fc23d37-19fe-44e7-8525-17c199801726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51be74fd-6730-41ad-8944-c77ca7c989b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c01c9c7e49d4e79942959efaa1b294a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd5785b6c-b9bb-41cf-ba74-c0d70260202c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=686f3722-f2fd-47a9-be1c-ef29b7ace186, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bd85afb3-c20d-4a10-838a-6c3e2248fb09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:46.982 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bd85afb3-c20d-4a10-838a-6c3e2248fb09 in datapath 51be74fd-6730-41ad-8944-c77ca7c989b7 unbound from our chassis
Oct 02 08:32:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:46.986 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51be74fd-6730-41ad-8944-c77ca7c989b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:32:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:46.987 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25a55770-274d-4aee-8a6f-0d9704ae1390]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:46.988 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 namespace which is not needed anymore
Oct 02 08:32:47 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000047.scope: Deactivated successfully.
Oct 02 08:32:47 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000047.scope: Consumed 7.807s CPU time.
Oct 02 08:32:47 compute-0 systemd-machined[214636]: Machine qemu-79-instance-00000047 terminated.
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.028 2 DEBUG nova.compute.manager [req-26a2b03d-9129-40ed-89f5-65126492355e req-3bed9841-3695-43a7-a163-4d6bd57b35d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Received event network-vif-deleted-210f4c33-5b93-4fa0-a28c-9562754313b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.036 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393967.0360348, edb2feae-9638-44f1-83f5-0713116e913f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.036 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] VM Paused (Lifecycle Event)
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.039 2 DEBUG nova.compute.manager [None req-c805139c-ced3-4eec-9a3a-2580483bafe8 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 544 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.9 MiB/s wr, 348 op/s
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.069 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.077 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.109 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.141 2 INFO nova.virt.libvirt.driver [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Instance destroyed successfully.
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.142 2 DEBUG nova.objects.instance [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lazy-loading 'resources' on Instance uuid 6fc23d37-19fe-44e7-8525-17c199801726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:47 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[330841]: [NOTICE]   (330845) : haproxy version is 2.8.14-c23fe91
Oct 02 08:32:47 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[330841]: [NOTICE]   (330845) : path to executable is /usr/sbin/haproxy
Oct 02 08:32:47 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[330841]: [WARNING]  (330845) : Exiting Master process...
Oct 02 08:32:47 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[330841]: [ALERT]    (330845) : Current worker (330847) exited with code 143 (Terminated)
Oct 02 08:32:47 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[330841]: [WARNING]  (330845) : All workers exited. Exiting... (0)
Oct 02 08:32:47 compute-0 systemd[1]: libpod-a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a.scope: Deactivated successfully.
Oct 02 08:32:47 compute-0 conmon[330841]: conmon a5ff52dee89e8a6aca0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a.scope/container/memory.events
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.156 2 DEBUG nova.virt.libvirt.vif [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1788060948',display_name='tempest-InstanceActionsTestJSON-server-1788060948',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1788060948',id=71,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c01c9c7e49d4e79942959efaa1b294a',ramdisk_id='',reservation_id='r-zv0whomp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-559595974',owner_user_name='tempest-InstanceActionsTestJSON-559595974-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:46Z,user_data=None,user_id='46c1104e877d4c59a7947f5750b06496',uuid=6fc23d37-19fe-44e7-8525-17c199801726,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.156 2 DEBUG nova.network.os_vif_util [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converting VIF {"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:47 compute-0 podman[331600]: 2025-10-02 08:32:47.156939823 +0000 UTC m=+0.054759120 container died a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.157 2 DEBUG nova.network.os_vif_util [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.157 2 DEBUG os_vif [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd85afb3-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.167 2 INFO os_vif [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2')
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.176 2 DEBUG nova.virt.libvirt.driver [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Start _get_guest_xml network_info=[{"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.177 2 DEBUG oslo_concurrency.processutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c/disk.config d107c637-880a-47fa-ac2d-c762781f296c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.178 2 INFO nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Deleting local config drive /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c/disk.config because it was imported into RBD.
Oct 02 08:32:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a-userdata-shm.mount: Deactivated successfully.
Oct 02 08:32:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9624f0d3dab56072fad3e34f4a8ec15e62826f5f029130c08cc7f31b494d047-merged.mount: Deactivated successfully.
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.197 2 DEBUG oslo_concurrency.processutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:47 compute-0 podman[331600]: 2025-10-02 08:32:47.213688264 +0000 UTC m=+0.111507571 container cleanup a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:32:47 compute-0 systemd[1]: libpod-conmon-a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a.scope: Deactivated successfully.
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.247 2 WARNING nova.virt.libvirt.driver [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:32:47 compute-0 kernel: tapb101d57e-4c: entered promiscuous mode
Oct 02 08:32:47 compute-0 NetworkManager[45129]: <info>  [1759393967.2533] manager: (tapb101d57e-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Oct 02 08:32:47 compute-0 systemd-udevd[331622]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:32:47 compute-0 ovn_controller[152344]: 2025-10-02T08:32:47Z|00680|binding|INFO|Claiming lport b101d57e-4c57-4e8a-9a68-a3dded764a52 for this chassis.
Oct 02 08:32:47 compute-0 ovn_controller[152344]: 2025-10-02T08:32:47Z|00681|binding|INFO|b101d57e-4c57-4e8a-9a68-a3dded764a52: Claiming fa:16:3e:82:b9:5d 10.100.0.9
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.256 2 DEBUG nova.virt.libvirt.host [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.257 2 DEBUG nova.virt.libvirt.host [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.262 2 DEBUG nova.virt.libvirt.host [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.262 2 DEBUG nova.virt.libvirt.host [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.262 2 DEBUG nova.virt.libvirt.driver [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.263 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.263 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.263 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.264 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.264 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.264 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.264 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.264 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.265 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.265 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.265 2 DEBUG nova.virt.hardware [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.265 2 DEBUG nova.objects.instance [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6fc23d37-19fe-44e7-8525-17c199801726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.265 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:b9:5d 10.100.0.9'], port_security=['fa:16:3e:82:b9:5d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd107c637-880a-47fa-ac2d-c762781f296c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b2a1aa3-018f-4ad6-a26f-fd806332f44c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cff05b28ad7949e3b6b334ee46e02341', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e542f46f-e7c1-49eb-b2fd-2cf5f18edd93', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a2d1197-a3fa-4937-b26a-7e932a4555f2, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b101d57e-4c57-4e8a-9a68-a3dded764a52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:47 compute-0 NetworkManager[45129]: <info>  [1759393967.2696] device (tapb101d57e-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:32:47 compute-0 NetworkManager[45129]: <info>  [1759393967.2701] device (tapb101d57e-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:32:47 compute-0 ovn_controller[152344]: 2025-10-02T08:32:47Z|00682|binding|INFO|Setting lport b101d57e-4c57-4e8a-9a68-a3dded764a52 ovn-installed in OVS
Oct 02 08:32:47 compute-0 ovn_controller[152344]: 2025-10-02T08:32:47Z|00683|binding|INFO|Setting lport b101d57e-4c57-4e8a-9a68-a3dded764a52 up in Southbound
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.290 2 DEBUG oslo_concurrency.processutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:47 compute-0 systemd-machined[214636]: New machine qemu-82-instance-0000004a.
Oct 02 08:32:47 compute-0 podman[331652]: 2025-10-02 08:32:47.311890931 +0000 UTC m=+0.072853052 container remove a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:32:47 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-0000004a.
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.321 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a72a245b-4f6a-407c-b0e0-4848dbf1a38f]: (4, ('Thu Oct  2 08:32:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 (a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a)\na5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a\nThu Oct  2 08:32:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 (a5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a)\na5ff52dee89e8a6aca0b3b6d87ec78159f326e837c7bd38ffc8cb4fd5c70863a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.324 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ed51f5-7113-4ac8-b93c-2426686a2c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.325 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51be74fd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:47 compute-0 kernel: tap51be74fd-60: left promiscuous mode
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.343 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf568a8-b30a-4371-b351-e8d887332980]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.368 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f24f2979-7c57-41ec-bca2-2a8bcae41132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.369 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[201705d3-556b-422c-9760-250b3fa84287]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.388 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95d844aa-d27d-4a36-b39a-4223e07a570f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483428, 'reachable_time': 38427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331701, 'error': None, 'target': 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.398 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.399 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[bcadacaf-ca74-4ebf-b74b-576370a44daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.399 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b101d57e-4c57-4e8a-9a68-a3dded764a52 in datapath 7b2a1aa3-018f-4ad6-a26f-fd806332f44c unbound from our chassis
Oct 02 08:32:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d51be74fd\x2d6730\x2d41ad\x2d8944\x2dc77ca7c989b7.mount: Deactivated successfully.
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.401 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b2a1aa3-018f-4ad6-a26f-fd806332f44c
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.414 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0843814f-a54b-4e6b-9d19-8a0f7baddeaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.415 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b2a1aa3-01 in ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.416 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b2a1aa3-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.417 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd3b630-0677-4c7d-9348-d731730e9206]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.417 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb18a571-7325-4e34-aea7-7fad6348141d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.429 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1466ed80-a1be-4954-b352-ed8146f7b89f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.454 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbf6053-f5cc-48da-b55c-f606fdb95f87]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.482 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[044b150a-1598-4636-a4a7-b25e17ba98cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 NetworkManager[45129]: <info>  [1759393967.4927] manager: (tap7b2a1aa3-00): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.496 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7f95deaa-d4a1-4c56-92e9-84923e936dd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.532 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fc84f0ef-db7f-4ea3-a3d3-09922716a8f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.535 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[59d71020-061d-486d-9b5b-3b49d018c243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 NetworkManager[45129]: <info>  [1759393967.5555] device (tap7b2a1aa3-00): carrier: link connected
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.561 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed5118e-8717-4c76-a8a4-ff3b39c52a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.577 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c2bb14-83a4-4f5c-a57f-63831fbbdc9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b2a1aa3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:e4:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484615, 'reachable_time': 23394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331748, 'error': None, 'target': 'ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.593 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf482a1-0b10-43b0-8d83-1996483731f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:e484'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484615, 'tstamp': 484615}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331749, 'error': None, 'target': 'ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.608 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8e461d37-08b1-4fe5-a393-5adeb1170454]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b2a1aa3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:e4:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484615, 'reachable_time': 23394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331750, 'error': None, 'target': 'ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5839498-96fa-46c9-b798-453d4aec50cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332276205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2332276205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.690 2 DEBUG oslo_concurrency.processutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.698 2 DEBUG nova.compute.provider_tree [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.702 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[925ab890-2584-474c-a997-c80c6f751470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.703 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b2a1aa3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.703 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.704 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b2a1aa3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 NetworkManager[45129]: <info>  [1759393967.7061] manager: (tap7b2a1aa3-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Oct 02 08:32:47 compute-0 kernel: tap7b2a1aa3-00: entered promiscuous mode
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.711 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b2a1aa3-00, col_values=(('external_ids', {'iface-id': '12993089-f8e8-41ae-bb7e-40e99cfb0054'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.714 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b2a1aa3-018f-4ad6-a26f-fd806332f44c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b2a1aa3-018f-4ad6-a26f-fd806332f44c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.715 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4198c99d-ee79-463e-ac53-2bd89ad2d588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:47 compute-0 ovn_controller[152344]: 2025-10-02T08:32:47Z|00684|binding|INFO|Releasing lport 12993089-f8e8-41ae-bb7e-40e99cfb0054 from this chassis (sb_readonly=0)
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.715 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-7b2a1aa3-018f-4ad6-a26f-fd806332f44c
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/7b2a1aa3-018f-4ad6-a26f-fd806332f44c.pid.haproxy
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 7b2a1aa3-018f-4ad6-a26f-fd806332f44c
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:32:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:47.716 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c', 'env', 'PROCESS_TAG=haproxy-7b2a1aa3-018f-4ad6-a26f-fd806332f44c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b2a1aa3-018f-4ad6-a26f-fd806332f44c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:32:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/246294800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.721 2 DEBUG nova.scheduler.client.report [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.739 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.751 2 DEBUG oslo_concurrency.processutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.772 2 INFO nova.scheduler.client.report [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Deleted allocations for instance 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.776 2 DEBUG oslo_concurrency.processutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:47 compute-0 nova_compute[260603]: 2025-10-02 08:32:47.875 2 DEBUG oslo_concurrency.lockutils [None req-6cbf821e-8064-4c45-856b-dc1d30473dbd 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "56d0280c-1cff-4fc1-aaec-57f3dbad7ba5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 podman[331824]: 2025-10-02 08:32:48.111211204 +0000 UTC m=+0.048513707 container create 79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 08:32:48 compute-0 systemd[1]: Started libpod-conmon-79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976.scope.
Oct 02 08:32:48 compute-0 podman[331824]: 2025-10-02 08:32:48.08724289 +0000 UTC m=+0.024545393 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:32:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ed30f69eb390198f6ccfe6b8db4d64da7b2c591380396171816abedcaa8cf6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:48 compute-0 podman[331824]: 2025-10-02 08:32:48.217713068 +0000 UTC m=+0.155015591 container init 79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:32:48 compute-0 podman[331824]: 2025-10-02 08:32:48.223096005 +0000 UTC m=+0.160398498 container start 79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:32:48 compute-0 podman[331840]: 2025-10-02 08:32:48.227228963 +0000 UTC m=+0.060953902 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:32:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:32:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1213759498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:48 compute-0 neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c[331841]: [NOTICE]   (331879) : New worker (331900) forked
Oct 02 08:32:48 compute-0 neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c[331841]: [NOTICE]   (331879) : Loading success.
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.263 2 DEBUG oslo_concurrency.processutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.264 2 DEBUG nova.virt.libvirt.vif [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1788060948',display_name='tempest-InstanceActionsTestJSON-server-1788060948',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1788060948',id=71,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c01c9c7e49d4e79942959efaa1b294a',ramdisk_id='',reservation_id='r-zv0whomp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-559595974',owner_user_name='tempest-InstanceActionsTestJSON-559595974-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:46Z,user_data=None,user_id='46c1104e877d4c59a7947f5750b06496',uuid=6fc23d37-19fe-44e7-8525-17c199801726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.265 2 DEBUG nova.network.os_vif_util [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converting VIF {"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.266 2 DEBUG nova.network.os_vif_util [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.267 2 DEBUG nova.objects.instance [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fc23d37-19fe-44e7-8525-17c199801726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:48 compute-0 podman[331837]: 2025-10-02 08:32:48.276470731 +0000 UTC m=+0.106893148 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.282 2 DEBUG nova.virt.libvirt.driver [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <uuid>6fc23d37-19fe-44e7-8525-17c199801726</uuid>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <name>instance-00000047</name>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <nova:name>tempest-InstanceActionsTestJSON-server-1788060948</nova:name>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:32:47</nova:creationTime>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <nova:user uuid="46c1104e877d4c59a7947f5750b06496">tempest-InstanceActionsTestJSON-559595974-project-member</nova:user>
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <nova:project uuid="0c01c9c7e49d4e79942959efaa1b294a">tempest-InstanceActionsTestJSON-559595974</nova:project>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <nova:port uuid="bd85afb3-c20d-4a10-838a-6c3e2248fb09">
Oct 02 08:32:48 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <system>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <entry name="serial">6fc23d37-19fe-44e7-8525-17c199801726</entry>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <entry name="uuid">6fc23d37-19fe-44e7-8525-17c199801726</entry>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </system>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <os>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   </os>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <features>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   </features>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6fc23d37-19fe-44e7-8525-17c199801726_disk">
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6fc23d37-19fe-44e7-8525-17c199801726_disk.config">
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       </source>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:32:48 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:d8:55:99"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <target dev="tapbd85afb3-c2"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726/console.log" append="off"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <video>
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </video>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:32:48 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:32:48 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:32:48 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:32:48 compute-0 nova_compute[260603]: </domain>
Oct 02 08:32:48 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.282 2 DEBUG nova.virt.libvirt.driver [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.283 2 DEBUG nova.virt.libvirt.driver [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.284 2 DEBUG nova.virt.libvirt.vif [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1788060948',display_name='tempest-InstanceActionsTestJSON-server-1788060948',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1788060948',id=71,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='0c01c9c7e49d4e79942959efaa1b294a',ramdisk_id='',reservation_id='r-zv0whomp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-559595974',owner_user_name='tempest-InstanceActionsTestJSON-559595974-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:46Z,user_data=None,user_id='46c1104e877d4c59a7947f5750b06496',uuid=6fc23d37-19fe-44e7-8525-17c199801726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.284 2 DEBUG nova.network.os_vif_util [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converting VIF {"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.285 2 DEBUG nova.network.os_vif_util [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.285 2 DEBUG os_vif [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.286 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd85afb3-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd85afb3-c2, col_values=(('external_ids', {'iface-id': 'bd85afb3-c20d-4a10-838a-6c3e2248fb09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:55:99', 'vm-uuid': '6fc23d37-19fe-44e7-8525-17c199801726'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 NetworkManager[45129]: <info>  [1759393968.2928] manager: (tapbd85afb3-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.300 2 INFO os_vif [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2')
Oct 02 08:32:48 compute-0 systemd-udevd[331733]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:32:48 compute-0 kernel: tapbd85afb3-c2: entered promiscuous mode
Oct 02 08:32:48 compute-0 NetworkManager[45129]: <info>  [1759393968.3591] manager: (tapbd85afb3-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/294)
Oct 02 08:32:48 compute-0 ovn_controller[152344]: 2025-10-02T08:32:48Z|00685|binding|INFO|Claiming lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 for this chassis.
Oct 02 08:32:48 compute-0 ovn_controller[152344]: 2025-10-02T08:32:48Z|00686|binding|INFO|bd85afb3-c20d-4a10-838a-6c3e2248fb09: Claiming fa:16:3e:d8:55:99 10.100.0.13
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 NetworkManager[45129]: <info>  [1759393968.3709] device (tapbd85afb3-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:32:48 compute-0 NetworkManager[45129]: <info>  [1759393968.3716] device (tapbd85afb3-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.370 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:55:99 10.100.0.13'], port_security=['fa:16:3e:d8:55:99 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6fc23d37-19fe-44e7-8525-17c199801726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51be74fd-6730-41ad-8944-c77ca7c989b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c01c9c7e49d4e79942959efaa1b294a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd5785b6c-b9bb-41cf-ba74-c0d70260202c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=686f3722-f2fd-47a9-be1c-ef29b7ace186, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bd85afb3-c20d-4a10-838a-6c3e2248fb09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.371 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bd85afb3-c20d-4a10-838a-6c3e2248fb09 in datapath 51be74fd-6730-41ad-8944-c77ca7c989b7 bound to our chassis
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.373 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51be74fd-6730-41ad-8944-c77ca7c989b7
Oct 02 08:32:48 compute-0 ovn_controller[152344]: 2025-10-02T08:32:48Z|00687|binding|INFO|Setting lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 ovn-installed in OVS
Oct 02 08:32:48 compute-0 ovn_controller[152344]: 2025-10-02T08:32:48Z|00688|binding|INFO|Setting lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 up in Southbound
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.385 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6de6a4fd-9654-42fc-8bcf-fa79e9c3fea2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.386 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap51be74fd-61 in ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.388 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap51be74fd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.388 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[70318fce-5266-437c-b9f8-b4dcad406347]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.389 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3846c82d-4159-45f7-91af-bc3b58589f39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 systemd-machined[214636]: New machine qemu-83-instance-00000047.
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.407 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[26838d7f-30af-4031-ab97-1180e2182e61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-00000047.
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.420 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7637ef50-1365-4136-a614-84a4be25810a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.447 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[73805ffe-6652-46e8-a2dc-db9b8aadf0f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 NetworkManager[45129]: <info>  [1759393968.4594] manager: (tap51be74fd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/295)
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.458 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[82681ec9-4c90-4b2b-ae7c-8bb66e30ad1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.489 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2880aedb-dc67-4ee8-8a09-ac3fb4826588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.491 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9059c59c-2711-4dbc-a8b1-767caa9f5f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 NetworkManager[45129]: <info>  [1759393968.5155] device (tap51be74fd-60): carrier: link connected
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.525 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e2944abd-e3a8-4183-8fc5-8dab2c82b03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.547 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5588a53-00b0-4049-9f27-4ebdccc20fe9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51be74fd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:e5:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484711, 'reachable_time': 43722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331974, 'error': None, 'target': 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.562 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50dc66ad-905b-45e4-a492-31985ab54824]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:e504'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484711, 'tstamp': 484711}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331975, 'error': None, 'target': 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.579 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[98a0a8ad-7c11-4f12-b885-8b0882da46d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51be74fd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:e5:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484711, 'reachable_time': 43722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331976, 'error': None, 'target': 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.617 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[147d3849-37bd-4bda-a29f-817f4efac90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.677 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[975b33a2-55b2-4e96-9b77-73872f4673cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.678 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51be74fd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.678 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.678 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51be74fd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 kernel: tap51be74fd-60: entered promiscuous mode
Oct 02 08:32:48 compute-0 NetworkManager[45129]: <info>  [1759393968.6804] manager: (tap51be74fd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.683 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51be74fd-60, col_values=(('external_ids', {'iface-id': '018506de-118e-4ad6-ae4a-0f6a98b235ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:48 compute-0 ovn_controller[152344]: 2025-10-02T08:32:48Z|00689|binding|INFO|Releasing lport 018506de-118e-4ad6-ae4a-0f6a98b235ca from this chassis (sb_readonly=0)
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.687 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/51be74fd-6730-41ad-8944-c77ca7c989b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/51be74fd-6730-41ad-8944-c77ca7c989b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.687 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cafa45c3-c220-417f-8593-55fc8a75bceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.688 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-51be74fd-6730-41ad-8944-c77ca7c989b7
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/51be74fd-6730-41ad-8944-c77ca7c989b7.pid.haproxy
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 51be74fd-6730-41ad-8944-c77ca7c989b7
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:32:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:48.689 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'env', 'PROCESS_TAG=haproxy-51be74fd-6730-41ad-8944-c77ca7c989b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/51be74fd-6730-41ad-8944-c77ca7c989b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:32:48 compute-0 ceph-mon[74477]: pgmap v1573: 305 pgs: 305 active+clean; 544 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.9 MiB/s wr, 348 op/s
Oct 02 08:32:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/246294800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1213759498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.901 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393968.9011722, d107c637-880a-47fa-ac2d-c762781f296c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.901 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] VM Started (Lifecycle Event)
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.930 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.935 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393968.9012392, d107c637-880a-47fa-ac2d-c762781f296c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.935 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] VM Paused (Lifecycle Event)
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.984 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:48 compute-0 nova_compute[260603]: 2025-10-02 08:32:48.987 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.018 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 498 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.9 MiB/s wr, 448 op/s
Oct 02 08:32:49 compute-0 podman[332048]: 2025-10-02 08:32:49.08051688 +0000 UTC m=+0.051876580 container create 779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 08:32:49 compute-0 systemd[1]: Started libpod-conmon-779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9.scope.
Oct 02 08:32:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beadf06bb28ec2191c6eb1c3a33b05fb6b158858de0743ce3297b2d98dae8511/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:49 compute-0 podman[332048]: 2025-10-02 08:32:49.052364986 +0000 UTC m=+0.023724706 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:32:49 compute-0 podman[332048]: 2025-10-02 08:32:49.155353603 +0000 UTC m=+0.126713303 container init 779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 08:32:49 compute-0 podman[332048]: 2025-10-02 08:32:49.168069077 +0000 UTC m=+0.139428777 container start 779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:32:49 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[332063]: [NOTICE]   (332067) : New worker (332069) forked
Oct 02 08:32:49 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[332063]: [NOTICE]   (332067) : Loading success.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.250 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 6fc23d37-19fe-44e7-8525-17c199801726 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.250 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393969.248464, 6fc23d37-19fe-44e7-8525-17c199801726 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.251 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] VM Resumed (Lifecycle Event)
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.253 2 DEBUG nova.compute.manager [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.257 2 INFO nova.virt.libvirt.driver [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Instance rebooted successfully.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.257 2 DEBUG nova.compute.manager [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.295 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.299 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.340 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.341 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393969.2492616, 6fc23d37-19fe-44e7-8525-17c199801726 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.341 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] VM Started (Lifecycle Event)
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.348 2 DEBUG oslo_concurrency.lockutils [None req-8dc9a880-c8b6-405c-b322-6f413d0e2971 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.371 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.375 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.816 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-unplugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.817 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.818 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.819 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.819 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] No waiting events found dispatching network-vif-unplugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.819 2 WARNING nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received unexpected event network-vif-unplugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 for instance with vm_state active and task_state None.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.819 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.820 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.821 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.821 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.821 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] No waiting events found dispatching network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.821 2 WARNING nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received unexpected event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 for instance with vm_state active and task_state None.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.822 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received event network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.822 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d107c637-880a-47fa-ac2d-c762781f296c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.822 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.822 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.823 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Processing event network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.823 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received event network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.823 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d107c637-880a-47fa-ac2d-c762781f296c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.824 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.824 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.824 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] No waiting events found dispatching network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.824 2 WARNING nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received unexpected event network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 for instance with vm_state building and task_state spawning.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.825 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.825 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.825 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.825 2 DEBUG oslo_concurrency.lockutils [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.826 2 DEBUG nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] No waiting events found dispatching network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.826 2 WARNING nova.compute.manager [req-3a53baa1-4f7b-4385-ba29-4af756e4872d req-b5b3e8de-9983-47ad-9b95-5d721ad7c614 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received unexpected event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 for instance with vm_state active and task_state None.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.827 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.832 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393969.830265, d107c637-880a-47fa-ac2d-c762781f296c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.833 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] VM Resumed (Lifecycle Event)
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.835 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.839 2 INFO nova.virt.libvirt.driver [-] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Instance spawned successfully.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.840 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.873 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.884 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.889 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.890 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.891 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.891 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.892 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.893 2 DEBUG nova.virt.libvirt.driver [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.941 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.991 2 INFO nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Took 11.17 seconds to spawn the instance on the hypervisor.
Oct 02 08:32:49 compute-0 nova_compute[260603]: 2025-10-02 08:32:49.992 2 DEBUG nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.061 2 INFO nova.compute.manager [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Took 12.46 seconds to build instance.
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.076 2 DEBUG oslo_concurrency.lockutils [None req-f4bfc781-5134-4725-8c3f-bce0317e4e66 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.465 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "edb2feae-9638-44f1-83f5-0713116e913f" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.466 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.466 2 INFO nova.compute.manager [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Shelving
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.470 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "50c9773b-004f-491e-abcf-6698fbd8ca3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.471 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.471 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.472 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.473 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.475 2 INFO nova.compute.manager [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Terminating instance
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.477 2 DEBUG nova.compute.manager [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:32:50 compute-0 kernel: tap099dc779-59 (unregistering): left promiscuous mode
Oct 02 08:32:50 compute-0 kernel: tapdc2f1a95-f7 (unregistering): left promiscuous mode
Oct 02 08:32:50 compute-0 NetworkManager[45129]: <info>  [1759393970.5393] device (tap099dc779-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:32:50 compute-0 NetworkManager[45129]: <info>  [1759393970.5403] device (tapdc2f1a95-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:32:50 compute-0 ovn_controller[152344]: 2025-10-02T08:32:50Z|00690|binding|INFO|Releasing lport 099dc779-5949-4fbd-969a-1200ae071364 from this chassis (sb_readonly=0)
Oct 02 08:32:50 compute-0 ovn_controller[152344]: 2025-10-02T08:32:50Z|00691|binding|INFO|Setting lport 099dc779-5949-4fbd-969a-1200ae071364 down in Southbound
Oct 02 08:32:50 compute-0 ovn_controller[152344]: 2025-10-02T08:32:50Z|00692|binding|INFO|Releasing lport dc2f1a95-f7f3-4da2-a25e-82d30b142db4 from this chassis (sb_readonly=0)
Oct 02 08:32:50 compute-0 ovn_controller[152344]: 2025-10-02T08:32:50Z|00693|binding|INFO|Setting lport dc2f1a95-f7f3-4da2-a25e-82d30b142db4 down in Southbound
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 ovn_controller[152344]: 2025-10-02T08:32:50Z|00694|binding|INFO|Removing iface tapdc2f1a95-f7 ovn-installed in OVS
Oct 02 08:32:50 compute-0 ovn_controller[152344]: 2025-10-02T08:32:50Z|00695|binding|INFO|Removing iface tap099dc779-59 ovn-installed in OVS
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.571 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:31:42 10.100.0.5'], port_security=['fa:16:3e:c5:31:42 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50c9773b-004f-491e-abcf-6698fbd8ca3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=099dc779-5949-4fbd-969a-1200ae071364) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.573 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:0f:f7 10.100.0.12'], port_security=['fa:16:3e:b0:0f:f7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'edb2feae-9638-44f1-83f5-0713116e913f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df44854a-80b4-49ce-898d-50927f9b482f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dc2f1a95-f7f3-4da2-a25e-82d30b142db4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.575 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 099dc779-5949-4fbd-969a-1200ae071364 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.577 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:32:50 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Deactivated successfully.
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.595 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec8bb88-2f63-454e-9c55-eac0c23a4384]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Consumed 5.545s CPU time.
Oct 02 08:32:50 compute-0 systemd-machined[214636]: Machine qemu-81-instance-00000049 terminated.
Oct 02 08:32:50 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000045.scope: Deactivated successfully.
Oct 02 08:32:50 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000045.scope: Consumed 13.299s CPU time.
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 systemd-machined[214636]: Machine qemu-78-instance-00000045 terminated.
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.633 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b285b5-a5ea-4bb4-9908-74cda7793925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.638 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[041f7ae3-9a45-4ed7-af22-33bbba3ecb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.668 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c222a79b-84d7-4397-8dd4-ffd3b4fc6b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.685 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3be6ad6-bfd7-4e1c-94fa-a89bdd43bbec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 23, 'rx_bytes': 1000, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 23, 'rx_bytes': 1000, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 25420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332093, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 NetworkManager[45129]: <info>  [1759393970.7099] manager: (tapdc2f1a95-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.709 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ee1c9d-8027-49f0-85ed-085d68f46739]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332096, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332096, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:50 compute-0 ceph-mon[74477]: pgmap v1574: 305 pgs: 305 active+clean; 498 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.9 MiB/s wr, 448 op/s
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.720 2 INFO nova.virt.libvirt.driver [-] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Instance destroyed successfully.
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.721 2 DEBUG nova.objects.instance [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'resources' on Instance uuid 50c9773b-004f-491e-abcf-6698fbd8ca3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.724 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.725 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.725 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.726 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.727 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dc2f1a95-f7f3-4da2-a25e-82d30b142db4 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 unbound from our chassis
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.731 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.736 2 INFO nova.virt.libvirt.driver [-] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Instance destroyed successfully.
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.737 2 DEBUG nova.objects.instance [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'numa_topology' on Instance uuid edb2feae-9638-44f1-83f5-0713116e913f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b776813e-f51c-4823-9efd-5064a2d28a37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.764 2 DEBUG nova.virt.libvirt.vif [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1154721187',display_name='tempest-ServersTestJSON-server-1154721187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1154721187',id=69,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-k01r9hb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:25Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=50c9773b-004f-491e-abcf-6698fbd8ca3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.765 2 DEBUG nova.network.os_vif_util [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "099dc779-5949-4fbd-969a-1200ae071364", "address": "fa:16:3e:c5:31:42", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap099dc779-59", "ovs_interfaceid": "099dc779-5949-4fbd-969a-1200ae071364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.766 2 DEBUG nova.network.os_vif_util [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:31:42,bridge_name='br-int',has_traffic_filtering=True,id=099dc779-5949-4fbd-969a-1200ae071364,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap099dc779-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.766 2 DEBUG os_vif [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:31:42,bridge_name='br-int',has_traffic_filtering=True,id=099dc779-5949-4fbd-969a-1200ae071364,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap099dc779-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.770 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap099dc779-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.785 2 INFO os_vif [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:31:42,bridge_name='br-int',has_traffic_filtering=True,id=099dc779-5949-4fbd-969a-1200ae071364,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap099dc779-59')
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.785 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2a6b32-f10c-40eb-b988-de063a713082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.788 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b131d05-508e-427a-ab38-5a1a9e2bbe32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.818 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8a019016-9ffc-4d7f-bf16-74a7aad3432b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.836 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[edb241a1-626e-4db4-8be5-873952af4b08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 23805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332142, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.854 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9150b478-df99-48e0-8786-0a831723585d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474399, 'tstamp': 474399}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332145, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474402, 'tstamp': 474402}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332145, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.855 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 nova_compute[260603]: 2025-10-02 08:32:50.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.860 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef30d863-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.861 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.861 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef30d863-a0, col_values=(('external_ids', {'iface-id': 'd143de50-fc80-43b6-82e2-6651430a4a42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:50.861 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:32:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 498 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 280 op/s
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.121 2 INFO nova.virt.libvirt.driver [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Beginning cold snapshot process
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.128 2 INFO nova.virt.libvirt.driver [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Deleting instance files /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d_del
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.129 2 INFO nova.virt.libvirt.driver [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Deletion of /var/lib/nova/instances/50c9773b-004f-491e-abcf-6698fbd8ca3d_del complete
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.186 2 DEBUG nova.compute.manager [req-139f2a2e-42ee-4418-bdfc-7499055a1cbc req-41eb47a1-1878-4184-be28-f8b697e025fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received event network-vif-unplugged-099dc779-5949-4fbd-969a-1200ae071364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.187 2 DEBUG oslo_concurrency.lockutils [req-139f2a2e-42ee-4418-bdfc-7499055a1cbc req-41eb47a1-1878-4184-be28-f8b697e025fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.187 2 DEBUG oslo_concurrency.lockutils [req-139f2a2e-42ee-4418-bdfc-7499055a1cbc req-41eb47a1-1878-4184-be28-f8b697e025fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.187 2 DEBUG oslo_concurrency.lockutils [req-139f2a2e-42ee-4418-bdfc-7499055a1cbc req-41eb47a1-1878-4184-be28-f8b697e025fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.187 2 DEBUG nova.compute.manager [req-139f2a2e-42ee-4418-bdfc-7499055a1cbc req-41eb47a1-1878-4184-be28-f8b697e025fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] No waiting events found dispatching network-vif-unplugged-099dc779-5949-4fbd-969a-1200ae071364 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.187 2 DEBUG nova.compute.manager [req-139f2a2e-42ee-4418-bdfc-7499055a1cbc req-41eb47a1-1878-4184-be28-f8b697e025fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received event network-vif-unplugged-099dc779-5949-4fbd-969a-1200ae071364 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.215 2 INFO nova.compute.manager [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.216 2 DEBUG oslo.service.loopingcall [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.216 2 DEBUG nova.compute.manager [-] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.216 2 DEBUG nova.network.neutron [-] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.326 2 DEBUG nova.virt.libvirt.imagebackend [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.565 2 DEBUG nova.storage.rbd_utils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(499b354672bf4983b5fd644febc0e0e8) on rbd image(edb2feae-9638-44f1-83f5-0713116e913f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.660 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.661 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.661 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.662 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.663 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.665 2 INFO nova.compute.manager [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Terminating instance
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.667 2 DEBUG nova.compute.manager [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:32:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 02 08:32:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 02 08:32:51 compute-0 kernel: tapbd85afb3-c2 (unregistering): left promiscuous mode
Oct 02 08:32:51 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 02 08:32:51 compute-0 NetworkManager[45129]: <info>  [1759393971.7306] device (tapbd85afb3-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:51 compute-0 ovn_controller[152344]: 2025-10-02T08:32:51Z|00696|binding|INFO|Releasing lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 from this chassis (sb_readonly=0)
Oct 02 08:32:51 compute-0 ovn_controller[152344]: 2025-10-02T08:32:51Z|00697|binding|INFO|Setting lport bd85afb3-c20d-4a10-838a-6c3e2248fb09 down in Southbound
Oct 02 08:32:51 compute-0 ovn_controller[152344]: 2025-10-02T08:32:51Z|00698|binding|INFO|Removing iface tapbd85afb3-c2 ovn-installed in OVS
Oct 02 08:32:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:51.754 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:55:99 10.100.0.13'], port_security=['fa:16:3e:d8:55:99 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6fc23d37-19fe-44e7-8525-17c199801726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51be74fd-6730-41ad-8944-c77ca7c989b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c01c9c7e49d4e79942959efaa1b294a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd5785b6c-b9bb-41cf-ba74-c0d70260202c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=686f3722-f2fd-47a9-be1c-ef29b7ace186, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bd85afb3-c20d-4a10-838a-6c3e2248fb09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:51.755 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bd85afb3-c20d-4a10-838a-6c3e2248fb09 in datapath 51be74fd-6730-41ad-8944-c77ca7c989b7 unbound from our chassis
Oct 02 08:32:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:51.757 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51be74fd-6730-41ad-8944-c77ca7c989b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:32:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:51.760 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ab94f3-3e6e-422f-8390-c56c7c7e72e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:51.761 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 namespace which is not needed anymore
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:51 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000047.scope: Deactivated successfully.
Oct 02 08:32:51 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000047.scope: Consumed 3.190s CPU time.
Oct 02 08:32:51 compute-0 systemd-machined[214636]: Machine qemu-83-instance-00000047 terminated.
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.808 2 DEBUG nova.storage.rbd_utils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] cloning vms/edb2feae-9638-44f1-83f5-0713116e913f_disk@499b354672bf4983b5fd644febc0e0e8 to images/20e9bdfc-2c59-4117-bbad-fa7996fe8b08 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.913 2 INFO nova.virt.libvirt.driver [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Instance destroyed successfully.
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.917 2 DEBUG nova.objects.instance [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lazy-loading 'resources' on Instance uuid 6fc23d37-19fe-44e7-8525-17c199801726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.950 2 DEBUG nova.virt.libvirt.vif [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1788060948',display_name='tempest-InstanceActionsTestJSON-server-1788060948',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1788060948',id=71,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c01c9c7e49d4e79942959efaa1b294a',ramdisk_id='',reservation_id='r-zv0whomp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-559595974',owner_user_name='tempest-InstanceActionsTestJSON-559595974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:49Z,user_data=None,user_id='46c1104e877d4c59a7947f5750b06496',uuid=6fc23d37-19fe-44e7-8525-17c199801726,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.951 2 DEBUG nova.network.os_vif_util [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converting VIF {"id": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "address": "fa:16:3e:d8:55:99", "network": {"id": "51be74fd-6730-41ad-8944-c77ca7c989b7", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-235260406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c01c9c7e49d4e79942959efaa1b294a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd85afb3-c2", "ovs_interfaceid": "bd85afb3-c20d-4a10-838a-6c3e2248fb09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.951 2 DEBUG nova.network.os_vif_util [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.951 2 DEBUG os_vif [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.954 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd85afb3-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:51 compute-0 nova_compute[260603]: 2025-10-02 08:32:51.972 2 DEBUG nova.storage.rbd_utils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] flattening images/20e9bdfc-2c59-4117-bbad-fa7996fe8b08 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:32:51 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[332063]: [NOTICE]   (332067) : haproxy version is 2.8.14-c23fe91
Oct 02 08:32:51 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[332063]: [NOTICE]   (332067) : path to executable is /usr/sbin/haproxy
Oct 02 08:32:51 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[332063]: [WARNING]  (332067) : Exiting Master process...
Oct 02 08:32:51 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[332063]: [ALERT]    (332067) : Current worker (332069) exited with code 143 (Terminated)
Oct 02 08:32:51 compute-0 neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7[332063]: [WARNING]  (332067) : All workers exited. Exiting... (0)
Oct 02 08:32:51 compute-0 systemd[1]: libpod-779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9.scope: Deactivated successfully.
Oct 02 08:32:51 compute-0 podman[332256]: 2025-10-02 08:32:51.986111858 +0000 UTC m=+0.075121072 container died 779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9-userdata-shm.mount: Deactivated successfully.
Oct 02 08:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-beadf06bb28ec2191c6eb1c3a33b05fb6b158858de0743ce3297b2d98dae8511-merged.mount: Deactivated successfully.
Oct 02 08:32:52 compute-0 podman[332256]: 2025-10-02 08:32:52.035418028 +0000 UTC m=+0.124427212 container cleanup 779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.046 2 INFO os_vif [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:55:99,bridge_name='br-int',has_traffic_filtering=True,id=bd85afb3-c20d-4a10-838a-6c3e2248fb09,network=Network(51be74fd-6730-41ad-8944-c77ca7c989b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd85afb3-c2')
Oct 02 08:32:52 compute-0 systemd[1]: libpod-conmon-779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9.scope: Deactivated successfully.
Oct 02 08:32:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:52 compute-0 podman[332317]: 2025-10-02 08:32:52.167244859 +0000 UTC m=+0.098956501 container remove 779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.175 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09deb08d-a647-4c8b-bd80-8ab89e3cb871]: (4, ('Thu Oct  2 08:32:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 (779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9)\n779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9\nThu Oct  2 08:32:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 (779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9)\n779f9e03c6d29962693abc464e4c9bde0e865f5566526d67f9092c57b59bf6b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.176 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60b26418-2c08-44b6-9c87-8492b0666b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.177 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51be74fd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:52 compute-0 kernel: tap51be74fd-60: left promiscuous mode
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.188 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42ca83f9-7d43-4588-85d5-6095a2bacc44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.223 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7d62ac-4845-49e9-af56-e718793c1903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.224 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7403ac06-5e5c-45ad-bbeb-707e7fb3a829]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.244 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4dbe2f-1a14-4702-b186-fefacc3bcb16]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484704, 'reachable_time': 29204, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332360, 'error': None, 'target': 'ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.246 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-51be74fd-6730-41ad-8944-c77ca7c989b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:32:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:52.246 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[42e61d60-ea82-4b5d-a53c-8228b98f4301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d51be74fd\x2d6730\x2d41ad\x2d8944\x2dc77ca7c989b7.mount: Deactivated successfully.
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.273 2 DEBUG nova.storage.rbd_utils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] removing snapshot(499b354672bf4983b5fd644febc0e0e8) on rbd image(edb2feae-9638-44f1-83f5-0713116e913f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.477 2 DEBUG nova.network.neutron [-] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.496 2 INFO nova.compute.manager [-] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Took 1.28 seconds to deallocate network for instance.
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.524 2 INFO nova.virt.libvirt.driver [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Deleting instance files /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726_del
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.525 2 INFO nova.virt.libvirt.driver [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Deletion of /var/lib/nova/instances/6fc23d37-19fe-44e7-8525-17c199801726_del complete
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.556 2 DEBUG nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.557 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.557 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.558 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.558 2 DEBUG nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] No waiting events found dispatching network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.558 2 WARNING nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received unexpected event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 for instance with vm_state active and task_state deleting.
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.559 2 DEBUG nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received event network-vif-unplugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.559 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "edb2feae-9638-44f1-83f5-0713116e913f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.560 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.560 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.560 2 DEBUG nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] No waiting events found dispatching network-vif-unplugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.561 2 WARNING nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received unexpected event network-vif-unplugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 for instance with vm_state paused and task_state shelving_image_uploading.
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.561 2 DEBUG nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received event network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.562 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "edb2feae-9638-44f1-83f5-0713116e913f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.562 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.562 2 DEBUG oslo_concurrency.lockutils [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.563 2 DEBUG nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] No waiting events found dispatching network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.563 2 WARNING nova.compute.manager [req-60ccab85-2f2f-4c0b-8cb4-8d9a8e741fb8 req-39d5a3b8-a477-4a3c-9a2e-bd2ec99a756d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received unexpected event network-vif-plugged-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 for instance with vm_state paused and task_state shelving_image_uploading.
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.566 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.567 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.592 2 INFO nova.compute.manager [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.592 2 DEBUG oslo.service.loopingcall [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.593 2 DEBUG nova.compute.manager [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.593 2 DEBUG nova.network.neutron [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:32:52 compute-0 ceph-mon[74477]: pgmap v1575: 305 pgs: 305 active+clean; 498 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 280 op/s
Oct 02 08:32:52 compute-0 ceph-mon[74477]: osdmap e224: 3 total, 3 up, 3 in
Oct 02 08:32:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 02 08:32:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 02 08:32:52 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.788 2 DEBUG nova.storage.rbd_utils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(snap) on rbd image(20e9bdfc-2c59-4117-bbad-fa7996fe8b08) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:32:52 compute-0 nova_compute[260603]: 2025-10-02 08:32:52.848 2 DEBUG oslo_concurrency.processutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 478 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 1.8 MiB/s wr, 366 op/s
Oct 02 08:32:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242586170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.297 2 DEBUG oslo_concurrency.processutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.305 2 DEBUG nova.compute.provider_tree [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.323 2 DEBUG nova.scheduler.client.report [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.344 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:53 compute-0 sudo[332413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:32:53 compute-0 sudo[332413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:53 compute-0 sudo[332413]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.385 2 INFO nova.scheduler.client.report [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Deleted allocations for instance 50c9773b-004f-491e-abcf-6698fbd8ca3d
Oct 02 08:32:53 compute-0 sudo[332438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:32:53 compute-0 sudo[332438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:53 compute-0 sudo[332438]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.468 2 DEBUG oslo_concurrency.lockutils [None req-2a9b0845-ac75-481f-8c70-973375f6a498 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:53 compute-0 sudo[332463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:32:53 compute-0 sudo[332463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:53 compute-0 sudo[332463]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:53 compute-0 sudo[332488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.665 2 DEBUG nova.compute.manager [req-e4d4d920-fdd2-41bb-8501-950ec0ea7dea req-8c2c6905-8c91-4726-a200-5839916f2161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received event network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.666 2 DEBUG oslo_concurrency.lockutils [req-e4d4d920-fdd2-41bb-8501-950ec0ea7dea req-8c2c6905-8c91-4726-a200-5839916f2161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:53 compute-0 sudo[332488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.668 2 DEBUG oslo_concurrency.lockutils [req-e4d4d920-fdd2-41bb-8501-950ec0ea7dea req-8c2c6905-8c91-4726-a200-5839916f2161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.668 2 DEBUG oslo_concurrency.lockutils [req-e4d4d920-fdd2-41bb-8501-950ec0ea7dea req-8c2c6905-8c91-4726-a200-5839916f2161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "50c9773b-004f-491e-abcf-6698fbd8ca3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.669 2 DEBUG nova.compute.manager [req-e4d4d920-fdd2-41bb-8501-950ec0ea7dea req-8c2c6905-8c91-4726-a200-5839916f2161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] No waiting events found dispatching network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.670 2 WARNING nova.compute.manager [req-e4d4d920-fdd2-41bb-8501-950ec0ea7dea req-8c2c6905-8c91-4726-a200-5839916f2161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received unexpected event network-vif-plugged-099dc779-5949-4fbd-969a-1200ae071364 for instance with vm_state deleted and task_state None.
Oct 02 08:32:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 02 08:32:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 02 08:32:53 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 02 08:32:53 compute-0 ceph-mon[74477]: osdmap e225: 3 total, 3 up, 3 in
Oct 02 08:32:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4242586170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.947 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "d107c637-880a-47fa-ac2d-c762781f296c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.948 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.949 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "d107c637-880a-47fa-ac2d-c762781f296c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.949 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.950 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.951 2 INFO nova.compute.manager [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Terminating instance
Oct 02 08:32:53 compute-0 nova_compute[260603]: 2025-10-02 08:32:53.952 2 DEBUG nova.compute.manager [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:32:54 compute-0 kernel: tapb101d57e-4c (unregistering): left promiscuous mode
Oct 02 08:32:54 compute-0 NetworkManager[45129]: <info>  [1759393974.0202] device (tapb101d57e-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.025 2 DEBUG nova.network.neutron [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:54 compute-0 ovn_controller[152344]: 2025-10-02T08:32:54Z|00699|binding|INFO|Releasing lport b101d57e-4c57-4e8a-9a68-a3dded764a52 from this chassis (sb_readonly=0)
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:54 compute-0 ovn_controller[152344]: 2025-10-02T08:32:54Z|00700|binding|INFO|Setting lport b101d57e-4c57-4e8a-9a68-a3dded764a52 down in Southbound
Oct 02 08:32:54 compute-0 ovn_controller[152344]: 2025-10-02T08:32:54Z|00701|binding|INFO|Removing iface tapb101d57e-4c ovn-installed in OVS
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.045 2 INFO nova.compute.manager [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Took 1.45 seconds to deallocate network for instance.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.055 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:b9:5d 10.100.0.9'], port_security=['fa:16:3e:82:b9:5d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd107c637-880a-47fa-ac2d-c762781f296c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b2a1aa3-018f-4ad6-a26f-fd806332f44c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cff05b28ad7949e3b6b334ee46e02341', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e542f46f-e7c1-49eb-b2fd-2cf5f18edd93', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a2d1197-a3fa-4937-b26a-7e932a4555f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b101d57e-4c57-4e8a-9a68-a3dded764a52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.056 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b101d57e-4c57-4e8a-9a68-a3dded764a52 in datapath 7b2a1aa3-018f-4ad6-a26f-fd806332f44c unbound from our chassis
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.058 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b2a1aa3-018f-4ad6-a26f-fd806332f44c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.059 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7ae422-5455-4bb3-875c-cea1327d93f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.060 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c namespace which is not needed anymore
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.109 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:54 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.110 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:54 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Consumed 5.413s CPU time.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:54 compute-0 systemd-machined[214636]: Machine qemu-82-instance-0000004a terminated.
Oct 02 08:32:54 compute-0 podman[332531]: 2025-10-02 08:32:54.171744316 +0000 UTC m=+0.114159223 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.211 2 INFO nova.virt.libvirt.driver [-] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Instance destroyed successfully.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.212 2 DEBUG nova.objects.instance [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lazy-loading 'resources' on Instance uuid d107c637-880a-47fa-ac2d-c762781f296c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.240 2 DEBUG nova.virt.libvirt.vif [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-416001303',display_name='tempest-ServerPasswordTestJSON-server-416001303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-416001303',id=74,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cff05b28ad7949e3b6b334ee46e02341',ramdisk_id='',reservation_id='r-5t7lh5vi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1565935053',owner_user_name='tempest-ServerPasswordTestJSON-1565935053-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:53Z,user_data=None,user_id='fd9d3515f65243c595470300ea9c96d8',uuid=d107c637-880a-47fa-ac2d-c762781f296c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.241 2 DEBUG nova.network.os_vif_util [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Converting VIF {"id": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "address": "fa:16:3e:82:b9:5d", "network": {"id": "7b2a1aa3-018f-4ad6-a26f-fd806332f44c", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-877607774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cff05b28ad7949e3b6b334ee46e02341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb101d57e-4c", "ovs_interfaceid": "b101d57e-4c57-4e8a-9a68-a3dded764a52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.242 2 DEBUG nova.network.os_vif_util [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:b9:5d,bridge_name='br-int',has_traffic_filtering=True,id=b101d57e-4c57-4e8a-9a68-a3dded764a52,network=Network(7b2a1aa3-018f-4ad6-a26f-fd806332f44c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb101d57e-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.242 2 DEBUG os_vif [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:b9:5d,bridge_name='br-int',has_traffic_filtering=True,id=b101d57e-4c57-4e8a-9a68-a3dded764a52,network=Network(7b2a1aa3-018f-4ad6-a26f-fd806332f44c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb101d57e-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:54 compute-0 neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c[331841]: [NOTICE]   (331879) : haproxy version is 2.8.14-c23fe91
Oct 02 08:32:54 compute-0 neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c[331841]: [NOTICE]   (331879) : path to executable is /usr/sbin/haproxy
Oct 02 08:32:54 compute-0 neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c[331841]: [WARNING]  (331879) : Exiting Master process...
Oct 02 08:32:54 compute-0 neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c[331841]: [WARNING]  (331879) : Exiting Master process...
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb101d57e-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:54 compute-0 neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c[331841]: [ALERT]    (331879) : Current worker (331900) exited with code 143 (Terminated)
Oct 02 08:32:54 compute-0 neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c[331841]: [WARNING]  (331879) : All workers exited. Exiting... (0)
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:32:54 compute-0 systemd[1]: libpod-79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976.scope: Deactivated successfully.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.254 2 INFO os_vif [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:b9:5d,bridge_name='br-int',has_traffic_filtering=True,id=b101d57e-4c57-4e8a-9a68-a3dded764a52,network=Network(7b2a1aa3-018f-4ad6-a26f-fd806332f44c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb101d57e-4c')
Oct 02 08:32:54 compute-0 podman[332577]: 2025-10-02 08:32:54.256912849 +0000 UTC m=+0.065445832 container died 79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 08:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976-userdata-shm.mount: Deactivated successfully.
Oct 02 08:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-43ed30f69eb390198f6ccfe6b8db4d64da7b2c591380396171816abedcaa8cf6-merged.mount: Deactivated successfully.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.295 2 DEBUG oslo_concurrency.processutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:54 compute-0 podman[332577]: 2025-10-02 08:32:54.299700486 +0000 UTC m=+0.108233479 container cleanup 79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:32:54 compute-0 systemd[1]: libpod-conmon-79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976.scope: Deactivated successfully.
Oct 02 08:32:54 compute-0 sudo[332488]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:54 compute-0 podman[332642]: 2025-10-02 08:32:54.386559172 +0000 UTC m=+0.050462987 container remove 79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.395 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7e716c-3da7-41fc-8431-4f0d5ad51d5b]: (4, ('Thu Oct  2 08:32:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c (79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976)\n79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976\nThu Oct  2 08:32:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c (79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976)\n79d4080454a2c25c7391df10fc42dc29b3a15dacf2f01011e55bc2a3288b2976\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.397 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd30682c-b256-497d-800c-954ca932b4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.398 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b2a1aa3-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:32:54 compute-0 kernel: tap7b2a1aa3-00: left promiscuous mode
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.425 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca1c92a-72a4-471e-b0bc-67bb5cce299e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:32:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:32:54 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.451 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3efc4ed0-ab51-415f-ba68-deb54ef91676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.453 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7bb16e-e356-41b0-8418-eaca23634dbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:54 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:32:54 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 35b4d598-b782-46c9-bbb0-cec811db4fce does not exist
Oct 02 08:32:54 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e6b7df6b-fbc5-4f0c-9992-b132f61770bd does not exist
Oct 02 08:32:54 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b3dfc4dd-51f2-47bb-9a0f-a2c831b994b7 does not exist
Oct 02 08:32:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:32:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:32:54 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:32:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90b61fa4-ddb8-4e6f-90a1-8698efb9b15f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484607, 'reachable_time': 33295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332686, 'error': None, 'target': 'ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.478 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b2a1aa3-018f-4ad6-a26f-fd806332f44c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:32:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:54.479 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[50f45c59-b480-432d-ae1f-03f959a279f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:32:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d7b2a1aa3\x2d018f\x2d4ad6\x2da26f\x2dfd806332f44c.mount: Deactivated successfully.
Oct 02 08:32:54 compute-0 sudo[332688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:32:54 compute-0 sudo[332688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:54 compute-0 sudo[332688]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:54 compute-0 sudo[332714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:32:54 compute-0 sudo[332714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:54 compute-0 sudo[332714]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:54 compute-0 sudo[332739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:32:54 compute-0 sudo[332739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:54 compute-0 sudo[332739]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.695 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Received event network-vif-deleted-099dc779-5949-4fbd-969a-1200ae071364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.695 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-unplugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.696 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.696 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.696 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.697 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] No waiting events found dispatching network-vif-unplugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.697 2 WARNING nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received unexpected event network-vif-unplugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 for instance with vm_state deleted and task_state None.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.697 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.697 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6fc23d37-19fe-44e7-8525-17c199801726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.698 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.698 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.698 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] No waiting events found dispatching network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.699 2 WARNING nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received unexpected event network-vif-plugged-bd85afb3-c20d-4a10-838a-6c3e2248fb09 for instance with vm_state deleted and task_state None.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.699 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Received event network-vif-deleted-bd85afb3-c20d-4a10-838a-6c3e2248fb09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.699 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received event network-vif-unplugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.700 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d107c637-880a-47fa-ac2d-c762781f296c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.700 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.701 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.701 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] No waiting events found dispatching network-vif-unplugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.701 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received event network-vif-unplugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.701 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received event network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.702 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d107c637-880a-47fa-ac2d-c762781f296c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.702 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.702 2 DEBUG oslo_concurrency.lockutils [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.703 2 DEBUG nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] No waiting events found dispatching network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.703 2 WARNING nova.compute.manager [req-5abe6ab2-0488-4acf-9677-727e69a01762 req-88aab95e-bc84-4e8d-af39-2345a773a989 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received unexpected event network-vif-plugged-b101d57e-4c57-4e8a-9a68-a3dded764a52 for instance with vm_state active and task_state deleting.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.723 2 INFO nova.virt.libvirt.driver [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Deleting instance files /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c_del
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.724 2 INFO nova.virt.libvirt.driver [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Deletion of /var/lib/nova/instances/d107c637-880a-47fa-ac2d-c762781f296c_del complete
Oct 02 08:32:54 compute-0 sudo[332764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:32:54 compute-0 sudo[332764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:54 compute-0 ceph-mon[74477]: pgmap v1578: 305 pgs: 305 active+clean; 478 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 1.8 MiB/s wr, 366 op/s
Oct 02 08:32:54 compute-0 ceph-mon[74477]: osdmap e226: 3 total, 3 up, 3 in
Oct 02 08:32:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:32:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:32:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144457477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.782 2 DEBUG oslo_concurrency.processutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.789 2 DEBUG nova.compute.provider_tree [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.805 2 INFO nova.compute.manager [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.805 2 DEBUG oslo.service.loopingcall [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.806 2 DEBUG nova.compute.manager [-] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.806 2 DEBUG nova.network.neutron [-] [instance: d107c637-880a-47fa-ac2d-c762781f296c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.812 2 DEBUG nova.scheduler.client.report [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.844 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.870 2 INFO nova.scheduler.client.report [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Deleted allocations for instance 6fc23d37-19fe-44e7-8525-17c199801726
Oct 02 08:32:54 compute-0 nova_compute[260603]: 2025-10-02 08:32:54.928 2 DEBUG oslo_concurrency.lockutils [None req-754e6807-44e7-415c-9a42-aba33bedf15d 46c1104e877d4c59a7947f5750b06496 0c01c9c7e49d4e79942959efaa1b294a - - default default] Lock "6fc23d37-19fe-44e7-8525-17c199801726" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 426 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.6 MiB/s wr, 458 op/s
Oct 02 08:32:55 compute-0 podman[332832]: 2025-10-02 08:32:55.162007512 +0000 UTC m=+0.051471887 container create a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:32:55 compute-0 systemd[1]: Started libpod-conmon-a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1.scope.
Oct 02 08:32:55 compute-0 podman[332832]: 2025-10-02 08:32:55.143452718 +0000 UTC m=+0.032917143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:32:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.244 2 INFO nova.virt.libvirt.driver [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Snapshot image upload complete
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.245 2 DEBUG nova.compute.manager [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:55 compute-0 podman[332832]: 2025-10-02 08:32:55.25538963 +0000 UTC m=+0.144854085 container init a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:32:55 compute-0 podman[332832]: 2025-10-02 08:32:55.262399398 +0000 UTC m=+0.151863773 container start a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 08:32:55 compute-0 podman[332832]: 2025-10-02 08:32:55.267739414 +0000 UTC m=+0.157203879 container attach a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 08:32:55 compute-0 lucid_sammet[332849]: 167 167
Oct 02 08:32:55 compute-0 conmon[332849]: conmon a56d0db54495e37e71c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1.scope/container/memory.events
Oct 02 08:32:55 compute-0 systemd[1]: libpod-a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1.scope: Deactivated successfully.
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.296 2 INFO nova.compute.manager [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Shelve offloading
Oct 02 08:32:55 compute-0 podman[332854]: 2025-10-02 08:32:55.304284577 +0000 UTC m=+0.023055066 container died a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.304 2 INFO nova.virt.libvirt.driver [-] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Instance destroyed successfully.
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.306 2 DEBUG nova.compute.manager [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.309 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.309 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquired lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.310 2 DEBUG nova.network.neutron [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-81780b7fe742aa9f29fdf9d81bba61c00b7e2a30a4e4f66e4f2215c083f026c6-merged.mount: Deactivated successfully.
Oct 02 08:32:55 compute-0 podman[332854]: 2025-10-02 08:32:55.344321399 +0000 UTC m=+0.063091908 container remove a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:32:55 compute-0 systemd[1]: libpod-conmon-a56d0db54495e37e71c3eb082947e034e30d2a522f0611c85060a81b2635f6f1.scope: Deactivated successfully.
Oct 02 08:32:55 compute-0 podman[332875]: 2025-10-02 08:32:55.58640956 +0000 UTC m=+0.080296521 container create f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:32:55 compute-0 systemd[1]: Started libpod-conmon-f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa.scope.
Oct 02 08:32:55 compute-0 podman[332875]: 2025-10-02 08:32:55.551543928 +0000 UTC m=+0.045430959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.656 2 DEBUG nova.network.neutron [-] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.678 2 INFO nova.compute.manager [-] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Took 0.87 seconds to deallocate network for instance.
Oct 02 08:32:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54ae17cd6da76e53c1491b90b8af080e9694ba5db19dc45c4c2dd802f0f259f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54ae17cd6da76e53c1491b90b8af080e9694ba5db19dc45c4c2dd802f0f259f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54ae17cd6da76e53c1491b90b8af080e9694ba5db19dc45c4c2dd802f0f259f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54ae17cd6da76e53c1491b90b8af080e9694ba5db19dc45c4c2dd802f0f259f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54ae17cd6da76e53c1491b90b8af080e9694ba5db19dc45c4c2dd802f0f259f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:55 compute-0 podman[332875]: 2025-10-02 08:32:55.700686686 +0000 UTC m=+0.194573627 container init f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 08:32:55 compute-0 podman[332875]: 2025-10-02 08:32:55.724430403 +0000 UTC m=+0.218317354 container start f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:32:55 compute-0 podman[332875]: 2025-10-02 08:32:55.729089297 +0000 UTC m=+0.222976258 container attach f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.732 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.733 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2144457477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:55 compute-0 nova_compute[260603]: 2025-10-02 08:32:55.853 2 DEBUG oslo_concurrency.processutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248459256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:56 compute-0 nova_compute[260603]: 2025-10-02 08:32:56.354 2 DEBUG oslo_concurrency.processutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:56 compute-0 nova_compute[260603]: 2025-10-02 08:32:56.362 2 DEBUG nova.compute.provider_tree [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:56 compute-0 nova_compute[260603]: 2025-10-02 08:32:56.383 2 DEBUG nova.scheduler.client.report [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:56 compute-0 nova_compute[260603]: 2025-10-02 08:32:56.407 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:56 compute-0 nova_compute[260603]: 2025-10-02 08:32:56.434 2 INFO nova.scheduler.client.report [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Deleted allocations for instance d107c637-880a-47fa-ac2d-c762781f296c
Oct 02 08:32:56 compute-0 nova_compute[260603]: 2025-10-02 08:32:56.510 2 DEBUG oslo_concurrency.lockutils [None req-6bd920b1-8ff7-4d08-8769-d850a9da20d5 fd9d3515f65243c595470300ea9c96d8 cff05b28ad7949e3b6b334ee46e02341 - - default default] Lock "d107c637-880a-47fa-ac2d-c762781f296c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:56 compute-0 ceph-mon[74477]: pgmap v1580: 305 pgs: 305 active+clean; 426 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.6 MiB/s wr, 458 op/s
Oct 02 08:32:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4248459256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:56 compute-0 infallible_bardeen[332892]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:32:56 compute-0 infallible_bardeen[332892]: --> relative data size: 1.0
Oct 02 08:32:56 compute-0 infallible_bardeen[332892]: --> All data devices are unavailable
Oct 02 08:32:56 compute-0 systemd[1]: libpod-f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa.scope: Deactivated successfully.
Oct 02 08:32:56 compute-0 systemd[1]: libpod-f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa.scope: Consumed 1.157s CPU time.
Oct 02 08:32:56 compute-0 nova_compute[260603]: 2025-10-02 08:32:56.981 2 DEBUG nova.compute.manager [req-4b13be8b-78c3-49c7-9709-bf52fe9db22c req-749a170e-5cee-4e80-ba01-ed20fa2ca7ae 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Received event network-vif-deleted-b101d57e-4c57-4e8a-9a68-a3dded764a52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:32:57 compute-0 podman[332943]: 2025-10-02 08:32:57.043892704 +0000 UTC m=+0.048628889 container died f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:32:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 426 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.6 MiB/s wr, 458 op/s
Oct 02 08:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d54ae17cd6da76e53c1491b90b8af080e9694ba5db19dc45c4c2dd802f0f259f-merged.mount: Deactivated successfully.
Oct 02 08:32:57 compute-0 podman[332943]: 2025-10-02 08:32:57.119338415 +0000 UTC m=+0.124074560 container remove f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:32:57 compute-0 systemd[1]: libpod-conmon-f04703ae4c089f5adf1e5a503d112bc2e416b1d7236877b1a6c82677950536aa.scope: Deactivated successfully.
Oct 02 08:32:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:32:57 compute-0 sudo[332764]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:57 compute-0 sudo[332958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:32:57 compute-0 sudo[332958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:57 compute-0 sudo[332958]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:57 compute-0 sudo[332983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:32:57 compute-0 sudo[332983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:57 compute-0 sudo[332983]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:57 compute-0 sudo[333008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:32:57 compute-0 sudo[333008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:57 compute-0 sudo[333008]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:57 compute-0 sudo[333033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:32:57 compute-0 sudo[333033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:58 compute-0 podman[333100]: 2025-10-02 08:32:58.115575537 +0000 UTC m=+0.072479870 container create 2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.148 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7d8d42e9-4547-43c4-999e-096925053f6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.149 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:58 compute-0 systemd[1]: Started libpod-conmon-2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e.scope.
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.166 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:32:58 compute-0 podman[333100]: 2025-10-02 08:32:58.085435633 +0000 UTC m=+0.042340006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:32:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:32:58 compute-0 podman[333100]: 2025-10-02 08:32:58.217700557 +0000 UTC m=+0.174604940 container init 2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 08:32:58 compute-0 podman[333100]: 2025-10-02 08:32:58.229204503 +0000 UTC m=+0.186108796 container start 2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 08:32:58 compute-0 podman[333100]: 2025-10-02 08:32:58.232449564 +0000 UTC m=+0.189353947 container attach 2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:32:58 compute-0 jolly_proskuriakova[333119]: 167 167
Oct 02 08:32:58 compute-0 systemd[1]: libpod-2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e.scope: Deactivated successfully.
Oct 02 08:32:58 compute-0 podman[333100]: 2025-10-02 08:32:58.235455577 +0000 UTC m=+0.192359890 container died 2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.232 2 DEBUG nova.network.neutron [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Updating instance_info_cache with network_info: [{"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.242 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.243 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.255 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.256 2 INFO nova.compute.claims [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0de52d2267f78689e53428dee3788ff71c6621db62490c735a8d5c759d8ed321-merged.mount: Deactivated successfully.
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.268 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Releasing lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:32:58 compute-0 podman[333114]: 2025-10-02 08:32:58.269980209 +0000 UTC m=+0.102115469 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:32:58 compute-0 podman[333100]: 2025-10-02 08:32:58.27455045 +0000 UTC m=+0.231454743 container remove 2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 08:32:58 compute-0 systemd[1]: libpod-conmon-2a4961a5a6417a796e6a6b26f92db6aabd3770156c81d7043f992f31c8035e5e.scope: Deactivated successfully.
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.461 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:58 compute-0 podman[333162]: 2025-10-02 08:32:58.518438728 +0000 UTC m=+0.044552883 container create f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:32:58 compute-0 systemd[1]: Started libpod-conmon-f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800.scope.
Oct 02 08:32:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8820646f158335ca36fa672f0dfa94724ddca0e583abc691be715af9a98348f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:58 compute-0 podman[333162]: 2025-10-02 08:32:58.498265033 +0000 UTC m=+0.024379268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8820646f158335ca36fa672f0dfa94724ddca0e583abc691be715af9a98348f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8820646f158335ca36fa672f0dfa94724ddca0e583abc691be715af9a98348f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8820646f158335ca36fa672f0dfa94724ddca0e583abc691be715af9a98348f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:32:58 compute-0 podman[333162]: 2025-10-02 08:32:58.610384211 +0000 UTC m=+0.136498416 container init f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:32:58 compute-0 podman[333162]: 2025-10-02 08:32:58.622215668 +0000 UTC m=+0.148329863 container start f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 08:32:58 compute-0 podman[333162]: 2025-10-02 08:32:58.625977655 +0000 UTC m=+0.152091900 container attach f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:32:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:58.684 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:32:58.686 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:32:58 compute-0 ceph-mon[74477]: pgmap v1581: 305 pgs: 305 active+clean; 426 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.6 MiB/s wr, 458 op/s
Oct 02 08:32:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:32:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046181299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.901 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.908 2 DEBUG nova.compute.provider_tree [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.934 2 DEBUG nova.scheduler.client.report [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.968 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:58 compute-0 nova_compute[260603]: 2025-10-02 08:32:58.969 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.034 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.035 2 DEBUG nova.network.neutron [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:32:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 372 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 2.9 MiB/s wr, 461 op/s
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.057 2 INFO nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.078 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.170 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.172 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.173 2 INFO nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Creating image(s)
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.208 2 DEBUG nova.storage.rbd_utils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7d8d42e9-4547-43c4-999e-096925053f6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.237 2 DEBUG nova.storage.rbd_utils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7d8d42e9-4547-43c4-999e-096925053f6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.267 2 DEBUG nova.storage.rbd_utils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7d8d42e9-4547-43c4-999e-096925053f6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.273 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.329 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393964.3014538, 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.329 2 INFO nova.compute.manager [-] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] VM Stopped (Lifecycle Event)
Oct 02 08:32:59 compute-0 objective_maxwell[333180]: {
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:     "0": [
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:         {
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "devices": [
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "/dev/loop3"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             ],
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_name": "ceph_lv0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_size": "21470642176",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "name": "ceph_lv0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "tags": {
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cluster_name": "ceph",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.crush_device_class": "",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.encrypted": "0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osd_id": "0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.type": "block",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.vdo": "0"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             },
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "type": "block",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "vg_name": "ceph_vg0"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:         }
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:     ],
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:     "1": [
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:         {
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "devices": [
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "/dev/loop4"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             ],
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_name": "ceph_lv1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_size": "21470642176",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "name": "ceph_lv1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "tags": {
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cluster_name": "ceph",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.crush_device_class": "",
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.355 2 DEBUG nova.compute.manager [None req-5ef04265-f8fe-4913-a7f0-5c73dc5b30a6 - - - - - -] [instance: 56d0280c-1cff-4fc1-aaec-57f3dbad7ba5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.encrypted": "0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osd_id": "1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.type": "block",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.vdo": "0"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             },
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "type": "block",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "vg_name": "ceph_vg1"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:         }
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:     ],
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:     "2": [
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:         {
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "devices": [
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "/dev/loop5"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             ],
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_name": "ceph_lv2",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_size": "21470642176",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "name": "ceph_lv2",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "tags": {
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.cluster_name": "ceph",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.crush_device_class": "",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.encrypted": "0",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osd_id": "2",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.type": "block",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:                 "ceph.vdo": "0"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             },
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "type": "block",
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:             "vg_name": "ceph_vg2"
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:         }
Oct 02 08:32:59 compute-0 objective_maxwell[333180]:     ]
Oct 02 08:32:59 compute-0 objective_maxwell[333180]: }
Oct 02 08:32:59 compute-0 systemd[1]: libpod-f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800.scope: Deactivated successfully.
Oct 02 08:32:59 compute-0 podman[333162]: 2025-10-02 08:32:59.385248075 +0000 UTC m=+0.911362260 container died f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.393 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.394 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.395 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.395 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8820646f158335ca36fa672f0dfa94724ddca0e583abc691be715af9a98348f4-merged.mount: Deactivated successfully.
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.422 2 DEBUG nova.storage.rbd_utils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7d8d42e9-4547-43c4-999e-096925053f6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.426 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7d8d42e9-4547-43c4-999e-096925053f6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:32:59 compute-0 podman[333162]: 2025-10-02 08:32:59.443611185 +0000 UTC m=+0.969725350 container remove f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:32:59 compute-0 systemd[1]: libpod-conmon-f63de672517f95abb44829147387a772c0029e64a74482823702e25e2d552800.scope: Deactivated successfully.
Oct 02 08:32:59 compute-0 sudo[333033]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:59 compute-0 sudo[333298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:32:59 compute-0 sudo[333298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:59 compute-0 sudo[333298]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:59 compute-0 sudo[333341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:32:59 compute-0 sudo[333341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:59 compute-0 sudo[333341]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.719 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7d8d42e9-4547-43c4-999e-096925053f6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:32:59 compute-0 sudo[333366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:32:59 compute-0 sudo[333366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:59 compute-0 sudo[333366]: pam_unix(sudo:session): session closed for user root
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.782 2 DEBUG nova.storage.rbd_utils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image 7d8d42e9-4547-43c4-999e-096925053f6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:32:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2046181299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:32:59 compute-0 sudo[333407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:32:59 compute-0 sudo[333407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.887 2 DEBUG nova.objects.instance [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d8d42e9-4547-43c4-999e-096925053f6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.903 2 DEBUG nova.policy [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.907 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.908 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Ensure instance console log exists: /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.908 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.909 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:32:59 compute-0 nova_compute[260603]: 2025-10-02 08:32:59.909 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:00 compute-0 podman[333529]: 2025-10-02 08:33:00.125876726 +0000 UTC m=+0.047523696 container create cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:33:00 compute-0 systemd[1]: Started libpod-conmon-cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1.scope.
Oct 02 08:33:00 compute-0 podman[333529]: 2025-10-02 08:33:00.105847224 +0000 UTC m=+0.027494234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:33:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:33:00 compute-0 podman[333529]: 2025-10-02 08:33:00.246960922 +0000 UTC m=+0.168607922 container init cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jones, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:33:00 compute-0 podman[333529]: 2025-10-02 08:33:00.259279845 +0000 UTC m=+0.180926845 container start cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 08:33:00 compute-0 podman[333529]: 2025-10-02 08:33:00.26298632 +0000 UTC m=+0.184633290 container attach cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jones, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 08:33:00 compute-0 recursing_jones[333545]: 167 167
Oct 02 08:33:00 compute-0 systemd[1]: libpod-cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1.scope: Deactivated successfully.
Oct 02 08:33:00 compute-0 podman[333529]: 2025-10-02 08:33:00.267595303 +0000 UTC m=+0.189242293 container died cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 08:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e59651df60865aedf37611bd0dc1665fcd32ada8283ede7cfbb4bdc68ed020d-merged.mount: Deactivated successfully.
Oct 02 08:33:00 compute-0 podman[333529]: 2025-10-02 08:33:00.305142618 +0000 UTC m=+0.226789598 container remove cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jones, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:33:00 compute-0 systemd[1]: libpod-conmon-cb5f5ab51935bd54cd398a635f6a5bbb663aacf43e41239155e95e95fb7a09f1.scope: Deactivated successfully.
Oct 02 08:33:00 compute-0 podman[333569]: 2025-10-02 08:33:00.573935638 +0000 UTC m=+0.072140249 container create ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:33:00 compute-0 systemd[1]: Started libpod-conmon-ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b.scope.
Oct 02 08:33:00 compute-0 podman[333569]: 2025-10-02 08:33:00.5459535 +0000 UTC m=+0.044158151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:33:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b66ad7c99edde58ddca9b803ab0a13b80a7f19e4240fce91e6f0bef724d8cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b66ad7c99edde58ddca9b803ab0a13b80a7f19e4240fce91e6f0bef724d8cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b66ad7c99edde58ddca9b803ab0a13b80a7f19e4240fce91e6f0bef724d8cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b66ad7c99edde58ddca9b803ab0a13b80a7f19e4240fce91e6f0bef724d8cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:33:00 compute-0 podman[333569]: 2025-10-02 08:33:00.681612069 +0000 UTC m=+0.179816731 container init ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:33:00 compute-0 podman[333569]: 2025-10-02 08:33:00.6922665 +0000 UTC m=+0.190471061 container start ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:33:00 compute-0 podman[333569]: 2025-10-02 08:33:00.695029966 +0000 UTC m=+0.193234607 container attach ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 08:33:00 compute-0 ceph-mon[74477]: pgmap v1582: 305 pgs: 305 active+clean; 372 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 2.9 MiB/s wr, 461 op/s
Oct 02 08:33:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 372 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 914 KiB/s wr, 198 op/s
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.116 2 INFO nova.virt.libvirt.driver [-] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Instance destroyed successfully.
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.118 2 DEBUG nova.objects.instance [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'resources' on Instance uuid edb2feae-9638-44f1-83f5-0713116e913f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.140 2 DEBUG nova.virt.libvirt.vif [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2022045083',display_name='tempest-ServerActionsTestOtherB-server-2022045083',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2022045083',id=73,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-rzit6p9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member',shelved_at='2025-10-02T08:32:55.245668',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='20e9bdfc-2c59-4117-bbad-fa7996fe8b08'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:51Z,user_data=None,user_id='9020ed38b31d46f88625374b2a76aef6',uuid=edb2feae-9638-44f1-83f5-0713116e913f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.141 2 DEBUG nova.network.os_vif_util [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.142 2 DEBUG nova.network.os_vif_util [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0f:f7,bridge_name='br-int',has_traffic_filtering=True,id=dc2f1a95-f7f3-4da2-a25e-82d30b142db4,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc2f1a95-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.142 2 DEBUG os_vif [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0f:f7,bridge_name='br-int',has_traffic_filtering=True,id=dc2f1a95-f7f3-4da2-a25e-82d30b142db4,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc2f1a95-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc2f1a95-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.153 2 INFO os_vif [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0f:f7,bridge_name='br-int',has_traffic_filtering=True,id=dc2f1a95-f7f3-4da2-a25e-82d30b142db4,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc2f1a95-f7')
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.217 2 DEBUG nova.compute.manager [req-3f01a65c-c960-49f9-a716-7ff38998fecf req-11533fdb-e248-478f-8383-fc4008e17dad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Received event network-changed-dc2f1a95-f7f3-4da2-a25e-82d30b142db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.218 2 DEBUG nova.compute.manager [req-3f01a65c-c960-49f9-a716-7ff38998fecf req-11533fdb-e248-478f-8383-fc4008e17dad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Refreshing instance network info cache due to event network-changed-dc2f1a95-f7f3-4da2-a25e-82d30b142db4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.218 2 DEBUG oslo_concurrency.lockutils [req-3f01a65c-c960-49f9-a716-7ff38998fecf req-11533fdb-e248-478f-8383-fc4008e17dad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.218 2 DEBUG oslo_concurrency.lockutils [req-3f01a65c-c960-49f9-a716-7ff38998fecf req-11533fdb-e248-478f-8383-fc4008e17dad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.219 2 DEBUG nova.network.neutron [req-3f01a65c-c960-49f9-a716-7ff38998fecf req-11533fdb-e248-478f-8383-fc4008e17dad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Refreshing network info cache for port dc2f1a95-f7f3-4da2-a25e-82d30b142db4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.495 2 INFO nova.virt.libvirt.driver [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Deleting instance files /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f_del
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.496 2 INFO nova.virt.libvirt.driver [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Deletion of /var/lib/nova/instances/edb2feae-9638-44f1-83f5-0713116e913f_del complete
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.587 2 INFO nova.scheduler.client.report [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Deleted allocations for instance edb2feae-9638-44f1-83f5-0713116e913f
Oct 02 08:33:01 compute-0 crazy_moore[333586]: {
Oct 02 08:33:01 compute-0 crazy_moore[333586]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "osd_id": 2,
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "type": "bluestore"
Oct 02 08:33:01 compute-0 crazy_moore[333586]:     },
Oct 02 08:33:01 compute-0 crazy_moore[333586]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "osd_id": 1,
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "type": "bluestore"
Oct 02 08:33:01 compute-0 crazy_moore[333586]:     },
Oct 02 08:33:01 compute-0 crazy_moore[333586]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "osd_id": 0,
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:33:01 compute-0 crazy_moore[333586]:         "type": "bluestore"
Oct 02 08:33:01 compute-0 crazy_moore[333586]:     }
Oct 02 08:33:01 compute-0 crazy_moore[333586]: }
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.627 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.628 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:01 compute-0 systemd[1]: libpod-ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b.scope: Deactivated successfully.
Oct 02 08:33:01 compute-0 podman[333569]: 2025-10-02 08:33:01.654667292 +0000 UTC m=+1.152871873 container died ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 08:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b66ad7c99edde58ddca9b803ab0a13b80a7f19e4240fce91e6f0bef724d8cb-merged.mount: Deactivated successfully.
Oct 02 08:33:01 compute-0 podman[333569]: 2025-10-02 08:33:01.714795118 +0000 UTC m=+1.212999689 container remove ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:33:01 compute-0 systemd[1]: libpod-conmon-ed94f4b0e7d8cff0ddeff21f42b3537f008b31426eaba04e9a8d45f3acadbe7b.scope: Deactivated successfully.
Oct 02 08:33:01 compute-0 sudo[333407]: pam_unix(sudo:session): session closed for user root
Oct 02 08:33:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:33:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:33:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:33:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:33:01 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5489420f-1188-47d4-a7e7-af240c977398 does not exist
Oct 02 08:33:01 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6b405d98-9ce7-4b8c-9b77-74df41962f4d does not exist
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.787 2 DEBUG oslo_concurrency.processutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:01 compute-0 sudo[333650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:33:01 compute-0 sudo[333650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:33:01 compute-0 sudo[333650]: pam_unix(sudo:session): session closed for user root
Oct 02 08:33:01 compute-0 nova_compute[260603]: 2025-10-02 08:33:01.914 2 DEBUG nova.network.neutron [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Successfully created port: c05b45a2-ba93-4677-8f35-a2d934021ee8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:33:01 compute-0 sudo[333676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:33:01 compute-0 sudo[333676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:33:01 compute-0 sudo[333676]: pam_unix(sudo:session): session closed for user root
Oct 02 08:33:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 02 08:33:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 02 08:33:02 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 02 08:33:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389143563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:02 compute-0 nova_compute[260603]: 2025-10-02 08:33:02.287 2 DEBUG oslo_concurrency.processutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:02 compute-0 nova_compute[260603]: 2025-10-02 08:33:02.297 2 DEBUG nova.compute.provider_tree [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:02 compute-0 nova_compute[260603]: 2025-10-02 08:33:02.317 2 DEBUG nova.scheduler.client.report [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:02 compute-0 nova_compute[260603]: 2025-10-02 08:33:02.349 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:02 compute-0 nova_compute[260603]: 2025-10-02 08:33:02.408 2 DEBUG oslo_concurrency.lockutils [None req-128d6f25-1a9f-4d76-8d1d-8ed72c1b1ad0 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "edb2feae-9638-44f1-83f5-0713116e913f" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:02.688 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:02 compute-0 ceph-mon[74477]: pgmap v1583: 305 pgs: 305 active+clean; 372 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 914 KiB/s wr, 198 op/s
Oct 02 08:33:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:33:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:33:02 compute-0 ceph-mon[74477]: osdmap e227: 3 total, 3 up, 3 in
Oct 02 08:33:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1389143563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:03 compute-0 ovn_controller[152344]: 2025-10-02T08:33:03Z|00702|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:33:03 compute-0 ovn_controller[152344]: 2025-10-02T08:33:03Z|00703|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct 02 08:33:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 366 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.5 MiB/s wr, 117 op/s
Oct 02 08:33:03 compute-0 nova_compute[260603]: 2025-10-02 08:33:03.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:03 compute-0 nova_compute[260603]: 2025-10-02 08:33:03.454 2 DEBUG nova.network.neutron [req-3f01a65c-c960-49f9-a716-7ff38998fecf req-11533fdb-e248-478f-8383-fc4008e17dad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Updated VIF entry in instance network info cache for port dc2f1a95-f7f3-4da2-a25e-82d30b142db4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:03 compute-0 nova_compute[260603]: 2025-10-02 08:33:03.455 2 DEBUG nova.network.neutron [req-3f01a65c-c960-49f9-a716-7ff38998fecf req-11533fdb-e248-478f-8383-fc4008e17dad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Updating instance_info_cache with network_info: [{"id": "dc2f1a95-f7f3-4da2-a25e-82d30b142db4", "address": "fa:16:3e:b0:0f:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapdc2f1a95-f7", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:03 compute-0 nova_compute[260603]: 2025-10-02 08:33:03.481 2 DEBUG oslo_concurrency.lockutils [req-3f01a65c-c960-49f9-a716-7ff38998fecf req-11533fdb-e248-478f-8383-fc4008e17dad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-edb2feae-9638-44f1-83f5-0713116e913f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.147 2 DEBUG nova.network.neutron [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Successfully updated port: c05b45a2-ba93-4677-8f35-a2d934021ee8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.177 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-7d8d42e9-4547-43c4-999e-096925053f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.178 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-7d8d42e9-4547-43c4-999e-096925053f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.178 2 DEBUG nova.network.neutron [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.509 2 DEBUG nova.network.neutron [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:33:04 compute-0 ovn_controller[152344]: 2025-10-02T08:33:04Z|00704|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct 02 08:33:04 compute-0 ovn_controller[152344]: 2025-10-02T08:33:04Z|00705|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.638 2 DEBUG nova.compute.manager [req-833417f5-852d-4e4c-93d9-1a9e1282aa1f req-d0ec84fe-8d1f-4178-a4d6-981ede6fd274 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Received event network-changed-c05b45a2-ba93-4677-8f35-a2d934021ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.639 2 DEBUG nova.compute.manager [req-833417f5-852d-4e4c-93d9-1a9e1282aa1f req-d0ec84fe-8d1f-4178-a4d6-981ede6fd274 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Refreshing instance network info cache due to event network-changed-c05b45a2-ba93-4677-8f35-a2d934021ee8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:04 compute-0 nova_compute[260603]: 2025-10-02 08:33:04.639 2 DEBUG oslo_concurrency.lockutils [req-833417f5-852d-4e4c-93d9-1a9e1282aa1f req-d0ec84fe-8d1f-4178-a4d6-981ede6fd274 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7d8d42e9-4547-43c4-999e-096925053f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:04 compute-0 ceph-mon[74477]: pgmap v1585: 305 pgs: 305 active+clean; 366 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.5 MiB/s wr, 117 op/s
Oct 02 08:33:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 02 08:33:05 compute-0 nova_compute[260603]: 2025-10-02 08:33:05.714 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393970.712393, 50c9773b-004f-491e-abcf-6698fbd8ca3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:05 compute-0 nova_compute[260603]: 2025-10-02 08:33:05.715 2 INFO nova.compute.manager [-] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] VM Stopped (Lifecycle Event)
Oct 02 08:33:05 compute-0 nova_compute[260603]: 2025-10-02 08:33:05.732 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393970.732022, edb2feae-9638-44f1-83f5-0713116e913f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:05 compute-0 nova_compute[260603]: 2025-10-02 08:33:05.732 2 INFO nova.compute.manager [-] [instance: edb2feae-9638-44f1-83f5-0713116e913f] VM Stopped (Lifecycle Event)
Oct 02 08:33:05 compute-0 nova_compute[260603]: 2025-10-02 08:33:05.757 2 DEBUG nova.compute.manager [None req-22429c6c-dc93-4268-a17b-f8f9bc7e5224 - - - - - -] [instance: 50c9773b-004f-491e-abcf-6698fbd8ca3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:05 compute-0 nova_compute[260603]: 2025-10-02 08:33:05.762 2 DEBUG nova.compute.manager [None req-266ab2f3-3689-4a3e-94ee-7033cdca6e24 - - - - - -] [instance: edb2feae-9638-44f1-83f5-0713116e913f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.299 2 DEBUG nova.network.neutron [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Updating instance_info_cache with network_info: [{"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.326 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-7d8d42e9-4547-43c4-999e-096925053f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.327 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Instance network_info: |[{"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.327 2 DEBUG oslo_concurrency.lockutils [req-833417f5-852d-4e4c-93d9-1a9e1282aa1f req-d0ec84fe-8d1f-4178-a4d6-981ede6fd274 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7d8d42e9-4547-43c4-999e-096925053f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.328 2 DEBUG nova.network.neutron [req-833417f5-852d-4e4c-93d9-1a9e1282aa1f req-d0ec84fe-8d1f-4178-a4d6-981ede6fd274 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Refreshing network info cache for port c05b45a2-ba93-4677-8f35-a2d934021ee8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.333 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Start _get_guest_xml network_info=[{"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.340 2 WARNING nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.347 2 DEBUG nova.virt.libvirt.host [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.349 2 DEBUG nova.virt.libvirt.host [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.357 2 DEBUG nova.virt.libvirt.host [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.357 2 DEBUG nova.virt.libvirt.host [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.358 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.358 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.359 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.360 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.360 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.360 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.361 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.361 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.362 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.362 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.362 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.363 2 DEBUG nova.virt.hardware [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.368 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:06 compute-0 ceph-mon[74477]: pgmap v1586: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 02 08:33:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726692157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.819 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.855 2 DEBUG nova.storage.rbd_utils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7d8d42e9-4547-43c4-999e-096925053f6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.860 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.909 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393971.9070575, 6fc23d37-19fe-44e7-8525-17c199801726 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.910 2 INFO nova.compute.manager [-] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] VM Stopped (Lifecycle Event)
Oct 02 08:33:06 compute-0 nova_compute[260603]: 2025-10-02 08:33:06.959 2 DEBUG nova.compute.manager [None req-7c915160-218e-4046-a802-e5390eaf9a3b - - - - - -] [instance: 6fc23d37-19fe-44e7-8525-17c199801726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 02 08:33:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.170499) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393987170595, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 683, "num_deletes": 256, "total_data_size": 714100, "memory_usage": 727792, "flush_reason": "Manual Compaction"}
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393987178867, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 706628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32907, "largest_seqno": 33589, "table_properties": {"data_size": 703006, "index_size": 1402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8536, "raw_average_key_size": 19, "raw_value_size": 695543, "raw_average_value_size": 1577, "num_data_blocks": 61, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393947, "oldest_key_time": 1759393947, "file_creation_time": 1759393987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 8430 microseconds, and 5013 cpu microseconds.
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.178938) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 706628 bytes OK
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.178965) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.180483) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.180507) EVENT_LOG_v1 {"time_micros": 1759393987180499, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.180533) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 710431, prev total WAL file size 710431, number of live WAL files 2.
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.181294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323533' seq:0, type:0; will stop at (end)
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(690KB)], [71(8165KB)]
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393987181353, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9068514, "oldest_snapshot_seqno": -1}
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5679 keys, 8952085 bytes, temperature: kUnknown
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393987238274, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8952085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8912582, "index_size": 24225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 144020, "raw_average_key_size": 25, "raw_value_size": 8809009, "raw_average_value_size": 1551, "num_data_blocks": 983, "num_entries": 5679, "num_filter_entries": 5679, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.238568) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8952085 bytes
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.239838) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.1 rd, 157.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(25.5) write-amplify(12.7) OK, records in: 6207, records dropped: 528 output_compression: NoCompression
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.239869) EVENT_LOG_v1 {"time_micros": 1759393987239855, "job": 40, "event": "compaction_finished", "compaction_time_micros": 57010, "compaction_time_cpu_micros": 39074, "output_level": 6, "num_output_files": 1, "total_output_size": 8952085, "num_input_records": 6207, "num_output_records": 5679, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393987240290, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393987243099, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.181210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.243147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.243154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.243160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.243164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:33:07 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:33:07.243167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:33:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863131620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.349 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.351 2 DEBUG nova.virt.libvirt.vif [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-267111338',display_name='tempest-ServersTestJSON-server-267111338',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-267111338',id=75,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-zk4d7e9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:59Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=7d8d42e9-4547-43c4-999e-096925053f6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.352 2 DEBUG nova.network.os_vif_util [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.353 2 DEBUG nova.network.os_vif_util [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:63:68,bridge_name='br-int',has_traffic_filtering=True,id=c05b45a2-ba93-4677-8f35-a2d934021ee8,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc05b45a2-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.355 2 DEBUG nova.objects.instance [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d8d42e9-4547-43c4-999e-096925053f6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.374 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <uuid>7d8d42e9-4547-43c4-999e-096925053f6e</uuid>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <name>instance-0000004b</name>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestJSON-server-267111338</nova:name>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:33:06</nova:creationTime>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <nova:port uuid="c05b45a2-ba93-4677-8f35-a2d934021ee8">
Oct 02 08:33:07 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <system>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <entry name="serial">7d8d42e9-4547-43c4-999e-096925053f6e</entry>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <entry name="uuid">7d8d42e9-4547-43c4-999e-096925053f6e</entry>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </system>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <os>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   </os>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <features>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   </features>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7d8d42e9-4547-43c4-999e-096925053f6e_disk">
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7d8d42e9-4547-43c4-999e-096925053f6e_disk.config">
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:07 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:45:63:68"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <target dev="tapc05b45a2-ba"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e/console.log" append="off"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <video>
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </video>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:33:07 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:33:07 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:33:07 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:33:07 compute-0 nova_compute[260603]: </domain>
Oct 02 08:33:07 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.377 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Preparing to wait for external event network-vif-plugged-c05b45a2-ba93-4677-8f35-a2d934021ee8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.378 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.378 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.379 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.380 2 DEBUG nova.virt.libvirt.vif [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:32:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-267111338',display_name='tempest-ServersTestJSON-server-267111338',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-267111338',id=75,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-zk4d7e9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:32:59Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=7d8d42e9-4547-43c4-999e-096925053f6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.381 2 DEBUG nova.network.os_vif_util [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.382 2 DEBUG nova.network.os_vif_util [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:63:68,bridge_name='br-int',has_traffic_filtering=True,id=c05b45a2-ba93-4677-8f35-a2d934021ee8,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc05b45a2-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.383 2 DEBUG os_vif [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:63:68,bridge_name='br-int',has_traffic_filtering=True,id=c05b45a2-ba93-4677-8f35-a2d934021ee8,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc05b45a2-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc05b45a2-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc05b45a2-ba, col_values=(('external_ids', {'iface-id': 'c05b45a2-ba93-4677-8f35-a2d934021ee8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:63:68', 'vm-uuid': '7d8d42e9-4547-43c4-999e-096925053f6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:07 compute-0 NetworkManager[45129]: <info>  [1759393987.3953] manager: (tapc05b45a2-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.402 2 INFO os_vif [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:63:68,bridge_name='br-int',has_traffic_filtering=True,id=c05b45a2-ba93-4677-8f35-a2d934021ee8,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc05b45a2-ba')
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.474 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.475 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.475 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:45:63:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.476 2 INFO nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Using config drive
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.508 2 DEBUG nova.storage.rbd_utils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7d8d42e9-4547-43c4-999e-096925053f6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.558 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.560 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.560 2 INFO nova.compute.manager [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Shelving
Oct 02 08:33:07 compute-0 nova_compute[260603]: 2025-10-02 08:33:07.592 2 DEBUG nova.virt.libvirt.driver [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:33:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/726692157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1863131620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.000 2 INFO nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Creating config drive at /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e/disk.config
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.009 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvmua1xsu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.172 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvmua1xsu" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.212 2 DEBUG nova.storage.rbd_utils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7d8d42e9-4547-43c4-999e-096925053f6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.217 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e/disk.config 7d8d42e9-4547-43c4-999e-096925053f6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.265 2 DEBUG nova.network.neutron [req-833417f5-852d-4e4c-93d9-1a9e1282aa1f req-d0ec84fe-8d1f-4178-a4d6-981ede6fd274 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Updated VIF entry in instance network info cache for port c05b45a2-ba93-4677-8f35-a2d934021ee8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.267 2 DEBUG nova.network.neutron [req-833417f5-852d-4e4c-93d9-1a9e1282aa1f req-d0ec84fe-8d1f-4178-a4d6-981ede6fd274 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Updating instance_info_cache with network_info: [{"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.292 2 DEBUG oslo_concurrency.lockutils [req-833417f5-852d-4e4c-93d9-1a9e1282aa1f req-d0ec84fe-8d1f-4178-a4d6-981ede6fd274 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7d8d42e9-4547-43c4-999e-096925053f6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.410 2 DEBUG oslo_concurrency.processutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e/disk.config 7d8d42e9-4547-43c4-999e-096925053f6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.410 2 INFO nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Deleting local config drive /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e/disk.config because it was imported into RBD.
Oct 02 08:33:08 compute-0 kernel: tapc05b45a2-ba: entered promiscuous mode
Oct 02 08:33:08 compute-0 NetworkManager[45129]: <info>  [1759393988.4721] manager: (tapc05b45a2-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Oct 02 08:33:08 compute-0 ovn_controller[152344]: 2025-10-02T08:33:08Z|00706|binding|INFO|Claiming lport c05b45a2-ba93-4677-8f35-a2d934021ee8 for this chassis.
Oct 02 08:33:08 compute-0 ovn_controller[152344]: 2025-10-02T08:33:08Z|00707|binding|INFO|c05b45a2-ba93-4677-8f35-a2d934021ee8: Claiming fa:16:3e:45:63:68 10.100.0.10
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.483 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:63:68 10.100.0.10'], port_security=['fa:16:3e:45:63:68 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7d8d42e9-4547-43c4-999e-096925053f6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c05b45a2-ba93-4677-8f35-a2d934021ee8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.484 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c05b45a2-ba93-4677-8f35-a2d934021ee8 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 bound to our chassis
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.485 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:33:08 compute-0 ovn_controller[152344]: 2025-10-02T08:33:08Z|00708|binding|INFO|Setting lport c05b45a2-ba93-4677-8f35-a2d934021ee8 ovn-installed in OVS
Oct 02 08:33:08 compute-0 ovn_controller[152344]: 2025-10-02T08:33:08Z|00709|binding|INFO|Setting lport c05b45a2-ba93-4677-8f35-a2d934021ee8 up in Southbound
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a48bbbdb-5e58-4fb5-be8e-6f5049d7028d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:08 compute-0 systemd-udevd[333858]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:33:08 compute-0 systemd-machined[214636]: New machine qemu-84-instance-0000004b.
Oct 02 08:33:08 compute-0 NetworkManager[45129]: <info>  [1759393988.5471] device (tapc05b45a2-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:33:08 compute-0 NetworkManager[45129]: <info>  [1759393988.5480] device (tapc05b45a2-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:33:08 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-0000004b.
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.562 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[186b8447-f13d-4680-bc2a-29fac43b1e96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.565 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef31307d-a328-4472-bff4-6ed7ccdb6a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.597 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5311faa6-9e42-4a56-8613-44f758155fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.621 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfacb37-43bb-448a-9d3e-cf5f35c91bf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 25, 'rx_bytes': 1000, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 25, 'rx_bytes': 1000, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 25420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333870, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.653 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15de4097-194f-4e6e-b847-5a91ad0b43db]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333872, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333872, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.654 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.657 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.658 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.658 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:08.658 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:08 compute-0 ceph-mon[74477]: pgmap v1587: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.864 2 DEBUG nova.compute.manager [req-9510c7aa-efce-4c5d-a8e6-045358b6dddd req-42d91acf-b30b-4b20-a095-c81479637a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Received event network-vif-plugged-c05b45a2-ba93-4677-8f35-a2d934021ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.865 2 DEBUG oslo_concurrency.lockutils [req-9510c7aa-efce-4c5d-a8e6-045358b6dddd req-42d91acf-b30b-4b20-a095-c81479637a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.865 2 DEBUG oslo_concurrency.lockutils [req-9510c7aa-efce-4c5d-a8e6-045358b6dddd req-42d91acf-b30b-4b20-a095-c81479637a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.865 2 DEBUG oslo_concurrency.lockutils [req-9510c7aa-efce-4c5d-a8e6-045358b6dddd req-42d91acf-b30b-4b20-a095-c81479637a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:08 compute-0 nova_compute[260603]: 2025-10-02 08:33:08.866 2 DEBUG nova.compute.manager [req-9510c7aa-efce-4c5d-a8e6-045358b6dddd req-42d91acf-b30b-4b20-a095-c81479637a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Processing event network-vif-plugged-c05b45a2-ba93-4677-8f35-a2d934021ee8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:33:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 372 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.211 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393974.2100818, d107c637-880a-47fa-ac2d-c762781f296c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.212 2 INFO nova.compute.manager [-] [instance: d107c637-880a-47fa-ac2d-c762781f296c] VM Stopped (Lifecycle Event)
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.232 2 DEBUG nova.compute.manager [None req-e7752785-8307-4a9f-9e09-84edbe081739 - - - - - -] [instance: d107c637-880a-47fa-ac2d-c762781f296c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.672 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393989.6716254, 7d8d42e9-4547-43c4-999e-096925053f6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.672 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] VM Started (Lifecycle Event)
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.676 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.679 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.683 2 INFO nova.virt.libvirt.driver [-] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Instance spawned successfully.
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.685 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.696 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.702 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.716 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.717 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.717 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.718 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.719 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.719 2 DEBUG nova.virt.libvirt.driver [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.726 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.727 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393989.6719406, 7d8d42e9-4547-43c4-999e-096925053f6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.727 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] VM Paused (Lifecycle Event)
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.760 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.764 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393989.678913, 7d8d42e9-4547-43c4-999e-096925053f6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.765 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] VM Resumed (Lifecycle Event)
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.788 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.792 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.810 2 INFO nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Took 10.64 seconds to spawn the instance on the hypervisor.
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.811 2 DEBUG nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.816 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:09 compute-0 kernel: tap37e9c33f-0f (unregistering): left promiscuous mode
Oct 02 08:33:09 compute-0 NetworkManager[45129]: <info>  [1759393989.8599] device (tap37e9c33f-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:33:09 compute-0 ovn_controller[152344]: 2025-10-02T08:33:09Z|00710|binding|INFO|Releasing lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 from this chassis (sb_readonly=0)
Oct 02 08:33:09 compute-0 ovn_controller[152344]: 2025-10-02T08:33:09Z|00711|binding|INFO|Setting lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 down in Southbound
Oct 02 08:33:09 compute-0 ovn_controller[152344]: 2025-10-02T08:33:09Z|00712|binding|INFO|Removing iface tap37e9c33f-0f ovn-installed in OVS
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:09.888 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cc:f7 10.100.0.9'], port_security=['fa:16:3e:19:cc:f7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '49e7e668-b62c-4e35-a4e2-bba540000961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd09bd0a7-8be4-487a-8b24-ba3d4c0378f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=37e9c33f-0ff9-4138-a7b5-989ba3c016a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:09.890 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 unbound from our chassis
Oct 02 08:33:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:09.893 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:33:09 compute-0 nova_compute[260603]: 2025-10-02 08:33:09.905 2 INFO nova.compute.manager [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Took 11.68 seconds to build instance.
Oct 02 08:33:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:09.922 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e47cc8ac-ea81-4626-956b-3a1f70939894]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:09 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Oct 02 08:33:09 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Consumed 17.457s CPU time.
Oct 02 08:33:09 compute-0 systemd-machined[214636]: Machine qemu-68-instance-0000003d terminated.
Oct 02 08:33:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:09.959 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f771bf1-6c3c-4ef0-9cff-a623e972ac56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:09.964 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cb06f727-ea52-41a6-8578-096e558e9df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:09.997 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2e41e9-140b-48fb-b404-13c3195ff19d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:10.018 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df69b771-ab26-4ffd-9ea6-e29bc6b84a42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 23805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333925, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:10.045 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a3a226-9fef-4825-a662-c83a02cf8330]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474399, 'tstamp': 474399}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333926, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474402, 'tstamp': 474402}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333926, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:10.046 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:10.053 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef30d863-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:10.054 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:10.054 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef30d863-a0, col_values=(('external_ids', {'iface-id': 'd143de50-fc80-43b6-82e2-6651430a4a42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:10.055 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.074 2 DEBUG oslo_concurrency.lockutils [None req-8c277dc5-40df-44b7-b77c-be19dd7c5047 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.420 2 DEBUG nova.compute.manager [req-5fa46af1-b5d9-4990-aaa3-bac618ff9128 req-0585a16f-f00c-4a18-860a-5ab2bfcfab09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-unplugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.421 2 DEBUG oslo_concurrency.lockutils [req-5fa46af1-b5d9-4990-aaa3-bac618ff9128 req-0585a16f-f00c-4a18-860a-5ab2bfcfab09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.422 2 DEBUG oslo_concurrency.lockutils [req-5fa46af1-b5d9-4990-aaa3-bac618ff9128 req-0585a16f-f00c-4a18-860a-5ab2bfcfab09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.422 2 DEBUG oslo_concurrency.lockutils [req-5fa46af1-b5d9-4990-aaa3-bac618ff9128 req-0585a16f-f00c-4a18-860a-5ab2bfcfab09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.423 2 DEBUG nova.compute.manager [req-5fa46af1-b5d9-4990-aaa3-bac618ff9128 req-0585a16f-f00c-4a18-860a-5ab2bfcfab09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] No waiting events found dispatching network-vif-unplugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.423 2 WARNING nova.compute.manager [req-5fa46af1-b5d9-4990-aaa3-bac618ff9128 req-0585a16f-f00c-4a18-860a-5ab2bfcfab09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received unexpected event network-vif-unplugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for instance with vm_state active and task_state shelving.
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.638 2 INFO nova.virt.libvirt.driver [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance shutdown successfully after 3 seconds.
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.646 2 INFO nova.virt.libvirt.driver [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance destroyed successfully.
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.647 2 DEBUG nova.objects.instance [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'numa_topology' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:10 compute-0 ceph-mon[74477]: pgmap v1588: 305 pgs: 305 active+clean; 372 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.947 2 INFO nova.virt.libvirt.driver [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Beginning cold snapshot process
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.955 2 DEBUG nova.compute.manager [req-57f306bd-529a-4b04-957e-da574a650008 req-3cc2d6b8-7c70-435b-9398-2944981227cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Received event network-vif-plugged-c05b45a2-ba93-4677-8f35-a2d934021ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.956 2 DEBUG oslo_concurrency.lockutils [req-57f306bd-529a-4b04-957e-da574a650008 req-3cc2d6b8-7c70-435b-9398-2944981227cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.956 2 DEBUG oslo_concurrency.lockutils [req-57f306bd-529a-4b04-957e-da574a650008 req-3cc2d6b8-7c70-435b-9398-2944981227cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.956 2 DEBUG oslo_concurrency.lockutils [req-57f306bd-529a-4b04-957e-da574a650008 req-3cc2d6b8-7c70-435b-9398-2944981227cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.957 2 DEBUG nova.compute.manager [req-57f306bd-529a-4b04-957e-da574a650008 req-3cc2d6b8-7c70-435b-9398-2944981227cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] No waiting events found dispatching network-vif-plugged-c05b45a2-ba93-4677-8f35-a2d934021ee8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:10 compute-0 nova_compute[260603]: 2025-10-02 08:33:10.957 2 WARNING nova.compute.manager [req-57f306bd-529a-4b04-957e-da574a650008 req-3cc2d6b8-7c70-435b-9398-2944981227cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Received unexpected event network-vif-plugged-c05b45a2-ba93-4677-8f35-a2d934021ee8 for instance with vm_state active and task_state None.
Oct 02 08:33:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 372 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:33:11 compute-0 nova_compute[260603]: 2025-10-02 08:33:11.331 2 DEBUG nova.virt.libvirt.imagebackend [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:33:11 compute-0 nova_compute[260603]: 2025-10-02 08:33:11.565 2 DEBUG nova.storage.rbd_utils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(0b2800069755452ca40aa4a8f5d6868a) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:33:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 02 08:33:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 02 08:33:11 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 02 08:33:11 compute-0 nova_compute[260603]: 2025-10-02 08:33:11.899 2 DEBUG nova.storage.rbd_utils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] cloning vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk@0b2800069755452ca40aa4a8f5d6868a to images/9c97855a-bc91-414b-b335-de5e3b829e28 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.021 2 DEBUG nova.storage.rbd_utils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] flattening images/9c97855a-bc91-414b-b335-de5e3b829e28 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:33:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.435 2 DEBUG nova.storage.rbd_utils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] removing snapshot(0b2800069755452ca40aa4a8f5d6868a) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.558 2 DEBUG nova.compute.manager [req-a5b9a648-6732-4154-af69-9961d9328e62 req-15eff6dc-6dcb-402c-8d4f-5c5dfddf50d0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.559 2 DEBUG oslo_concurrency.lockutils [req-a5b9a648-6732-4154-af69-9961d9328e62 req-15eff6dc-6dcb-402c-8d4f-5c5dfddf50d0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.559 2 DEBUG oslo_concurrency.lockutils [req-a5b9a648-6732-4154-af69-9961d9328e62 req-15eff6dc-6dcb-402c-8d4f-5c5dfddf50d0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.560 2 DEBUG oslo_concurrency.lockutils [req-a5b9a648-6732-4154-af69-9961d9328e62 req-15eff6dc-6dcb-402c-8d4f-5c5dfddf50d0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.560 2 DEBUG nova.compute.manager [req-a5b9a648-6732-4154-af69-9961d9328e62 req-15eff6dc-6dcb-402c-8d4f-5c5dfddf50d0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] No waiting events found dispatching network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.561 2 WARNING nova.compute.manager [req-a5b9a648-6732-4154-af69-9961d9328e62 req-15eff6dc-6dcb-402c-8d4f-5c5dfddf50d0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received unexpected event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:33:12 compute-0 ceph-mon[74477]: pgmap v1589: 305 pgs: 305 active+clean; 372 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:33:12 compute-0 ceph-mon[74477]: osdmap e228: 3 total, 3 up, 3 in
Oct 02 08:33:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 02 08:33:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 02 08:33:12 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 02 08:33:12 compute-0 nova_compute[260603]: 2025-10-02 08:33:12.893 2 DEBUG nova.storage.rbd_utils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(snap) on rbd image(9c97855a-bc91-414b-b335-de5e3b829e28) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:33:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 419 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.8 MiB/s wr, 100 op/s
Oct 02 08:33:13 compute-0 nova_compute[260603]: 2025-10-02 08:33:13.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:13 compute-0 nova_compute[260603]: 2025-10-02 08:33:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:13 compute-0 nova_compute[260603]: 2025-10-02 08:33:13.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:33:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 02 08:33:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 02 08:33:13 compute-0 ceph-mon[74477]: osdmap e229: 3 total, 3 up, 3 in
Oct 02 08:33:13 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 02 08:33:13 compute-0 nova_compute[260603]: 2025-10-02 08:33:13.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.390 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7d8d42e9-4547-43c4-999e-096925053f6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.391 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.391 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.391 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.392 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.393 2 INFO nova.compute.manager [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Terminating instance
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.394 2 DEBUG nova.compute.manager [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 kernel: tapc05b45a2-ba (unregistering): left promiscuous mode
Oct 02 08:33:14 compute-0 NetworkManager[45129]: <info>  [1759393994.4542] device (tapc05b45a2-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:33:14 compute-0 ovn_controller[152344]: 2025-10-02T08:33:14Z|00713|binding|INFO|Releasing lport c05b45a2-ba93-4677-8f35-a2d934021ee8 from this chassis (sb_readonly=0)
Oct 02 08:33:14 compute-0 ovn_controller[152344]: 2025-10-02T08:33:14Z|00714|binding|INFO|Setting lport c05b45a2-ba93-4677-8f35-a2d934021ee8 down in Southbound
Oct 02 08:33:14 compute-0 ovn_controller[152344]: 2025-10-02T08:33:14Z|00715|binding|INFO|Removing iface tapc05b45a2-ba ovn-installed in OVS
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.477 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:63:68 10.100.0.10'], port_security=['fa:16:3e:45:63:68 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7d8d42e9-4547-43c4-999e-096925053f6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c05b45a2-ba93-4677-8f35-a2d934021ee8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.479 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c05b45a2-ba93-4677-8f35-a2d934021ee8 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.482 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.499 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6578743d-0cdf-42e3-9441-2d0c71201d26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:14 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Oct 02 08:33:14 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000004b.scope: Consumed 5.896s CPU time.
Oct 02 08:33:14 compute-0 systemd-machined[214636]: Machine qemu-84-instance-0000004b terminated.
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.548 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc3cbec-8ce7-4e24-8e50-e488e7d851d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.551 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7f31fc5e-703c-4975-ba2c-fcd2d9cbb38c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.593 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[73b92a6d-90a0-49bb-ac7c-9dffbe7d1ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.622 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a93bf9bf-511d-4189-aa1a-f6b7bd0a5c8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 27, 'rx_bytes': 1000, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 27, 'rx_bytes': 1000, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 25420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334094, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.639 2 INFO nova.virt.libvirt.driver [-] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Instance destroyed successfully.
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.639 2 DEBUG nova.objects.instance [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'resources' on Instance uuid 7d8d42e9-4547-43c4-999e-096925053f6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.643 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5470630-f96f-4a10-8adf-f03672ec410d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334102, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334102, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.644 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.654 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:14.656 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.663 2 DEBUG nova.virt.libvirt.vif [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-267111338',display_name='tempest-ServersTestJSON-server-267111338',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-267111338',id=75,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:33:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-zk4d7e9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',ima
ge_min_ram='0',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:33:12Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=7d8d42e9-4547-43c4-999e-096925053f6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.664 2 DEBUG nova.network.os_vif_util [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "address": "fa:16:3e:45:63:68", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc05b45a2-ba", "ovs_interfaceid": "c05b45a2-ba93-4677-8f35-a2d934021ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.664 2 DEBUG nova.network.os_vif_util [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:63:68,bridge_name='br-int',has_traffic_filtering=True,id=c05b45a2-ba93-4677-8f35-a2d934021ee8,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc05b45a2-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.665 2 DEBUG os_vif [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:63:68,bridge_name='br-int',has_traffic_filtering=True,id=c05b45a2-ba93-4677-8f35-a2d934021ee8,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc05b45a2-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.667 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc05b45a2-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:14 compute-0 nova_compute[260603]: 2025-10-02 08:33:14.677 2 INFO os_vif [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:63:68,bridge_name='br-int',has_traffic_filtering=True,id=c05b45a2-ba93-4677-8f35-a2d934021ee8,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc05b45a2-ba')
Oct 02 08:33:14 compute-0 ceph-mon[74477]: pgmap v1592: 305 pgs: 305 active+clean; 419 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.8 MiB/s wr, 100 op/s
Oct 02 08:33:14 compute-0 ceph-mon[74477]: osdmap e230: 3 total, 3 up, 3 in
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.019 2 INFO nova.virt.libvirt.driver [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Deleting instance files /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e_del
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.020 2 INFO nova.virt.libvirt.driver [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Deletion of /var/lib/nova/instances/7d8d42e9-4547-43c4-999e-096925053f6e_del complete
Oct 02 08:33:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 435 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.081 2 INFO nova.compute.manager [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.082 2 DEBUG oslo.service.loopingcall [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.082 2 DEBUG nova.compute.manager [-] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.082 2 DEBUG nova.network.neutron [-] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.460 2 INFO nova.virt.libvirt.driver [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Snapshot image upload complete
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.461 2 DEBUG nova.compute.manager [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.506 2 INFO nova.compute.manager [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Shelve offloading
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.514 2 INFO nova.virt.libvirt.driver [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance destroyed successfully.
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.514 2 DEBUG nova.compute.manager [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.517 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.518 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.518 2 DEBUG nova.network.neutron [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.907 2 DEBUG nova.network.neutron [-] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.924 2 INFO nova.compute.manager [-] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Took 0.84 seconds to deallocate network for instance.
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.976 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.977 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:15 compute-0 nova_compute[260603]: 2025-10-02 08:33:15.983 2 DEBUG nova.compute.manager [req-29f644fc-507c-4bd5-8fe5-88171fa42922 req-12948759-3abd-4342-9ad2-c0cbf8065182 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Received event network-vif-deleted-c05b45a2-ba93-4677-8f35-a2d934021ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:16 compute-0 nova_compute[260603]: 2025-10-02 08:33:16.427 2 DEBUG oslo_concurrency.processutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:16 compute-0 ceph-mon[74477]: pgmap v1594: 305 pgs: 305 active+clean; 435 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 02 08:33:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699042532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:16 compute-0 nova_compute[260603]: 2025-10-02 08:33:16.926 2 DEBUG oslo_concurrency.processutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:16 compute-0 nova_compute[260603]: 2025-10-02 08:33:16.931 2 DEBUG nova.compute.provider_tree [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:16 compute-0 nova_compute[260603]: 2025-10-02 08:33:16.958 2 DEBUG nova.scheduler.client.report [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:16 compute-0 nova_compute[260603]: 2025-10-02 08:33:16.984 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:17 compute-0 nova_compute[260603]: 2025-10-02 08:33:17.032 2 INFO nova.scheduler.client.report [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Deleted allocations for instance 7d8d42e9-4547-43c4-999e-096925053f6e
Oct 02 08:33:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 435 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 02 08:33:17 compute-0 nova_compute[260603]: 2025-10-02 08:33:17.090 2 DEBUG oslo_concurrency.lockutils [None req-8245380b-32a6-43ad-984e-3d1e27d0909b 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7d8d42e9-4547-43c4-999e-096925053f6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/699042532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:18 compute-0 nova_compute[260603]: 2025-10-02 08:33:18.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:18 compute-0 nova_compute[260603]: 2025-10-02 08:33:18.394 2 DEBUG nova.network.neutron [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:18 compute-0 nova_compute[260603]: 2025-10-02 08:33:18.458 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:18 compute-0 nova_compute[260603]: 2025-10-02 08:33:18.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:18 compute-0 nova_compute[260603]: 2025-10-02 08:33:18.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:33:18 compute-0 nova_compute[260603]: 2025-10-02 08:33:18.691 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:18 compute-0 nova_compute[260603]: 2025-10-02 08:33:18.692 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:18 compute-0 nova_compute[260603]: 2025-10-02 08:33:18.692 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:33:18 compute-0 ceph-mon[74477]: pgmap v1595: 305 pgs: 305 active+clean; 435 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 02 08:33:19 compute-0 podman[334152]: 2025-10-02 08:33:19.026641468 +0000 UTC m=+0.082347726 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:33:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 405 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 6.5 MiB/s wr, 310 op/s
Oct 02 08:33:19 compute-0 podman[334151]: 2025-10-02 08:33:19.057570007 +0000 UTC m=+0.114224194 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 08:33:19 compute-0 nova_compute[260603]: 2025-10-02 08:33:19.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.074 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updating instance_info_cache with network_info: [{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.097 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.098 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.099 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:20 compute-0 ceph-mon[74477]: pgmap v1596: 305 pgs: 305 active+clean; 405 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 6.5 MiB/s wr, 310 op/s
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.901 2 INFO nova.virt.libvirt.driver [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance destroyed successfully.
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.902 2 DEBUG nova.objects.instance [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'resources' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.921 2 DEBUG nova.virt.libvirt.vif [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1525105387',display_name='tempest-ServerActionsTestOtherB-server-1525105387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1525105387',id=61,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmeM0VXrLDuUgkvEcAKZLUawTz1v6B+3eBASOGcaRFlmF3ztxSFLOGPfQ5nMbtxqx6ZDoxMiinSb16iJLQrCBe+IsSQFaYXfW47SOpCqDkfThajOwmApFonqiBjUHNfHQ==',key_name='tempest-keypair-1019763483',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-fypdkou3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member',shelved_at='2025-10-02T08:33:15.461206',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='9c97855a-bc91-414b-b335-de5e3b829e28'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:33:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9020ed38b31d46f88625374b2a76aef6',uuid=49e7e668-b62c-4e35-a4e2-bba540000961,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": 
"ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.921 2 DEBUG nova.network.os_vif_util [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.922 2 DEBUG nova.network.os_vif_util [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.922 2 DEBUG os_vif [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37e9c33f-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:20 compute-0 nova_compute[260603]: 2025-10-02 08:33:20.933 2 INFO os_vif [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f')
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.054 2 DEBUG nova.compute.manager [req-148d478f-4008-45cd-a9ae-494067900153 req-3054800f-cb9f-4730-b81b-50924c923f2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.054 2 DEBUG nova.compute.manager [req-148d478f-4008-45cd-a9ae-494067900153 req-3054800f-cb9f-4730-b81b-50924c923f2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing instance network info cache due to event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.055 2 DEBUG oslo_concurrency.lockutils [req-148d478f-4008-45cd-a9ae-494067900153 req-3054800f-cb9f-4730-b81b-50924c923f2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.055 2 DEBUG oslo_concurrency.lockutils [req-148d478f-4008-45cd-a9ae-494067900153 req-3054800f-cb9f-4730-b81b-50924c923f2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 405 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 177 op/s
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.056 2 DEBUG nova.network.neutron [req-148d478f-4008-45cd-a9ae-494067900153 req-3054800f-cb9f-4730-b81b-50924c923f2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.235 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.235 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.269 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.340 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.340 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.345 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.346 2 INFO nova.compute.claims [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.444 2 INFO nova.virt.libvirt.driver [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deleting instance files /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961_del
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.444 2 INFO nova.virt.libvirt.driver [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deletion of /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961_del complete
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.720 2 INFO nova.scheduler.client.report [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Deleted allocations for instance 49e7e668-b62c-4e35-a4e2-bba540000961
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.777 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.788 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:21 compute-0 nova_compute[260603]: 2025-10-02 08:33:21.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:33:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4209464066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:33:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4209464066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 02 08:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 02 08:33:22 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 02 08:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310233441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.294 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.302 2 DEBUG nova.compute.provider_tree [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.322 2 DEBUG nova.scheduler.client.report [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.351 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.352 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.356 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.359 2 DEBUG nova.network.neutron [req-148d478f-4008-45cd-a9ae-494067900153 req-3054800f-cb9f-4730-b81b-50924c923f2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updated VIF entry in instance network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.360 2 DEBUG nova.network.neutron [req-148d478f-4008-45cd-a9ae-494067900153 req-3054800f-cb9f-4730-b81b-50924c923f2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.379 2 DEBUG oslo_concurrency.lockutils [req-148d478f-4008-45cd-a9ae-494067900153 req-3054800f-cb9f-4730-b81b-50924c923f2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.405 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.406 2 DEBUG nova.network.neutron [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.428 2 INFO nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.447 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.500 2 DEBUG oslo_concurrency.processutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.558 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.598 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.599 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.600 2 INFO nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Creating image(s)
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.632 2 DEBUG nova.storage.rbd_utils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image fcd663c5-c20d-477c-bf26-11eb72d0886f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.662 2 DEBUG nova.storage.rbd_utils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image fcd663c5-c20d-477c-bf26-11eb72d0886f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.699 2 DEBUG nova.storage.rbd_utils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image fcd663c5-c20d-477c-bf26-11eb72d0886f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.704 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.753 2 DEBUG nova.policy [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.795 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.797 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.798 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.799 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.831 2 DEBUG nova.storage.rbd_utils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image fcd663c5-c20d-477c-bf26-11eb72d0886f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.835 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fcd663c5-c20d-477c-bf26-11eb72d0886f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:22 compute-0 ceph-mon[74477]: pgmap v1597: 305 pgs: 305 active+clean; 405 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 177 op/s
Oct 02 08:33:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4209464066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:33:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4209464066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:33:22 compute-0 ceph-mon[74477]: osdmap e231: 3 total, 3 up, 3 in
Oct 02 08:33:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2310233441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2068564593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.972 2 DEBUG oslo_concurrency.processutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:22 compute-0 nova_compute[260603]: 2025-10-02 08:33:22.982 2 DEBUG nova.compute.provider_tree [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.002 2 DEBUG nova.scheduler.client.report [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 358 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 868 KiB/s wr, 86 op/s
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.075 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.138 2 DEBUG oslo_concurrency.lockutils [None req-e2c78da2-2f7d-45e2-b1d1-0055e7330421 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.162 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fcd663c5-c20d-477c-bf26-11eb72d0886f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.236 2 DEBUG nova.storage.rbd_utils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image fcd663c5-c20d-477c-bf26-11eb72d0886f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.341 2 DEBUG nova.objects.instance [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid fcd663c5-c20d-477c-bf26-11eb72d0886f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.353 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.353 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Ensure instance console log exists: /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.354 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.354 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.355 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.383 2 DEBUG nova.network.neutron [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Successfully created port: c42b44f6-e8fd-46c9-a075-0952da11cf02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:23 compute-0 nova_compute[260603]: 2025-10-02 08:33:23.540 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 08:33:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2068564593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.037 2 DEBUG nova.network.neutron [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Successfully updated port: c42b44f6-e8fd-46c9-a075-0952da11cf02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.052 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-fcd663c5-c20d-477c-bf26-11eb72d0886f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.052 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-fcd663c5-c20d-477c-bf26-11eb72d0886f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.052 2 DEBUG nova.network.neutron [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.277 2 DEBUG nova.network.neutron [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.528 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.529 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.530 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.556 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.623 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.623 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.639 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.666 2 DEBUG nova.compute.manager [req-e26343af-8c83-4b76-985e-0234920a1249 req-65e767a9-15d4-419f-9c09-4d9fec8193e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received event network-changed-c42b44f6-e8fd-46c9-a075-0952da11cf02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.666 2 DEBUG nova.compute.manager [req-e26343af-8c83-4b76-985e-0234920a1249 req-65e767a9-15d4-419f-9c09-4d9fec8193e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Refreshing instance network info cache due to event network-changed-c42b44f6-e8fd-46c9-a075-0952da11cf02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.667 2 DEBUG oslo_concurrency.lockutils [req-e26343af-8c83-4b76-985e-0234920a1249 req-65e767a9-15d4-419f-9c09-4d9fec8193e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fcd663c5-c20d-477c-bf26-11eb72d0886f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.691 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.692 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.705 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.707 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.707 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.716 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.717 2 INFO nova.compute.claims [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.788 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.905 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.906 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:24 compute-0 ceph-mon[74477]: pgmap v1599: 305 pgs: 305 active+clean; 358 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 868 KiB/s wr, 86 op/s
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.926 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:33:24 compute-0 nova_compute[260603]: 2025-10-02 08:33:24.950 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.022 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:25 compute-0 podman[334423]: 2025-10-02 08:33:25.026958682 +0000 UTC m=+0.087938189 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 08:33:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 339 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 1.7 MiB/s wr, 103 op/s
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.102 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393990.0999851, 49e7e668-b62c-4e35-a4e2-bba540000961 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.102 2 INFO nova.compute.manager [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Stopped (Lifecycle Event)
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.132 2 DEBUG nova.compute.manager [None req-9e7fd442-8160-4c05-823f-2d3a01652ba5 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.299 2 DEBUG nova.network.neutron [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Updating instance_info_cache with network_info: [{"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.321 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-fcd663c5-c20d-477c-bf26-11eb72d0886f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.321 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Instance network_info: |[{"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.321 2 DEBUG oslo_concurrency.lockutils [req-e26343af-8c83-4b76-985e-0234920a1249 req-65e767a9-15d4-419f-9c09-4d9fec8193e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fcd663c5-c20d-477c-bf26-11eb72d0886f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.321 2 DEBUG nova.network.neutron [req-e26343af-8c83-4b76-985e-0234920a1249 req-65e767a9-15d4-419f-9c09-4d9fec8193e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Refreshing network info cache for port c42b44f6-e8fd-46c9-a075-0952da11cf02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.324 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Start _get_guest_xml network_info=[{"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.328 2 WARNING nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.332 2 DEBUG nova.virt.libvirt.host [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.333 2 DEBUG nova.virt.libvirt.host [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.338 2 DEBUG nova.virt.libvirt.host [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.339 2 DEBUG nova.virt.libvirt.host [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.339 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.340 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.340 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.340 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.341 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.341 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.341 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.341 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.341 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.342 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.342 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.342 2 DEBUG nova.virt.hardware [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.345 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995273346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.457 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.466 2 DEBUG nova.compute.provider_tree [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.483 2 DEBUG nova.scheduler.client.report [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.516 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.517 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.524 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.532 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.533 2 INFO nova.compute.claims [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.593 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.594 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.626 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.655 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.759 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.809 2 DEBUG nova.policy [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6c3e0096dae34ce09545c8c4547dae81', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e1045858c7b24f1baf184c8469064740', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.811 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.813 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.813 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Creating image(s)
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.837 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/463092331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.862 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.889 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.893 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2995273346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/463092331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.946 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.970 2 DEBUG nova.storage.rbd_utils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image fcd663c5-c20d-477c-bf26-11eb72d0886f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:25 compute-0 nova_compute[260603]: 2025-10-02 08:33:25.975 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.028 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.029 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.030 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.030 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.053 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.057 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783072578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.286 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.306 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.330 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] resizing rbd image 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.351 2 DEBUG nova.compute.provider_tree [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.376 2 DEBUG nova.scheduler.client.report [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.402 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.403 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.405 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.409 2 DEBUG nova.objects.instance [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b8ef6ea-80ff-48a3-84ff-f057fc293169 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014193597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.420 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.420 2 INFO nova.compute.claims [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.434 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.434 2 DEBUG nova.virt.libvirt.vif [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1204804315',display_name='tempest-ServersTestJSON-server-1204804315',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1204804315',id=76,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-knt96idl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:22Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=fcd663c5-c20d-477c-bf26-11eb72d0886f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.435 2 DEBUG nova.network.os_vif_util [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.435 2 DEBUG nova.network.os_vif_util [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:eb:ae,bridge_name='br-int',has_traffic_filtering=True,id=c42b44f6-e8fd-46c9-a075-0952da11cf02,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc42b44f6-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.436 2 DEBUG nova.objects.instance [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid fcd663c5-c20d-477c-bf26-11eb72d0886f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.452 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.452 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Ensure instance console log exists: /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.452 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.453 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.453 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.469 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <uuid>fcd663c5-c20d-477c-bf26-11eb72d0886f</uuid>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <name>instance-0000004c</name>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersTestJSON-server-1204804315</nova:name>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:33:25</nova:creationTime>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <nova:port uuid="c42b44f6-e8fd-46c9-a075-0952da11cf02">
Oct 02 08:33:26 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <system>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <entry name="serial">fcd663c5-c20d-477c-bf26-11eb72d0886f</entry>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <entry name="uuid">fcd663c5-c20d-477c-bf26-11eb72d0886f</entry>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </system>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <os>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   </os>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <features>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   </features>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fcd663c5-c20d-477c-bf26-11eb72d0886f_disk">
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fcd663c5-c20d-477c-bf26-11eb72d0886f_disk.config">
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:26 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:67:eb:ae"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <target dev="tapc42b44f6-e8"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f/console.log" append="off"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <video>
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </video>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:33:26 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:33:26 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:33:26 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:33:26 compute-0 nova_compute[260603]: </domain>
Oct 02 08:33:26 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.471 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Preparing to wait for external event network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.471 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.472 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.472 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.474 2 DEBUG nova.virt.libvirt.vif [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1204804315',display_name='tempest-ServersTestJSON-server-1204804315',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1204804315',id=76,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-knt96idl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:22Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=fcd663c5-c20d-477c-bf26-11eb72d0886f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.474 2 DEBUG nova.network.os_vif_util [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.475 2 DEBUG nova.network.os_vif_util [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:eb:ae,bridge_name='br-int',has_traffic_filtering=True,id=c42b44f6-e8fd-46c9-a075-0952da11cf02,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc42b44f6-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.476 2 DEBUG os_vif [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:eb:ae,bridge_name='br-int',has_traffic_filtering=True,id=c42b44f6-e8fd-46c9-a075-0952da11cf02,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc42b44f6-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc42b44f6-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.485 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc42b44f6-e8, col_values=(('external_ids', {'iface-id': 'c42b44f6-e8fd-46c9-a075-0952da11cf02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:eb:ae', 'vm-uuid': 'fcd663c5-c20d-477c-bf26-11eb72d0886f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.490 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.491 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:26 compute-0 NetworkManager[45129]: <info>  [1759394006.5129] manager: (tapc42b44f6-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.516 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.524 2 INFO os_vif [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:eb:ae,bridge_name='br-int',has_traffic_filtering=True,id=c42b44f6-e8fd-46c9-a075-0952da11cf02,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc42b44f6-e8')
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.552 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.553 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.619 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.649 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.650 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.650 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:67:eb:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.651 2 INFO nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Using config drive
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.678 2 DEBUG nova.storage.rbd_utils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image fcd663c5-c20d-477c-bf26-11eb72d0886f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.684 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.729 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.731 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.731 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Creating image(s)
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.757 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.783 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.807 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.811 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.896 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.897 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.897 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.898 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.920 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.923 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:26 compute-0 ceph-mon[74477]: pgmap v1600: 305 pgs: 305 active+clean; 339 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 1.7 MiB/s wr, 103 op/s
Oct 02 08:33:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3783072578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3014193597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:26 compute-0 nova_compute[260603]: 2025-10-02 08:33:26.959 2 DEBUG nova.policy [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6c3e0096dae34ce09545c8c4547dae81', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e1045858c7b24f1baf184c8469064740', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 339 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 1.7 MiB/s wr, 103 op/s
Oct 02 08:33:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4907556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.149 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.160 2 DEBUG nova.compute.provider_tree [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.178 2 DEBUG nova.scheduler.client.report [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.182 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.211 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.212 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.215 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.215 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.215 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.215 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.313 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] resizing rbd image 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.352 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.352 2 DEBUG nova.network.neutron [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.375 2 INFO nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.427 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.435 2 DEBUG nova.objects.instance [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'migration_context' on Instance uuid 69c52a3e-2ba6-4468-8498-ceb30011bb4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.461 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.462 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Ensure instance console log exists: /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.462 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.463 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.464 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.532 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.534 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.534 2 INFO nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Creating image(s)
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.562 2 DEBUG nova.storage.rbd_utils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.593 2 DEBUG nova.storage.rbd_utils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.625 2 DEBUG nova.storage.rbd_utils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.629 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274226459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.690 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.733 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.734 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.735 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.735 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.758 2 DEBUG nova.storage.rbd_utils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.761 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.854 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.855 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.864 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.864 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.869 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.869 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:33:27
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Oct 02 08:33:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:33:27 compute-0 nova_compute[260603]: 2025-10-02 08:33:27.954 2 DEBUG nova.policy [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd2e113d3d74a43998ac8dbf246ae9095', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '629efd330be646b7a2941e0c83b86e0e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:33:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4907556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1274226459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.017 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.047 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Successfully created port: e15c13c8-9db2-4011-934e-7c302b8e26c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.087 2 DEBUG nova.storage.rbd_utils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] resizing rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.181 2 DEBUG nova.objects.instance [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'migration_context' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.201 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.201 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Ensure instance console log exists: /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.202 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.202 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.203 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.271 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.274 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3434MB free_disk=59.88835144042969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.274 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.275 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.289 2 INFO nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Creating config drive at /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f/disk.config
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.298 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyeug4_sd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.418 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7ac34b0c-8ced-417d-9442-8fda77804a34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.419 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.419 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance fcd663c5-c20d-477c-bf26-11eb72d0886f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.420 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 0b8ef6ea-80ff-48a3-84ff-f057fc293169 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.420 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 69c52a3e-2ba6-4468-8498-ceb30011bb4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.420 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 6df0942d-95db-4140-9c7b-b5c51ada92bd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.421 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.421 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.463 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyeug4_sd" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.496 2 DEBUG nova.storage.rbd_utils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image fcd663c5-c20d-477c-bf26-11eb72d0886f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.501 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f/disk.config fcd663c5-c20d-477c-bf26-11eb72d0886f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.655 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.697 2 DEBUG oslo_concurrency.processutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f/disk.config fcd663c5-c20d-477c-bf26-11eb72d0886f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.698 2 INFO nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Deleting local config drive /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f/disk.config because it was imported into RBD.
Oct 02 08:33:28 compute-0 kernel: tapc42b44f6-e8: entered promiscuous mode
Oct 02 08:33:28 compute-0 NetworkManager[45129]: <info>  [1759394008.7684] manager: (tapc42b44f6-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Oct 02 08:33:28 compute-0 ovn_controller[152344]: 2025-10-02T08:33:28Z|00716|binding|INFO|Claiming lport c42b44f6-e8fd-46c9-a075-0952da11cf02 for this chassis.
Oct 02 08:33:28 compute-0 ovn_controller[152344]: 2025-10-02T08:33:28Z|00717|binding|INFO|c42b44f6-e8fd-46c9-a075-0952da11cf02: Claiming fa:16:3e:67:eb:ae 10.100.0.8
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.779 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:eb:ae 10.100.0.8'], port_security=['fa:16:3e:67:eb:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fcd663c5-c20d-477c-bf26-11eb72d0886f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c42b44f6-e8fd-46c9-a075-0952da11cf02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.781 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c42b44f6-e8fd-46c9-a075-0952da11cf02 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 bound to our chassis
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.788 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:28 compute-0 ovn_controller[152344]: 2025-10-02T08:33:28Z|00718|binding|INFO|Setting lport c42b44f6-e8fd-46c9-a075-0952da11cf02 ovn-installed in OVS
Oct 02 08:33:28 compute-0 ovn_controller[152344]: 2025-10-02T08:33:28Z|00719|binding|INFO|Setting lport c42b44f6-e8fd-46c9-a075-0952da11cf02 up in Southbound
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.818 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f1bd5a-d213-46ec-9cc5-dc1a76fdfec3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:28 compute-0 systemd-machined[214636]: New machine qemu-85-instance-0000004c.
Oct 02 08:33:28 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-0000004c.
Oct 02 08:33:28 compute-0 systemd-udevd[335194]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.868 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8c3671-dfe3-46e4-95ff-d03be3129e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.872 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d666203a-5481-4139-8c1d-f647a0fcfb9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:28 compute-0 NetworkManager[45129]: <info>  [1759394008.8824] device (tapc42b44f6-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:33:28 compute-0 NetworkManager[45129]: <info>  [1759394008.8834] device (tapc42b44f6-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:33:28 compute-0 podman[335161]: 2025-10-02 08:33:28.92097974 +0000 UTC m=+0.106912078 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd)
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.921 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6ef2e3-1e09-4868-aac5-6182cb2785c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.942 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4711ba8-b784-4ff1-8200-0c3e63a3e4ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 29, 'rx_bytes': 1000, 'tx_bytes': 1362, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 29, 'rx_bytes': 1000, 'tx_bytes': 1362, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 25420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335215, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:28 compute-0 ceph-mon[74477]: pgmap v1601: 305 pgs: 305 active+clean; 339 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 1.7 MiB/s wr, 103 op/s
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.960 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1716d6-eb86-4688-b112-5d3db4922bc6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335217, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335217, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.962 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:28 compute-0 nova_compute[260603]: 2025-10-02 08:33:28.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.965 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.965 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.965 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:28.966 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 474 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 6.8 MiB/s wr, 140 op/s
Oct 02 08:33:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024518384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.163 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.171 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.190 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.215 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.216 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.636 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393994.6355505, 7d8d42e9-4547-43c4-999e-096925053f6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.637 2 INFO nova.compute.manager [-] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] VM Stopped (Lifecycle Event)
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.658 2 DEBUG nova.compute.manager [None req-96a51154-aeef-4c1b-b78f-a1ff98572a7b - - - - - -] [instance: 7d8d42e9-4547-43c4-999e-096925053f6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.682 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Successfully updated port: e15c13c8-9db2-4011-934e-7c302b8e26c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.699 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "refresh_cache-0b8ef6ea-80ff-48a3-84ff-f057fc293169" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.699 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquired lock "refresh_cache-0b8ef6ea-80ff-48a3-84ff-f057fc293169" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.700 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.703 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394009.702626, fcd663c5-c20d-477c-bf26-11eb72d0886f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.703 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] VM Started (Lifecycle Event)
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.720 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.725 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394009.7027268, fcd663c5-c20d-477c-bf26-11eb72d0886f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.725 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] VM Paused (Lifecycle Event)
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.745 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.750 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.772 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.819 2 DEBUG nova.network.neutron [req-e26343af-8c83-4b76-985e-0234920a1249 req-65e767a9-15d4-419f-9c09-4d9fec8193e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Updated VIF entry in instance network info cache for port c42b44f6-e8fd-46c9-a075-0952da11cf02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.820 2 DEBUG nova.network.neutron [req-e26343af-8c83-4b76-985e-0234920a1249 req-65e767a9-15d4-419f-9c09-4d9fec8193e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Updating instance_info_cache with network_info: [{"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.852 2 DEBUG oslo_concurrency.lockutils [req-e26343af-8c83-4b76-985e-0234920a1249 req-65e767a9-15d4-419f-9c09-4d9fec8193e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fcd663c5-c20d-477c-bf26-11eb72d0886f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:29 compute-0 nova_compute[260603]: 2025-10-02 08:33:29.933 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:33:29 compute-0 ceph-mon[74477]: pgmap v1602: 305 pgs: 305 active+clean; 474 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 6.8 MiB/s wr, 140 op/s
Oct 02 08:33:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1024518384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.006 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.006 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.007 2 INFO nova.compute.manager [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Unshelving
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.045 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Successfully created port: f56f40e0-799f-47a4-a095-002a975375fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.104 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.105 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.132 2 DEBUG nova.objects.instance [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'pci_requests' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.152 2 DEBUG nova.objects.instance [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'numa_topology' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.170 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.171 2 INFO nova.compute.claims [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.183 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.306 2 DEBUG nova.network.neutron [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Successfully created port: 11350006-4c80-4e3a-a271-5230799be1ba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.397 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.692 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Updating instance_info_cache with network_info: [{"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.717 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Releasing lock "refresh_cache-0b8ef6ea-80ff-48a3-84ff-f057fc293169" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.718 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Instance network_info: |[{"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.723 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Start _get_guest_xml network_info=[{"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.730 2 WARNING nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.735 2 DEBUG nova.virt.libvirt.host [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.736 2 DEBUG nova.virt.libvirt.host [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.740 2 DEBUG nova.virt.libvirt.host [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.740 2 DEBUG nova.virt.libvirt.host [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.741 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.742 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.743 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.743 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.744 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.744 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.745 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.745 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.746 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.746 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.747 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.747 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.755 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926306193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.836 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.842 2 DEBUG nova.compute.provider_tree [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.879 2 DEBUG nova.scheduler.client.report [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.910 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2926306193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.990 2 DEBUG nova.compute.manager [req-c5baffed-cbc3-4138-a568-ce13b69bbe39 req-efdfbc55-bb9a-42f5-becf-a3484e401e89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received event network-changed-e15c13c8-9db2-4011-934e-7c302b8e26c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.991 2 DEBUG nova.compute.manager [req-c5baffed-cbc3-4138-a568-ce13b69bbe39 req-efdfbc55-bb9a-42f5-becf-a3484e401e89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Refreshing instance network info cache due to event network-changed-e15c13c8-9db2-4011-934e-7c302b8e26c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.991 2 DEBUG oslo_concurrency.lockutils [req-c5baffed-cbc3-4138-a568-ce13b69bbe39 req-efdfbc55-bb9a-42f5-becf-a3484e401e89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-0b8ef6ea-80ff-48a3-84ff-f057fc293169" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.992 2 DEBUG oslo_concurrency.lockutils [req-c5baffed-cbc3-4138-a568-ce13b69bbe39 req-efdfbc55-bb9a-42f5-becf-a3484e401e89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-0b8ef6ea-80ff-48a3-84ff-f057fc293169" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.992 2 DEBUG nova.network.neutron [req-c5baffed-cbc3-4138-a568-ce13b69bbe39 req-efdfbc55-bb9a-42f5-becf-a3484e401e89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Refreshing network info cache for port e15c13c8-9db2-4011-934e-7c302b8e26c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.997 2 DEBUG nova.compute.manager [req-22701efe-ee0f-4f43-b16e-3f8ca7351cce req-07d8f81c-9762-4f3b-a3d8-54e7e2ff608b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received event network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.998 2 DEBUG oslo_concurrency.lockutils [req-22701efe-ee0f-4f43-b16e-3f8ca7351cce req-07d8f81c-9762-4f3b-a3d8-54e7e2ff608b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.998 2 DEBUG oslo_concurrency.lockutils [req-22701efe-ee0f-4f43-b16e-3f8ca7351cce req-07d8f81c-9762-4f3b-a3d8-54e7e2ff608b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:30 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.999 2 DEBUG oslo_concurrency.lockutils [req-22701efe-ee0f-4f43-b16e-3f8ca7351cce req-07d8f81c-9762-4f3b-a3d8-54e7e2ff608b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:30.999 2 DEBUG nova.compute.manager [req-22701efe-ee0f-4f43-b16e-3f8ca7351cce req-07d8f81c-9762-4f3b-a3d8-54e7e2ff608b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Processing event network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.001 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.005 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394011.0053859, fcd663c5-c20d-477c-bf26-11eb72d0886f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.006 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] VM Resumed (Lifecycle Event)
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.011 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.021 2 INFO nova.virt.libvirt.driver [-] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Instance spawned successfully.
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.022 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.051 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.061 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 474 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 6.8 MiB/s wr, 140 op/s
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.062 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.062 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.063 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.063 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.064 2 DEBUG nova.virt.libvirt.driver [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.068 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.102 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.125 2 INFO nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Took 8.53 seconds to spawn the instance on the hypervisor.
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.126 2 DEBUG nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.195 2 INFO nova.compute.manager [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Took 9.88 seconds to build instance.
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.213 2 DEBUG oslo_concurrency.lockutils [None req-4fa848ef-5512-4870-aa8d-6e25ea94fdcc 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942104630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.245 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.263 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.267 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.319 2 INFO nova.network.neutron [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450751200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.734 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.735 2 DEBUG nova.virt.libvirt.vif [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1935269911',display_name='tempest-tempest.common.compute-instance-1935269911-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1935269911-1',id=77,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-rzqhmsfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJSON-248615309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:25Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=0b8ef6ea-80ff-48a3-84ff-f057fc293169,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.736 2 DEBUG nova.network.os_vif_util [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.737 2 DEBUG nova.network.os_vif_util [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:42:29,bridge_name='br-int',has_traffic_filtering=True,id=e15c13c8-9db2-4011-934e-7c302b8e26c0,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15c13c8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.738 2 DEBUG nova.objects.instance [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b8ef6ea-80ff-48a3-84ff-f057fc293169 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.753 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <uuid>0b8ef6ea-80ff-48a3-84ff-f057fc293169</uuid>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <name>instance-0000004d</name>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <nova:name>tempest-tempest.common.compute-instance-1935269911-1</nova:name>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:33:30</nova:creationTime>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <nova:user uuid="6c3e0096dae34ce09545c8c4547dae81">tempest-MultipleCreateTestJSON-248615309-project-member</nova:user>
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <nova:project uuid="e1045858c7b24f1baf184c8469064740">tempest-MultipleCreateTestJSON-248615309</nova:project>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <nova:port uuid="e15c13c8-9db2-4011-934e-7c302b8e26c0">
Oct 02 08:33:31 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <system>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <entry name="serial">0b8ef6ea-80ff-48a3-84ff-f057fc293169</entry>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <entry name="uuid">0b8ef6ea-80ff-48a3-84ff-f057fc293169</entry>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </system>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <os>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   </os>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <features>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   </features>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk">
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk.config">
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:31 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:b0:42:29"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <target dev="tape15c13c8-9d"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169/console.log" append="off"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <video>
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </video>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:33:31 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:33:31 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:33:31 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:33:31 compute-0 nova_compute[260603]: </domain>
Oct 02 08:33:31 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.755 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Preparing to wait for external event network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.755 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.756 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.756 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.757 2 DEBUG nova.virt.libvirt.vif [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1935269911',display_name='tempest-tempest.common.compute-instance-1935269911-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1935269911-1',id=77,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-rzqhmsfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJSON-248615309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:25Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=0b8ef6ea-80ff-48a3-84ff-f057fc293169,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.757 2 DEBUG nova.network.os_vif_util [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.758 2 DEBUG nova.network.os_vif_util [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:42:29,bridge_name='br-int',has_traffic_filtering=True,id=e15c13c8-9db2-4011-934e-7c302b8e26c0,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15c13c8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.758 2 DEBUG os_vif [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:42:29,bridge_name='br-int',has_traffic_filtering=True,id=e15c13c8-9db2-4011-934e-7c302b8e26c0,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15c13c8-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape15c13c8-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.765 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape15c13c8-9d, col_values=(('external_ids', {'iface-id': 'e15c13c8-9db2-4011-934e-7c302b8e26c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:42:29', 'vm-uuid': '0b8ef6ea-80ff-48a3-84ff-f057fc293169'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:31 compute-0 NetworkManager[45129]: <info>  [1759394011.7686] manager: (tape15c13c8-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.776 2 INFO os_vif [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:42:29,bridge_name='br-int',has_traffic_filtering=True,id=e15c13c8-9db2-4011-934e-7c302b8e26c0,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15c13c8-9d')
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.839 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.840 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.840 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No VIF found with MAC fa:16:3e:b0:42:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.841 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Using config drive
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.868 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.900 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.901 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.901 2 DEBUG nova.network.neutron [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:31 compute-0 ceph-mon[74477]: pgmap v1603: 305 pgs: 305 active+clean; 474 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 6.8 MiB/s wr, 140 op/s
Oct 02 08:33:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2942104630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1450751200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:31 compute-0 nova_compute[260603]: 2025-10-02 08:33:31.999 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Successfully updated port: f56f40e0-799f-47a4-a095-002a975375fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.018 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "refresh_cache-69c52a3e-2ba6-4468-8498-ceb30011bb4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.019 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquired lock "refresh_cache-69c52a3e-2ba6-4468-8498-ceb30011bb4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.019 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.351 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Creating config drive at /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169/disk.config
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.361 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_o7jivhn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.410 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.503 2 DEBUG nova.network.neutron [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Successfully updated port: 11350006-4c80-4e3a-a271-5230799be1ba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.511 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_o7jivhn" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.552 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.557 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169/disk.config 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.615 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.616 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquired lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.616 2 DEBUG nova.network.neutron [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.755 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169/disk.config 0b8ef6ea-80ff-48a3-84ff-f057fc293169_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.756 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Deleting local config drive /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169/disk.config because it was imported into RBD.
Oct 02 08:33:32 compute-0 kernel: tape15c13c8-9d: entered promiscuous mode
Oct 02 08:33:32 compute-0 NetworkManager[45129]: <info>  [1759394012.8126] manager: (tape15c13c8-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Oct 02 08:33:32 compute-0 ovn_controller[152344]: 2025-10-02T08:33:32Z|00720|binding|INFO|Claiming lport e15c13c8-9db2-4011-934e-7c302b8e26c0 for this chassis.
Oct 02 08:33:32 compute-0 ovn_controller[152344]: 2025-10-02T08:33:32Z|00721|binding|INFO|e15c13c8-9db2-4011-934e-7c302b8e26c0: Claiming fa:16:3e:b0:42:29 10.100.0.14
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.824 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:42:29 10.100.0.14'], port_security=['fa:16:3e:b0:42:29 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0b8ef6ea-80ff-48a3-84ff-f057fc293169', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2ef776-a59b-4369-95f5-69103d78f3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1045858c7b24f1baf184c8469064740', 'neutron:revision_number': '2', 'neutron:security_group_ids': '524655b6-f9e6-47ce-bebb-b7967ffdd769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=798444a3-a368-47d6-a395-3d1a0bcb7c88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e15c13c8-9db2-4011-934e-7c302b8e26c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.827 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e15c13c8-9db2-4011-934e-7c302b8e26c0 in datapath fe2ef776-a59b-4369-95f5-69103d78f3da bound to our chassis
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.829 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.846 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3bc9e07-0247-41a0-8634-fdf109d19eca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.847 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfe2ef776-a1 in ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.843 2 DEBUG nova.network.neutron [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.849 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfe2ef776-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.849 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffed305-828e-4693-b425-6fc4b872ffbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:32 compute-0 ovn_controller[152344]: 2025-10-02T08:33:32Z|00722|binding|INFO|Setting lport e15c13c8-9db2-4011-934e-7c302b8e26c0 ovn-installed in OVS
Oct 02 08:33:32 compute-0 ovn_controller[152344]: 2025-10-02T08:33:32Z|00723|binding|INFO|Setting lport e15c13c8-9db2-4011-934e-7c302b8e26c0 up in Southbound
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.852 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbee69b-7145-4255-bbff-4074cae4494c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:32 compute-0 nova_compute[260603]: 2025-10-02 08:33:32.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.873 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[07b5d3c3-5050-4f89-8d69-af615de53e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:32 compute-0 systemd-machined[214636]: New machine qemu-86-instance-0000004d.
Oct 02 08:33:32 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-0000004d.
Oct 02 08:33:32 compute-0 systemd-udevd[335423]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.906 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15b58f86-850b-4b24-af88-14b78af1d224]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:32 compute-0 NetworkManager[45129]: <info>  [1759394012.9131] device (tape15c13c8-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:33:32 compute-0 NetworkManager[45129]: <info>  [1759394012.9155] device (tape15c13c8-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.949 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3484065d-eb55-43d5-9e8b-c9d6b9edee92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:32.953 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1406897b-8889-4734-98cb-3f6ab04d5896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:32 compute-0 NetworkManager[45129]: <info>  [1759394012.9552] manager: (tapfe2ef776-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.006 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b8a34b-daf6-43d7-9d9a-66e94c472e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.009 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[113da3e1-c0bb-4d00-987c-73d72e851d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 NetworkManager[45129]: <info>  [1759394013.0395] device (tapfe2ef776-a0): carrier: link connected
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.046 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f2731afe-35de-4055-8bd5-efc1223d4f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 511 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.8 MiB/s wr, 227 op/s
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.063 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a17f14c0-1db6-4418-80fa-67dba8077605]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe2ef776-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:01:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489164, 'reachable_time': 38004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335453, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.080 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a365b50f-5ffe-42bd-9447-a855752cc869]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:19e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489164, 'tstamp': 489164}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335454, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.096 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e93ab378-e649-409c-8323-d79ca7a32b23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe2ef776-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:01:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489164, 'reachable_time': 38004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335455, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.124 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dc91f36f-6aeb-4e6e-b67d-2ee86c024bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.180 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e864095-1d22-4a91-9536-44c36d34ff3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.181 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe2ef776-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.182 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.182 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe2ef776-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:33 compute-0 NetworkManager[45129]: <info>  [1759394013.2270] manager: (tapfe2ef776-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Oct 02 08:33:33 compute-0 kernel: tapfe2ef776-a0: entered promiscuous mode
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.230 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe2ef776-a0, col_values=(('external_ids', {'iface-id': '3752b866-0aac-4a57-acbb-4c574bfc2b06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:33 compute-0 ovn_controller[152344]: 2025-10-02T08:33:33Z|00724|binding|INFO|Releasing lport 3752b866-0aac-4a57-acbb-4c574bfc2b06 from this chassis (sb_readonly=0)
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.279 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fe2ef776-a59b-4369-95f5-69103d78f3da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fe2ef776-a59b-4369-95f5-69103d78f3da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.282 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff51d86-ee79-444b-bcb6-433b5b37fee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.283 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/fe2ef776-a59b-4369-95f5-69103d78f3da.pid.haproxy
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:33:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:33.285 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'env', 'PROCESS_TAG=haproxy-fe2ef776-a59b-4369-95f5-69103d78f3da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fe2ef776-a59b-4369-95f5-69103d78f3da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:33:33 compute-0 podman[335529]: 2025-10-02 08:33:33.779024821 +0000 UTC m=+0.088090284 container create abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 08:33:33 compute-0 podman[335529]: 2025-10-02 08:33:33.737057209 +0000 UTC m=+0.046122642 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:33:33 compute-0 systemd[1]: Started libpod-conmon-abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34.scope.
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.842 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394013.8418193, 0b8ef6ea-80ff-48a3-84ff-f057fc293169 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.843 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] VM Started (Lifecycle Event)
Oct 02 08:33:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec2650f0eccb08d7c825e16e7312724895155fcebf088d90fcc799c9fdbc131/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.872 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.877 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394013.8419673, 0b8ef6ea-80ff-48a3-84ff-f057fc293169 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.877 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] VM Paused (Lifecycle Event)
Oct 02 08:33:33 compute-0 podman[335529]: 2025-10-02 08:33:33.89112643 +0000 UTC m=+0.200191903 container init abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.896 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:33 compute-0 podman[335529]: 2025-10-02 08:33:33.897147156 +0000 UTC m=+0.206212629 container start abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.899 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.916 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:33 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[335544]: [NOTICE]   (335548) : New worker (335550) forked
Oct 02 08:33:33 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[335544]: [NOTICE]   (335548) : Loading success.
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.939 2 DEBUG nova.network.neutron [req-c5baffed-cbc3-4138-a568-ce13b69bbe39 req-efdfbc55-bb9a-42f5-becf-a3484e401e89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Updated VIF entry in instance network info cache for port e15c13c8-9db2-4011-934e-7c302b8e26c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.940 2 DEBUG nova.network.neutron [req-c5baffed-cbc3-4138-a568-ce13b69bbe39 req-efdfbc55-bb9a-42f5-becf-a3484e401e89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Updating instance_info_cache with network_info: [{"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:33 compute-0 nova_compute[260603]: 2025-10-02 08:33:33.954 2 DEBUG oslo_concurrency.lockutils [req-c5baffed-cbc3-4138-a568-ce13b69bbe39 req-efdfbc55-bb9a-42f5-becf-a3484e401e89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-0b8ef6ea-80ff-48a3-84ff-f057fc293169" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:34 compute-0 ceph-mon[74477]: pgmap v1604: 305 pgs: 305 active+clean; 511 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.8 MiB/s wr, 227 op/s
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.340 2 DEBUG nova.compute.manager [req-4b02fb91-5be4-4235-9baf-17eae425c991 req-9e9ccab6-f4bd-4bb2-8944-90c315979f55 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.341 2 DEBUG nova.compute.manager [req-4b02fb91-5be4-4235-9baf-17eae425c991 req-9e9ccab6-f4bd-4bb2-8944-90c315979f55 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing instance network info cache due to event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.341 2 DEBUG oslo_concurrency.lockutils [req-4b02fb91-5be4-4235-9baf-17eae425c991 req-9e9ccab6-f4bd-4bb2-8944-90c315979f55 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.360 2 DEBUG nova.network.neutron [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Updating instance_info_cache with network_info: [{"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.374 2 DEBUG nova.compute.manager [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received event network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.375 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.375 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.376 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.376 2 DEBUG nova.compute.manager [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] No waiting events found dispatching network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.377 2 WARNING nova.compute.manager [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received unexpected event network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 for instance with vm_state active and task_state None.
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.377 2 DEBUG nova.compute.manager [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received event network-changed-f56f40e0-799f-47a4-a095-002a975375fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.377 2 DEBUG nova.compute.manager [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Refreshing instance network info cache due to event network-changed-f56f40e0-799f-47a4-a095-002a975375fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.378 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-69c52a3e-2ba6-4468-8498-ceb30011bb4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.383 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Releasing lock "refresh_cache-69c52a3e-2ba6-4468-8498-ceb30011bb4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.383 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Instance network_info: |[{"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.383 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-69c52a3e-2ba6-4468-8498-ceb30011bb4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.384 2 DEBUG nova.network.neutron [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Refreshing network info cache for port f56f40e0-799f-47a4-a095-002a975375fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.386 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Start _get_guest_xml network_info=[{"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.390 2 WARNING nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.395 2 DEBUG nova.virt.libvirt.host [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.396 2 DEBUG nova.virt.libvirt.host [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.401 2 DEBUG nova.virt.libvirt.host [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.402 2 DEBUG nova.virt.libvirt.host [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.402 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.403 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.403 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.403 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.404 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.404 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.404 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.405 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.405 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.405 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.406 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.406 2 DEBUG nova.virt.hardware [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.409 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:34.819 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:34.821 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314637905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.874 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.908 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:34 compute-0 nova_compute[260603]: 2025-10-02 08:33:34.914 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 511 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 188 op/s
Oct 02 08:33:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1314637905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.195 2 DEBUG nova.network.neutron [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.217 2 DEBUG nova.network.neutron [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Updating instance_info_cache with network_info: [{"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.224 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.226 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.226 2 INFO nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating image(s)
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.249 2 DEBUG nova.storage.rbd_utils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.253 2 DEBUG nova.objects.instance [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'trusted_certs' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.255 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Releasing lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.256 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance network_info: |[{"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.256 2 DEBUG oslo_concurrency.lockutils [req-4b02fb91-5be4-4235-9baf-17eae425c991 req-9e9ccab6-f4bd-4bb2-8944-90c315979f55 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.256 2 DEBUG nova.network.neutron [req-4b02fb91-5be4-4235-9baf-17eae425c991 req-9e9ccab6-f4bd-4bb2-8944-90c315979f55 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.260 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Start _get_guest_xml network_info=[{"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.265 2 WARNING nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.271 2 DEBUG nova.virt.libvirt.host [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.271 2 DEBUG nova.virt.libvirt.host [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.275 2 DEBUG nova.virt.libvirt.host [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.275 2 DEBUG nova.virt.libvirt.host [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.275 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.276 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.276 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.277 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.277 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.277 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.277 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.278 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.278 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.278 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.278 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.279 2 DEBUG nova.virt.hardware [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.282 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.346 2 DEBUG nova.storage.rbd_utils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419463809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.371 2 DEBUG nova.storage.rbd_utils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.374 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "2864c21d5c9cd13e611336dc52d03e6a7b4aef9b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.375 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "2864c21d5c9cd13e611336dc52d03e6a7b4aef9b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.398 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.399 2 DEBUG nova.virt.libvirt.vif [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1935269911',display_name='tempest-tempest.common.compute-instance-1935269911-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1935269911-2',id=78,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-rzqhmsfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreate
TestJSON-248615309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:26Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=69c52a3e-2ba6-4468-8498-ceb30011bb4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.400 2 DEBUG nova.network.os_vif_util [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.401 2 DEBUG nova.network.os_vif_util [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:7b:52,bridge_name='br-int',has_traffic_filtering=True,id=f56f40e0-799f-47a4-a095-002a975375fa,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f40e0-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.402 2 DEBUG nova.objects.instance [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69c52a3e-2ba6-4468-8498-ceb30011bb4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.422 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <uuid>69c52a3e-2ba6-4468-8498-ceb30011bb4f</uuid>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <name>instance-0000004e</name>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <nova:name>tempest-tempest.common.compute-instance-1935269911-2</nova:name>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:33:34</nova:creationTime>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <nova:user uuid="6c3e0096dae34ce09545c8c4547dae81">tempest-MultipleCreateTestJSON-248615309-project-member</nova:user>
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <nova:project uuid="e1045858c7b24f1baf184c8469064740">tempest-MultipleCreateTestJSON-248615309</nova:project>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <nova:port uuid="f56f40e0-799f-47a4-a095-002a975375fa">
Oct 02 08:33:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <system>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <entry name="serial">69c52a3e-2ba6-4468-8498-ceb30011bb4f</entry>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <entry name="uuid">69c52a3e-2ba6-4468-8498-ceb30011bb4f</entry>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </system>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <os>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   </os>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <features>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   </features>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk">
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk.config">
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:71:7b:52"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <target dev="tapf56f40e0-79"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f/console.log" append="off"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <video>
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </video>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:33:35 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:33:35 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:33:35 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:33:35 compute-0 nova_compute[260603]: </domain>
Oct 02 08:33:35 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.427 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Preparing to wait for external event network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.427 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.428 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.428 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.429 2 DEBUG nova.virt.libvirt.vif [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1935269911',display_name='tempest-tempest.common.compute-instance-1935269911-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1935269911-2',id=78,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-rzqhmsfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJSON-248615309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:26Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=69c52a3e-2ba6-4468-8498-ceb30011bb4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.429 2 DEBUG nova.network.os_vif_util [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.430 2 DEBUG nova.network.os_vif_util [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:7b:52,bridge_name='br-int',has_traffic_filtering=True,id=f56f40e0-799f-47a4-a095-002a975375fa,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f40e0-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.430 2 DEBUG os_vif [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:7b:52,bridge_name='br-int',has_traffic_filtering=True,id=f56f40e0-799f-47a4-a095-002a975375fa,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f40e0-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf56f40e0-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf56f40e0-79, col_values=(('external_ids', {'iface-id': 'f56f40e0-799f-47a4-a095-002a975375fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:7b:52', 'vm-uuid': '69c52a3e-2ba6-4468-8498-ceb30011bb4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:35 compute-0 NetworkManager[45129]: <info>  [1759394015.4372] manager: (tapf56f40e0-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.444 2 INFO os_vif [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:7b:52,bridge_name='br-int',has_traffic_filtering=True,id=f56f40e0-799f-47a4-a095-002a975375fa,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f40e0-79')
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.505 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.505 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.505 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No VIF found with MAC fa:16:3e:71:7b:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.506 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Using config drive
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.526 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.661 2 DEBUG nova.virt.libvirt.imagebackend [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/9c97855a-bc91-414b-b335-de5e3b829e28/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/9c97855a-bc91-414b-b335-de5e3b829e28/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 08:33:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49394792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.726 2 DEBUG nova.virt.libvirt.imagebackend [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/9c97855a-bc91-414b-b335-de5e3b829e28/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.727 2 DEBUG nova.storage.rbd_utils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] cloning images/9c97855a-bc91-414b-b335-de5e3b829e28@snap to None/49e7e668-b62c-4e35-a4e2-bba540000961_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.759 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.787 2 DEBUG nova.storage.rbd_utils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.792 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.862 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "2864c21d5c9cd13e611336dc52d03e6a7b4aef9b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:35 compute-0 nova_compute[260603]: 2025-10-02 08:33:35.978 2 DEBUG nova.objects.instance [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'migration_context' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.039 2 DEBUG nova.storage.rbd_utils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] flattening vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:33:36 compute-0 ceph-mon[74477]: pgmap v1605: 305 pgs: 305 active+clean; 511 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 188 op/s
Oct 02 08:33:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/419463809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/49394792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.335 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Image rbd:vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.337 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.338 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Ensure instance console log exists: /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.339 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.340 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.341 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752196639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.347 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start _get_guest_xml network_info=[{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:33:07Z,direct_url=<?>,disk_format='raw',id=9c97855a-bc91-414b-b335-de5e3b829e28,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1525105387-shelved',owner='eda0caa41e4740148ab99d5ebf9e27ba',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:33:15Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.358 2 WARNING nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.364 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.366 2 DEBUG nova.virt.libvirt.vif [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1363390342',display_name='tempest-ServerRescueTestJSON-server-1363390342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1363390342',id=79,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='629efd330be646b7a2941e0c83b86e0e',ramdisk_id='',reservation_id='r-gcsl0g66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-846559261',owner_user_name='tempest-ServerRescueTestJSON-846559261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:27Z,user_data=None,user_id='d2e113d3d74a43998ac8dbf246ae9095',uuid=6df0942d-95db-4140-9c7b-b5c51ada92bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.367 2 DEBUG nova.network.os_vif_util [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converting VIF {"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.369 2 DEBUG nova.network.os_vif_util [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:93:e4,bridge_name='br-int',has_traffic_filtering=True,id=11350006-4c80-4e3a-a271-5230799be1ba,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11350006-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.371 2 DEBUG nova.objects.instance [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.374 2 DEBUG nova.virt.libvirt.host [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.375 2 DEBUG nova.virt.libvirt.host [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.380 2 DEBUG nova.virt.libvirt.host [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.381 2 DEBUG nova.virt.libvirt.host [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.382 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.383 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:33:07Z,direct_url=<?>,disk_format='raw',id=9c97855a-bc91-414b-b335-de5e3b829e28,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1525105387-shelved',owner='eda0caa41e4740148ab99d5ebf9e27ba',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:33:15Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.384 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.385 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.385 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.386 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.387 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.387 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.388 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.389 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.390 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.390 2 DEBUG nova.virt.hardware [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.391 2 DEBUG nova.objects.instance [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'vcpu_model' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.396 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <uuid>6df0942d-95db-4140-9c7b-b5c51ada92bd</uuid>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <name>instance-0000004f</name>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueTestJSON-server-1363390342</nova:name>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:33:35</nova:creationTime>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <nova:user uuid="d2e113d3d74a43998ac8dbf246ae9095">tempest-ServerRescueTestJSON-846559261-project-member</nova:user>
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <nova:project uuid="629efd330be646b7a2941e0c83b86e0e">tempest-ServerRescueTestJSON-846559261</nova:project>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <nova:port uuid="11350006-4c80-4e3a-a271-5230799be1ba">
Oct 02 08:33:36 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <system>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <entry name="serial">6df0942d-95db-4140-9c7b-b5c51ada92bd</entry>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <entry name="uuid">6df0942d-95db-4140-9c7b-b5c51ada92bd</entry>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </system>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <os>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   </os>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <features>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   </features>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6df0942d-95db-4140-9c7b-b5c51ada92bd_disk">
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config">
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:36 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:68:93:e4"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <target dev="tap11350006-4c"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/console.log" append="off"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <video>
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </video>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:33:36 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:33:36 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:33:36 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:33:36 compute-0 nova_compute[260603]: </domain>
Oct 02 08:33:36 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.401 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Preparing to wait for external event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.402 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.402 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.402 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.404 2 DEBUG nova.virt.libvirt.vif [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1363390342',display_name='tempest-ServerRescueTestJSON-server-1363390342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1363390342',id=79,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='629efd330be646b7a2941e0c83b86e0e',ramdisk_id='',reservation_id='r-gcsl0g66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-846559261',owner_user_name='tempest-ServerRescueTestJSON-846559261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:27Z,user_data=None,user_id='d2e113d3d74a43998ac8dbf246ae9095',uuid=6df0942d-95db-4140-9c7b-b5c51ada92bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.404 2 DEBUG nova.network.os_vif_util [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converting VIF {"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.405 2 DEBUG nova.network.os_vif_util [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:93:e4,bridge_name='br-int',has_traffic_filtering=True,id=11350006-4c80-4e3a-a271-5230799be1ba,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11350006-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.406 2 DEBUG os_vif [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:93:e4,bridge_name='br-int',has_traffic_filtering=True,id=11350006-4c80-4e3a-a271-5230799be1ba,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11350006-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11350006-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11350006-4c, col_values=(('external_ids', {'iface-id': '11350006-4c80-4e3a-a271-5230799be1ba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:93:e4', 'vm-uuid': '6df0942d-95db-4140-9c7b-b5c51ada92bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:36 compute-0 NetworkManager[45129]: <info>  [1759394016.4179] manager: (tap11350006-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.427 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.474 2 INFO os_vif [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:93:e4,bridge_name='br-int',has_traffic_filtering=True,id=11350006-4c80-4e3a-a271-5230799be1ba,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11350006-4c')
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.561 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.562 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.562 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No VIF found with MAC fa:16:3e:68:93:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.563 2 INFO nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Using config drive
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.602 2 DEBUG nova.storage.rbd_utils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.784 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Creating config drive at /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f/disk.config
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.799 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgc15f5fh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300446444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.890 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.933 2 DEBUG nova.storage.rbd_utils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:36 compute-0 nova_compute[260603]: 2025-10-02 08:33:36.938 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.008 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgc15f5fh" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.050 2 DEBUG nova.storage.rbd_utils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.056 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f/disk.config 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 511 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.120 2 DEBUG nova.compute.manager [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received event network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.121 2 DEBUG oslo_concurrency.lockutils [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.122 2 DEBUG oslo_concurrency.lockutils [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.123 2 DEBUG oslo_concurrency.lockutils [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.123 2 DEBUG nova.compute.manager [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Processing event network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.123 2 DEBUG nova.compute.manager [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received event network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.124 2 DEBUG oslo_concurrency.lockutils [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.124 2 DEBUG oslo_concurrency.lockutils [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.124 2 DEBUG oslo_concurrency.lockutils [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.125 2 DEBUG nova.compute.manager [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] No waiting events found dispatching network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.125 2 WARNING nova.compute.manager [req-289f498b-c74f-4462-b708-2f388f51f5fb req-be4e4e80-6df2-427b-be79-58869d2c59b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received unexpected event network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 for instance with vm_state building and task_state spawning.
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.126 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.130 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394017.1297982, 0b8ef6ea-80ff-48a3-84ff-f057fc293169 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.131 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] VM Resumed (Lifecycle Event)
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.133 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.148 2 INFO nova.virt.libvirt.driver [-] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Instance spawned successfully.
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.149 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.154 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.165 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2752196639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2300446444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.181 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.182 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.182 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.183 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.183 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.184 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.193 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.248 2 DEBUG oslo_concurrency.processutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f/disk.config 69c52a3e-2ba6-4468-8498-ceb30011bb4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.249 2 INFO nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Deleting local config drive /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f/disk.config because it was imported into RBD.
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.262 2 INFO nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Took 11.45 seconds to spawn the instance on the hypervisor.
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.262 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:37 compute-0 NetworkManager[45129]: <info>  [1759394017.3044] manager: (tapf56f40e0-79): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Oct 02 08:33:37 compute-0 kernel: tapf56f40e0-79: entered promiscuous mode
Oct 02 08:33:37 compute-0 ovn_controller[152344]: 2025-10-02T08:33:37Z|00725|binding|INFO|Claiming lport f56f40e0-799f-47a4-a095-002a975375fa for this chassis.
Oct 02 08:33:37 compute-0 ovn_controller[152344]: 2025-10-02T08:33:37Z|00726|binding|INFO|f56f40e0-799f-47a4-a095-002a975375fa: Claiming fa:16:3e:71:7b:52 10.100.0.8
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.317 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:7b:52 10.100.0.8'], port_security=['fa:16:3e:71:7b:52 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69c52a3e-2ba6-4468-8498-ceb30011bb4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2ef776-a59b-4369-95f5-69103d78f3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1045858c7b24f1baf184c8469064740', 'neutron:revision_number': '2', 'neutron:security_group_ids': '524655b6-f9e6-47ce-bebb-b7967ffdd769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=798444a3-a368-47d6-a395-3d1a0bcb7c88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f56f40e0-799f-47a4-a095-002a975375fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.318 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f56f40e0-799f-47a4-a095-002a975375fa in datapath fe2ef776-a59b-4369-95f5-69103d78f3da bound to our chassis
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.320 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.334 2 INFO nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Took 12.65 seconds to build instance.
Oct 02 08:33:37 compute-0 ovn_controller[152344]: 2025-10-02T08:33:37Z|00727|binding|INFO|Setting lport f56f40e0-799f-47a4-a095-002a975375fa ovn-installed in OVS
Oct 02 08:33:37 compute-0 ovn_controller[152344]: 2025-10-02T08:33:37Z|00728|binding|INFO|Setting lport f56f40e0-799f-47a4-a095-002a975375fa up in Southbound
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 systemd-machined[214636]: New machine qemu-87-instance-0000004e.
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.343 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05d5806e-21df-4f0a-aca7-ee133ab3f39f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:37 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-0000004e.
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.360 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:37 compute-0 systemd-udevd[336054]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:33:37 compute-0 NetworkManager[45129]: <info>  [1759394017.3828] device (tapf56f40e0-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:33:37 compute-0 NetworkManager[45129]: <info>  [1759394017.3837] device (tapf56f40e0-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.388 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2097333f-ebd0-45c1-8678-4ea41cc9befd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.391 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b82e8ff-1dc9-4135-bb00-52f940ab8728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.431 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5978250d-7720-4e7d-a1d2-3282ed9939b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.457 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9103eac-61dc-410f-ab5c-e2eb809e0891]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe2ef776-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:01:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489164, 'reachable_time': 38004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336065, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727834055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.480 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f521eca9-1127-421a-9bc3-80f52621b44a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe2ef776-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489175, 'tstamp': 489175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336066, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe2ef776-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489177, 'tstamp': 489177}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336066, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.482 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe2ef776-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.489 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe2ef776-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.490 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.490 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe2ef776-a0, col_values=(('external_ids', {'iface-id': '3752b866-0aac-4a57-acbb-4c574bfc2b06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:37.490 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.494 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.495 2 DEBUG nova.virt.libvirt.vif [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1525105387',display_name='tempest-ServerActionsTestOtherB-server-1525105387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1525105387',id=61,image_ref='9c97855a-bc91-414b-b335-de5e3b829e28',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1019763483',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-fypdkou3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_h
w_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member',shelved_at='2025-10-02T08:33:15.461206',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='9c97855a-bc91-414b-b335-de5e3b829e28'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9020ed38b31d46f88625374b2a76aef6',uuid=49e7e668-b62c-4e35-a4e2-bba540000961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.495 2 DEBUG nova.network.os_vif_util [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.496 2 DEBUG nova.network.os_vif_util [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.497 2 DEBUG nova.objects.instance [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'pci_devices' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.517 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <uuid>49e7e668-b62c-4e35-a4e2-bba540000961</uuid>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <name>instance-0000003d</name>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestOtherB-server-1525105387</nova:name>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:33:36</nova:creationTime>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <nova:user uuid="9020ed38b31d46f88625374b2a76aef6">tempest-ServerActionsTestOtherB-1644249004-project-member</nova:user>
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <nova:project uuid="eda0caa41e4740148ab99d5ebf9e27ba">tempest-ServerActionsTestOtherB-1644249004</nova:project>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="9c97855a-bc91-414b-b335-de5e3b829e28"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <nova:port uuid="37e9c33f-0ff9-4138-a7b5-989ba3c016a0">
Oct 02 08:33:37 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <system>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <entry name="serial">49e7e668-b62c-4e35-a4e2-bba540000961</entry>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <entry name="uuid">49e7e668-b62c-4e35-a4e2-bba540000961</entry>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </system>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <os>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   </os>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <features>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   </features>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk">
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk.config">
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:37 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:19:cc:f7"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <target dev="tap37e9c33f-0f"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/console.log" append="off"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <video>
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </video>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:33:37 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:33:37 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:33:37 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:33:37 compute-0 nova_compute[260603]: </domain>
Oct 02 08:33:37 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.518 2 DEBUG nova.compute.manager [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Preparing to wait for external event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.518 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.519 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.519 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.519 2 DEBUG nova.virt.libvirt.vif [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1525105387',display_name='tempest-ServerActionsTestOtherB-server-1525105387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1525105387',id=61,image_ref='9c97855a-bc91-414b-b335-de5e3b829e28',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1019763483',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-fypdkou3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virti
o',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member',shelved_at='2025-10-02T08:33:15.461206',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='9c97855a-bc91-414b-b335-de5e3b829e28'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9020ed38b31d46f88625374b2a76aef6',uuid=49e7e668-b62c-4e35-a4e2-bba540000961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.520 2 DEBUG nova.network.os_vif_util [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.520 2 DEBUG nova.network.os_vif_util [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.520 2 DEBUG os_vif [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37e9c33f-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37e9c33f-0f, col_values=(('external_ids', {'iface-id': '37e9c33f-0ff9-4138-a7b5-989ba3c016a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:cc:f7', 'vm-uuid': '49e7e668-b62c-4e35-a4e2-bba540000961'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:37 compute-0 NetworkManager[45129]: <info>  [1759394017.5262] manager: (tap37e9c33f-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.533 2 INFO os_vif [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f')
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.586 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.587 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.587 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No VIF found with MAC fa:16:3e:19:cc:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.587 2 INFO nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Using config drive
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.608 2 DEBUG nova.storage.rbd_utils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.626 2 DEBUG nova.objects.instance [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'ec2_ids' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.711 2 DEBUG nova.objects.instance [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'keypairs' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.851 2 INFO nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Creating config drive at /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.857 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3r5qycxe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.899 2 DEBUG nova.network.neutron [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Updated VIF entry in instance network info cache for port f56f40e0-799f-47a4-a095-002a975375fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.900 2 DEBUG nova.network.neutron [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Updating instance_info_cache with network_info: [{"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.927 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-69c52a3e-2ba6-4468-8498-ceb30011bb4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.927 2 DEBUG nova.compute.manager [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-changed-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.928 2 DEBUG nova.compute.manager [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Refreshing instance network info cache due to event network-changed-11350006-4c80-4e3a-a271-5230799be1ba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.928 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.928 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:37 compute-0 nova_compute[260603]: 2025-10-02 08:33:37.928 2 DEBUG nova.network.neutron [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Refreshing network info cache for port 11350006-4c80-4e3a-a271-5230799be1ba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.003 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3r5qycxe" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.024 2 DEBUG nova.storage.rbd_utils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.027 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.167 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394018.1664164, 69c52a3e-2ba6-4468-8498-ceb30011bb4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.167 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] VM Started (Lifecycle Event)
Oct 02 08:33:38 compute-0 ceph-mon[74477]: pgmap v1606: 305 pgs: 305 active+clean; 511 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Oct 02 08:33:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2727834055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.193 2 DEBUG oslo_concurrency.processutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.194 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.194 2 INFO nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Deleting local config drive /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config because it was imported into RBD.
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.202 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394018.1706934, 69c52a3e-2ba6-4468-8498-ceb30011bb4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.203 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] VM Paused (Lifecycle Event)
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.223 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.279 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:38 compute-0 NetworkManager[45129]: <info>  [1759394018.3003] manager: (tap11350006-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.301 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:38 compute-0 kernel: tap11350006-4c: entered promiscuous mode
Oct 02 08:33:38 compute-0 ovn_controller[152344]: 2025-10-02T08:33:38Z|00729|binding|INFO|Claiming lport 11350006-4c80-4e3a-a271-5230799be1ba for this chassis.
Oct 02 08:33:38 compute-0 ovn_controller[152344]: 2025-10-02T08:33:38Z|00730|binding|INFO|11350006-4c80-4e3a-a271-5230799be1ba: Claiming fa:16:3e:68:93:e4 10.100.0.12
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:38 compute-0 NetworkManager[45129]: <info>  [1759394018.3151] device (tap11350006-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:33:38 compute-0 NetworkManager[45129]: <info>  [1759394018.3168] device (tap11350006-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.319 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:93:e4 10.100.0.12'], port_security=['fa:16:3e:68:93:e4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6df0942d-95db-4140-9c7b-b5c51ada92bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=11350006-4c80-4e3a-a271-5230799be1ba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.320 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 11350006-4c80-4e3a-a271-5230799be1ba in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 bound to our chassis
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.321 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.324 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f38f1e7c-77cb-4187-bd7b-48a52f2508b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:38 compute-0 systemd-machined[214636]: New machine qemu-88-instance-0000004f.
Oct 02 08:33:38 compute-0 ovn_controller[152344]: 2025-10-02T08:33:38Z|00731|binding|INFO|Setting lport 11350006-4c80-4e3a-a271-5230799be1ba ovn-installed in OVS
Oct 02 08:33:38 compute-0 ovn_controller[152344]: 2025-10-02T08:33:38Z|00732|binding|INFO|Setting lport 11350006-4c80-4e3a-a271-5230799be1ba up in Southbound
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:38 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-0000004f.
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.407 2 INFO nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating config drive at /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.415 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7505di8b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.461 2 DEBUG nova.network.neutron [req-4b02fb91-5be4-4235-9baf-17eae425c991 req-9e9ccab6-f4bd-4bb2-8944-90c315979f55 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updated VIF entry in instance network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.462 2 DEBUG nova.network.neutron [req-4b02fb91-5be4-4235-9baf-17eae425c991 req-9e9ccab6-f4bd-4bb2-8944-90c315979f55 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.482 2 DEBUG oslo_concurrency.lockutils [req-4b02fb91-5be4-4235-9baf-17eae425c991 req-9e9ccab6-f4bd-4bb2-8944-90c315979f55 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.571 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7505di8b" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002903106584131015 of space, bias 1.0, pg target 0.8709319752393045 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001770365305723774 of space, bias 1.0, pg target 0.5311095917171322 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:33:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.598 2 DEBUG nova.storage.rbd_utils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.601 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.760 2 DEBUG oslo_concurrency.processutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.761 2 INFO nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deleting local config drive /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config because it was imported into RBD.
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.776 2 DEBUG nova.compute.manager [req-fd87199e-c238-4c02-ba44-55e3b585f799 req-a0899dcf-bdf0-4654-9841-6880b1aebc3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.777 2 DEBUG oslo_concurrency.lockutils [req-fd87199e-c238-4c02-ba44-55e3b585f799 req-a0899dcf-bdf0-4654-9841-6880b1aebc3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.777 2 DEBUG oslo_concurrency.lockutils [req-fd87199e-c238-4c02-ba44-55e3b585f799 req-a0899dcf-bdf0-4654-9841-6880b1aebc3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.777 2 DEBUG oslo_concurrency.lockutils [req-fd87199e-c238-4c02-ba44-55e3b585f799 req-a0899dcf-bdf0-4654-9841-6880b1aebc3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.777 2 DEBUG nova.compute.manager [req-fd87199e-c238-4c02-ba44-55e3b585f799 req-a0899dcf-bdf0-4654-9841-6880b1aebc3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Processing event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:33:38 compute-0 NetworkManager[45129]: <info>  [1759394018.8074] manager: (tap37e9c33f-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/311)
Oct 02 08:33:38 compute-0 kernel: tap37e9c33f-0f: entered promiscuous mode
Oct 02 08:33:38 compute-0 ovn_controller[152344]: 2025-10-02T08:33:38Z|00733|binding|INFO|Claiming lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for this chassis.
Oct 02 08:33:38 compute-0 ovn_controller[152344]: 2025-10-02T08:33:38Z|00734|binding|INFO|37e9c33f-0ff9-4138-a7b5-989ba3c016a0: Claiming fa:16:3e:19:cc:f7 10.100.0.9
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.820 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cc:f7 10.100.0.9'], port_security=['fa:16:3e:19:cc:f7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '49e7e668-b62c-4e35-a4e2-bba540000961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd09bd0a7-8be4-487a-8b24-ba3d4c0378f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=37e9c33f-0ff9-4138-a7b5-989ba3c016a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.822 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 bound to our chassis
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.824 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:33:38 compute-0 NetworkManager[45129]: <info>  [1759394018.8278] device (tap37e9c33f-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:33:38 compute-0 NetworkManager[45129]: <info>  [1759394018.8286] device (tap37e9c33f-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:33:38 compute-0 ovn_controller[152344]: 2025-10-02T08:33:38Z|00735|binding|INFO|Setting lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 ovn-installed in OVS
Oct 02 08:33:38 compute-0 ovn_controller[152344]: 2025-10-02T08:33:38Z|00736|binding|INFO|Setting lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 up in Southbound
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.848 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13efccdb-8aab-4744-911b-8e22855bad73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:38 compute-0 systemd-machined[214636]: New machine qemu-89-instance-0000003d.
Oct 02 08:33:38 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-0000003d.
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.878 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[674137b4-6070-43d6-8f61-34f4cfee631a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.881 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a83221cf-ea0e-40ed-be88-e9e36638ec7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.908 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b88936-813b-4337-8eae-1912bae01aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.929 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5b344c-d862-4893-b007-50251a1b2916]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 1000, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 1000, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 23805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336260, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.945 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c8fd5a-7b50-44b5-9de2-14b92777609c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474399, 'tstamp': 474399}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336262, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474402, 'tstamp': 474402}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336262, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.948 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.951 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef30d863-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.951 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.951 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef30d863-a0, col_values=(('external_ids', {'iface-id': 'd143de50-fc80-43b6-82e2-6651430a4a42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:38.951 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.956 2 DEBUG oslo_concurrency.lockutils [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.956 2 DEBUG oslo_concurrency.lockutils [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.956 2 DEBUG nova.compute.manager [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.960 2 DEBUG nova.compute.manager [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.960 2 DEBUG nova.objects.instance [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'flavor' on Instance uuid fcd663c5-c20d-477c-bf26-11eb72d0886f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:38 compute-0 nova_compute[260603]: 2025-10-02 08:33:38.989 2 DEBUG nova.virt.libvirt.driver [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:33:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 590 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 10 MiB/s wr, 269 op/s
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.103 2 DEBUG nova.compute.manager [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received event network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.104 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.104 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.104 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.104 2 DEBUG nova.compute.manager [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Processing event network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.105 2 DEBUG nova.compute.manager [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received event network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.105 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.105 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.105 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.105 2 DEBUG nova.compute.manager [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] No waiting events found dispatching network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.106 2 WARNING nova.compute.manager [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received unexpected event network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa for instance with vm_state building and task_state spawning.
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.106 2 DEBUG nova.compute.manager [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.106 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.106 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.106 2 DEBUG oslo_concurrency.lockutils [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.107 2 DEBUG nova.compute.manager [req-843d8e85-1294-4331-a4bc-9612c78b2baa req-415674a3-069d-44ed-968e-5529e0d36fff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Processing event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.107 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.116 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394019.1105638, 69c52a3e-2ba6-4468-8498-ceb30011bb4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.116 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] VM Resumed (Lifecycle Event)
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.118 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.122 2 INFO nova.virt.libvirt.driver [-] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Instance spawned successfully.
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.122 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.137 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.143 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.146 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.147 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.147 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.147 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.148 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.148 2 DEBUG nova.virt.libvirt.driver [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.174 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.214 2 INFO nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Took 12.48 seconds to spawn the instance on the hypervisor.
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.214 2 DEBUG nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.267 2 INFO nova.compute.manager [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Took 14.50 seconds to build instance.
Oct 02 08:33:39 compute-0 nova_compute[260603]: 2025-10-02 08:33:39.282 2 DEBUG oslo_concurrency.lockutils [None req-9f6a313f-d847-4830-93d7-8a0db0056627 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.026 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394020.0259326, 49e7e668-b62c-4e35-a4e2-bba540000961 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.026 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Started (Lifecycle Event)
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.028 2 DEBUG nova.compute.manager [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.031 2 DEBUG nova.virt.libvirt.driver [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.035 2 INFO nova.virt.libvirt.driver [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance spawned successfully.
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.049 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.053 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.078 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.079 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394020.0281413, 49e7e668-b62c-4e35-a4e2-bba540000961 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.079 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Paused (Lifecycle Event)
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.107 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.110 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394020.030498, 49e7e668-b62c-4e35-a4e2-bba540000961 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.110 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Resumed (Lifecycle Event)
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.126 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.129 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.148 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:40 compute-0 ceph-mon[74477]: pgmap v1607: 305 pgs: 305 active+clean; 590 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 10 MiB/s wr, 269 op/s
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.415 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394020.4146812, 6df0942d-95db-4140-9c7b-b5c51ada92bd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.415 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] VM Started (Lifecycle Event)
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.417 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.425 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.428 2 INFO nova.virt.libvirt.driver [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance spawned successfully.
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.428 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.456 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.465 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.465 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.465 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.466 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.466 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.467 2 DEBUG nova.virt.libvirt.driver [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.500 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.501 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394020.416926, 6df0942d-95db-4140-9c7b-b5c51ada92bd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.501 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] VM Paused (Lifecycle Event)
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.529 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.532 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394020.4190063, 6df0942d-95db-4140-9c7b-b5c51ada92bd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.532 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] VM Resumed (Lifecycle Event)
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.579 2 INFO nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Took 13.05 seconds to spawn the instance on the hypervisor.
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.580 2 DEBUG nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.580 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.586 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.641 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.675 2 INFO nova.compute.manager [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Took 15.68 seconds to build instance.
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.693 2 DEBUG oslo_concurrency.lockutils [None req-1235a26d-c638-4042-a58f-04fb80c9bc1c d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.752 2 DEBUG nova.network.neutron [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Updated VIF entry in instance network info cache for port 11350006-4c80-4e3a-a271-5230799be1ba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.753 2 DEBUG nova.network.neutron [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Updating instance_info_cache with network_info: [{"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:40 compute-0 nova_compute[260603]: 2025-10-02 08:33:40.775 2 DEBUG oslo_concurrency.lockutils [req-98fba670-ef0d-49bd-be6e-1fad64420f54 req-29e30b80-c2c1-4259-895c-c6eb9545d78a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.060 2 DEBUG nova.compute.manager [req-fd5b9a87-5589-416c-b668-1b8a17d3c875 req-0cd5e492-4992-405d-9898-d7e5a2687cf1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.060 2 DEBUG oslo_concurrency.lockutils [req-fd5b9a87-5589-416c-b668-1b8a17d3c875 req-0cd5e492-4992-405d-9898-d7e5a2687cf1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.060 2 DEBUG oslo_concurrency.lockutils [req-fd5b9a87-5589-416c-b668-1b8a17d3c875 req-0cd5e492-4992-405d-9898-d7e5a2687cf1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.060 2 DEBUG oslo_concurrency.lockutils [req-fd5b9a87-5589-416c-b668-1b8a17d3c875 req-0cd5e492-4992-405d-9898-d7e5a2687cf1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.061 2 DEBUG nova.compute.manager [req-fd5b9a87-5589-416c-b668-1b8a17d3c875 req-0cd5e492-4992-405d-9898-d7e5a2687cf1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] No waiting events found dispatching network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.061 2 WARNING nova.compute.manager [req-fd5b9a87-5589-416c-b668-1b8a17d3c875 req-0cd5e492-4992-405d-9898-d7e5a2687cf1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received unexpected event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba for instance with vm_state active and task_state None.
Oct 02 08:33:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 590 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.3 MiB/s wr, 190 op/s
Oct 02 08:33:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 02 08:33:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 02 08:33:41 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.213 2 DEBUG nova.compute.manager [req-70f7774d-7255-45de-8db0-e3f1b0d90d9f req-8b63f909-be35-4481-9b6a-e1c97e402aaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.214 2 DEBUG oslo_concurrency.lockutils [req-70f7774d-7255-45de-8db0-e3f1b0d90d9f req-8b63f909-be35-4481-9b6a-e1c97e402aaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.214 2 DEBUG oslo_concurrency.lockutils [req-70f7774d-7255-45de-8db0-e3f1b0d90d9f req-8b63f909-be35-4481-9b6a-e1c97e402aaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.214 2 DEBUG oslo_concurrency.lockutils [req-70f7774d-7255-45de-8db0-e3f1b0d90d9f req-8b63f909-be35-4481-9b6a-e1c97e402aaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.215 2 DEBUG nova.compute.manager [req-70f7774d-7255-45de-8db0-e3f1b0d90d9f req-8b63f909-be35-4481-9b6a-e1c97e402aaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] No waiting events found dispatching network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.215 2 WARNING nova.compute.manager [req-70f7774d-7255-45de-8db0-e3f1b0d90d9f req-8b63f909-be35-4481-9b6a-e1c97e402aaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received unexpected event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for instance with vm_state shelved_offloaded and task_state spawning.
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.745 2 DEBUG nova.compute.manager [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:41 compute-0 nova_compute[260603]: 2025-10-02 08:33:41.867 2 DEBUG oslo_concurrency.lockutils [None req-a82d50ad-f28f-43d4-80f7-14c11786e2bf 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:42 compute-0 ceph-mon[74477]: pgmap v1608: 305 pgs: 305 active+clean; 590 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.3 MiB/s wr, 190 op/s
Oct 02 08:33:42 compute-0 ceph-mon[74477]: osdmap e232: 3 total, 3 up, 3 in
Oct 02 08:33:42 compute-0 nova_compute[260603]: 2025-10-02 08:33:42.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:42 compute-0 nova_compute[260603]: 2025-10-02 08:33:42.985 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:42 compute-0 nova_compute[260603]: 2025-10-02 08:33:42.985 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:42 compute-0 nova_compute[260603]: 2025-10-02 08:33:42.986 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:42 compute-0 nova_compute[260603]: 2025-10-02 08:33:42.987 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:42 compute-0 nova_compute[260603]: 2025-10-02 08:33:42.987 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:42 compute-0 nova_compute[260603]: 2025-10-02 08:33:42.989 2 INFO nova.compute.manager [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Terminating instance
Oct 02 08:33:42 compute-0 nova_compute[260603]: 2025-10-02 08:33:42.993 2 DEBUG nova.compute.manager [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:33:43 compute-0 ovn_controller[152344]: 2025-10-02T08:33:43Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:eb:ae 10.100.0.8
Oct 02 08:33:43 compute-0 ovn_controller[152344]: 2025-10-02T08:33:43Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:eb:ae 10.100.0.8
Oct 02 08:33:43 compute-0 kernel: tape15c13c8-9d (unregistering): left promiscuous mode
Oct 02 08:33:43 compute-0 NetworkManager[45129]: <info>  [1759394023.0604] device (tape15c13c8-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:33:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 521 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 5.4 MiB/s wr, 488 op/s
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 ovn_controller[152344]: 2025-10-02T08:33:43Z|00737|binding|INFO|Releasing lport e15c13c8-9db2-4011-934e-7c302b8e26c0 from this chassis (sb_readonly=0)
Oct 02 08:33:43 compute-0 ovn_controller[152344]: 2025-10-02T08:33:43Z|00738|binding|INFO|Setting lport e15c13c8-9db2-4011-934e-7c302b8e26c0 down in Southbound
Oct 02 08:33:43 compute-0 ovn_controller[152344]: 2025-10-02T08:33:43Z|00739|binding|INFO|Removing iface tape15c13c8-9d ovn-installed in OVS
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.086 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:42:29 10.100.0.14'], port_security=['fa:16:3e:b0:42:29 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0b8ef6ea-80ff-48a3-84ff-f057fc293169', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2ef776-a59b-4369-95f5-69103d78f3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1045858c7b24f1baf184c8469064740', 'neutron:revision_number': '4', 'neutron:security_group_ids': '524655b6-f9e6-47ce-bebb-b7967ffdd769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=798444a3-a368-47d6-a395-3d1a0bcb7c88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e15c13c8-9db2-4011-934e-7c302b8e26c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.087 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e15c13c8-9db2-4011-934e-7c302b8e26c0 in datapath fe2ef776-a59b-4369-95f5-69103d78f3da unbound from our chassis
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.089 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.112 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cc505a-ab4d-4ed9-ad68-fd621fe55722]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 02 08:33:43 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004d.scope: Consumed 6.475s CPU time.
Oct 02 08:33:43 compute-0 systemd-machined[214636]: Machine qemu-86-instance-0000004d terminated.
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.145 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[52516ef3-136e-411e-98e8-eb80bd8cd297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.148 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[07307e51-9ee9-4cea-9c4e-ce275cd4aa2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.185 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c18e9b79-e695-49ea-8f5a-494713e2d8aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.205 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93e0b6c0-b5f2-42fb-89d1-e1987cfbe32c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe2ef776-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:01:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489164, 'reachable_time': 38004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336359, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.229 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.230 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.230 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.231 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.231 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.232 2 INFO nova.compute.manager [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Terminating instance
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.233 2 DEBUG nova.compute.manager [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.235 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2786be91-8532-46c0-bbee-8866005560f3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe2ef776-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489175, 'tstamp': 489175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336363, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe2ef776-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489177, 'tstamp': 489177}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336363, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.236 2 INFO nova.virt.libvirt.driver [-] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Instance destroyed successfully.
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.237 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe2ef776-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.237 2 DEBUG nova.objects.instance [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'resources' on Instance uuid 0b8ef6ea-80ff-48a3-84ff-f057fc293169 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.242 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe2ef776-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.243 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.243 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe2ef776-a0, col_values=(('external_ids', {'iface-id': '3752b866-0aac-4a57-acbb-4c574bfc2b06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.243 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.254 2 DEBUG nova.virt.libvirt.vif [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1935269911',display_name='tempest-tempest.common.compute-instance-1935269911-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1935269911-1',id=77,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:33:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-rzqhmsfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJSON-248615309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:33:37Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=0b8ef6ea-80ff-48a3-84ff-f057fc293169,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.255 2 DEBUG nova.network.os_vif_util [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "address": "fa:16:3e:b0:42:29", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape15c13c8-9d", "ovs_interfaceid": "e15c13c8-9db2-4011-934e-7c302b8e26c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.255 2 DEBUG nova.network.os_vif_util [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:42:29,bridge_name='br-int',has_traffic_filtering=True,id=e15c13c8-9db2-4011-934e-7c302b8e26c0,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15c13c8-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.256 2 DEBUG os_vif [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:42:29,bridge_name='br-int',has_traffic_filtering=True,id=e15c13c8-9db2-4011-934e-7c302b8e26c0,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15c13c8-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape15c13c8-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.263 2 INFO os_vif [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:42:29,bridge_name='br-int',has_traffic_filtering=True,id=e15c13c8-9db2-4011-934e-7c302b8e26c0,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape15c13c8-9d')
Oct 02 08:33:43 compute-0 kernel: tapf56f40e0-79 (unregistering): left promiscuous mode
Oct 02 08:33:43 compute-0 NetworkManager[45129]: <info>  [1759394023.2876] device (tapf56f40e0-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 ovn_controller[152344]: 2025-10-02T08:33:43Z|00740|binding|INFO|Releasing lport f56f40e0-799f-47a4-a095-002a975375fa from this chassis (sb_readonly=0)
Oct 02 08:33:43 compute-0 ovn_controller[152344]: 2025-10-02T08:33:43Z|00741|binding|INFO|Setting lport f56f40e0-799f-47a4-a095-002a975375fa down in Southbound
Oct 02 08:33:43 compute-0 ovn_controller[152344]: 2025-10-02T08:33:43Z|00742|binding|INFO|Removing iface tapf56f40e0-79 ovn-installed in OVS
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.303 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:7b:52 10.100.0.8'], port_security=['fa:16:3e:71:7b:52 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69c52a3e-2ba6-4468-8498-ceb30011bb4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2ef776-a59b-4369-95f5-69103d78f3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1045858c7b24f1baf184c8469064740', 'neutron:revision_number': '4', 'neutron:security_group_ids': '524655b6-f9e6-47ce-bebb-b7967ffdd769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=798444a3-a368-47d6-a395-3d1a0bcb7c88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f56f40e0-799f-47a4-a095-002a975375fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.304 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f56f40e0-799f-47a4-a095-002a975375fa in datapath fe2ef776-a59b-4369-95f5-69103d78f3da unbound from our chassis
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.306 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe2ef776-a59b-4369-95f5-69103d78f3da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.307 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[653c46fa-3920-4dd7-b8a8-3ece6a4aa360]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.307 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da namespace which is not needed anymore
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct 02 08:33:43 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004e.scope: Consumed 4.603s CPU time.
Oct 02 08:33:43 compute-0 systemd-machined[214636]: Machine qemu-87-instance-0000004e terminated.
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.369 2 INFO nova.compute.manager [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Rescuing
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.369 2 DEBUG oslo_concurrency.lockutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.370 2 DEBUG oslo_concurrency.lockutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquired lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.370 2 DEBUG nova.network.neutron [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:43 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[335544]: [NOTICE]   (335548) : haproxy version is 2.8.14-c23fe91
Oct 02 08:33:43 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[335544]: [NOTICE]   (335548) : path to executable is /usr/sbin/haproxy
Oct 02 08:33:43 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[335544]: [WARNING]  (335548) : Exiting Master process...
Oct 02 08:33:43 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[335544]: [ALERT]    (335548) : Current worker (335550) exited with code 143 (Terminated)
Oct 02 08:33:43 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[335544]: [WARNING]  (335548) : All workers exited. Exiting... (0)
Oct 02 08:33:43 compute-0 systemd[1]: libpod-abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34.scope: Deactivated successfully.
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.478 2 INFO nova.virt.libvirt.driver [-] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Instance destroyed successfully.
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.479 2 DEBUG nova.objects.instance [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'resources' on Instance uuid 69c52a3e-2ba6-4468-8498-ceb30011bb4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:43 compute-0 podman[336410]: 2025-10-02 08:33:43.482975816 +0000 UTC m=+0.055448141 container died abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.497 2 DEBUG nova.virt.libvirt.vif [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1935269911',display_name='tempest-tempest.common.compute-instance-1935269911-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1935269911-2',id=78,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T08:33:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-rzqhmsfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJSON-248615309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:33:39Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=69c52a3e-2ba6-4468-8498-ceb30011bb4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.498 2 DEBUG nova.network.os_vif_util [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "f56f40e0-799f-47a4-a095-002a975375fa", "address": "fa:16:3e:71:7b:52", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f40e0-79", "ovs_interfaceid": "f56f40e0-799f-47a4-a095-002a975375fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.498 2 DEBUG nova.network.os_vif_util [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:7b:52,bridge_name='br-int',has_traffic_filtering=True,id=f56f40e0-799f-47a4-a095-002a975375fa,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f40e0-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.499 2 DEBUG os_vif [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:7b:52,bridge_name='br-int',has_traffic_filtering=True,id=f56f40e0-799f-47a4-a095-002a975375fa,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f40e0-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf56f40e0-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.508 2 INFO os_vif [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:7b:52,bridge_name='br-int',has_traffic_filtering=True,id=f56f40e0-799f-47a4-a095-002a975375fa,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f40e0-79')
Oct 02 08:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ec2650f0eccb08d7c825e16e7312724895155fcebf088d90fcc799c9fdbc131-merged.mount: Deactivated successfully.
Oct 02 08:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34-userdata-shm.mount: Deactivated successfully.
Oct 02 08:33:43 compute-0 podman[336410]: 2025-10-02 08:33:43.535592999 +0000 UTC m=+0.108065304 container cleanup abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:33:43 compute-0 systemd[1]: libpod-conmon-abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34.scope: Deactivated successfully.
Oct 02 08:33:43 compute-0 podman[336466]: 2025-10-02 08:33:43.631536116 +0000 UTC m=+0.064285176 container remove abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.644 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[58a7a175-e074-41b8-8183-15ca3275b025]: (4, ('Thu Oct  2 08:33:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da (abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34)\nabf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34\nThu Oct  2 08:33:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da (abf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34)\nabf7f99ea5c216bbb74f5456689bb16a34a7a069895034e128ea0241d9fa6e34\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.645 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa2cac4-983a-4cdc-b138-2e2238880514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.646 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe2ef776-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:43 compute-0 kernel: tapfe2ef776-a0: left promiscuous mode
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.656 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50e95474-b979-4b71-a14e-0eb8f5e771f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.677 2 INFO nova.virt.libvirt.driver [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Deleting instance files /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169_del
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.678 2 INFO nova.virt.libvirt.driver [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Deletion of /var/lib/nova/instances/0b8ef6ea-80ff-48a3-84ff-f057fc293169_del complete
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.685 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c4c7cc-57b7-485f-89a2-4aa0e3e970af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.686 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[89f122d7-e24d-4351-9ebd-a2861cc00d94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.701 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07b813ba-14e7-4126-8371-40d61d53a190]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489154, 'reachable_time': 16824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336481, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dfe2ef776\x2da59b\x2d4369\x2d95f5\x2d69103d78f3da.mount: Deactivated successfully.
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.703 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:33:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:43.703 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0193ef28-d682-4d53-a59d-077c23f7bbb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.726 2 INFO nova.compute.manager [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.727 2 DEBUG oslo.service.loopingcall [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.727 2 DEBUG nova.compute.manager [-] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.727 2 DEBUG nova.network.neutron [-] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.931 2 DEBUG nova.compute.manager [req-cb26c611-f397-42f3-bdf6-944283c6f99c req-659e1e4b-aa0e-4987-ac13-fff7959a6e65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received event network-vif-unplugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.931 2 DEBUG oslo_concurrency.lockutils [req-cb26c611-f397-42f3-bdf6-944283c6f99c req-659e1e4b-aa0e-4987-ac13-fff7959a6e65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.932 2 DEBUG oslo_concurrency.lockutils [req-cb26c611-f397-42f3-bdf6-944283c6f99c req-659e1e4b-aa0e-4987-ac13-fff7959a6e65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.932 2 DEBUG oslo_concurrency.lockutils [req-cb26c611-f397-42f3-bdf6-944283c6f99c req-659e1e4b-aa0e-4987-ac13-fff7959a6e65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.932 2 DEBUG nova.compute.manager [req-cb26c611-f397-42f3-bdf6-944283c6f99c req-659e1e4b-aa0e-4987-ac13-fff7959a6e65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] No waiting events found dispatching network-vif-unplugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:43 compute-0 nova_compute[260603]: 2025-10-02 08:33:43.932 2 DEBUG nova.compute.manager [req-cb26c611-f397-42f3-bdf6-944283c6f99c req-659e1e4b-aa0e-4987-ac13-fff7959a6e65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received event network-vif-unplugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:33:44 compute-0 ceph-mon[74477]: pgmap v1610: 305 pgs: 305 active+clean; 521 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 5.4 MiB/s wr, 488 op/s
Oct 02 08:33:44 compute-0 nova_compute[260603]: 2025-10-02 08:33:44.436 2 INFO nova.virt.libvirt.driver [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Deleting instance files /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f_del
Oct 02 08:33:44 compute-0 nova_compute[260603]: 2025-10-02 08:33:44.437 2 INFO nova.virt.libvirt.driver [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Deletion of /var/lib/nova/instances/69c52a3e-2ba6-4468-8498-ceb30011bb4f_del complete
Oct 02 08:33:44 compute-0 nova_compute[260603]: 2025-10-02 08:33:44.479 2 INFO nova.compute.manager [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Took 1.24 seconds to destroy the instance on the hypervisor.
Oct 02 08:33:44 compute-0 nova_compute[260603]: 2025-10-02 08:33:44.479 2 DEBUG oslo.service.loopingcall [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:33:44 compute-0 nova_compute[260603]: 2025-10-02 08:33:44.480 2 DEBUG nova.compute.manager [-] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:33:44 compute-0 nova_compute[260603]: 2025-10-02 08:33:44.480 2 DEBUG nova.network.neutron [-] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:33:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 521 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 5.4 MiB/s wr, 488 op/s
Oct 02 08:33:45 compute-0 nova_compute[260603]: 2025-10-02 08:33:45.824 2 DEBUG nova.network.neutron [-] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:45 compute-0 nova_compute[260603]: 2025-10-02 08:33:45.860 2 INFO nova.compute.manager [-] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Took 2.13 seconds to deallocate network for instance.
Oct 02 08:33:45 compute-0 nova_compute[260603]: 2025-10-02 08:33:45.885 2 DEBUG nova.network.neutron [-] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:45 compute-0 nova_compute[260603]: 2025-10-02 08:33:45.916 2 INFO nova.compute.manager [-] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Took 1.44 seconds to deallocate network for instance.
Oct 02 08:33:45 compute-0 nova_compute[260603]: 2025-10-02 08:33:45.921 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:45 compute-0 nova_compute[260603]: 2025-10-02 08:33:45.922 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:45 compute-0 nova_compute[260603]: 2025-10-02 08:33:45.962 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.136 2 DEBUG oslo_concurrency.processutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.191 2 DEBUG nova.network.neutron [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Updating instance_info_cache with network_info: [{"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.234 2 DEBUG oslo_concurrency.lockutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Releasing lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:46 compute-0 ceph-mon[74477]: pgmap v1611: 305 pgs: 305 active+clean; 521 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 5.4 MiB/s wr, 488 op/s
Oct 02 08:33:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884954754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.578 2 DEBUG oslo_concurrency.processutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.594 2 DEBUG nova.compute.provider_tree [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.623 2 DEBUG nova.scheduler.client.report [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.635 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.651 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.656 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.687 2 INFO nova.scheduler.client.report [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Deleted allocations for instance 0b8ef6ea-80ff-48a3-84ff-f057fc293169
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.783 2 DEBUG oslo_concurrency.lockutils [None req-fa45b05f-2b5c-48be-b1ee-b0a4436f62c4 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.836 2 DEBUG oslo_concurrency.processutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.994 2 DEBUG nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received event network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.995 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.996 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.996 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0b8ef6ea-80ff-48a3-84ff-f057fc293169-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.996 2 DEBUG nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] No waiting events found dispatching network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.997 2 WARNING nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received unexpected event network-vif-plugged-e15c13c8-9db2-4011-934e-7c302b8e26c0 for instance with vm_state deleted and task_state None.
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.997 2 DEBUG nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received event network-vif-unplugged-f56f40e0-799f-47a4-a095-002a975375fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.997 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.997 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.998 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.998 2 DEBUG nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] No waiting events found dispatching network-vif-unplugged-f56f40e0-799f-47a4-a095-002a975375fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.999 2 WARNING nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received unexpected event network-vif-unplugged-f56f40e0-799f-47a4-a095-002a975375fa for instance with vm_state deleted and task_state None.
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.999 2 DEBUG nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Received event network-vif-deleted-e15c13c8-9db2-4011-934e-7c302b8e26c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:46 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.999 2 DEBUG nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received event network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:46.999 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.000 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.000 2 DEBUG oslo_concurrency.lockutils [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.001 2 DEBUG nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] No waiting events found dispatching network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.001 2 WARNING nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received unexpected event network-vif-plugged-f56f40e0-799f-47a4-a095-002a975375fa for instance with vm_state deleted and task_state None.
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.001 2 DEBUG nova.compute.manager [req-cd5cea9b-4258-4313-a826-66b85aee6290 req-efb4a9cb-7053-4da3-a74d-e9e4b13bcae7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Received event network-vif-deleted-f56f40e0-799f-47a4-a095-002a975375fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 521 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 5.4 MiB/s wr, 488 op/s
Oct 02 08:33:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 02 08:33:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 02 08:33:47 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 02 08:33:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2884954754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:47 compute-0 ceph-mon[74477]: osdmap e233: 3 total, 3 up, 3 in
Oct 02 08:33:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/455405240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.335 2 DEBUG oslo_concurrency.processutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.344 2 DEBUG nova.compute.provider_tree [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.365 2 DEBUG nova.scheduler.client.report [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.395 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.671 2 INFO nova.scheduler.client.report [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Deleted allocations for instance 69c52a3e-2ba6-4468-8498-ceb30011bb4f
Oct 02 08:33:47 compute-0 nova_compute[260603]: 2025-10-02 08:33:47.740 2 DEBUG oslo_concurrency.lockutils [None req-15b9c341-9f11-4f1c-9634-082f9f71d56c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "69c52a3e-2ba6-4468-8498-ceb30011bb4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:48 compute-0 ceph-mon[74477]: pgmap v1612: 305 pgs: 305 active+clean; 521 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 5.4 MiB/s wr, 488 op/s
Oct 02 08:33:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/455405240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:48 compute-0 nova_compute[260603]: 2025-10-02 08:33:48.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:48 compute-0 nova_compute[260603]: 2025-10-02 08:33:48.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:49 compute-0 nova_compute[260603]: 2025-10-02 08:33:49.032 2 DEBUG nova.virt.libvirt.driver [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:33:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 435 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 3.2 MiB/s wr, 635 op/s
Oct 02 08:33:50 compute-0 podman[336528]: 2025-10-02 08:33:50.071185592 +0000 UTC m=+0.117776985 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 08:33:50 compute-0 podman[336527]: 2025-10-02 08:33:50.080720568 +0000 UTC m=+0.136674642 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:33:50 compute-0 ceph-mon[74477]: pgmap v1614: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 435 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 3.2 MiB/s wr, 635 op/s
Oct 02 08:33:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 435 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 2.6 MiB/s wr, 516 op/s
Oct 02 08:33:51 compute-0 kernel: tapc42b44f6-e8 (unregistering): left promiscuous mode
Oct 02 08:33:51 compute-0 NetworkManager[45129]: <info>  [1759394031.3659] device (tapc42b44f6-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:33:51 compute-0 ovn_controller[152344]: 2025-10-02T08:33:51Z|00743|binding|INFO|Releasing lport c42b44f6-e8fd-46c9-a075-0952da11cf02 from this chassis (sb_readonly=0)
Oct 02 08:33:51 compute-0 ovn_controller[152344]: 2025-10-02T08:33:51Z|00744|binding|INFO|Setting lport c42b44f6-e8fd-46c9-a075-0952da11cf02 down in Southbound
Oct 02 08:33:51 compute-0 ovn_controller[152344]: 2025-10-02T08:33:51Z|00745|binding|INFO|Removing iface tapc42b44f6-e8 ovn-installed in OVS
Oct 02 08:33:51 compute-0 nova_compute[260603]: 2025-10-02 08:33:51.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:51 compute-0 nova_compute[260603]: 2025-10-02 08:33:51.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.380 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:eb:ae 10.100.0.8'], port_security=['fa:16:3e:67:eb:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fcd663c5-c20d-477c-bf26-11eb72d0886f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c42b44f6-e8fd-46c9-a075-0952da11cf02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.381 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c42b44f6-e8fd-46c9-a075-0952da11cf02 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.382 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct 02 08:33:51 compute-0 nova_compute[260603]: 2025-10-02 08:33:51.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.405 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[811e3bc1-cbbc-4d8d-ba6a-13174522ca13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.438 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b790f5eb-720c-4195-8692-4fe130b7efc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.441 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b299a24a-6313-4941-8f0e-8961e5620519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:51 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Oct 02 08:33:51 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000004c.scope: Consumed 13.416s CPU time.
Oct 02 08:33:51 compute-0 systemd-machined[214636]: Machine qemu-85-instance-0000004c terminated.
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.474 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[86d6256e-0ea4-4e87-aa76-6c5199154ade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.491 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8305f3-dd83-4fb2-aebb-2d5e761843ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 31, 'rx_bytes': 1042, 'tx_bytes': 1446, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 31, 'rx_bytes': 1042, 'tx_bytes': 1446, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 25420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336582, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.510 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd826f57-6d85-4b20-bf47-2d0096e03f3a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476285, 'tstamp': 476285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336583, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1e3507cf-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476289, 'tstamp': 476289}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336583, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.513 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:51 compute-0 nova_compute[260603]: 2025-10-02 08:33:51.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:51 compute-0 nova_compute[260603]: 2025-10-02 08:33:51.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.520 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.521 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.521 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:51.521 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:51 compute-0 nova_compute[260603]: 2025-10-02 08:33:51.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:51 compute-0 nova_compute[260603]: 2025-10-02 08:33:51.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.051 2 INFO nova.virt.libvirt.driver [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Instance shutdown successfully after 13 seconds.
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.065 2 INFO nova.virt.libvirt.driver [-] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Instance destroyed successfully.
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.065 2 DEBUG nova.objects.instance [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'numa_topology' on Instance uuid fcd663c5-c20d-477c-bf26-11eb72d0886f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.090 2 DEBUG nova.compute.manager [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.149 2 DEBUG oslo_concurrency.lockutils [None req-85f55979-7c18-46a6-bfc6-9d62b819c30c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:52 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 02 08:33:52 compute-0 ceph-mon[74477]: pgmap v1615: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 435 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 2.6 MiB/s wr, 516 op/s
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.550 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.550 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.551 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.551 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.551 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.552 2 INFO nova.compute.manager [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Terminating instance
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.553 2 DEBUG nova.compute.manager [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:33:52 compute-0 kernel: tapc55dac75-c2 (unregistering): left promiscuous mode
Oct 02 08:33:52 compute-0 NetworkManager[45129]: <info>  [1759394032.5980] device (tapc55dac75-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:52 compute-0 ovn_controller[152344]: 2025-10-02T08:33:52Z|00746|binding|INFO|Releasing lport c55dac75-c247-4672-885b-8b1adb241591 from this chassis (sb_readonly=0)
Oct 02 08:33:52 compute-0 ovn_controller[152344]: 2025-10-02T08:33:52Z|00747|binding|INFO|Setting lport c55dac75-c247-4672-885b-8b1adb241591 down in Southbound
Oct 02 08:33:52 compute-0 ovn_controller[152344]: 2025-10-02T08:33:52Z|00748|binding|INFO|Removing iface tapc55dac75-c2 ovn-installed in OVS
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.614 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:a0:e4 10.100.0.3'], port_security=['fa:16:3e:08:a0:e4 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '19dd1983-6b14-4ed7-bcb1-f620e7426cc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df44854a-80b4-49ce-898d-50927f9b482f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c55dac75-c247-4672-885b-8b1adb241591) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.615 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c55dac75-c247-4672-885b-8b1adb241591 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 unbound from our chassis
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.616 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b98eb40-29c6-412c-9c09-4c3cde0e4dcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.672 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[810febf0-9129-4816-958b-5fb1bedc5973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:52 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000046.scope: Deactivated successfully.
Oct 02 08:33:52 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000046.scope: Consumed 16.277s CPU time.
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.676 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[716f984c-1fe9-4677-9901-0e0107cc43e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:52 compute-0 systemd-machined[214636]: Machine qemu-77-instance-00000046 terminated.
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.702 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd69d53-77e3-4f84-8b83-e44b4bfbda80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bac345ea-2229-485f-8037-6704898340e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 1000, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 1000, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 23805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336603, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.737 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b573b32a-05f0-4641-bcac-aa02e48e69cb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474399, 'tstamp': 474399}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336604, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef30d863-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474402, 'tstamp': 474402}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336604, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.738 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.751 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef30d863-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.751 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.752 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef30d863-a0, col_values=(('external_ids', {'iface-id': 'd143de50-fc80-43b6-82e2-6651430a4a42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:52.752 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:52 compute-0 ovn_controller[152344]: 2025-10-02T08:33:52Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:cc:f7 10.100.0.9
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.792 2 INFO nova.virt.libvirt.driver [-] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Instance destroyed successfully.
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.792 2 DEBUG nova.objects.instance [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'resources' on Instance uuid 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.810 2 DEBUG nova.virt.libvirt.vif [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:32:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-326848172',display_name='tempest-ServerActionsTestOtherB-server-326848172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-326848172',id=70,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:32:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-ku6xzkx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:32:23Z,user_data=None,user_id='9020ed38b31d46f88625374b2a76aef6',uuid=19dd1983-6b14-4ed7-bcb1-f620e7426cc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.810 2 DEBUG nova.network.os_vif_util [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "c55dac75-c247-4672-885b-8b1adb241591", "address": "fa:16:3e:08:a0:e4", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc55dac75-c2", "ovs_interfaceid": "c55dac75-c247-4672-885b-8b1adb241591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.812 2 DEBUG nova.network.os_vif_util [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:a0:e4,bridge_name='br-int',has_traffic_filtering=True,id=c55dac75-c247-4672-885b-8b1adb241591,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc55dac75-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.812 2 DEBUG os_vif [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:a0:e4,bridge_name='br-int',has_traffic_filtering=True,id=c55dac75-c247-4672-885b-8b1adb241591,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc55dac75-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.816 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc55dac75-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.825 2 INFO os_vif [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:a0:e4,bridge_name='br-int',has_traffic_filtering=True,id=c55dac75-c247-4672-885b-8b1adb241591,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc55dac75-c2')
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.890 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.893 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.915 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.924 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.924 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.954 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.986 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.987 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.996 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:33:52 compute-0 nova_compute[260603]: 2025-10-02 08:33:52.996 2 INFO nova.compute.claims [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.026 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 421 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 999 KiB/s rd, 4.2 MiB/s wr, 246 op/s
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.147 2 INFO nova.virt.libvirt.driver [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Deleting instance files /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6_del
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.148 2 INFO nova.virt.libvirt.driver [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Deletion of /var/lib/nova/instances/19dd1983-6b14-4ed7-bcb1-f620e7426cc6_del complete
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.206 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.264 2 INFO nova.compute.manager [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.266 2 DEBUG oslo.service.loopingcall [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.266 2 DEBUG nova.compute.manager [-] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.268 2 DEBUG nova.network.neutron [-] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682496482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.667 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.676 2 DEBUG nova.compute.provider_tree [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.692 2 DEBUG nova.scheduler.client.report [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.722 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.723 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.727 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.736 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.737 2 INFO nova.compute.claims [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.792 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.793 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.815 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.848 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.961 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.963 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.964 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Creating image(s)
Oct 02 08:33:53 compute-0 nova_compute[260603]: 2025-10-02 08:33:53.992 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.017 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.042 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.046 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.096 2 DEBUG nova.policy [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6c3e0096dae34ce09545c8c4547dae81', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e1045858c7b24f1baf184c8469064740', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.148 2 DEBUG nova.network.neutron [-] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.153 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.154 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.154 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.155 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.178 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.182 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.222 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:54 compute-0 ceph-mon[74477]: pgmap v1616: 305 pgs: 305 active+clean; 421 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 999 KiB/s rd, 4.2 MiB/s wr, 246 op/s
Oct 02 08:33:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2682496482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.277 2 INFO nova.compute.manager [-] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Took 1.01 seconds to deallocate network for instance.
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.333 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.479 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.530 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received event network-vif-unplugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.531 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.531 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.531 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.531 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] No waiting events found dispatching network-vif-unplugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.532 2 WARNING nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received unexpected event network-vif-unplugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 for instance with vm_state stopped and task_state None.
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.532 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received event network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.532 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.532 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.532 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.533 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] No waiting events found dispatching network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.533 2 WARNING nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received unexpected event network-vif-plugged-c42b44f6-e8fd-46c9-a075-0952da11cf02 for instance with vm_state stopped and task_state None.
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.533 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received event network-vif-unplugged-c55dac75-c247-4672-885b-8b1adb241591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.533 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.534 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.534 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.534 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] No waiting events found dispatching network-vif-unplugged-c55dac75-c247-4672-885b-8b1adb241591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.534 2 WARNING nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received unexpected event network-vif-unplugged-c55dac75-c247-4672-885b-8b1adb241591 for instance with vm_state deleted and task_state None.
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.534 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received event network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.535 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.535 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.535 2 DEBUG oslo_concurrency.lockutils [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.535 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] No waiting events found dispatching network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.536 2 WARNING nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received unexpected event network-vif-plugged-c55dac75-c247-4672-885b-8b1adb241591 for instance with vm_state deleted and task_state None.
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.536 2 DEBUG nova.compute.manager [req-a00a457d-bf80-4bdb-9856-282fa4ecf776 req-93a75788-d7f9-4256-8dbd-405ac152b8d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Received event network-vif-deleted-c55dac75-c247-4672-885b-8b1adb241591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.540 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] resizing rbd image ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.619 2 DEBUG nova.objects.instance [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'migration_context' on Instance uuid ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.637 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.637 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Ensure instance console log exists: /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.638 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.638 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.638 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197802752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.744 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Successfully created port: 0627e49d-2462-482a-9f37-38289b11af20 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.754 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.760 2 DEBUG nova.compute.provider_tree [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.781 2 DEBUG nova.scheduler.client.report [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.820 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.821 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.824 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.889 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.890 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.912 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:33:54 compute-0 nova_compute[260603]: 2025-10-02 08:33:54.937 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.029 2 DEBUG oslo_concurrency.processutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 421 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 999 KiB/s rd, 4.2 MiB/s wr, 246 op/s
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.104 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.108 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.109 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Creating image(s)
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.139 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.167 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.195 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.203 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1197802752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.280 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.282 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.282 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.283 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.283 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.285 2 INFO nova.compute.manager [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Terminating instance
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.287 2 DEBUG nova.compute.manager [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.293 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.294 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.295 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.295 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.323 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.329 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.384 2 INFO nova.virt.libvirt.driver [-] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Instance destroyed successfully.
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.386 2 DEBUG nova.objects.instance [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'resources' on Instance uuid fcd663c5-c20d-477c-bf26-11eb72d0886f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.491 2 DEBUG nova.virt.libvirt.vif [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:33:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1204804315',display_name='tempest-Íñstáñcé-56555502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1204804315',id=76,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:33:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-knt96idl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:33:53Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=fcd663c5-c20d-477c-bf26-11eb72d0886f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.492 2 DEBUG nova.network.os_vif_util [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "address": "fa:16:3e:67:eb:ae", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc42b44f6-e8", "ovs_interfaceid": "c42b44f6-e8fd-46c9-a075-0952da11cf02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.494 2 DEBUG nova.network.os_vif_util [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:eb:ae,bridge_name='br-int',has_traffic_filtering=True,id=c42b44f6-e8fd-46c9-a075-0952da11cf02,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc42b44f6-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.495 2 DEBUG os_vif [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:eb:ae,bridge_name='br-int',has_traffic_filtering=True,id=c42b44f6-e8fd-46c9-a075-0952da11cf02,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc42b44f6-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc42b44f6-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486743655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.516 2 INFO os_vif [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:eb:ae,bridge_name='br-int',has_traffic_filtering=True,id=c42b44f6-e8fd-46c9-a075-0952da11cf02,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc42b44f6-e8')
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.537 2 DEBUG nova.policy [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6c3e0096dae34ce09545c8c4547dae81', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e1045858c7b24f1baf184c8469064740', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.541 2 DEBUG oslo_concurrency.processutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.551 2 DEBUG nova.compute.provider_tree [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.563 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Successfully updated port: 0627e49d-2462-482a-9f37-38289b11af20 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.570 2 DEBUG nova.scheduler.client.report [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.581 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "refresh_cache-ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.582 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquired lock "refresh_cache-ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.582 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.595 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.603 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.635 2 INFO nova.scheduler.client.report [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Deleted allocations for instance 19dd1983-6b14-4ed7-bcb1-f620e7426cc6
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.681 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] resizing rbd image 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.729 2 DEBUG oslo_concurrency.lockutils [None req-b3e1ba53-1b2c-4554-84d3-81ecf75a79d3 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "19dd1983-6b14-4ed7-bcb1-f620e7426cc6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.783 2 DEBUG nova.objects.instance [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'migration_context' on Instance uuid 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.797 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.798 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Ensure instance console log exists: /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.799 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.799 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.799 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.925 2 INFO nova.virt.libvirt.driver [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Deleting instance files /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f_del
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.925 2 INFO nova.virt.libvirt.driver [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Deletion of /var/lib/nova/instances/fcd663c5-c20d-477c-bf26-11eb72d0886f_del complete
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.929 2 DEBUG nova.compute.manager [req-d4ab781d-64bf-4cb3-a8be-b3bb864ade6c req-fc3c621e-ec52-47b0-b2c0-7349e126ca2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received event network-changed-0627e49d-2462-482a-9f37-38289b11af20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.929 2 DEBUG nova.compute.manager [req-d4ab781d-64bf-4cb3-a8be-b3bb864ade6c req-fc3c621e-ec52-47b0-b2c0-7349e126ca2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Refreshing instance network info cache due to event network-changed-0627e49d-2462-482a-9f37-38289b11af20. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.930 2 DEBUG oslo_concurrency.lockutils [req-d4ab781d-64bf-4cb3-a8be-b3bb864ade6c req-fc3c621e-ec52-47b0-b2c0-7349e126ca2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.980 2 INFO nova.compute.manager [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.981 2 DEBUG oslo.service.loopingcall [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.982 2 DEBUG nova.compute.manager [-] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.982 2 DEBUG nova.network.neutron [-] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:33:55 compute-0 nova_compute[260603]: 2025-10-02 08:33:55.989 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:33:56 compute-0 podman[337055]: 2025-10-02 08:33:56.049962278 +0000 UTC m=+0.106293848 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.159 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.159 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.160 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.160 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.160 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.161 2 INFO nova.compute.manager [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Terminating instance
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.162 2 DEBUG nova.compute.manager [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:33:56 compute-0 kernel: tap37e9c33f-0f (unregistering): left promiscuous mode
Oct 02 08:33:56 compute-0 NetworkManager[45129]: <info>  [1759394036.2324] device (tap37e9c33f-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.266 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Successfully created port: 95f2c075-1a91-4cac-a100-998ebf607a8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:56 compute-0 ovn_controller[152344]: 2025-10-02T08:33:56Z|00749|binding|INFO|Releasing lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 from this chassis (sb_readonly=0)
Oct 02 08:33:56 compute-0 ovn_controller[152344]: 2025-10-02T08:33:56Z|00750|binding|INFO|Setting lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 down in Southbound
Oct 02 08:33:56 compute-0 ovn_controller[152344]: 2025-10-02T08:33:56Z|00751|binding|INFO|Removing iface tap37e9c33f-0f ovn-installed in OVS
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:56 compute-0 ceph-mon[74477]: pgmap v1617: 305 pgs: 305 active+clean; 421 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 999 KiB/s rd, 4.2 MiB/s wr, 246 op/s
Oct 02 08:33:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3486743655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.288 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cc:f7 10.100.0.9'], port_security=['fa:16:3e:19:cc:f7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '49e7e668-b62c-4e35-a4e2-bba540000961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'd09bd0a7-8be4-487a-8b24-ba3d4c0378f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=37e9c33f-0ff9-4138-a7b5-989ba3c016a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.299 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 unbound from our chassis
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.301 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef30d863-af60-49d9-b5d2-5e4f20c70d56, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.302 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13c87b52-0805-4281-a95a-44b6db5a2d8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.304 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56 namespace which is not needed anymore
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:56 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Oct 02 08:33:56 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000003d.scope: Consumed 13.606s CPU time.
Oct 02 08:33:56 compute-0 systemd-machined[214636]: Machine qemu-89-instance-0000003d terminated.
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.398 2 INFO nova.virt.libvirt.driver [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance destroyed successfully.
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.399 2 DEBUG nova.objects.instance [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'resources' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.412 2 DEBUG nova.virt.libvirt.vif [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1525105387',display_name='tempest-ServerActionsTestOtherB-server-1525105387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1525105387',id=61,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmeM0VXrLDuUgkvEcAKZLUawTz1v6B+3eBASOGcaRFlmF3ztxSFLOGPfQ5nMbtxqx6ZDoxMiinSb16iJLQrCBe+IsSQFaYXfW47SOpCqDkfThajOwmApFonqiBjUHNfHQ==',key_name='tempest-keypair-1019763483',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:33:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-fypdkou3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:33:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9020ed38b31d46f88625374b2a76aef6',uuid=49e7e668-b62c-4e35-a4e2-bba540000961,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.412 2 DEBUG nova.network.os_vif_util [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.413 2 DEBUG nova.network.os_vif_util [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.414 2 DEBUG os_vif [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37e9c33f-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.423 2 INFO os_vif [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f')
Oct 02 08:33:56 compute-0 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [NOTICE]   (323194) : haproxy version is 2.8.14-c23fe91
Oct 02 08:33:56 compute-0 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [NOTICE]   (323194) : path to executable is /usr/sbin/haproxy
Oct 02 08:33:56 compute-0 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [WARNING]  (323194) : Exiting Master process...
Oct 02 08:33:56 compute-0 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [ALERT]    (323194) : Current worker (323196) exited with code 143 (Terminated)
Oct 02 08:33:56 compute-0 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [WARNING]  (323194) : All workers exited. Exiting... (0)
Oct 02 08:33:56 compute-0 systemd[1]: libpod-67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d.scope: Deactivated successfully.
Oct 02 08:33:56 compute-0 podman[337108]: 2025-10-02 08:33:56.479269599 +0000 UTC m=+0.054563564 container died 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d-userdata-shm.mount: Deactivated successfully.
Oct 02 08:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f999397e7e358a7668d8bd32b5ebbd495e6797cdde869d93f7da4a88468be70b-merged.mount: Deactivated successfully.
Oct 02 08:33:56 compute-0 podman[337108]: 2025-10-02 08:33:56.53440211 +0000 UTC m=+0.109696065 container cleanup 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:33:56 compute-0 systemd[1]: libpod-conmon-67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d.scope: Deactivated successfully.
Oct 02 08:33:56 compute-0 podman[337153]: 2025-10-02 08:33:56.616721984 +0000 UTC m=+0.053188211 container remove 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.626 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a401286-baf0-4e3c-8b1a-7024d18dbeda]: (4, ('Thu Oct  2 08:33:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56 (67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d)\n67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d\nThu Oct  2 08:33:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56 (67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d)\n67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.629 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b43c0da8-f191-4b49-91b3-90c554284564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.630 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:56 compute-0 kernel: tapef30d863-a0: left promiscuous mode
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.644 2 DEBUG nova.compute.manager [req-1bbfd363-f067-473f-b51a-25e97fa64006 req-4a08b5d7-978a-4ac2-9ec4-a5161a9c40a6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-unplugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.646 2 DEBUG oslo_concurrency.lockutils [req-1bbfd363-f067-473f-b51a-25e97fa64006 req-4a08b5d7-978a-4ac2-9ec4-a5161a9c40a6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.647 2 DEBUG oslo_concurrency.lockutils [req-1bbfd363-f067-473f-b51a-25e97fa64006 req-4a08b5d7-978a-4ac2-9ec4-a5161a9c40a6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.648 2 DEBUG oslo_concurrency.lockutils [req-1bbfd363-f067-473f-b51a-25e97fa64006 req-4a08b5d7-978a-4ac2-9ec4-a5161a9c40a6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.648 2 DEBUG nova.compute.manager [req-1bbfd363-f067-473f-b51a-25e97fa64006 req-4a08b5d7-978a-4ac2-9ec4-a5161a9c40a6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] No waiting events found dispatching network-vif-unplugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.650 2 DEBUG nova.compute.manager [req-1bbfd363-f067-473f-b51a-25e97fa64006 req-4a08b5d7-978a-4ac2-9ec4-a5161a9c40a6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-unplugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.662 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5434ffde-ddcf-488b-b979-927871091494]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.691 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[53e333e4-3909-4c5b-9eda-1367e3a6c1b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.693 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8f101550-4963-43c0-a6ef-87804b3a9ae1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.716 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d812e19-1590-4906-9cac-456936fe65a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474367, 'reachable_time': 22934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337169, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:56 compute-0 systemd[1]: run-netns-ovnmeta\x2def30d863\x2daf60\x2d49d9\x2db5d2\x2d5e4f20c70d56.mount: Deactivated successfully.
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.722 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.723 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:33:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:56.723 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[aea28dfe-0f76-41ad-acd9-8e08e6b34d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.802 2 DEBUG nova.network.neutron [-] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.826 2 INFO nova.compute.manager [-] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Took 0.84 seconds to deallocate network for instance.
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.837 2 INFO nova.virt.libvirt.driver [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deleting instance files /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961_del
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.838 2 INFO nova.virt.libvirt.driver [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deletion of /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961_del complete
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.931 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.932 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.947 2 INFO nova.compute.manager [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.947 2 DEBUG oslo.service.loopingcall [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.948 2 DEBUG nova.compute.manager [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:33:56 compute-0 nova_compute[260603]: 2025-10-02 08:33:56.948 2 DEBUG nova.network.neutron [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.009 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Updating instance_info_cache with network_info: [{"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.039 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Releasing lock "refresh_cache-ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.039 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Instance network_info: |[{"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.043 2 DEBUG oslo_concurrency.lockutils [req-d4ab781d-64bf-4cb3-a8be-b3bb864ade6c req-fc3c621e-ec52-47b0-b2c0-7349e126ca2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.044 2 DEBUG nova.network.neutron [req-d4ab781d-64bf-4cb3-a8be-b3bb864ade6c req-fc3c621e-ec52-47b0-b2c0-7349e126ca2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Refreshing network info cache for port 0627e49d-2462-482a-9f37-38289b11af20 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.048 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Start _get_guest_xml network_info=[{"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.055 2 WARNING nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.063 2 DEBUG nova.virt.libvirt.host [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.064 2 DEBUG nova.virt.libvirt.host [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:33:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 421 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 999 KiB/s rd, 4.2 MiB/s wr, 246 op/s
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.078 2 DEBUG nova.virt.libvirt.host [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.079 2 DEBUG nova.virt.libvirt.host [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.080 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.080 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.081 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.081 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.082 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.082 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.083 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.083 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.084 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.084 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.085 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.085 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.089 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.175 2 DEBUG oslo_concurrency.processutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:33:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 02 08:33:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 02 08:33:57 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.287 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Successfully updated port: 95f2c075-1a91-4cac-a100-998ebf607a8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.306 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "refresh_cache-22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.306 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquired lock "refresh_cache-22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.306 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.463 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:33:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047072522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.586 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606973869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.630 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.635 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.691 2 DEBUG oslo_concurrency.processutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.703 2 DEBUG nova.compute.provider_tree [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.734 2 DEBUG nova.scheduler.client.report [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.765 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.803 2 INFO nova.scheduler.client.report [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Deleted allocations for instance fcd663c5-c20d-477c-bf26-11eb72d0886f
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.891 2 DEBUG oslo_concurrency.lockutils [None req-f72fa43d-54f0-4b20-bc7f-c211e2a9411c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "fcd663c5-c20d-477c-bf26-11eb72d0886f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.925 2 DEBUG nova.network.neutron [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:57 compute-0 nova_compute[260603]: 2025-10-02 08:33:57.954 2 INFO nova.compute.manager [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 1.01 seconds to deallocate network for instance.
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.010 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.011 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.058 2 DEBUG nova.compute.manager [req-f277c03f-0bd3-48fb-a647-3a8b4a726fbd req-3e5648d5-81a2-431f-9c6c-7606301561ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Received event network-vif-deleted-c42b44f6-e8fd-46c9-a075-0952da11cf02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966588109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.125 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.127 2 DEBUG nova.virt.libvirt.vif [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1465134413',display_name='tempest-MultipleCreateTestJSON-server-1465134413-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1465134413-1',id=80,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-qr0dvakd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJS
ON-248615309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:53Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.128 2 DEBUG nova.network.os_vif_util [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.129 2 DEBUG nova.network.os_vif_util [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:06:74,bridge_name='br-int',has_traffic_filtering=True,id=0627e49d-2462-482a-9f37-38289b11af20,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0627e49d-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.132 2 DEBUG nova.objects.instance [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.168 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <uuid>ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55</uuid>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <name>instance-00000050</name>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <nova:name>tempest-MultipleCreateTestJSON-server-1465134413-1</nova:name>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:33:57</nova:creationTime>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <nova:user uuid="6c3e0096dae34ce09545c8c4547dae81">tempest-MultipleCreateTestJSON-248615309-project-member</nova:user>
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <nova:project uuid="e1045858c7b24f1baf184c8469064740">tempest-MultipleCreateTestJSON-248615309</nova:project>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <nova:port uuid="0627e49d-2462-482a-9f37-38289b11af20">
Oct 02 08:33:58 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <system>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <entry name="serial">ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55</entry>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <entry name="uuid">ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55</entry>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </system>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <os>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   </os>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <features>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   </features>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk">
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk.config">
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:58 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:69:06:74"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <target dev="tap0627e49d-24"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55/console.log" append="off"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <video>
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </video>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:33:58 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:33:58 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:33:58 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:33:58 compute-0 nova_compute[260603]: </domain>
Oct 02 08:33:58 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.170 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Preparing to wait for external event network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.171 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.171 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.171 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.173 2 DEBUG nova.virt.libvirt.vif [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1465134413',display_name='tempest-MultipleCreateTestJSON-server-1465134413-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1465134413-1',id=80,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-qr0dvakd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCr
eateTestJSON-248615309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:53Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.173 2 DEBUG nova.network.os_vif_util [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.174 2 DEBUG nova.network.os_vif_util [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:06:74,bridge_name='br-int',has_traffic_filtering=True,id=0627e49d-2462-482a-9f37-38289b11af20,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0627e49d-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.175 2 DEBUG os_vif [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:06:74,bridge_name='br-int',has_traffic_filtering=True,id=0627e49d-2462-482a-9f37-38289b11af20,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0627e49d-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0627e49d-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0627e49d-24, col_values=(('external_ids', {'iface-id': '0627e49d-2462-482a-9f37-38289b11af20', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:06:74', 'vm-uuid': 'ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:58 compute-0 ceph-mon[74477]: pgmap v1618: 305 pgs: 305 active+clean; 421 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 999 KiB/s rd, 4.2 MiB/s wr, 246 op/s
Oct 02 08:33:58 compute-0 ceph-mon[74477]: osdmap e234: 3 total, 3 up, 3 in
Oct 02 08:33:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4047072522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3606973869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3966588109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:58 compute-0 NetworkManager[45129]: <info>  [1759394038.1913] manager: (tap0627e49d-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.198 2 INFO os_vif [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:06:74,bridge_name='br-int',has_traffic_filtering=True,id=0627e49d-2462-482a-9f37-38289b11af20,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0627e49d-24')
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.200 2 DEBUG oslo_concurrency.processutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.251 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394023.2255967, 0b8ef6ea-80ff-48a3-84ff-f057fc293169 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.252 2 INFO nova.compute.manager [-] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] VM Stopped (Lifecycle Event)
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.280 2 DEBUG nova.compute.manager [None req-0de87762-7779-499e-9fb8-bc81c39006b0 - - - - - -] [instance: 0b8ef6ea-80ff-48a3-84ff-f057fc293169] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.345 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.345 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.346 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No VIF found with MAC fa:16:3e:69:06:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.346 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Using config drive
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.382 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.397 2 DEBUG nova.network.neutron [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Updating instance_info_cache with network_info: [{"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.450 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Releasing lock "refresh_cache-22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.450 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Instance network_info: |[{"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.455 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Start _get_guest_xml network_info=[{"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.462 2 WARNING nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.467 2 DEBUG nova.network.neutron [req-d4ab781d-64bf-4cb3-a8be-b3bb864ade6c req-fc3c621e-ec52-47b0-b2c0-7349e126ca2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Updated VIF entry in instance network info cache for port 0627e49d-2462-482a-9f37-38289b11af20. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.468 2 DEBUG nova.network.neutron [req-d4ab781d-64bf-4cb3-a8be-b3bb864ade6c req-fc3c621e-ec52-47b0-b2c0-7349e126ca2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Updating instance_info_cache with network_info: [{"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.470 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394023.4679966, 69c52a3e-2ba6-4468-8498-ceb30011bb4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.470 2 INFO nova.compute.manager [-] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] VM Stopped (Lifecycle Event)
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.473 2 DEBUG nova.virt.libvirt.host [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.474 2 DEBUG nova.virt.libvirt.host [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.479 2 DEBUG nova.virt.libvirt.host [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.480 2 DEBUG nova.virt.libvirt.host [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.480 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.481 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.482 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.482 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.482 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.483 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.483 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.483 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.484 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.484 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.485 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.485 2 DEBUG nova.virt.hardware [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.490 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.546 2 DEBUG nova.compute.manager [None req-df1ea0fc-2a87-4ffe-8f4b-0f78fb37f38f - - - - - -] [instance: 69c52a3e-2ba6-4468-8498-ceb30011bb4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.553 2 DEBUG oslo_concurrency.lockutils [req-d4ab781d-64bf-4cb3-a8be-b3bb864ade6c req-fc3c621e-ec52-47b0-b2c0-7349e126ca2e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:33:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:33:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3725108285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.676 2 DEBUG oslo_concurrency.processutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.684 2 DEBUG nova.compute.provider_tree [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.706 2 DEBUG nova.scheduler.client.report [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.740 2 DEBUG nova.compute.manager [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.740 2 DEBUG oslo_concurrency.lockutils [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.741 2 DEBUG oslo_concurrency.lockutils [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.741 2 DEBUG oslo_concurrency.lockutils [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.742 2 DEBUG nova.compute.manager [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] No waiting events found dispatching network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.742 2 WARNING nova.compute.manager [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received unexpected event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for instance with vm_state deleted and task_state None.
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.743 2 DEBUG nova.compute.manager [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received event network-changed-95f2c075-1a91-4cac-a100-998ebf607a8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.743 2 DEBUG nova.compute.manager [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Refreshing instance network info cache due to event network-changed-95f2c075-1a91-4cac-a100-998ebf607a8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.744 2 DEBUG oslo_concurrency.lockutils [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.744 2 DEBUG oslo_concurrency.lockutils [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.744 2 DEBUG nova.network.neutron [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Refreshing network info cache for port 95f2c075-1a91-4cac-a100-998ebf607a8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.748 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.781 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Creating config drive at /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55/disk.config
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.790 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvkk2209 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.839 2 INFO nova.scheduler.client.report [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Deleted allocations for instance 49e7e668-b62c-4e35-a4e2-bba540000961
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.938 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvkk2209" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.971 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:58 compute-0 nova_compute[260603]: 2025-10-02 08:33:58.976 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55/disk.config ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563515234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:58 compute-0 kernel: tap11350006-4c (unregistering): left promiscuous mode
Oct 02 08:33:58 compute-0 NetworkManager[45129]: <info>  [1759394038.9848] device (tap11350006-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:33:59 compute-0 ovn_controller[152344]: 2025-10-02T08:33:59Z|00752|binding|INFO|Releasing lport 11350006-4c80-4e3a-a271-5230799be1ba from this chassis (sb_readonly=0)
Oct 02 08:33:59 compute-0 ovn_controller[152344]: 2025-10-02T08:33:59Z|00753|binding|INFO|Setting lport 11350006-4c80-4e3a-a271-5230799be1ba down in Southbound
Oct 02 08:33:59 compute-0 ovn_controller[152344]: 2025-10-02T08:33:59Z|00754|binding|INFO|Removing iface tap11350006-4c ovn-installed in OVS
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Oct 02 08:33:59 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004f.scope: Consumed 14.551s CPU time.
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.040 2 DEBUG oslo_concurrency.lockutils [None req-0704638b-63af-4929-9e3e-2faa9f8f8e1c 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:59 compute-0 systemd-machined[214636]: Machine qemu-88-instance-0000004f terminated.
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.045 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 292 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 305 op/s
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.084 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.092 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:93:e4 10.100.0.12'], port_security=['fa:16:3e:68:93:e4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6df0942d-95db-4140-9c7b-b5c51ada92bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=11350006-4c80-4e3a-a271-5230799be1ba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.093 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.093 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 11350006-4c80-4e3a-a271-5230799be1ba in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 unbound from our chassis
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.094 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.095 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc5a926-425a-45fa-a7bf-76c9b26b9749]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 podman[337340]: 2025-10-02 08:33:59.09681983 +0000 UTC m=+0.084014179 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.152 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55/disk.config ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.153 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Deleting local config drive /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55/disk.config because it was imported into RBD.
Oct 02 08:33:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3725108285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:33:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1563515234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:59 compute-0 systemd-udevd[337077]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:33:59 compute-0 NetworkManager[45129]: <info>  [1759394039.2230] manager: (tap0627e49d-24): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Oct 02 08:33:59 compute-0 kernel: tap0627e49d-24: entered promiscuous mode
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 ovn_controller[152344]: 2025-10-02T08:33:59Z|00755|binding|INFO|Claiming lport 0627e49d-2462-482a-9f37-38289b11af20 for this chassis.
Oct 02 08:33:59 compute-0 ovn_controller[152344]: 2025-10-02T08:33:59Z|00756|binding|INFO|0627e49d-2462-482a-9f37-38289b11af20: Claiming fa:16:3e:69:06:74 10.100.0.8
Oct 02 08:33:59 compute-0 NetworkManager[45129]: <info>  [1759394039.2449] device (tap0627e49d-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:33:59 compute-0 NetworkManager[45129]: <info>  [1759394039.2503] device (tap0627e49d-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.254 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:06:74 10.100.0.8'], port_security=['fa:16:3e:69:06:74 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2ef776-a59b-4369-95f5-69103d78f3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1045858c7b24f1baf184c8469064740', 'neutron:revision_number': '2', 'neutron:security_group_ids': '524655b6-f9e6-47ce-bebb-b7967ffdd769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=798444a3-a368-47d6-a395-3d1a0bcb7c88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0627e49d-2462-482a-9f37-38289b11af20) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.255 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0627e49d-2462-482a-9f37-38289b11af20 in datapath fe2ef776-a59b-4369-95f5-69103d78f3da bound to our chassis
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.257 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.278 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1347598c-53d6-431b-a055-930000ec96b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.278 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfe2ef776-a1 in ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:33:59 compute-0 systemd-machined[214636]: New machine qemu-90-instance-00000050.
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.280 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfe2ef776-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.280 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7844c679-bf8e-4788-8e48-302640ea2056]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.282 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51d0a3e9-ba0e-4c68-a593-abb52f2649b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_controller[152344]: 2025-10-02T08:33:59Z|00757|binding|INFO|Setting lport 0627e49d-2462-482a-9f37-38289b11af20 ovn-installed in OVS
Oct 02 08:33:59 compute-0 ovn_controller[152344]: 2025-10-02T08:33:59Z|00758|binding|INFO|Setting lport 0627e49d-2462-482a-9f37-38289b11af20 up in Southbound
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-00000050.
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.307 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee5f0dd-de87-45b7-80ec-de724cec3541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.337 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[265e58ac-dad0-47a8-b6cf-19f1ce79fd51]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.372 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2568426f-1355-422c-a552-4ba36039cebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.383 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1fccb404-345f-4fab-bff1-ae0d14851cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 NetworkManager[45129]: <info>  [1759394039.3855] manager: (tapfe2ef776-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/314)
Oct 02 08:33:59 compute-0 systemd-udevd[337452]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.426 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[111e402e-64a8-4fe1-be62-6ef1b9dc42ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.429 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c1510cef-35b9-4947-a12e-1daaacd1178e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 NetworkManager[45129]: <info>  [1759394039.4573] device (tapfe2ef776-a0): carrier: link connected
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.462 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[28f77e6a-90d3-4bf2-ab62-23ce05bfae95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[82082e72-465a-4a90-9090-e9499fe9052d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe2ef776-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:01:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491806, 'reachable_time': 26415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337486, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.503 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f321ac4b-7c48-4dd7-ae0d-8b89322d1cb9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:19e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491806, 'tstamp': 491806}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337487, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.530 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1557aa-e45f-47cb-8389-68d0f3c10eca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe2ef776-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:01:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491806, 'reachable_time': 26415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337488, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:33:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967696976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.583 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[928942d8-cfb2-4ade-b5ef-720c1b7180b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.593 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.595 2 DEBUG nova.virt.libvirt.vif [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1465134413',display_name='tempest-MultipleCreateTestJSON-server-1465134413-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1465134413-2',id=81,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-qr0dvakd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJS
ON-248615309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:54Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.596 2 DEBUG nova.network.os_vif_util [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.598 2 DEBUG nova.network.os_vif_util [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=95f2c075-1a91-4cac-a100-998ebf607a8d,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95f2c075-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.601 2 DEBUG nova.objects.instance [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'pci_devices' on Instance uuid 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.632 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <uuid>22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc</uuid>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <name>instance-00000051</name>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <nova:name>tempest-MultipleCreateTestJSON-server-1465134413-2</nova:name>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:33:58</nova:creationTime>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <nova:user uuid="6c3e0096dae34ce09545c8c4547dae81">tempest-MultipleCreateTestJSON-248615309-project-member</nova:user>
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <nova:project uuid="e1045858c7b24f1baf184c8469064740">tempest-MultipleCreateTestJSON-248615309</nova:project>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <nova:port uuid="95f2c075-1a91-4cac-a100-998ebf607a8d">
Oct 02 08:33:59 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <system>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <entry name="serial">22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc</entry>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <entry name="uuid">22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc</entry>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </system>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <os>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   </os>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <features>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   </features>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk">
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk.config">
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       </source>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:33:59 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:ae:c5:30"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <target dev="tap95f2c075-1a"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc/console.log" append="off"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <video>
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </video>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:33:59 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:33:59 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:33:59 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:33:59 compute-0 nova_compute[260603]: </domain>
Oct 02 08:33:59 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.640 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Preparing to wait for external event network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.640 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.641 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.641 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.642 2 DEBUG nova.virt.libvirt.vif [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1465134413',display_name='tempest-MultipleCreateTestJSON-server-1465134413-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1465134413-2',id=81,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-qr0dvakd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJSON-248615309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:54Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.643 2 DEBUG nova.network.os_vif_util [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.643 2 DEBUG nova.network.os_vif_util [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=95f2c075-1a91-4cac-a100-998ebf607a8d,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95f2c075-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.644 2 DEBUG os_vif [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=95f2c075-1a91-4cac-a100-998ebf607a8d,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95f2c075-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95f2c075-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95f2c075-1a, col_values=(('external_ids', {'iface-id': '95f2c075-1a91-4cac-a100-998ebf607a8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:c5:30', 'vm-uuid': '22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 NetworkManager[45129]: <info>  [1759394039.6538] manager: (tap95f2c075-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.663 2 INFO os_vif [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=95f2c075-1a91-4cac-a100-998ebf607a8d,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95f2c075-1a')
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7a94c2-7cb5-4773-9fe2-1ac410e0f381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 kernel: tapfe2ef776-a0: entered promiscuous mode
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.666 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe2ef776-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.667 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.667 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe2ef776-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:59 compute-0 NetworkManager[45129]: <info>  [1759394039.6701] manager: (tapfe2ef776-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.713 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe2ef776-a0, col_values=(('external_ids', {'iface-id': '3752b866-0aac-4a57-acbb-4c574bfc2b06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 ovn_controller[152344]: 2025-10-02T08:33:59Z|00759|binding|INFO|Releasing lport 3752b866-0aac-4a57-acbb-4c574bfc2b06 from this chassis (sb_readonly=0)
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.753 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.754 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.755 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] No VIF found with MAC fa:16:3e:ae:c5:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.756 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Using config drive
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.764 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fe2ef776-a59b-4369-95f5-69103d78f3da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fe2ef776-a59b-4369-95f5-69103d78f3da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.765 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e2674d89-0494-4105-a24e-771f8d167f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.766 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/fe2ef776-a59b-4369-95f5-69103d78f3da.pid.haproxy
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.767 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'env', 'PROCESS_TAG=haproxy-fe2ef776-a59b-4369-95f5-69103d78f3da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fe2ef776-a59b-4369-95f5-69103d78f3da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.800 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.809 2 INFO nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance shutdown successfully after 13 seconds.
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.820 2 INFO nova.virt.libvirt.driver [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance destroyed successfully.
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.820 2 DEBUG nova.objects.instance [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'numa_topology' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:33:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:33:59.832 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.855 2 INFO nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Attempting rescue
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.858 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.863 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.864 2 INFO nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Creating image(s)
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.894 2 DEBUG nova.storage.rbd_utils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.908 2 DEBUG nova.objects.instance [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.957 2 DEBUG nova.storage.rbd_utils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:33:59 compute-0 nova_compute[260603]: 2025-10-02 08:33:59.998 2 DEBUG nova.storage.rbd_utils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.003 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.078 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.079 2 DEBUG oslo_concurrency.lockutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.080 2 DEBUG oslo_concurrency.lockutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.081 2 DEBUG oslo_concurrency.lockutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.106 2 DEBUG nova.storage.rbd_utils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.110 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.179 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.181 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.182 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.183 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.184 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.187 2 INFO nova.compute.manager [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Terminating instance
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.189 2 DEBUG nova.compute.manager [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:34:00 compute-0 ceph-mon[74477]: pgmap v1620: 305 pgs: 305 active+clean; 292 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 305 op/s
Oct 02 08:34:00 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1967696976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:00 compute-0 kernel: tap2fbf8f14-9d (unregistering): left promiscuous mode
Oct 02 08:34:00 compute-0 NetworkManager[45129]: <info>  [1759394040.2501] device (tap2fbf8f14-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:34:00 compute-0 ovn_controller[152344]: 2025-10-02T08:34:00Z|00760|binding|INFO|Releasing lport 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 from this chassis (sb_readonly=0)
Oct 02 08:34:00 compute-0 ovn_controller[152344]: 2025-10-02T08:34:00Z|00761|binding|INFO|Setting lport 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 down in Southbound
Oct 02 08:34:00 compute-0 ovn_controller[152344]: 2025-10-02T08:34:00Z|00762|binding|INFO|Removing iface tap2fbf8f14-9d ovn-installed in OVS
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:00 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Deactivated successfully.
Oct 02 08:34:00 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Consumed 18.905s CPU time.
Oct 02 08:34:00 compute-0 systemd-machined[214636]: Machine qemu-71-instance-00000040 terminated.
Oct 02 08:34:00 compute-0 podman[337688]: 2025-10-02 08:34:00.342876773 +0000 UTC m=+0.095013828 container create 31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.376 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.377 2 DEBUG nova.objects.instance [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'migration_context' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:00 compute-0 systemd[1]: Started libpod-conmon-31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c.scope.
Oct 02 08:34:00 compute-0 podman[337688]: 2025-10-02 08:34:00.294334307 +0000 UTC m=+0.046471392 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:34:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56159d18c0c6ddb61954cdc9423e275b4f29138854e68b16139087c0bd178e41/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:00 compute-0 podman[337688]: 2025-10-02 08:34:00.414576638 +0000 UTC m=+0.166713713 container init 31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 08:34:00 compute-0 podman[337688]: 2025-10-02 08:34:00.420867354 +0000 UTC m=+0.173004409 container start 31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.437 2 INFO nova.virt.libvirt.driver [-] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance destroyed successfully.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.437 2 DEBUG nova.objects.instance [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'resources' on Instance uuid 7ac34b0c-8ced-417d-9442-8fda77804a34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:00 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[337710]: [NOTICE]   (337724) : New worker (337729) forked
Oct 02 08:34:00 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[337710]: [NOTICE]   (337724) : Loading success.
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.461 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:fa:94 10.100.0.3'], port_security=['fa:16:3e:23:fa:94 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7ac34b0c-8ced-417d-9442-8fda77804a34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.467 2 DEBUG nova.virt.libvirt.vif [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:31:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1230108710',display_name='tempest-₡-1230108710',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1230108710',id=64,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-djhvanl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:25Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=7ac34b0c-8ced-417d-9442-8fda77804a34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.468 2 DEBUG nova.network.os_vif_util [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.468 2 DEBUG nova.network.os_vif_util [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.468 2 DEBUG os_vif [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.470 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.470 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Start _get_guest_xml network_info=[{"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2021168771-network", "vif_mac": "fa:16:3e:68:93:e4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.471 2 DEBUG nova.objects.instance [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'resources' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fbf8f14-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.475 2 DEBUG nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-unplugged-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.476 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.476 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.476 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.476 2 DEBUG nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] No waiting events found dispatching network-vif-unplugged-11350006-4c80-4e3a-a271-5230799be1ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.476 2 WARNING nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received unexpected event network-vif-unplugged-11350006-4c80-4e3a-a271-5230799be1ba for instance with vm_state active and task_state rescuing.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.476 2 DEBUG nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.476 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.476 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.477 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.477 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 unbound from our chassis
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.477 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.477 2 DEBUG nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] No waiting events found dispatching network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.477 2 WARNING nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received unexpected event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba for instance with vm_state active and task_state rescuing.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.477 2 DEBUG nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received event network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.478 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.478 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.478 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.478 2 DEBUG nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Processing event network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.478 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e3507cf-e1b2-456e-8cff-b075c2a55621, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.478 2 DEBUG nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received event network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.478 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.478 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.479 2 DEBUG oslo_concurrency.lockutils [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.479 2 DEBUG nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] No waiting events found dispatching network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.479 2 WARNING nova.compute.manager [req-1a8feb71-3e83-49a7-8339-657765b09b43 req-40f0670b-f0f0-4e59-9fd9-192cc5f31870 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received unexpected event network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 for instance with vm_state building and task_state spawning.
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.479 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[959b5541-0491-4552-91a2-97deccbc05d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.479 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621 namespace which is not needed anymore
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.483 2 INFO os_vif [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d')
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.504 2 WARNING nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.509 2 DEBUG nova.virt.libvirt.host [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.510 2 DEBUG nova.virt.libvirt.host [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.514 2 DEBUG nova.virt.libvirt.host [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.514 2 DEBUG nova.virt.libvirt.host [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.515 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.515 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.516 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.516 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.516 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.516 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.516 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.517 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.517 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.517 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.517 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.517 2 DEBUG nova.virt.hardware [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.518 2 DEBUG nova.objects.instance [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.542 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.579 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Creating config drive at /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc/disk.config
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.585 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpki9np_gr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:00 compute-0 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [NOTICE]   (324817) : haproxy version is 2.8.14-c23fe91
Oct 02 08:34:00 compute-0 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [NOTICE]   (324817) : path to executable is /usr/sbin/haproxy
Oct 02 08:34:00 compute-0 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [WARNING]  (324817) : Exiting Master process...
Oct 02 08:34:00 compute-0 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [ALERT]    (324817) : Current worker (324819) exited with code 143 (Terminated)
Oct 02 08:34:00 compute-0 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [WARNING]  (324817) : All workers exited. Exiting... (0)
Oct 02 08:34:00 compute-0 systemd[1]: libpod-1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34.scope: Deactivated successfully.
Oct 02 08:34:00 compute-0 podman[337773]: 2025-10-02 08:34:00.611315853 +0000 UTC m=+0.044587285 container died 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.625 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394040.5417385, ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.627 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] VM Started (Lifecycle Event)
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.634 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34-userdata-shm.mount: Deactivated successfully.
Oct 02 08:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d61431590530a1e10271e0cde98940632ec3d957e216da4e4fc4080e4734f710-merged.mount: Deactivated successfully.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.651 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.656 2 INFO nova.virt.libvirt.driver [-] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Instance spawned successfully.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.656 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.660 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:00 compute-0 podman[337773]: 2025-10-02 08:34:00.663845373 +0000 UTC m=+0.097116775 container cleanup 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.664 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:00 compute-0 systemd[1]: libpod-conmon-1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34.scope: Deactivated successfully.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.689 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.689 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394040.5418518, ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.690 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] VM Paused (Lifecycle Event)
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.698 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.699 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.700 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.700 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.701 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.701 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.706 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.710 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394040.6395354, ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.710 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] VM Resumed (Lifecycle Event)
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.713 2 DEBUG nova.network.neutron [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Updated VIF entry in instance network info cache for port 95f2c075-1a91-4cac-a100-998ebf607a8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.714 2 DEBUG nova.network.neutron [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Updating instance_info_cache with network_info: [{"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.731 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpki9np_gr" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.753 2 DEBUG nova.storage.rbd_utils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] rbd image 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.756 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc/disk.config 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:00 compute-0 podman[337812]: 2025-10-02 08:34:00.762343469 +0000 UTC m=+0.053291844 container remove 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.767 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13a5a430-1404-412f-9e3f-f2bf60b1f5d6]: (4, ('Thu Oct  2 08:34:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621 (1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34)\n1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34\nThu Oct  2 08:34:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621 (1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34)\n1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.769 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce65a040-65d5-41a0-a1bf-145dde778738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.770 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:00 compute-0 kernel: tap1e3507cf-e0: left promiscuous mode
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.808 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.808 2 DEBUG oslo_concurrency.lockutils [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.809 2 DEBUG nova.compute.manager [req-de15a5ba-8089-450d-ad07-1b9de528fa40 req-59a60e3a-480f-4ad1-ac66-d922e5952dff 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-deleted-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.811 2 INFO nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Took 6.85 seconds to spawn the instance on the hypervisor.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.811 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.814 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.827 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[426731c4-2c56-415c-a3ec-27e30d688a03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.841 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.849 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00954601-236f-48f8-9076-2d847c89f2af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.850 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60c4351f-56dd-45a0-bbde-8425d87b00c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.867 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c50ef0b-d462-44b8-adbc-87b2cfb06d48]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476261, 'reachable_time': 31586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337868, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d1e3507cf\x2de1b2\x2d456e\x2d8cff\x2db075c2a55621.mount: Deactivated successfully.
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.873 2 INFO nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Took 7.91 seconds to build instance.
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.873 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:34:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:00.874 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb84cd8-4af0-4c09-8d54-95d8ef161826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.895 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.955 2 DEBUG oslo_concurrency.processutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc/disk.config 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:00 compute-0 nova_compute[260603]: 2025-10-02 08:34:00.956 2 INFO nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Deleting local config drive /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc/disk.config because it was imported into RBD.
Oct 02 08:34:01 compute-0 NetworkManager[45129]: <info>  [1759394041.0123] manager: (tap95f2c075-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Oct 02 08:34:01 compute-0 kernel: tap95f2c075-1a: entered promiscuous mode
Oct 02 08:34:01 compute-0 ovn_controller[152344]: 2025-10-02T08:34:01Z|00763|binding|INFO|Claiming lport 95f2c075-1a91-4cac-a100-998ebf607a8d for this chassis.
Oct 02 08:34:01 compute-0 ovn_controller[152344]: 2025-10-02T08:34:01Z|00764|binding|INFO|95f2c075-1a91-4cac-a100-998ebf607a8d: Claiming fa:16:3e:ae:c5:30 10.100.0.9
Oct 02 08:34:01 compute-0 NetworkManager[45129]: <info>  [1759394041.0286] device (tap95f2c075-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:34:01 compute-0 NetworkManager[45129]: <info>  [1759394041.0295] device (tap95f2c075-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.032 2 INFO nova.virt.libvirt.driver [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Deleting instance files /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34_del
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.033 2 INFO nova.virt.libvirt.driver [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Deletion of /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34_del complete
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.038 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:c5:30 10.100.0.9'], port_security=['fa:16:3e:ae:c5:30 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2ef776-a59b-4369-95f5-69103d78f3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1045858c7b24f1baf184c8469064740', 'neutron:revision_number': '2', 'neutron:security_group_ids': '524655b6-f9e6-47ce-bebb-b7967ffdd769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=798444a3-a368-47d6-a395-3d1a0bcb7c88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=95f2c075-1a91-4cac-a100-998ebf607a8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.039 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 95f2c075-1a91-4cac-a100-998ebf607a8d in datapath fe2ef776-a59b-4369-95f5-69103d78f3da bound to our chassis
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.041 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:34:01 compute-0 ovn_controller[152344]: 2025-10-02T08:34:01Z|00765|binding|INFO|Setting lport 95f2c075-1a91-4cac-a100-998ebf607a8d ovn-installed in OVS
Oct 02 08:34:01 compute-0 ovn_controller[152344]: 2025-10-02T08:34:01Z|00766|binding|INFO|Setting lport 95f2c075-1a91-4cac-a100-998ebf607a8d up in Southbound
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:01 compute-0 systemd-machined[214636]: New machine qemu-91-instance-00000051.
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.055 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe8b2c8-6f9b-4821-87a6-897bef26341c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:01 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-00000051.
Oct 02 08:34:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 292 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 305 op/s
Oct 02 08:34:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842633687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.086 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[27e73478-8a4b-467a-ac82-500b0406eac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.090 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2886f4-a66b-47d2-8765-14cd42035e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.109 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.111 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.115 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[97c1189c-61a4-4437-be6b-cf837deb1357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.130 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52cd6a57-3aeb-45b6-8012-e363e83bd045]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe2ef776-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:01:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491806, 'reachable_time': 26415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337909, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.150 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[335e2097-1936-490a-8cab-9331dd9d8342]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe2ef776-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491821, 'tstamp': 491821}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337912, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe2ef776-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491825, 'tstamp': 491825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337912, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.152 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe2ef776-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.160 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe2ef776-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.159 2 INFO nova.compute.manager [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Took 0.97 seconds to destroy the instance on the hypervisor.
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.161 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.161 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe2ef776-a0, col_values=(('external_ids', {'iface-id': '3752b866-0aac-4a57-acbb-4c574bfc2b06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.161 2 DEBUG oslo.service.loopingcall [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:34:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:01.162 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.162 2 DEBUG nova.compute.manager [-] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.162 2 DEBUG nova.network.neutron [-] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:34:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3842633687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122878349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.608 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.609 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.825 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394041.8240168, 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.825 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] VM Started (Lifecycle Event)
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.851 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.856 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394041.828237, 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.856 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] VM Paused (Lifecycle Event)
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.882 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.887 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:01 compute-0 nova_compute[260603]: 2025-10-02 08:34:01.914 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:02 compute-0 sudo[337999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:02 compute-0 sudo[337999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:02 compute-0 sudo[337999]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196411300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.140 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.141 2 DEBUG nova.virt.libvirt.vif [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:33:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1363390342',display_name='tempest-ServerRescueTestJSON-server-1363390342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1363390342',id=79,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:33:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='629efd330be646b7a2941e0c83b86e0e',ramdisk_id='',reservation_id='r-gcsl0g66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-846559261',owner_user_name='tempest-ServerRescueTestJSON-846559261-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:33:40Z,user_data=None,user_id='d2e113d3d74a43998ac8dbf246ae9095',uuid=6df0942d-95db-4140-9c7b-b5c51ada92bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2021168771-network", "vif_mac": "fa:16:3e:68:93:e4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.142 2 DEBUG nova.network.os_vif_util [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converting VIF {"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2021168771-network", "vif_mac": "fa:16:3e:68:93:e4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.142 2 DEBUG nova.network.os_vif_util [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:93:e4,bridge_name='br-int',has_traffic_filtering=True,id=11350006-4c80-4e3a-a271-5230799be1ba,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11350006-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.143 2 DEBUG nova.objects.instance [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:02 compute-0 sudo[338024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:34:02 compute-0 sudo[338024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:02 compute-0 sudo[338024]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.186 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <uuid>6df0942d-95db-4140-9c7b-b5c51ada92bd</uuid>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <name>instance-0000004f</name>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueTestJSON-server-1363390342</nova:name>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:34:00</nova:creationTime>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <nova:user uuid="d2e113d3d74a43998ac8dbf246ae9095">tempest-ServerRescueTestJSON-846559261-project-member</nova:user>
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <nova:project uuid="629efd330be646b7a2941e0c83b86e0e">tempest-ServerRescueTestJSON-846559261</nova:project>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <nova:port uuid="11350006-4c80-4e3a-a271-5230799be1ba">
Oct 02 08:34:02 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <system>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <entry name="serial">6df0942d-95db-4140-9c7b-b5c51ada92bd</entry>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <entry name="uuid">6df0942d-95db-4140-9c7b-b5c51ada92bd</entry>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </system>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <os>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   </os>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <features>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   </features>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.rescue">
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6df0942d-95db-4140-9c7b-b5c51ada92bd_disk">
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <target dev="vdb" bus="virtio"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config.rescue">
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:02 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:68:93:e4"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <target dev="tap11350006-4c"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/console.log" append="off"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <video>
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </video>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:34:02 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:34:02 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:34:02 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:34:02 compute-0 nova_compute[260603]: </domain>
Oct 02 08:34:02 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.195 2 INFO nova.virt.libvirt.driver [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance destroyed successfully.
Oct 02 08:34:02 compute-0 ceph-mon[74477]: pgmap v1621: 305 pgs: 305 active+clean; 292 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 305 op/s
Oct 02 08:34:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/122878349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2196411300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:02 compute-0 sudo[338051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:02 compute-0 sudo[338051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:02 compute-0 sudo[338051]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.262 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.262 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.262 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.262 2 DEBUG nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No VIF found with MAC fa:16:3e:68:93:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.263 2 INFO nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Using config drive
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.284 2 DEBUG nova.storage.rbd_utils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.310 2 DEBUG nova.objects.instance [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:02 compute-0 sudo[338077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:34:02 compute-0 sudo[338077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.338 2 DEBUG nova.network.neutron [-] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.359 2 DEBUG nova.objects.instance [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'keypairs' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.363 2 INFO nova.compute.manager [-] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Took 1.20 seconds to deallocate network for instance.
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.416 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.417 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.547 2 DEBUG oslo_concurrency.processutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.604 2 DEBUG nova.compute.manager [req-82f6138a-1546-4038-91b2-f84db907c088 req-6db29387-0043-49b6-9b67-c56134980d59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-vif-deleted-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.684 2 DEBUG nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-vif-unplugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.687 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.687 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.687 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.687 2 DEBUG nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] No waiting events found dispatching network-vif-unplugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.687 2 WARNING nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received unexpected event network-vif-unplugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 for instance with vm_state deleted and task_state None.
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.687 2 DEBUG nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.688 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.688 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.688 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.688 2 DEBUG nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] No waiting events found dispatching network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.688 2 WARNING nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received unexpected event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 for instance with vm_state deleted and task_state None.
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.688 2 DEBUG nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received event network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.688 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.689 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.689 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.689 2 DEBUG nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Processing event network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.689 2 DEBUG nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received event network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.689 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.689 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.690 2 DEBUG oslo_concurrency.lockutils [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.690 2 DEBUG nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] No waiting events found dispatching network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.690 2 WARNING nova.compute.manager [req-199d5c92-5b24-427c-be12-e270506eac56 req-d550a981-bafc-4156-80e1-f52f3614362e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received unexpected event network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d for instance with vm_state building and task_state spawning.
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.690 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.696 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394042.6961744, 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.696 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] VM Resumed (Lifecycle Event)
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.698 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.701 2 INFO nova.virt.libvirt.driver [-] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Instance spawned successfully.
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.701 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.732 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.744 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.748 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.749 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.749 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.749 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.750 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.750 2 DEBUG nova.virt.libvirt.driver [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.791 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.825 2 INFO nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Took 7.72 seconds to spawn the instance on the hypervisor.
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.825 2 DEBUG nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.828 2 INFO nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Creating config drive at /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config.rescue
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.832 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqjcb6h8f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:02 compute-0 sudo[338077]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:34:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:34:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:34:02 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:34:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:34:02 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:34:02 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f8b9211d-449d-44dd-ba47-31d4e75fdec5 does not exist
Oct 02 08:34:02 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9ce4c498-d245-425e-92af-fd6b4a4788fc does not exist
Oct 02 08:34:02 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1a8afc50-cfbc-43c6-bcba-4e76dab1e6d5 does not exist
Oct 02 08:34:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:34:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:34:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:34:02 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:34:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:34:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.964 2 INFO nova.compute.manager [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Took 9.96 seconds to build instance.
Oct 02 08:34:02 compute-0 nova_compute[260603]: 2025-10-02 08:34:02.980 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqjcb6h8f" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:02 compute-0 sudo[338175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:02 compute-0 sudo[338175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:02 compute-0 sudo[338175]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.018 2 DEBUG nova.storage.rbd_utils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544196210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.027 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config.rescue 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:03 compute-0 sudo[338201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:34:03 compute-0 sudo[338201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:03 compute-0 sudo[338201]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.068 2 DEBUG oslo_concurrency.lockutils [None req-69fb7a70-c2af-46ef-a164-9cea76dd334c 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.069 2 DEBUG oslo_concurrency.processutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 260 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 354 op/s
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.080 2 DEBUG nova.compute.provider_tree [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:03 compute-0 sudo[338246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:03 compute-0 sudo[338246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:03 compute-0 sudo[338246]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.098 2 DEBUG nova.scheduler.client.report [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.128 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:03 compute-0 sudo[338278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:34:03 compute-0 sudo[338278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.175 2 INFO nova.scheduler.client.report [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Deleted allocations for instance 7ac34b0c-8ced-417d-9442-8fda77804a34
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.183 2 DEBUG oslo_concurrency.processutils [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config.rescue 6df0942d-95db-4140-9c7b-b5c51ada92bd_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.183 2 INFO nova.virt.libvirt.driver [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Deleting local config drive /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd/disk.config.rescue because it was imported into RBD.
Oct 02 08:34:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:34:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:34:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:34:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:34:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:34:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:34:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/544196210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:03 compute-0 kernel: tap11350006-4c: entered promiscuous mode
Oct 02 08:34:03 compute-0 NetworkManager[45129]: <info>  [1759394043.2276] manager: (tap11350006-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Oct 02 08:34:03 compute-0 ovn_controller[152344]: 2025-10-02T08:34:03Z|00767|binding|INFO|Claiming lport 11350006-4c80-4e3a-a271-5230799be1ba for this chassis.
Oct 02 08:34:03 compute-0 ovn_controller[152344]: 2025-10-02T08:34:03Z|00768|binding|INFO|11350006-4c80-4e3a-a271-5230799be1ba: Claiming fa:16:3e:68:93:e4 10.100.0.12
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:03.239 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:93:e4 10.100.0.12'], port_security=['fa:16:3e:68:93:e4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6df0942d-95db-4140-9c7b-b5c51ada92bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=11350006-4c80-4e3a-a271-5230799be1ba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:03.240 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 11350006-4c80-4e3a-a271-5230799be1ba in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 bound to our chassis
Oct 02 08:34:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:03.240 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:34:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:03.241 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7b696d-0fa8-4aa9-92a5-73ce8a9fd951]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.256 2 DEBUG oslo_concurrency.lockutils [None req-4cdec722-54dd-453a-9fc0-503a03c430f8 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:03 compute-0 ovn_controller[152344]: 2025-10-02T08:34:03Z|00769|binding|INFO|Setting lport 11350006-4c80-4e3a-a271-5230799be1ba ovn-installed in OVS
Oct 02 08:34:03 compute-0 ovn_controller[152344]: 2025-10-02T08:34:03Z|00770|binding|INFO|Setting lport 11350006-4c80-4e3a-a271-5230799be1ba up in Southbound
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:03 compute-0 systemd-machined[214636]: New machine qemu-92-instance-0000004f.
Oct 02 08:34:03 compute-0 systemd-udevd[338328]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:34:03 compute-0 NetworkManager[45129]: <info>  [1759394043.2803] device (tap11350006-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:34:03 compute-0 NetworkManager[45129]: <info>  [1759394043.2810] device (tap11350006-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:34:03 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-0000004f.
Oct 02 08:34:03 compute-0 nova_compute[260603]: 2025-10-02 08:34:03.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:03.478 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:03 compute-0 podman[338390]: 2025-10-02 08:34:03.5355666 +0000 UTC m=+0.058147325 container create 4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:34:03 compute-0 systemd[1]: Started libpod-conmon-4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca.scope.
Oct 02 08:34:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:03 compute-0 podman[338390]: 2025-10-02 08:34:03.604349654 +0000 UTC m=+0.126930399 container init 4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:34:03 compute-0 podman[338390]: 2025-10-02 08:34:03.512624558 +0000 UTC m=+0.035205333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:34:03 compute-0 podman[338390]: 2025-10-02 08:34:03.614657014 +0000 UTC m=+0.137237739 container start 4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:34:03 compute-0 podman[338390]: 2025-10-02 08:34:03.618171983 +0000 UTC m=+0.140752758 container attach 4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bohr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:34:03 compute-0 systemd[1]: libpod-4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca.scope: Deactivated successfully.
Oct 02 08:34:03 compute-0 cool_bohr[338447]: 167 167
Oct 02 08:34:03 compute-0 conmon[338447]: conmon 4ba9eac4004b70f10dd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca.scope/container/memory.events
Oct 02 08:34:03 compute-0 podman[338390]: 2025-10-02 08:34:03.625830111 +0000 UTC m=+0.148410836 container died 4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7486f1af5c1d6895e61724c5d88d7835ec6d3701ce12bc94ec6ac14583428527-merged.mount: Deactivated successfully.
Oct 02 08:34:03 compute-0 podman[338390]: 2025-10-02 08:34:03.666571955 +0000 UTC m=+0.189152700 container remove 4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 08:34:03 compute-0 systemd[1]: libpod-conmon-4ba9eac4004b70f10dd8164cf7c2d14aaa2a4505ea4dd162b0b72139a34286ca.scope: Deactivated successfully.
Oct 02 08:34:03 compute-0 podman[338473]: 2025-10-02 08:34:03.855197038 +0000 UTC m=+0.052580923 container create 7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:34:03 compute-0 systemd[1]: Started libpod-conmon-7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2.scope.
Oct 02 08:34:03 compute-0 podman[338473]: 2025-10-02 08:34:03.829833201 +0000 UTC m=+0.027217116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:34:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a241c8323649aa9354f2dff3939f0632bf0058937b26dc39fc96f687d15096/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a241c8323649aa9354f2dff3939f0632bf0058937b26dc39fc96f687d15096/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a241c8323649aa9354f2dff3939f0632bf0058937b26dc39fc96f687d15096/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a241c8323649aa9354f2dff3939f0632bf0058937b26dc39fc96f687d15096/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a241c8323649aa9354f2dff3939f0632bf0058937b26dc39fc96f687d15096/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:03 compute-0 podman[338473]: 2025-10-02 08:34:03.962817967 +0000 UTC m=+0.160201872 container init 7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leakey, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 08:34:03 compute-0 podman[338473]: 2025-10-02 08:34:03.971660262 +0000 UTC m=+0.169044157 container start 7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 08:34:03 compute-0 podman[338473]: 2025-10-02 08:34:03.974955924 +0000 UTC m=+0.172339809 container attach 7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.129 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 6df0942d-95db-4140-9c7b-b5c51ada92bd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.131 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394044.1294978, 6df0942d-95db-4140-9c7b-b5c51ada92bd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.131 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] VM Resumed (Lifecycle Event)
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.134 2 DEBUG nova.compute.manager [None req-6da16d76-86d4-4272-8fd2-34a133a1624e d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.164 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.167 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.191 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394044.1332152, 6df0942d-95db-4140-9c7b-b5c51ada92bd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.191 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] VM Started (Lifecycle Event)
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.210 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.213 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.791 2 DEBUG nova.compute.manager [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.793 2 DEBUG oslo_concurrency.lockutils [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.793 2 DEBUG oslo_concurrency.lockutils [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.793 2 DEBUG oslo_concurrency.lockutils [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.794 2 DEBUG nova.compute.manager [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] No waiting events found dispatching network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.794 2 WARNING nova.compute.manager [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received unexpected event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba for instance with vm_state rescued and task_state None.
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.794 2 DEBUG nova.compute.manager [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.795 2 DEBUG oslo_concurrency.lockutils [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.795 2 DEBUG oslo_concurrency.lockutils [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.795 2 DEBUG oslo_concurrency.lockutils [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.796 2 DEBUG nova.compute.manager [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] No waiting events found dispatching network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.796 2 WARNING nova.compute.manager [req-a3d7c6b5-bd8c-4d29-9ba8-05eea3c42998 req-3f4e26d4-c109-4260-aca7-2a341cf71786 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received unexpected event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba for instance with vm_state rescued and task_state None.
Oct 02 08:34:04 compute-0 ovn_controller[152344]: 2025-10-02T08:34:04Z|00771|binding|INFO|Releasing lport 3752b866-0aac-4a57-acbb-4c574bfc2b06 from this chassis (sb_readonly=0)
Oct 02 08:34:04 compute-0 nova_compute[260603]: 2025-10-02 08:34:04.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:04 compute-0 ceph-mon[74477]: pgmap v1622: 305 pgs: 305 active+clean; 260 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 354 op/s
Oct 02 08:34:05 compute-0 ovn_controller[152344]: 2025-10-02T08:34:05Z|00772|binding|INFO|Releasing lport 3752b866-0aac-4a57-acbb-4c574bfc2b06 from this chassis (sb_readonly=0)
Oct 02 08:34:05 compute-0 nova_compute[260603]: 2025-10-02 08:34:05.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 260 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 354 op/s
Oct 02 08:34:05 compute-0 trusting_leakey[338489]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:34:05 compute-0 trusting_leakey[338489]: --> relative data size: 1.0
Oct 02 08:34:05 compute-0 trusting_leakey[338489]: --> All data devices are unavailable
Oct 02 08:34:05 compute-0 systemd[1]: libpod-7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2.scope: Deactivated successfully.
Oct 02 08:34:05 compute-0 systemd[1]: libpod-7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2.scope: Consumed 1.020s CPU time.
Oct 02 08:34:05 compute-0 conmon[338489]: conmon 7195c5358b246a6e0cef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2.scope/container/memory.events
Oct 02 08:34:05 compute-0 podman[338473]: 2025-10-02 08:34:05.119317592 +0000 UTC m=+1.316701477 container died 7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-07a241c8323649aa9354f2dff3939f0632bf0058937b26dc39fc96f687d15096-merged.mount: Deactivated successfully.
Oct 02 08:34:05 compute-0 podman[338473]: 2025-10-02 08:34:05.179541931 +0000 UTC m=+1.376925816 container remove 7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leakey, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:34:05 compute-0 systemd[1]: libpod-conmon-7195c5358b246a6e0ceff3d42d602b968c28bb7eab660d92cd4bb3c9a23836d2.scope: Deactivated successfully.
Oct 02 08:34:05 compute-0 sudo[338278]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:05 compute-0 sudo[338531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:05 compute-0 sudo[338531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:05 compute-0 sudo[338531]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:05 compute-0 sudo[338556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:34:05 compute-0 sudo[338556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:05 compute-0 sudo[338556]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:05 compute-0 sudo[338581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:05 compute-0 sudo[338581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:05 compute-0 sudo[338581]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:05 compute-0 sudo[338606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:34:05 compute-0 sudo[338606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:05 compute-0 nova_compute[260603]: 2025-10-02 08:34:05.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:05 compute-0 podman[338669]: 2025-10-02 08:34:05.783941355 +0000 UTC m=+0.058388693 container create 4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mcclintock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 08:34:05 compute-0 systemd[1]: Started libpod-conmon-4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639.scope.
Oct 02 08:34:05 compute-0 podman[338669]: 2025-10-02 08:34:05.750370283 +0000 UTC m=+0.024817671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:34:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:05 compute-0 podman[338669]: 2025-10-02 08:34:05.883555236 +0000 UTC m=+0.158002554 container init 4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 08:34:05 compute-0 podman[338669]: 2025-10-02 08:34:05.891159362 +0000 UTC m=+0.165606700 container start 4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:34:05 compute-0 podman[338669]: 2025-10-02 08:34:05.895130025 +0000 UTC m=+0.169577353 container attach 4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mcclintock, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:34:05 compute-0 musing_mcclintock[338685]: 167 167
Oct 02 08:34:05 compute-0 systemd[1]: libpod-4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639.scope: Deactivated successfully.
Oct 02 08:34:05 compute-0 conmon[338685]: conmon 4c582268e9dc3db093e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639.scope/container/memory.events
Oct 02 08:34:05 compute-0 podman[338690]: 2025-10-02 08:34:05.944203137 +0000 UTC m=+0.027081620 container died 4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mcclintock, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3585346550a32d72923620f07234366e8d3c9e27cee35aebb16584e6159be7cf-merged.mount: Deactivated successfully.
Oct 02 08:34:05 compute-0 podman[338690]: 2025-10-02 08:34:05.990188964 +0000 UTC m=+0.073067397 container remove 4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:34:06 compute-0 systemd[1]: libpod-conmon-4c582268e9dc3db093e7a3901adb695efb9d15e361c536597af322626c67c639.scope: Deactivated successfully.
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.139 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.141 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.141 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.142 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.142 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.143 2 INFO nova.compute.manager [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Terminating instance
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.145 2 DEBUG nova.compute.manager [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:34:06 compute-0 kernel: tap0627e49d-24 (unregistering): left promiscuous mode
Oct 02 08:34:06 compute-0 NetworkManager[45129]: <info>  [1759394046.2121] device (tap0627e49d-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:34:06 compute-0 ovn_controller[152344]: 2025-10-02T08:34:06Z|00773|binding|INFO|Releasing lport 0627e49d-2462-482a-9f37-38289b11af20 from this chassis (sb_readonly=0)
Oct 02 08:34:06 compute-0 ovn_controller[152344]: 2025-10-02T08:34:06Z|00774|binding|INFO|Setting lport 0627e49d-2462-482a-9f37-38289b11af20 down in Southbound
Oct 02 08:34:06 compute-0 ovn_controller[152344]: 2025-10-02T08:34:06Z|00775|binding|INFO|Removing iface tap0627e49d-24 ovn-installed in OVS
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.243 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:06:74 10.100.0.8'], port_security=['fa:16:3e:69:06:74 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2ef776-a59b-4369-95f5-69103d78f3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1045858c7b24f1baf184c8469064740', 'neutron:revision_number': '4', 'neutron:security_group_ids': '524655b6-f9e6-47ce-bebb-b7967ffdd769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=798444a3-a368-47d6-a395-3d1a0bcb7c88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0627e49d-2462-482a-9f37-38289b11af20) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.245 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0627e49d-2462-482a-9f37-38289b11af20 in datapath fe2ef776-a59b-4369-95f5-69103d78f3da unbound from our chassis
Oct 02 08:34:06 compute-0 podman[338711]: 2025-10-02 08:34:06.248654374 +0000 UTC m=+0.067402232 container create 516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leakey, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.248 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe2ef776-a59b-4369-95f5-69103d78f3da
Oct 02 08:34:06 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000050.scope: Deactivated successfully.
Oct 02 08:34:06 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000050.scope: Consumed 6.514s CPU time.
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 systemd-machined[214636]: Machine qemu-90-instance-00000050 terminated.
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.285 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[82e78fca-3f71-4ac8-9cd9-83771a3339ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.303 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.304 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.304 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.304 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.305 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:06 compute-0 systemd[1]: Started libpod-conmon-516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475.scope.
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.306 2 INFO nova.compute.manager [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Terminating instance
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.307 2 DEBUG nova.compute.manager [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:34:06 compute-0 podman[338711]: 2025-10-02 08:34:06.226360753 +0000 UTC m=+0.045108671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.325 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dde4e1f6-c446-43a7-8461-73d33259140e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.328 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[af5e613a-ad3c-4d6e-ad4b-513430cfe25f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dde80c374ea8d54be6e50484444121ff28d9ee549f060aef6e01ea49366995/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dde80c374ea8d54be6e50484444121ff28d9ee549f060aef6e01ea49366995/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dde80c374ea8d54be6e50484444121ff28d9ee549f060aef6e01ea49366995/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dde80c374ea8d54be6e50484444121ff28d9ee549f060aef6e01ea49366995/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:06 compute-0 kernel: tap95f2c075-1a (unregistering): left promiscuous mode
Oct 02 08:34:06 compute-0 podman[338711]: 2025-10-02 08:34:06.359950478 +0000 UTC m=+0.178698356 container init 516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 08:34:06 compute-0 NetworkManager[45129]: <info>  [1759394046.3608] device (tap95f2c075-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:34:06 compute-0 podman[338711]: 2025-10-02 08:34:06.369167034 +0000 UTC m=+0.187914892 container start 516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 ovn_controller[152344]: 2025-10-02T08:34:06Z|00776|binding|INFO|Releasing lport 95f2c075-1a91-4cac-a100-998ebf607a8d from this chassis (sb_readonly=0)
Oct 02 08:34:06 compute-0 ovn_controller[152344]: 2025-10-02T08:34:06Z|00777|binding|INFO|Setting lport 95f2c075-1a91-4cac-a100-998ebf607a8d down in Southbound
Oct 02 08:34:06 compute-0 ovn_controller[152344]: 2025-10-02T08:34:06Z|00778|binding|INFO|Removing iface tap95f2c075-1a ovn-installed in OVS
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 podman[338711]: 2025-10-02 08:34:06.374270543 +0000 UTC m=+0.193018431 container attach 516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.378 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:c5:30 10.100.0.9'], port_security=['fa:16:3e:ae:c5:30 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2ef776-a59b-4369-95f5-69103d78f3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1045858c7b24f1baf184c8469064740', 'neutron:revision_number': '4', 'neutron:security_group_ids': '524655b6-f9e6-47ce-bebb-b7967ffdd769', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=798444a3-a368-47d6-a395-3d1a0bcb7c88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=95f2c075-1a91-4cac-a100-998ebf607a8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.383 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c12a2148-eadd-4ef2-96e7-8e9a6fc4a997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.398 2 INFO nova.virt.libvirt.driver [-] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Instance destroyed successfully.
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.399 2 DEBUG nova.objects.instance [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'resources' on Instance uuid ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct 02 08:34:06 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000051.scope: Consumed 4.264s CPU time.
Oct 02 08:34:06 compute-0 systemd-machined[214636]: Machine qemu-91-instance-00000051 terminated.
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.412 2 DEBUG nova.virt.libvirt.vif [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1465134413',display_name='tempest-MultipleCreateTestJSON-server-1465134413-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1465134413-1',id=80,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-qr0dvakd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJSON-248615309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:34:00Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.412 2 DEBUG nova.network.os_vif_util [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "0627e49d-2462-482a-9f37-38289b11af20", "address": "fa:16:3e:69:06:74", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0627e49d-24", "ovs_interfaceid": "0627e49d-2462-482a-9f37-38289b11af20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.413 2 DEBUG nova.network.os_vif_util [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:06:74,bridge_name='br-int',has_traffic_filtering=True,id=0627e49d-2462-482a-9f37-38289b11af20,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0627e49d-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.413 2 DEBUG os_vif [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:06:74,bridge_name='br-int',has_traffic_filtering=True,id=0627e49d-2462-482a-9f37-38289b11af20,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0627e49d-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.407 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee6640e-3d49-4d0a-9547-88b8a6c45ab7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe2ef776-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:01:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491806, 'reachable_time': 26415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338756, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0627e49d-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.424 2 INFO os_vif [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:06:74,bridge_name='br-int',has_traffic_filtering=True,id=0627e49d-2462-482a-9f37-38289b11af20,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0627e49d-24')
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.427 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c8769a-4414-4f2c-9942-a05a3d87c9ba]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe2ef776-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491821, 'tstamp': 491821}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338762, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe2ef776-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491825, 'tstamp': 491825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338762, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.429 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe2ef776-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.433 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe2ef776-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.434 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.434 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe2ef776-a0, col_values=(('external_ids', {'iface-id': '3752b866-0aac-4a57-acbb-4c574bfc2b06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.434 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.436 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 95f2c075-1a91-4cac-a100-998ebf607a8d in datapath fe2ef776-a59b-4369-95f5-69103d78f3da unbound from our chassis
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.437 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe2ef776-a59b-4369-95f5-69103d78f3da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.438 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0415b0-8b54-4c3e-bb83-3823614c9f2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.438 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da namespace which is not needed anymore
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.537 2 INFO nova.virt.libvirt.driver [-] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Instance destroyed successfully.
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.537 2 DEBUG nova.objects.instance [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lazy-loading 'resources' on Instance uuid 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.560 2 DEBUG nova.virt.libvirt.vif [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1465134413',display_name='tempest-MultipleCreateTestJSON-server-1465134413-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1465134413-2',id=81,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T08:34:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e1045858c7b24f1baf184c8469064740',ramdisk_id='',reservation_id='r-qr0dvakd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-248615309',owner_user_name='tempest-MultipleCreateTestJSON-248615309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:34:02Z,user_data=None,user_id='6c3e0096dae34ce09545c8c4547dae81',uuid=22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.560 2 DEBUG nova.network.os_vif_util [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converting VIF {"id": "95f2c075-1a91-4cac-a100-998ebf607a8d", "address": "fa:16:3e:ae:c5:30", "network": {"id": "fe2ef776-a59b-4369-95f5-69103d78f3da", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1332242296-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1045858c7b24f1baf184c8469064740", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95f2c075-1a", "ovs_interfaceid": "95f2c075-1a91-4cac-a100-998ebf607a8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.561 2 DEBUG nova.network.os_vif_util [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=95f2c075-1a91-4cac-a100-998ebf607a8d,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95f2c075-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.561 2 DEBUG os_vif [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=95f2c075-1a91-4cac-a100-998ebf607a8d,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95f2c075-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95f2c075-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.568 2 INFO os_vif [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=95f2c075-1a91-4cac-a100-998ebf607a8d,network=Network(fe2ef776-a59b-4369-95f5-69103d78f3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95f2c075-1a')
Oct 02 08:34:06 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[337710]: [NOTICE]   (337724) : haproxy version is 2.8.14-c23fe91
Oct 02 08:34:06 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[337710]: [NOTICE]   (337724) : path to executable is /usr/sbin/haproxy
Oct 02 08:34:06 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[337710]: [WARNING]  (337724) : Exiting Master process...
Oct 02 08:34:06 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[337710]: [WARNING]  (337724) : Exiting Master process...
Oct 02 08:34:06 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[337710]: [ALERT]    (337724) : Current worker (337729) exited with code 143 (Terminated)
Oct 02 08:34:06 compute-0 neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da[337710]: [WARNING]  (337724) : All workers exited. Exiting... (0)
Oct 02 08:34:06 compute-0 systemd[1]: libpod-31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c.scope: Deactivated successfully.
Oct 02 08:34:06 compute-0 podman[338801]: 2025-10-02 08:34:06.620162542 +0000 UTC m=+0.076577767 container died 31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.626 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394031.6260772, fcd663c5-c20d-477c-bf26-11eb72d0886f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.627 2 INFO nova.compute.manager [-] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] VM Stopped (Lifecycle Event)
Oct 02 08:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c-userdata-shm.mount: Deactivated successfully.
Oct 02 08:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-56159d18c0c6ddb61954cdc9423e275b4f29138854e68b16139087c0bd178e41-merged.mount: Deactivated successfully.
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.653 2 DEBUG nova.compute.manager [None req-fc8773aa-0a23-48fd-8608-8b45a71a6188 - - - - - -] [instance: fcd663c5-c20d-477c-bf26-11eb72d0886f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:06 compute-0 podman[338801]: 2025-10-02 08:34:06.668939486 +0000 UTC m=+0.125354711 container cleanup 31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:34:06 compute-0 systemd[1]: libpod-conmon-31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c.scope: Deactivated successfully.
Oct 02 08:34:06 compute-0 podman[338855]: 2025-10-02 08:34:06.742039374 +0000 UTC m=+0.049763595 container remove 31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.748 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c5aebb72-cdb3-417b-903c-813f104aa763]: (4, ('Thu Oct  2 08:34:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da (31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c)\n31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c\nThu Oct  2 08:34:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da (31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c)\n31a6277cd98edece94d0697415e68faa24b9a3bfd3a37080ae8468db6057508c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e253e970-204a-4b08-aaf2-3180d18f3475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.750 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe2ef776-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:06 compute-0 kernel: tapfe2ef776-a0: left promiscuous mode
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.757 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09d22f6b-241d-46d6-90d5-44ff4d89a098]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8f847674-e86c-47d9-a2a2-a9dec5f966b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.792 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b71bc03c-3a55-41ec-b143-aee45cfd6dbf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.808 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[defcdebb-ed99-46c1-a97e-039666939ecb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491797, 'reachable_time': 28533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338870, 'error': None, 'target': 'ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 systemd[1]: run-netns-ovnmeta\x2dfe2ef776\x2da59b\x2d4369\x2d95f5\x2d69103d78f3da.mount: Deactivated successfully.
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.812 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fe2ef776-a59b-4369-95f5-69103d78f3da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:34:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:06.812 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6b87c988-3fe8-42f7-938e-546f769ed868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.862 2 INFO nova.virt.libvirt.driver [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Deleting instance files /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_del
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.863 2 INFO nova.virt.libvirt.driver [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Deletion of /var/lib/nova/instances/ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55_del complete
Oct 02 08:34:06 compute-0 ceph-mon[74477]: pgmap v1623: 305 pgs: 305 active+clean; 260 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 354 op/s
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.951 2 INFO nova.virt.libvirt.driver [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Deleting instance files /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_del
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.951 2 INFO nova.virt.libvirt.driver [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Deletion of /var/lib/nova/instances/22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc_del complete
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.957 2 INFO nova.compute.manager [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.958 2 DEBUG oslo.service.loopingcall [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.958 2 DEBUG nova.compute.manager [-] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:34:06 compute-0 nova_compute[260603]: 2025-10-02 08:34:06.958 2 DEBUG nova.network.neutron [-] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.006 2 INFO nova.compute.manager [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.006 2 DEBUG oslo.service.loopingcall [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.006 2 DEBUG nova.compute.manager [-] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.007 2 DEBUG nova.network.neutron [-] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.012 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received event network-vif-unplugged-0627e49d-2462-482a-9f37-38289b11af20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.012 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.013 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.013 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.013 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] No waiting events found dispatching network-vif-unplugged-0627e49d-2462-482a-9f37-38289b11af20 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.013 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received event network-vif-unplugged-0627e49d-2462-482a-9f37-38289b11af20 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.014 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received event network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.014 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.014 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.014 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.015 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] No waiting events found dispatching network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.015 2 WARNING nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received unexpected event network-vif-plugged-0627e49d-2462-482a-9f37-38289b11af20 for instance with vm_state active and task_state deleting.
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.015 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received event network-vif-unplugged-95f2c075-1a91-4cac-a100-998ebf607a8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.015 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.016 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.016 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.016 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] No waiting events found dispatching network-vif-unplugged-95f2c075-1a91-4cac-a100-998ebf607a8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.016 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received event network-vif-unplugged-95f2c075-1a91-4cac-a100-998ebf607a8d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.016 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received event network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.017 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.017 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.017 2 DEBUG oslo_concurrency.lockutils [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.017 2 DEBUG nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] No waiting events found dispatching network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.018 2 WARNING nova.compute.manager [req-6ab821de-ef42-4d9c-8bf7-9971fa2bcdc8 req-0fe6f93e-b94b-4a0b-b121-6e7b6323806f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received unexpected event network-vif-plugged-95f2c075-1a91-4cac-a100-998ebf607a8d for instance with vm_state active and task_state deleting.
Oct 02 08:34:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 260 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 354 op/s
Oct 02 08:34:07 compute-0 fervent_leakey[338735]: {
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:     "0": [
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:         {
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "devices": [
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "/dev/loop3"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             ],
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_name": "ceph_lv0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_size": "21470642176",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "name": "ceph_lv0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "tags": {
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cluster_name": "ceph",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.crush_device_class": "",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.encrypted": "0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osd_id": "0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.type": "block",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.vdo": "0"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             },
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "type": "block",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "vg_name": "ceph_vg0"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:         }
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:     ],
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:     "1": [
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:         {
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "devices": [
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "/dev/loop4"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             ],
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_name": "ceph_lv1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_size": "21470642176",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "name": "ceph_lv1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "tags": {
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cluster_name": "ceph",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.crush_device_class": "",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.encrypted": "0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osd_id": "1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.type": "block",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.vdo": "0"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             },
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "type": "block",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "vg_name": "ceph_vg1"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:         }
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:     ],
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:     "2": [
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:         {
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "devices": [
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "/dev/loop5"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             ],
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_name": "ceph_lv2",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_size": "21470642176",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "name": "ceph_lv2",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "tags": {
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.cluster_name": "ceph",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.crush_device_class": "",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.encrypted": "0",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osd_id": "2",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.type": "block",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:                 "ceph.vdo": "0"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             },
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "type": "block",
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:             "vg_name": "ceph_vg2"
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:         }
Oct 02 08:34:07 compute-0 fervent_leakey[338735]:     ]
Oct 02 08:34:07 compute-0 fervent_leakey[338735]: }
Oct 02 08:34:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:07 compute-0 systemd[1]: libpod-516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475.scope: Deactivated successfully.
Oct 02 08:34:07 compute-0 podman[338711]: 2025-10-02 08:34:07.188109305 +0000 UTC m=+1.006857203 container died 516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leakey, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-34dde80c374ea8d54be6e50484444121ff28d9ee549f060aef6e01ea49366995-merged.mount: Deactivated successfully.
Oct 02 08:34:07 compute-0 podman[338711]: 2025-10-02 08:34:07.268805169 +0000 UTC m=+1.087553067 container remove 516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 08:34:07 compute-0 systemd[1]: libpod-conmon-516be48e5fc7be859e03aface7476fa582ae0a1f995b39f47ad25fdcf013b475.scope: Deactivated successfully.
Oct 02 08:34:07 compute-0 sudo[338606]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:07 compute-0 sudo[338885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:07 compute-0 sudo[338885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:07 compute-0 sudo[338885]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:07 compute-0 sudo[338910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:34:07 compute-0 sudo[338910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:07 compute-0 sudo[338910]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:07 compute-0 sudo[338935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:07 compute-0 sudo[338935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:07 compute-0 sudo[338935]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:07 compute-0 sudo[338960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:34:07 compute-0 sudo[338960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.660 2 DEBUG nova.network.neutron [-] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.679 2 INFO nova.compute.manager [-] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Took 0.67 seconds to deallocate network for instance.
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.724 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.725 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.790 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394032.789842, 19dd1983-6b14-4ed7-bcb1-f620e7426cc6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.791 2 INFO nova.compute.manager [-] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] VM Stopped (Lifecycle Event)
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.810 2 DEBUG nova.compute.manager [None req-c92ae5f3-8ffd-4eb9-9b38-638852508dbf - - - - - -] [instance: 19dd1983-6b14-4ed7-bcb1-f620e7426cc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:07 compute-0 nova_compute[260603]: 2025-10-02 08:34:07.834 2 DEBUG oslo_concurrency.processutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.040 2 DEBUG nova.network.neutron [-] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.064 2 INFO nova.compute.manager [-] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Took 1.11 seconds to deallocate network for instance.
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.152 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:08 compute-0 podman[339041]: 2025-10-02 08:34:08.154625255 +0000 UTC m=+0.071140478 container create 27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:34:08 compute-0 systemd[1]: Started libpod-conmon-27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338.scope.
Oct 02 08:34:08 compute-0 podman[339041]: 2025-10-02 08:34:08.124313795 +0000 UTC m=+0.040829028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:34:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:08 compute-0 podman[339041]: 2025-10-02 08:34:08.261075638 +0000 UTC m=+0.177590921 container init 27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 08:34:08 compute-0 podman[339041]: 2025-10-02 08:34:08.272341878 +0000 UTC m=+0.188857101 container start 27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 08:34:08 compute-0 podman[339041]: 2025-10-02 08:34:08.276182157 +0000 UTC m=+0.192697430 container attach 27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:34:08 compute-0 dazzling_blackwell[339057]: 167 167
Oct 02 08:34:08 compute-0 systemd[1]: libpod-27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338.scope: Deactivated successfully.
Oct 02 08:34:08 compute-0 podman[339041]: 2025-10-02 08:34:08.278918992 +0000 UTC m=+0.195434215 container died 27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 08:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-02143856fce96f178487722214fe0c2def01186a8cd39a0d3f9d7990651683f1-merged.mount: Deactivated successfully.
Oct 02 08:34:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:08 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195943330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:08 compute-0 podman[339041]: 2025-10-02 08:34:08.338961495 +0000 UTC m=+0.255476718 container remove 27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.340 2 DEBUG oslo_concurrency.processutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.347 2 DEBUG nova.compute.provider_tree [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:08 compute-0 systemd[1]: libpod-conmon-27cc9071f76161baf7542885419f1f60e42fbe2e0c9c2ed5cf5eb413ac34c338.scope: Deactivated successfully.
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.369 2 DEBUG nova.scheduler.client.report [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.393 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.395 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.428 2 INFO nova.scheduler.client.report [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Deleted allocations for instance 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.482 2 DEBUG oslo_concurrency.processutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:08 compute-0 podman[339082]: 2025-10-02 08:34:08.491046674 +0000 UTC m=+0.036113891 container create 4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.522 2 DEBUG nova.compute.manager [req-f460259a-684c-4016-b683-27262d821cb7 req-d222946b-dc36-4742-a651-ee565d0faa65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Received event network-vif-deleted-0627e49d-2462-482a-9f37-38289b11af20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.525 2 DEBUG oslo_concurrency.lockutils [None req-575173d6-2eda-4d05-9d59-ee38f9ccfc69 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:08 compute-0 systemd[1]: Started libpod-conmon-4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632.scope.
Oct 02 08:34:08 compute-0 podman[339082]: 2025-10-02 08:34:08.475420909 +0000 UTC m=+0.020488146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:34:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee9a50c800d0e2b3ae3f43504c21f6116909333cb8a3623133dd14d5466c3776/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee9a50c800d0e2b3ae3f43504c21f6116909333cb8a3623133dd14d5466c3776/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee9a50c800d0e2b3ae3f43504c21f6116909333cb8a3623133dd14d5466c3776/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee9a50c800d0e2b3ae3f43504c21f6116909333cb8a3623133dd14d5466c3776/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:08 compute-0 podman[339082]: 2025-10-02 08:34:08.596212157 +0000 UTC m=+0.141279384 container init 4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_taussig, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:34:08 compute-0 podman[339082]: 2025-10-02 08:34:08.606977132 +0000 UTC m=+0.152044369 container start 4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 08:34:08 compute-0 podman[339082]: 2025-10-02 08:34:08.611286565 +0000 UTC m=+0.156353782 container attach 4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_taussig, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 08:34:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:08 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561398209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.917 2 DEBUG oslo_concurrency.processutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.925 2 DEBUG nova.compute.provider_tree [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.940 2 DEBUG nova.scheduler.client.report [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:08 compute-0 ceph-mon[74477]: pgmap v1624: 305 pgs: 305 active+clean; 260 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 354 op/s
Oct 02 08:34:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2195943330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/561398209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:08 compute-0 nova_compute[260603]: 2025-10-02 08:34:08.973 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:09 compute-0 nova_compute[260603]: 2025-10-02 08:34:09.003 2 INFO nova.scheduler.client.report [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Deleted allocations for instance ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55
Oct 02 08:34:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.1 MiB/s wr, 430 op/s
Oct 02 08:34:09 compute-0 nova_compute[260603]: 2025-10-02 08:34:09.081 2 DEBUG oslo_concurrency.lockutils [None req-eec7c926-c9fd-40d0-a2c2-2277f332a35b 6c3e0096dae34ce09545c8c4547dae81 e1045858c7b24f1baf184c8469064740 - - default default] Lock "ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:09 compute-0 nova_compute[260603]: 2025-10-02 08:34:09.131 2 DEBUG nova.compute.manager [req-f74732b6-735c-41de-930c-050a91a702cd req-f4ea0ac1-4c5a-43c2-8ad4-b19194e37804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Received event network-vif-deleted-95f2c075-1a91-4cac-a100-998ebf607a8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:09 compute-0 fervent_taussig[339099]: {
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "osd_id": 2,
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "type": "bluestore"
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:     },
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "osd_id": 1,
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "type": "bluestore"
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:     },
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "osd_id": 0,
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:         "type": "bluestore"
Oct 02 08:34:09 compute-0 fervent_taussig[339099]:     }
Oct 02 08:34:09 compute-0 fervent_taussig[339099]: }
Oct 02 08:34:09 compute-0 systemd[1]: libpod-4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632.scope: Deactivated successfully.
Oct 02 08:34:09 compute-0 systemd[1]: libpod-4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632.scope: Consumed 1.080s CPU time.
Oct 02 08:34:09 compute-0 podman[339082]: 2025-10-02 08:34:09.681924776 +0000 UTC m=+1.226992003 container died 4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee9a50c800d0e2b3ae3f43504c21f6116909333cb8a3623133dd14d5466c3776-merged.mount: Deactivated successfully.
Oct 02 08:34:09 compute-0 podman[339082]: 2025-10-02 08:34:09.747610825 +0000 UTC m=+1.292678062 container remove 4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 08:34:09 compute-0 systemd[1]: libpod-conmon-4c1170fe74988c6d511bb5e81898c85687d944be571194d0c8bf06da4033a632.scope: Deactivated successfully.
Oct 02 08:34:09 compute-0 sudo[338960]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:34:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:34:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:34:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:34:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e89e06e7-69d7-4636-a876-a8a2f3338430 does not exist
Oct 02 08:34:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b676392a-ea3a-43dd-9599-36c65a61cf6b does not exist
Oct 02 08:34:09 compute-0 sudo[339164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:34:09 compute-0 sudo[339164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:09 compute-0 sudo[339164]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:09 compute-0 sudo[339189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:34:09 compute-0 sudo[339189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:34:09 compute-0 sudo[339189]: pam_unix(sudo:session): session closed for user root
Oct 02 08:34:10 compute-0 ceph-mon[74477]: pgmap v1625: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.1 MiB/s wr, 430 op/s
Oct 02 08:34:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:34:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:34:10 compute-0 nova_compute[260603]: 2025-10-02 08:34:10.878 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:10 compute-0 nova_compute[260603]: 2025-10-02 08:34:10.879 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:10 compute-0 nova_compute[260603]: 2025-10-02 08:34:10.899 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:34:10 compute-0 nova_compute[260603]: 2025-10-02 08:34:10.970 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:10 compute-0 nova_compute[260603]: 2025-10-02 08:34:10.971 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:10 compute-0 nova_compute[260603]: 2025-10-02 08:34:10.983 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:34:10 compute-0 nova_compute[260603]: 2025-10-02 08:34:10.984 2 INFO nova.compute.claims [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:34:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 320 op/s
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.153 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.396 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394036.395296, 49e7e668-b62c-4e35-a4e2-bba540000961 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.398 2 INFO nova.compute.manager [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Stopped (Lifecycle Event)
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.427 2 DEBUG nova.compute.manager [None req-2f51ae3c-2e02-415e-9343-387ab7e2d695 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1375074782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.622 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.630 2 DEBUG nova.compute.provider_tree [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.654 2 DEBUG nova.scheduler.client.report [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.694 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.695 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.770 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.771 2 DEBUG nova.network.neutron [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.797 2 INFO nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:34:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1375074782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.839 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.962 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.965 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:34:11 compute-0 nova_compute[260603]: 2025-10-02 08:34:11.966 2 INFO nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Creating image(s)
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.001 2 DEBUG nova.storage.rbd_utils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.039 2 DEBUG nova.storage.rbd_utils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.067 2 DEBUG nova.storage.rbd_utils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.071 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.158 2 DEBUG nova.policy [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd2e113d3d74a43998ac8dbf246ae9095', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '629efd330be646b7a2941e0c83b86e0e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.161 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.162 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.163 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.163 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.187 2 DEBUG nova.storage.rbd_utils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.190 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.473 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.559 2 DEBUG nova.storage.rbd_utils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] resizing rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.638 2 DEBUG nova.objects.instance [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'migration_context' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.652 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.652 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Ensure instance console log exists: /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.652 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.653 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:12 compute-0 nova_compute[260603]: 2025-10-02 08:34:12.653 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:12 compute-0 ceph-mon[74477]: pgmap v1626: 305 pgs: 305 active+clean; 167 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 320 op/s
Oct 02 08:34:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 192 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.6 MiB/s wr, 335 op/s
Oct 02 08:34:13 compute-0 nova_compute[260603]: 2025-10-02 08:34:13.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:14 compute-0 nova_compute[260603]: 2025-10-02 08:34:14.028 2 DEBUG nova.network.neutron [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Successfully created port: 597a75f0-6607-46bb-a380-b46a358c3bf7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:34:14 compute-0 nova_compute[260603]: 2025-10-02 08:34:14.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:14 compute-0 nova_compute[260603]: 2025-10-02 08:34:14.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:14 compute-0 nova_compute[260603]: 2025-10-02 08:34:14.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:34:14 compute-0 ceph-mon[74477]: pgmap v1627: 305 pgs: 305 active+clean; 192 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.6 MiB/s wr, 335 op/s
Oct 02 08:34:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 192 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 856 KiB/s wr, 207 op/s
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.425 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394040.424286, 7ac34b0c-8ced-417d-9442-8fda77804a34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.426 2 INFO nova.compute.manager [-] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] VM Stopped (Lifecycle Event)
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.454 2 DEBUG nova.compute.manager [None req-a4e0c14b-80f0-4100-88d5-eb66b60c13c3 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.464 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.465 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.489 2 DEBUG nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.582 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.583 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.621 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.621 2 INFO nova.compute.claims [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.919 2 DEBUG nova.network.neutron [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Successfully updated port: 597a75f0-6607-46bb-a380-b46a358c3bf7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.950 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.950 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquired lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:15 compute-0 nova_compute[260603]: 2025-10-02 08:34:15.950 2 DEBUG nova.network.neutron [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.050 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.313 2 DEBUG nova.network.neutron [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:34:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696358538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.511 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.518 2 DEBUG nova.compute.provider_tree [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.537 2 DEBUG nova.scheduler.client.report [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.550 2 DEBUG nova.compute.manager [req-218fa369-eb63-43b6-8dba-2c3beba51b17 req-2f1433ce-d664-449f-ac6d-7158a06ebc7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-changed-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.550 2 DEBUG nova.compute.manager [req-218fa369-eb63-43b6-8dba-2c3beba51b17 req-2f1433ce-d664-449f-ac6d-7158a06ebc7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Refreshing instance network info cache due to event network-changed-597a75f0-6607-46bb-a380-b46a358c3bf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.551 2 DEBUG oslo_concurrency.lockutils [req-218fa369-eb63-43b6-8dba-2c3beba51b17 req-2f1433ce-d664-449f-ac6d-7158a06ebc7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.578 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.579 2 DEBUG nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.644 2 DEBUG nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.668 2 INFO nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.701 2 DEBUG nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.823 2 DEBUG nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.824 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.825 2 INFO nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Creating image(s)
Oct 02 08:34:16 compute-0 ceph-mon[74477]: pgmap v1628: 305 pgs: 305 active+clean; 192 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 856 KiB/s wr, 207 op/s
Oct 02 08:34:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/696358538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.861 2 DEBUG nova.storage.rbd_utils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.896 2 DEBUG nova.storage.rbd_utils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.931 2 DEBUG nova.storage.rbd_utils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:16 compute-0 nova_compute[260603]: 2025-10-02 08:34:16.936 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.042 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.044 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.046 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.046 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 192 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 856 KiB/s wr, 207 op/s
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.080 2 DEBUG nova.storage.rbd_utils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.085 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.391 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.455 2 DEBUG nova.storage.rbd_utils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] resizing rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.537 2 DEBUG nova.objects.instance [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'migration_context' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.556 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.557 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Ensure instance console log exists: /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.557 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.557 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.558 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.559 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.563 2 WARNING nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.568 2 DEBUG nova.virt.libvirt.host [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.568 2 DEBUG nova.virt.libvirt.host [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.571 2 DEBUG nova.virt.libvirt.host [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.572 2 DEBUG nova.virt.libvirt.host [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.572 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.572 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.573 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.573 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.573 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.573 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.574 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.574 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.574 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.574 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.574 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.575 2 DEBUG nova.virt.hardware [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:34:17 compute-0 nova_compute[260603]: 2025-10-02 08:34:17.577 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1806789156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.005 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.039 2 DEBUG nova.storage.rbd_utils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.043 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719104882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.542 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.544 2 DEBUG nova.objects.instance [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'pci_devices' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.567 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <uuid>62bd6fb5-9b1d-40a6-81da-a3d89d022346</uuid>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <name>instance-00000053</name>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerShowV257Test-server-288043821</nova:name>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:34:17</nova:creationTime>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <nova:user uuid="c49315d68a544e0fb41f4e3ba5c71ba5">tempest-ServerShowV257Test-1175614036-project-member</nova:user>
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <nova:project uuid="14d1e9bc40ee43a2bad5cc224c4ea79f">tempest-ServerShowV257Test-1175614036</nova:project>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <system>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <entry name="serial">62bd6fb5-9b1d-40a6-81da-a3d89d022346</entry>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <entry name="uuid">62bd6fb5-9b1d-40a6-81da-a3d89d022346</entry>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     </system>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <os>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   </os>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <features>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   </features>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk">
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config">
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:18 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/console.log" append="off"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <video>
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     </video>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:34:18 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:34:18 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:34:18 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:34:18 compute-0 nova_compute[260603]: </domain>
Oct 02 08:34:18 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.627 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.628 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.633 2 INFO nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Using config drive
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.654 2 DEBUG nova.storage.rbd_utils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.660 2 DEBUG nova.network.neutron [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Updating instance_info_cache with network_info: [{"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.682 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Releasing lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.683 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance network_info: |[{"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.683 2 DEBUG oslo_concurrency.lockutils [req-218fa369-eb63-43b6-8dba-2c3beba51b17 req-2f1433ce-d664-449f-ac6d-7158a06ebc7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.684 2 DEBUG nova.network.neutron [req-218fa369-eb63-43b6-8dba-2c3beba51b17 req-2f1433ce-d664-449f-ac6d-7158a06ebc7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Refreshing network info cache for port 597a75f0-6607-46bb-a380-b46a358c3bf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.687 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Start _get_guest_xml network_info=[{"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.691 2 WARNING nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.695 2 DEBUG nova.virt.libvirt.host [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.696 2 DEBUG nova.virt.libvirt.host [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.703 2 DEBUG nova.virt.libvirt.host [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.704 2 DEBUG nova.virt.libvirt.host [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.704 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.704 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.705 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.705 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.706 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.706 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.706 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.706 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.707 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.707 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.707 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.707 2 DEBUG nova.virt.hardware [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.711 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:18 compute-0 ceph-mon[74477]: pgmap v1629: 305 pgs: 305 active+clean; 192 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 856 KiB/s wr, 207 op/s
Oct 02 08:34:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1806789156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/719104882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.868 2 INFO nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Creating config drive at /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config
Oct 02 08:34:18 compute-0 nova_compute[260603]: 2025-10-02 08:34:18.879 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp634c_y46 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.052 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp634c_y46" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 240 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 308 op/s
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.093 2 DEBUG nova.storage.rbd_utils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.098 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/964610081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.206 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.232 2 DEBUG nova.storage.rbd_utils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.238 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.286 2 DEBUG oslo_concurrency.processutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.287 2 INFO nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Deleting local config drive /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config because it was imported into RBD.
Oct 02 08:34:19 compute-0 systemd-machined[214636]: New machine qemu-93-instance-00000053.
Oct 02 08:34:19 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-00000053.
Oct 02 08:34:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430702593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.736 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.741 2 DEBUG nova.virt.libvirt.vif [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:34:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-677598906',display_name='tempest-ServerRescueTestJSON-server-677598906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-677598906',id=82,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='629efd330be646b7a2941e0c83b86e0e',ramdisk_id='',reservation_id='r-j80za7bp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-846559261',owner_user_name='tempest-ServerRescueTestJSON-846559261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:34:11Z,user_data=None,user_id='d2e113d3d74a43998ac8dbf246ae9095',uuid=5b5054c4-5d13-4ca5-8037-e0d93f21d9ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.742 2 DEBUG nova.network.os_vif_util [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converting VIF {"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.744 2 DEBUG nova.network.os_vif_util [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:ab:f3,bridge_name='br-int',has_traffic_filtering=True,id=597a75f0-6607-46bb-a380-b46a358c3bf7,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap597a75f0-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.746 2 DEBUG nova.objects.instance [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.770 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <uuid>5b5054c4-5d13-4ca5-8037-e0d93f21d9ea</uuid>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <name>instance-00000052</name>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueTestJSON-server-677598906</nova:name>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:34:18</nova:creationTime>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <nova:user uuid="d2e113d3d74a43998ac8dbf246ae9095">tempest-ServerRescueTestJSON-846559261-project-member</nova:user>
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <nova:project uuid="629efd330be646b7a2941e0c83b86e0e">tempest-ServerRescueTestJSON-846559261</nova:project>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <nova:port uuid="597a75f0-6607-46bb-a380-b46a358c3bf7">
Oct 02 08:34:19 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <system>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <entry name="serial">5b5054c4-5d13-4ca5-8037-e0d93f21d9ea</entry>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <entry name="uuid">5b5054c4-5d13-4ca5-8037-e0d93f21d9ea</entry>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </system>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <os>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   </os>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <features>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   </features>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk">
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config">
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:f5:ab:f3"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <target dev="tap597a75f0-66"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/console.log" append="off"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <video>
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </video>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:34:19 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:34:19 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:34:19 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:34:19 compute-0 nova_compute[260603]: </domain>
Oct 02 08:34:19 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.771 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Preparing to wait for external event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.771 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.772 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.772 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.772 2 DEBUG nova.virt.libvirt.vif [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:34:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-677598906',display_name='tempest-ServerRescueTestJSON-server-677598906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-677598906',id=82,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='629efd330be646b7a2941e0c83b86e0e',ramdisk_id='',reservation_id='r-j80za7bp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-846559261',owner_user_name='tempest-ServerRescueTestJSON-846559261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:34:11Z,user_data=None,user_id='d2e113d3d74a43998ac8dbf246ae9095',uuid=5b5054c4-5d13-4ca5-8037-e0d93f21d9ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.773 2 DEBUG nova.network.os_vif_util [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converting VIF {"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.773 2 DEBUG nova.network.os_vif_util [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:ab:f3,bridge_name='br-int',has_traffic_filtering=True,id=597a75f0-6607-46bb-a380-b46a358c3bf7,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap597a75f0-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.774 2 DEBUG os_vif [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:ab:f3,bridge_name='br-int',has_traffic_filtering=True,id=597a75f0-6607-46bb-a380-b46a358c3bf7,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap597a75f0-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.779 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap597a75f0-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.779 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap597a75f0-66, col_values=(('external_ids', {'iface-id': '597a75f0-6607-46bb-a380-b46a358c3bf7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:ab:f3', 'vm-uuid': '5b5054c4-5d13-4ca5-8037-e0d93f21d9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:19 compute-0 NetworkManager[45129]: <info>  [1759394059.8289] manager: (tap597a75f0-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.837 2 INFO os_vif [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:ab:f3,bridge_name='br-int',has_traffic_filtering=True,id=597a75f0-6607-46bb-a380-b46a358c3bf7,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap597a75f0-66')
Oct 02 08:34:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/964610081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3430702593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.895 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.896 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.896 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No VIF found with MAC fa:16:3e:f5:ab:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.897 2 INFO nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Using config drive
Oct 02 08:34:19 compute-0 nova_compute[260603]: 2025-10-02 08:34:19.921 2 DEBUG nova.storage.rbd_utils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.185 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394060.1852255, 62bd6fb5-9b1d-40a6-81da-a3d89d022346 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.187 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] VM Resumed (Lifecycle Event)
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.189 2 DEBUG nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.189 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.192 2 INFO nova.virt.libvirt.driver [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance spawned successfully.
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.193 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.215 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.218 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.218 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.219 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.219 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.219 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.220 2 DEBUG nova.virt.libvirt.driver [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.223 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.266 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.267 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394060.186362, 62bd6fb5-9b1d-40a6-81da-a3d89d022346 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.267 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] VM Started (Lifecycle Event)
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.286 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.289 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.292 2 INFO nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Took 3.47 seconds to spawn the instance on the hypervisor.
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.293 2 DEBUG nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.333 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.379 2 INFO nova.compute.manager [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Took 4.84 seconds to build instance.
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.403 2 DEBUG oslo_concurrency.lockutils [None req-b3ab43a3-1f49-48d1-a1a5-0b2138a0f18a c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.655 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.821 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.821 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.822 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:34:20 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.823 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:20 compute-0 ceph-mon[74477]: pgmap v1630: 305 pgs: 305 active+clean; 240 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 308 op/s
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.991 2 INFO nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Creating config drive at /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:20.997 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp86l1maou execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:21 compute-0 podman[339858]: 2025-10-02 08:34:21.060281167 +0000 UTC m=+0.079942922 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 08:34:21 compute-0 podman[339851]: 2025-10-02 08:34:21.065739226 +0000 UTC m=+0.119765137 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:34:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 240 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 2.8 MiB/s wr, 115 op/s
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.154 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp86l1maou" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.178 2 DEBUG nova.storage.rbd_utils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.182 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.344 2 DEBUG oslo_concurrency.processutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.345 2 INFO nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Deleting local config drive /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config because it was imported into RBD.
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.394 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394046.3921027, ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.395 2 INFO nova.compute.manager [-] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] VM Stopped (Lifecycle Event)
Oct 02 08:34:21 compute-0 kernel: tap597a75f0-66: entered promiscuous mode
Oct 02 08:34:21 compute-0 ovn_controller[152344]: 2025-10-02T08:34:21Z|00779|binding|INFO|Claiming lport 597a75f0-6607-46bb-a380-b46a358c3bf7 for this chassis.
Oct 02 08:34:21 compute-0 ovn_controller[152344]: 2025-10-02T08:34:21Z|00780|binding|INFO|597a75f0-6607-46bb-a380-b46a358c3bf7: Claiming fa:16:3e:f5:ab:f3 10.100.0.7
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:21 compute-0 systemd-udevd[339829]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:34:21 compute-0 NetworkManager[45129]: <info>  [1759394061.4095] manager: (tap597a75f0-66): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Oct 02 08:34:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:21.410 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:ab:f3 10.100.0.7'], port_security=['fa:16:3e:f5:ab:f3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5b5054c4-5d13-4ca5-8037-e0d93f21d9ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=597a75f0-6607-46bb-a380-b46a358c3bf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:21.411 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 597a75f0-6607-46bb-a380-b46a358c3bf7 in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 bound to our chassis
Oct 02 08:34:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:21.411 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:34:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:21.413 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e09dc2-1eaa-44a1-b03f-2f536228f71f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:21 compute-0 ovn_controller[152344]: 2025-10-02T08:34:21Z|00781|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 ovn-installed in OVS
Oct 02 08:34:21 compute-0 ovn_controller[152344]: 2025-10-02T08:34:21Z|00782|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 up in Southbound
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:21 compute-0 NetworkManager[45129]: <info>  [1759394061.4273] device (tap597a75f0-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.427 2 DEBUG nova.compute.manager [None req-816d810c-050d-4753-a9d3-d398136d58fc - - - - - -] [instance: ae1c4dce-2c98-415a-9ec4-dd4cf1e1eb55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:21 compute-0 NetworkManager[45129]: <info>  [1759394061.4303] device (tap597a75f0-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:34:21 compute-0 systemd-machined[214636]: New machine qemu-94-instance-00000052.
Oct 02 08:34:21 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-00000052.
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.537 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394046.5350757, 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.539 2 INFO nova.compute.manager [-] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] VM Stopped (Lifecycle Event)
Oct 02 08:34:21 compute-0 nova_compute[260603]: 2025-10-02 08:34:21.558 2 DEBUG nova.compute.manager [None req-6618ac8e-bf6c-406e-b992-f061cd5cfcd3 - - - - - -] [instance: 22606b08-8ce0-4ce0-9e5c-e7f8f6b3e8bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:34:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3596073866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:34:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:34:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3596073866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:34:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.348 2 DEBUG nova.network.neutron [req-218fa369-eb63-43b6-8dba-2c3beba51b17 req-2f1433ce-d664-449f-ac6d-7158a06ebc7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Updated VIF entry in instance network info cache for port 597a75f0-6607-46bb-a380-b46a358c3bf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.349 2 DEBUG nova.network.neutron [req-218fa369-eb63-43b6-8dba-2c3beba51b17 req-2f1433ce-d664-449f-ac6d-7158a06ebc7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Updating instance_info_cache with network_info: [{"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.369 2 DEBUG oslo_concurrency.lockutils [req-218fa369-eb63-43b6-8dba-2c3beba51b17 req-2f1433ce-d664-449f-ac6d-7158a06ebc7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.395 2 DEBUG nova.compute.manager [req-62c1eed9-1503-4ae9-aef1-e11225ecf6d2 req-9a97f4a9-6a8c-4a39-9130-27651f460ab7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.395 2 DEBUG oslo_concurrency.lockutils [req-62c1eed9-1503-4ae9-aef1-e11225ecf6d2 req-9a97f4a9-6a8c-4a39-9130-27651f460ab7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.396 2 DEBUG oslo_concurrency.lockutils [req-62c1eed9-1503-4ae9-aef1-e11225ecf6d2 req-9a97f4a9-6a8c-4a39-9130-27651f460ab7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.396 2 DEBUG oslo_concurrency.lockutils [req-62c1eed9-1503-4ae9-aef1-e11225ecf6d2 req-9a97f4a9-6a8c-4a39-9130-27651f460ab7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.396 2 DEBUG nova.compute.manager [req-62c1eed9-1503-4ae9-aef1-e11225ecf6d2 req-9a97f4a9-6a8c-4a39-9130-27651f460ab7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Processing event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.609 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394062.6084216, 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.609 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] VM Started (Lifecycle Event)
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.611 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.624 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.629 2 INFO nova.virt.libvirt.driver [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance spawned successfully.
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.629 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.633 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.644 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.666 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.666 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.666 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.667 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.667 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.668 2 DEBUG nova.virt.libvirt.driver [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.679 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.680 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394062.608666, 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.680 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] VM Paused (Lifecycle Event)
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.701 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.702 2 INFO nova.compute.manager [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Rebuilding instance
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.713 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394062.615432, 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.714 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] VM Resumed (Lifecycle Event)
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.739 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.745 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.773 2 INFO nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Took 10.81 seconds to spawn the instance on the hypervisor.
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.773 2 DEBUG nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.773 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.836 2 INFO nova.compute.manager [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Took 11.89 seconds to build instance.
Oct 02 08:34:22 compute-0 nova_compute[260603]: 2025-10-02 08:34:22.852 2 DEBUG oslo_concurrency.lockutils [None req-f1c3de02-f02a-4da4-a413-c1340eebe6a7 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:22 compute-0 ceph-mon[74477]: pgmap v1631: 305 pgs: 305 active+clean; 240 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 2.8 MiB/s wr, 115 op/s
Oct 02 08:34:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3596073866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:34:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3596073866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.035 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.055 2 DEBUG nova.compute.manager [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.107 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'pci_requests' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.120 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'pci_devices' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.142 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'resources' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.155 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'migration_context' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.170 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.177 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.811 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Updating instance_info_cache with network_info: [{"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.826 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-6df0942d-95db-4140-9c7b-b5c51ada92bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.827 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.827 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:23 compute-0 nova_compute[260603]: 2025-10-02 08:34:23.828 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.406 2 INFO nova.compute.manager [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Rescuing
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.407 2 DEBUG oslo_concurrency.lockutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.407 2 DEBUG oslo_concurrency.lockutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquired lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.408 2 DEBUG nova.network.neutron [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.686 2 DEBUG nova.compute.manager [req-2fd29cde-26f9-423b-95a6-38e9c2d8fe32 req-1b8954eb-cfab-46c5-a729-38050e87913d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.687 2 DEBUG oslo_concurrency.lockutils [req-2fd29cde-26f9-423b-95a6-38e9c2d8fe32 req-1b8954eb-cfab-46c5-a729-38050e87913d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.688 2 DEBUG oslo_concurrency.lockutils [req-2fd29cde-26f9-423b-95a6-38e9c2d8fe32 req-1b8954eb-cfab-46c5-a729-38050e87913d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.689 2 DEBUG oslo_concurrency.lockutils [req-2fd29cde-26f9-423b-95a6-38e9c2d8fe32 req-1b8954eb-cfab-46c5-a729-38050e87913d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.689 2 DEBUG nova.compute.manager [req-2fd29cde-26f9-423b-95a6-38e9c2d8fe32 req-1b8954eb-cfab-46c5-a729-38050e87913d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.690 2 WARNING nova.compute.manager [req-2fd29cde-26f9-423b-95a6-38e9c2d8fe32 req-1b8954eb-cfab-46c5-a729-38050e87913d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state active and task_state rescuing.
Oct 02 08:34:24 compute-0 nova_compute[260603]: 2025-10-02 08:34:24.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:24 compute-0 ceph-mon[74477]: pgmap v1632: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 02 08:34:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 08:34:26 compute-0 nova_compute[260603]: 2025-10-02 08:34:26.409 2 DEBUG nova.network.neutron [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Updating instance_info_cache with network_info: [{"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:26 compute-0 nova_compute[260603]: 2025-10-02 08:34:26.433 2 DEBUG oslo_concurrency.lockutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Releasing lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:26 compute-0 nova_compute[260603]: 2025-10-02 08:34:26.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:26 compute-0 nova_compute[260603]: 2025-10-02 08:34:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:26 compute-0 nova_compute[260603]: 2025-10-02 08:34:26.755 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:34:26 compute-0 ceph-mon[74477]: pgmap v1633: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 08:34:27 compute-0 podman[339998]: 2025-10-02 08:34:27.045981135 +0000 UTC m=+0.103194753 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 08:34:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:27 compute-0 nova_compute[260603]: 2025-10-02 08:34:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:27 compute-0 nova_compute[260603]: 2025-10-02 08:34:27.562 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:27 compute-0 nova_compute[260603]: 2025-10-02 08:34:27.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:27 compute-0 nova_compute[260603]: 2025-10-02 08:34:27.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:27 compute-0 nova_compute[260603]: 2025-10-02 08:34:27.563 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:34:27 compute-0 nova_compute[260603]: 2025-10-02 08:34:27.564 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:34:27
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data']
Oct 02 08:34:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:34:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802302612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.010 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.084 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.084 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.084 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.087 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.088 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.091 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.091 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.106 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.106 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.136 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.212 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.213 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.219 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.220 2 INFO nova.compute.claims [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.285 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.286 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3415MB free_disk=59.88007736206055GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.286 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.300 2 DEBUG nova.scheduler.client.report [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.329 2 DEBUG nova.scheduler.client.report [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.329 2 DEBUG nova.compute.provider_tree [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.362 2 DEBUG nova.scheduler.client.report [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.380 2 DEBUG nova.scheduler.client.report [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.515 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:28 compute-0 ceph-mon[74477]: pgmap v1634: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 08:34:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3802302612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3997515421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:28 compute-0 nova_compute[260603]: 2025-10-02 08:34:28.998 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.003 2 DEBUG nova.compute.provider_tree [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.017 2 DEBUG nova.scheduler.client.report [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.039 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.041 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.045 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 250 op/s
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.115 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.116 2 DEBUG nova.network.neutron [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.138 2 INFO nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.150 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 6df0942d-95db-4140-9c7b-b5c51ada92bd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.150 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.151 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 62bd6fb5-9b1d-40a6-81da-a3d89d022346 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.151 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.152 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.153 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.164 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.271 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.273 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.274 2 INFO nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Creating image(s)
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.308 2 DEBUG nova.storage.rbd_utils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] rbd image 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.343 2 DEBUG nova.storage.rbd_utils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] rbd image 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.376 2 DEBUG nova.storage.rbd_utils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] rbd image 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.380 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.423 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.471 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.472 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.473 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.473 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.498 2 DEBUG nova.storage.rbd_utils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] rbd image 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.501 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.753 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.814 2 DEBUG nova.storage.rbd_utils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] resizing rbd image 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3997515421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.935 2 DEBUG nova.policy [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a9af39d3a3a84355a9892112bc52b9ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3282d5aea9414fb399ecefab98aaae1f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.942 2 DEBUG nova.objects.instance [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lazy-loading 'migration_context' on Instance uuid 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014178023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.968 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.968 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Ensure instance console log exists: /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.969 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.970 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.970 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.971 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.975 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:29 compute-0 podman[340232]: 2025-10-02 08:34:29.982254207 +0000 UTC m=+0.051885561 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:34:29 compute-0 nova_compute[260603]: 2025-10-02 08:34:29.991 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:30 compute-0 nova_compute[260603]: 2025-10-02 08:34:30.012 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:34:30 compute-0 nova_compute[260603]: 2025-10-02 08:34:30.012 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:30 compute-0 ceph-mon[74477]: pgmap v1635: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 250 op/s
Oct 02 08:34:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2014178023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 857 KiB/s wr, 149 op/s
Oct 02 08:34:31 compute-0 nova_compute[260603]: 2025-10-02 08:34:31.471 2 DEBUG nova.network.neutron [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Successfully created port: e20b81f7-8e61-43c9-8817-6dfc1361a4fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:34:32 compute-0 nova_compute[260603]: 2025-10-02 08:34:32.012 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:32 compute-0 nova_compute[260603]: 2025-10-02 08:34:32.048 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:34:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:32 compute-0 nova_compute[260603]: 2025-10-02 08:34:32.900 2 DEBUG nova.network.neutron [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Successfully updated port: e20b81f7-8e61-43c9-8817-6dfc1361a4fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:34:32 compute-0 ceph-mon[74477]: pgmap v1636: 305 pgs: 305 active+clean; 262 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 857 KiB/s wr, 149 op/s
Oct 02 08:34:32 compute-0 nova_compute[260603]: 2025-10-02 08:34:32.916 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "refresh_cache-92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:32 compute-0 nova_compute[260603]: 2025-10-02 08:34:32.917 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquired lock "refresh_cache-92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:32 compute-0 nova_compute[260603]: 2025-10-02 08:34:32.917 2 DEBUG nova.network.neutron [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:34:33 compute-0 nova_compute[260603]: 2025-10-02 08:34:33.046 2 DEBUG nova.compute.manager [req-adb7045b-be50-404f-b457-cf1cdb62b9d8 req-7ac3a0e8-c52e-42a6-aa4d-95c0c9ab7ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received event network-changed-e20b81f7-8e61-43c9-8817-6dfc1361a4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:33 compute-0 nova_compute[260603]: 2025-10-02 08:34:33.047 2 DEBUG nova.compute.manager [req-adb7045b-be50-404f-b457-cf1cdb62b9d8 req-7ac3a0e8-c52e-42a6-aa4d-95c0c9ab7ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Refreshing instance network info cache due to event network-changed-e20b81f7-8e61-43c9-8817-6dfc1361a4fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:34:33 compute-0 nova_compute[260603]: 2025-10-02 08:34:33.048 2 DEBUG oslo_concurrency.lockutils [req-adb7045b-be50-404f-b457-cf1cdb62b9d8 req-7ac3a0e8-c52e-42a6-aa4d-95c0c9ab7ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 339 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 239 op/s
Oct 02 08:34:33 compute-0 nova_compute[260603]: 2025-10-02 08:34:33.115 2 DEBUG nova.network.neutron [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:34:33 compute-0 nova_compute[260603]: 2025-10-02 08:34:33.224 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:34:33 compute-0 nova_compute[260603]: 2025-10-02 08:34:33.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.194 2 DEBUG nova.network.neutron [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Updating instance_info_cache with network_info: [{"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.213 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Releasing lock "refresh_cache-92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.213 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Instance network_info: |[{"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.214 2 DEBUG oslo_concurrency.lockutils [req-adb7045b-be50-404f-b457-cf1cdb62b9d8 req-7ac3a0e8-c52e-42a6-aa4d-95c0c9ab7ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.214 2 DEBUG nova.network.neutron [req-adb7045b-be50-404f-b457-cf1cdb62b9d8 req-7ac3a0e8-c52e-42a6-aa4d-95c0c9ab7ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Refreshing network info cache for port e20b81f7-8e61-43c9-8817-6dfc1361a4fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.218 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Start _get_guest_xml network_info=[{"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.221 2 WARNING nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.226 2 DEBUG nova.virt.libvirt.host [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.227 2 DEBUG nova.virt.libvirt.host [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.229 2 DEBUG nova.virt.libvirt.host [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.230 2 DEBUG nova.virt.libvirt.host [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.230 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.230 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.231 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.232 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.232 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.232 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.233 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.233 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.233 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.234 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.234 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.234 2 DEBUG nova.virt.hardware [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.237 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808156364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.661 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.685 2 DEBUG nova.storage.rbd_utils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] rbd image 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.690 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:34.820 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:34.821 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:34.821 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:34 compute-0 nova_compute[260603]: 2025-10-02 08:34:34.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:34 compute-0 ceph-mon[74477]: pgmap v1637: 305 pgs: 305 active+clean; 339 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 239 op/s
Oct 02 08:34:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/808156364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 339 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 153 op/s
Oct 02 08:34:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158898546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.101 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.104 2 DEBUG nova.virt.libvirt.vif [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:34:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1098684972',display_name='tempest-ServerAddressesTestJSON-server-1098684972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1098684972',id=84,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3282d5aea9414fb399ecefab98aaae1f',ramdisk_id='',reservation_id='r-hog3uwdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-131594302',owner_user_name='tempest-ServerAddressesTestJSON-131594302-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:34:29Z,user_data=None,user_id='a9af39d3a3a84355a9892112bc52b9ba',uuid=92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.105 2 DEBUG nova.network.os_vif_util [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Converting VIF {"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.106 2 DEBUG nova.network.os_vif_util [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:c9:81,bridge_name='br-int',has_traffic_filtering=True,id=e20b81f7-8e61-43c9-8817-6dfc1361a4fc,network=Network(9d77520f-250a-4dd0-879a-f74e997e9ca3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape20b81f7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.109 2 DEBUG nova.objects.instance [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lazy-loading 'pci_devices' on Instance uuid 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.126 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <uuid>92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1</uuid>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <name>instance-00000054</name>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerAddressesTestJSON-server-1098684972</nova:name>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:34:34</nova:creationTime>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <nova:user uuid="a9af39d3a3a84355a9892112bc52b9ba">tempest-ServerAddressesTestJSON-131594302-project-member</nova:user>
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <nova:project uuid="3282d5aea9414fb399ecefab98aaae1f">tempest-ServerAddressesTestJSON-131594302</nova:project>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <nova:port uuid="e20b81f7-8e61-43c9-8817-6dfc1361a4fc">
Oct 02 08:34:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <system>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <entry name="serial">92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1</entry>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <entry name="uuid">92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1</entry>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </system>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <os>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   </os>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <features>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   </features>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk">
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk.config">
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:b8:c9:81"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <target dev="tape20b81f7-8e"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1/console.log" append="off"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <video>
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </video>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:34:35 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:34:35 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:34:35 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:34:35 compute-0 nova_compute[260603]: </domain>
Oct 02 08:34:35 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.127 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Preparing to wait for external event network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.128 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.128 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.128 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.129 2 DEBUG nova.virt.libvirt.vif [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:34:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1098684972',display_name='tempest-ServerAddressesTestJSON-server-1098684972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1098684972',id=84,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3282d5aea9414fb399ecefab98aaae1f',ramdisk_id='',reservation_id='r-hog3uwdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-131594302',owner_user_name='tempest-ServerAddressesTestJSON-131594302-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:34:29Z,user_data=None,user_id='a9af39d3a3a84355a9892112bc52b9ba',uuid=92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.129 2 DEBUG nova.network.os_vif_util [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Converting VIF {"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.130 2 DEBUG nova.network.os_vif_util [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:c9:81,bridge_name='br-int',has_traffic_filtering=True,id=e20b81f7-8e61-43c9-8817-6dfc1361a4fc,network=Network(9d77520f-250a-4dd0-879a-f74e997e9ca3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape20b81f7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.130 2 DEBUG os_vif [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:c9:81,bridge_name='br-int',has_traffic_filtering=True,id=e20b81f7-8e61-43c9-8817-6dfc1361a4fc,network=Network(9d77520f-250a-4dd0-879a-f74e997e9ca3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape20b81f7-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.134 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape20b81f7-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.134 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape20b81f7-8e, col_values=(('external_ids', {'iface-id': 'e20b81f7-8e61-43c9-8817-6dfc1361a4fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:c9:81', 'vm-uuid': '92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:35 compute-0 NetworkManager[45129]: <info>  [1759394075.1385] manager: (tape20b81f7-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.147 2 INFO os_vif [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:c9:81,bridge_name='br-int',has_traffic_filtering=True,id=e20b81f7-8e61-43c9-8817-6dfc1361a4fc,network=Network(9d77520f-250a-4dd0-879a-f74e997e9ca3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape20b81f7-8e')
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.204 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.205 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.206 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] No VIF found with MAC fa:16:3e:b8:c9:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.207 2 INFO nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Using config drive
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.241 2 DEBUG nova.storage.rbd_utils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] rbd image 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:35 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d00000053.scope: Deactivated successfully.
Oct 02 08:34:35 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d00000053.scope: Consumed 12.897s CPU time.
Oct 02 08:34:35 compute-0 systemd-machined[214636]: Machine qemu-93-instance-00000053 terminated.
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.821 2 INFO nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Creating config drive at /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1/disk.config
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.830 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptddhiywx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/158898546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:35 compute-0 nova_compute[260603]: 2025-10-02 08:34:35.980 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptddhiywx" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.020 2 DEBUG nova.storage.rbd_utils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] rbd image 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.025 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1/disk.config 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.199 2 DEBUG oslo_concurrency.processutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1/disk.config 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.201 2 INFO nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Deleting local config drive /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1/disk.config because it was imported into RBD.
Oct 02 08:34:36 compute-0 kernel: tape20b81f7-8e: entered promiscuous mode
Oct 02 08:34:36 compute-0 NetworkManager[45129]: <info>  [1759394076.2647] manager: (tape20b81f7-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Oct 02 08:34:36 compute-0 systemd-udevd[340353]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:34:36 compute-0 ovn_controller[152344]: 2025-10-02T08:34:36Z|00783|binding|INFO|Claiming lport e20b81f7-8e61-43c9-8817-6dfc1361a4fc for this chassis.
Oct 02 08:34:36 compute-0 ovn_controller[152344]: 2025-10-02T08:34:36Z|00784|binding|INFO|e20b81f7-8e61-43c9-8817-6dfc1361a4fc: Claiming fa:16:3e:b8:c9:81 10.100.0.5
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.279 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:c9:81 10.100.0.5'], port_security=['fa:16:3e:b8:c9:81 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d77520f-250a-4dd0-879a-f74e997e9ca3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3282d5aea9414fb399ecefab98aaae1f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c58cc91d-4606-4ef0-8b8f-76f68f93f85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1763449-8639-4552-be1f-d1ce06847fff, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e20b81f7-8e61-43c9-8817-6dfc1361a4fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:36 compute-0 NetworkManager[45129]: <info>  [1759394076.2835] device (tape20b81f7-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:34:36 compute-0 NetworkManager[45129]: <info>  [1759394076.2846] device (tape20b81f7-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.284 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e20b81f7-8e61-43c9-8817-6dfc1361a4fc in datapath 9d77520f-250a-4dd0-879a-f74e997e9ca3 bound to our chassis
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.286 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d77520f-250a-4dd0-879a-f74e997e9ca3
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.298 2 INFO nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance shutdown successfully after 13 seconds.
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.309 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[497db527-5359-457f-8381-67cf81fcf5ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.310 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d77520f-21 in ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.312 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d77520f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.313 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b345890-0900-4fc5-9518-23373c5be6d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.313 2 INFO nova.virt.libvirt.driver [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance destroyed successfully.
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.314 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4869e39-db8f-47a2-bd24-06090dad9202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 systemd-machined[214636]: New machine qemu-95-instance-00000054.
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.323 2 INFO nova.virt.libvirt.driver [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance destroyed successfully.
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.333 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[93b733f2-6e44-41f6-905b-2e4672aeae6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-00000054.
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.360 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[89489f55-b14c-47ba-a308-8619a8dc11c8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_controller[152344]: 2025-10-02T08:34:36Z|00785|binding|INFO|Setting lport e20b81f7-8e61-43c9-8817-6dfc1361a4fc ovn-installed in OVS
Oct 02 08:34:36 compute-0 ovn_controller[152344]: 2025-10-02T08:34:36Z|00786|binding|INFO|Setting lport e20b81f7-8e61-43c9-8817-6dfc1361a4fc up in Southbound
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.410 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[63561904-af43-4040-ac0d-88a05ba43f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 NetworkManager[45129]: <info>  [1759394076.4172] manager: (tap9d77520f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.415 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab84ceea-6f7a-4803-907d-bde50faa16d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.462 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8fe76c-560e-4ab7-a92c-883cc9fa53d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.465 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9bdfd3-6928-48cc-be47-95038eb70359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 NetworkManager[45129]: <info>  [1759394076.4989] device (tap9d77520f-20): carrier: link connected
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.507 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[152c7d60-641b-4049-ad46-dc20d0ad72b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47eb2617-5a24-4573-bbca-01a154210ecd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d77520f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:77:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495510, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340460, 'error': None, 'target': 'ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.544 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ff14cc-02cc-4f09-a6c0-57353201e572]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:77aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495510, 'tstamp': 495510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340461, 'error': None, 'target': 'ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.562 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d11b35fc-8d9c-4e45-b36a-fefc445b6aee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d77520f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:77:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495510, 'reachable_time': 29396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340462, 'error': None, 'target': 'ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.598 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4e8a38-642a-480f-852f-6442b38d276b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7ce5e4-7881-4d91-b745-5f8d9ca148cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.666 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d77520f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.667 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.668 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d77520f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:36 compute-0 NetworkManager[45129]: <info>  [1759394076.6707] manager: (tap9d77520f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:36 compute-0 kernel: tap9d77520f-20: entered promiscuous mode
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.674 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d77520f-20, col_values=(('external_ids', {'iface-id': '844a6602-0e1c-4a75-a390-7e5fa3fcd2cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:36 compute-0 ovn_controller[152344]: 2025-10-02T08:34:36Z|00787|binding|INFO|Releasing lport 844a6602-0e1c-4a75-a390-7e5fa3fcd2cb from this chassis (sb_readonly=0)
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.702 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d77520f-250a-4dd0-879a-f74e997e9ca3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d77520f-250a-4dd0-879a-f74e997e9ca3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.703 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de6ec319-0cba-432d-91f8-89d3e0253f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.703 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-9d77520f-250a-4dd0-879a-f74e997e9ca3
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/9d77520f-250a-4dd0-879a-f74e997e9ca3.pid.haproxy
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 9d77520f-250a-4dd0-879a-f74e997e9ca3
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:34:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:36.705 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3', 'env', 'PROCESS_TAG=haproxy-9d77520f-250a-4dd0-879a-f74e997e9ca3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d77520f-250a-4dd0-879a-f74e997e9ca3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.754 2 INFO nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Deleting instance files /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346_del
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.755 2 INFO nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Deletion of /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346_del complete
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.880 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.887 2 DEBUG nova.network.neutron [req-adb7045b-be50-404f-b457-cf1cdb62b9d8 req-7ac3a0e8-c52e-42a6-aa4d-95c0c9ab7ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Updated VIF entry in instance network info cache for port e20b81f7-8e61-43c9-8817-6dfc1361a4fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.888 2 DEBUG nova.network.neutron [req-adb7045b-be50-404f-b457-cf1cdb62b9d8 req-7ac3a0e8-c52e-42a6-aa4d-95c0c9ab7ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Updating instance_info_cache with network_info: [{"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.899 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.900 2 INFO nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Creating image(s)
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.927 2 DEBUG nova.storage.rbd_utils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:36 compute-0 ceph-mon[74477]: pgmap v1638: 305 pgs: 305 active+clean; 339 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 153 op/s
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.967 2 DEBUG nova.storage.rbd_utils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:36 compute-0 nova_compute[260603]: 2025-10-02 08:34:36.997 2 DEBUG nova.storage.rbd_utils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.005 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.049 2 DEBUG oslo_concurrency.lockutils [req-adb7045b-be50-404f-b457-cf1cdb62b9d8 req-7ac3a0e8-c52e-42a6-aa4d-95c0c9ab7ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 339 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 153 op/s
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.088 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.089 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.089 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.090 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.119 2 DEBUG nova.storage.rbd_utils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.124 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:37 compute-0 podman[340602]: 2025-10-02 08:34:37.164181933 +0000 UTC m=+0.046051925 container create f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:34:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:37 compute-0 systemd[1]: Started libpod-conmon-f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9.scope.
Oct 02 08:34:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:37 compute-0 podman[340602]: 2025-10-02 08:34:37.140669996 +0000 UTC m=+0.022540008 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8a8fbb99ddb45f899c6733f25db714b918cd6e45a4247f503c75ea3b74c708/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:37 compute-0 podman[340602]: 2025-10-02 08:34:37.253641171 +0000 UTC m=+0.135511163 container init f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 08:34:37 compute-0 podman[340602]: 2025-10-02 08:34:37.260654983 +0000 UTC m=+0.142524975 container start f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:34:37 compute-0 neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3[340641]: [NOTICE]   (340648) : New worker (340650) forked
Oct 02 08:34:37 compute-0 neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3[340641]: [NOTICE]   (340648) : Loading success.
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.290 2 DEBUG nova.compute.manager [req-8b2ca59e-9053-472d-90ae-5fe0735385c3 req-e4d382c3-2d8a-48ca-a33b-451b3fa43d58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received event network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.291 2 DEBUG oslo_concurrency.lockutils [req-8b2ca59e-9053-472d-90ae-5fe0735385c3 req-e4d382c3-2d8a-48ca-a33b-451b3fa43d58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.292 2 DEBUG oslo_concurrency.lockutils [req-8b2ca59e-9053-472d-90ae-5fe0735385c3 req-e4d382c3-2d8a-48ca-a33b-451b3fa43d58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.293 2 DEBUG oslo_concurrency.lockutils [req-8b2ca59e-9053-472d-90ae-5fe0735385c3 req-e4d382c3-2d8a-48ca-a33b-451b3fa43d58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.293 2 DEBUG nova.compute.manager [req-8b2ca59e-9053-472d-90ae-5fe0735385c3 req-e4d382c3-2d8a-48ca-a33b-451b3fa43d58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Processing event network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.389 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.438 2 DEBUG nova.storage.rbd_utils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] resizing rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.553 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.554 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Ensure instance console log exists: /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.555 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.555 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.557 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.560 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.562 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394077.5612469, 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.562 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] VM Started (Lifecycle Event)
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.570 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.576 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.578 2 WARNING nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.586 2 INFO nova.virt.libvirt.driver [-] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Instance spawned successfully.
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.586 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.590 2 DEBUG nova.virt.libvirt.host [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.590 2 DEBUG nova.virt.libvirt.host [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.594 2 DEBUG nova.virt.libvirt.host [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.594 2 DEBUG nova.virt.libvirt.host [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.595 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.595 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.595 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.596 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.596 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.596 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.596 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.596 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.597 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.597 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.597 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.597 2 DEBUG nova.virt.hardware [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.597 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.600 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.602 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.628 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.685 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.686 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394077.5614817, 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.686 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] VM Paused (Lifecycle Event)
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.690 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.691 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.691 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.746 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.752 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394077.5737965, 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.752 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] VM Resumed (Lifecycle Event)
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.811 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.822 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.836 2 INFO nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Took 8.56 seconds to spawn the instance on the hypervisor.
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.836 2 DEBUG nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.871 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.910 2 INFO nova.compute.manager [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Took 9.72 seconds to build instance.
Oct 02 08:34:37 compute-0 nova_compute[260603]: 2025-10-02 08:34:37.927 2 DEBUG oslo_concurrency.lockutils [None req-c747c10f-e087-47e3-b47e-38bf53d8152a a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661076763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.109 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.151 2 DEBUG nova.storage.rbd_utils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.161 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0025473801645829264 of space, bias 1.0, pg target 0.7642140493748779 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:34:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:34:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/401879651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.691 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.696 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <uuid>62bd6fb5-9b1d-40a6-81da-a3d89d022346</uuid>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <name>instance-00000053</name>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerShowV257Test-server-288043821</nova:name>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:34:37</nova:creationTime>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <nova:user uuid="c49315d68a544e0fb41f4e3ba5c71ba5">tempest-ServerShowV257Test-1175614036-project-member</nova:user>
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <nova:project uuid="14d1e9bc40ee43a2bad5cc224c4ea79f">tempest-ServerShowV257Test-1175614036</nova:project>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <system>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <entry name="serial">62bd6fb5-9b1d-40a6-81da-a3d89d022346</entry>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <entry name="uuid">62bd6fb5-9b1d-40a6-81da-a3d89d022346</entry>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     </system>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <os>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   </os>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <features>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   </features>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk">
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config">
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/console.log" append="off"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <video>
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     </video>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:34:38 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:34:38 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:34:38 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:34:38 compute-0 nova_compute[260603]: </domain>
Oct 02 08:34:38 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.754 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.755 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.756 2 INFO nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Using config drive
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.788 2 DEBUG nova.storage.rbd_utils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.809 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:38 compute-0 nova_compute[260603]: 2025-10-02 08:34:38.840 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'keypairs' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:38 compute-0 ceph-mon[74477]: pgmap v1639: 305 pgs: 305 active+clean; 339 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 153 op/s
Oct 02 08:34:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3661076763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/401879651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.003 2 INFO nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Creating config drive at /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.015 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3rvf3p5d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 324 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.9 MiB/s wr, 286 op/s
Oct 02 08:34:39 compute-0 kernel: tap597a75f0-66 (unregistering): left promiscuous mode
Oct 02 08:34:39 compute-0 NetworkManager[45129]: <info>  [1759394079.1392] device (tap597a75f0-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:34:39 compute-0 ovn_controller[152344]: 2025-10-02T08:34:39Z|00788|binding|INFO|Releasing lport 597a75f0-6607-46bb-a380-b46a358c3bf7 from this chassis (sb_readonly=0)
Oct 02 08:34:39 compute-0 ovn_controller[152344]: 2025-10-02T08:34:39Z|00789|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 down in Southbound
Oct 02 08:34:39 compute-0 ovn_controller[152344]: 2025-10-02T08:34:39Z|00790|binding|INFO|Removing iface tap597a75f0-66 ovn-installed in OVS
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.160 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:ab:f3 10.100.0.7'], port_security=['fa:16:3e:f5:ab:f3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5b5054c4-5d13-4ca5-8037-e0d93f21d9ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=597a75f0-6607-46bb-a380-b46a358c3bf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.161 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 597a75f0-6607-46bb-a380-b46a358c3bf7 in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 unbound from our chassis
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.162 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d080395c-82e7-4bc3-9e24-0e91b972842d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.174 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3rvf3p5d" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.211 2 DEBUG nova.storage.rbd_utils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] rbd image 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.217 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:39 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 02 08:34:39 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000052.scope: Consumed 12.525s CPU time.
Oct 02 08:34:39 compute-0 systemd-machined[214636]: Machine qemu-94-instance-00000052 terminated.
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.387 2 DEBUG nova.compute.manager [req-7ca0c783-d130-4445-9095-035dac512202 req-20956c03-a534-45b0-9a4d-038431ca1d08 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received event network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.388 2 DEBUG oslo_concurrency.lockutils [req-7ca0c783-d130-4445-9095-035dac512202 req-20956c03-a534-45b0-9a4d-038431ca1d08 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.388 2 DEBUG oslo_concurrency.lockutils [req-7ca0c783-d130-4445-9095-035dac512202 req-20956c03-a534-45b0-9a4d-038431ca1d08 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.388 2 DEBUG oslo_concurrency.lockutils [req-7ca0c783-d130-4445-9095-035dac512202 req-20956c03-a534-45b0-9a4d-038431ca1d08 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.388 2 DEBUG nova.compute.manager [req-7ca0c783-d130-4445-9095-035dac512202 req-20956c03-a534-45b0-9a4d-038431ca1d08 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] No waiting events found dispatching network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.389 2 WARNING nova.compute.manager [req-7ca0c783-d130-4445-9095-035dac512202 req-20956c03-a534-45b0-9a4d-038431ca1d08 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received unexpected event network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc for instance with vm_state active and task_state None.
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.392 2 DEBUG oslo_concurrency.processutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config 62bd6fb5-9b1d-40a6-81da-a3d89d022346_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.392 2 INFO nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Deleting local config drive /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346/disk.config because it was imported into RBD.
Oct 02 08:34:39 compute-0 systemd-machined[214636]: New machine qemu-96-instance-00000053.
Oct 02 08:34:39 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-00000053.
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.605 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.607 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.607 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.607 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.608 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.609 2 INFO nova.compute.manager [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Terminating instance
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.610 2 DEBUG nova.compute.manager [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:34:39 compute-0 kernel: tape20b81f7-8e (unregistering): left promiscuous mode
Oct 02 08:34:39 compute-0 NetworkManager[45129]: <info>  [1759394079.6513] device (tape20b81f7-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:34:39 compute-0 ovn_controller[152344]: 2025-10-02T08:34:39Z|00791|binding|INFO|Releasing lport e20b81f7-8e61-43c9-8817-6dfc1361a4fc from this chassis (sb_readonly=0)
Oct 02 08:34:39 compute-0 ovn_controller[152344]: 2025-10-02T08:34:39Z|00792|binding|INFO|Setting lport e20b81f7-8e61-43c9-8817-6dfc1361a4fc down in Southbound
Oct 02 08:34:39 compute-0 ovn_controller[152344]: 2025-10-02T08:34:39Z|00793|binding|INFO|Removing iface tape20b81f7-8e ovn-installed in OVS
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.717 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:c9:81 10.100.0.5'], port_security=['fa:16:3e:b8:c9:81 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d77520f-250a-4dd0-879a-f74e997e9ca3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3282d5aea9414fb399ecefab98aaae1f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c58cc91d-4606-4ef0-8b8f-76f68f93f85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1763449-8639-4552-be1f-d1ce06847fff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e20b81f7-8e61-43c9-8817-6dfc1361a4fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.718 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e20b81f7-8e61-43c9-8817-6dfc1361a4fc in datapath 9d77520f-250a-4dd0-879a-f74e997e9ca3 unbound from our chassis
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.720 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d77520f-250a-4dd0-879a-f74e997e9ca3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2019885-47f9-484c-8c69-515967219a66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:39.726 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3 namespace which is not needed anymore
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:39 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 02 08:34:39 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000054.scope: Consumed 3.155s CPU time.
Oct 02 08:34:39 compute-0 systemd-machined[214636]: Machine qemu-95-instance-00000054 terminated.
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.854 2 INFO nova.virt.libvirt.driver [-] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Instance destroyed successfully.
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.854 2 DEBUG nova.objects.instance [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lazy-loading 'resources' on Instance uuid 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.877 2 DEBUG nova.virt.libvirt.vif [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1098684972',display_name='tempest-ServerAddressesTestJSON-server-1098684972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1098684972',id=84,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3282d5aea9414fb399ecefab98aaae1f',ramdisk_id='',reservation_id='r-hog3uwdm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-131594302',owner_user_name='tempest-ServerAddressesTestJSON-131594302-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:34:37Z,user_data=None,user_id='a9af39d3a3a84355a9892112bc52b9ba',uuid=92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.878 2 DEBUG nova.network.os_vif_util [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Converting VIF {"id": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "address": "fa:16:3e:b8:c9:81", "network": {"id": "9d77520f-250a-4dd0-879a-f74e997e9ca3", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-412855416-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3282d5aea9414fb399ecefab98aaae1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape20b81f7-8e", "ovs_interfaceid": "e20b81f7-8e61-43c9-8817-6dfc1361a4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.878 2 DEBUG nova.network.os_vif_util [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:c9:81,bridge_name='br-int',has_traffic_filtering=True,id=e20b81f7-8e61-43c9-8817-6dfc1361a4fc,network=Network(9d77520f-250a-4dd0-879a-f74e997e9ca3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape20b81f7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.879 2 DEBUG os_vif [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:c9:81,bridge_name='br-int',has_traffic_filtering=True,id=e20b81f7-8e61-43c9-8817-6dfc1361a4fc,network=Network(9d77520f-250a-4dd0-879a-f74e997e9ca3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape20b81f7-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape20b81f7-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.893 2 INFO os_vif [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:c9:81,bridge_name='br-int',has_traffic_filtering=True,id=e20b81f7-8e61-43c9-8817-6dfc1361a4fc,network=Network(9d77520f-250a-4dd0-879a-f74e997e9ca3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape20b81f7-8e')
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.913 2 INFO nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance shutdown successfully after 13 seconds.
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.926 2 INFO nova.virt.libvirt.driver [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance destroyed successfully.
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.926 2 DEBUG nova.objects.instance [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'numa_topology' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.943 2 INFO nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Attempting rescue
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.944 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 08:34:39 compute-0 neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3[340641]: [NOTICE]   (340648) : haproxy version is 2.8.14-c23fe91
Oct 02 08:34:39 compute-0 neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3[340641]: [NOTICE]   (340648) : path to executable is /usr/sbin/haproxy
Oct 02 08:34:39 compute-0 neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3[340641]: [WARNING]  (340648) : Exiting Master process...
Oct 02 08:34:39 compute-0 neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3[340641]: [ALERT]    (340648) : Current worker (340650) exited with code 143 (Terminated)
Oct 02 08:34:39 compute-0 neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3[340641]: [WARNING]  (340648) : All workers exited. Exiting... (0)
Oct 02 08:34:39 compute-0 systemd[1]: libpod-f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9.scope: Deactivated successfully.
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.952 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 08:34:39 compute-0 nova_compute[260603]: 2025-10-02 08:34:39.953 2 INFO nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Creating image(s)
Oct 02 08:34:39 compute-0 podman[340922]: 2025-10-02 08:34:39.963215019 +0000 UTC m=+0.090741238 container died f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b8a8fbb99ddb45f899c6733f25db714b918cd6e45a4247f503c75ea3b74c708-merged.mount: Deactivated successfully.
Oct 02 08:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9-userdata-shm.mount: Deactivated successfully.
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.000 2 DEBUG nova.storage.rbd_utils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:40 compute-0 podman[340922]: 2025-10-02 08:34:40.003715047 +0000 UTC m=+0.131241256 container cleanup f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.011 2 DEBUG nova.objects.instance [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:40 compute-0 systemd[1]: libpod-conmon-f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9.scope: Deactivated successfully.
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.053 2 DEBUG nova.storage.rbd_utils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:40 compute-0 podman[341027]: 2025-10-02 08:34:40.079300008 +0000 UTC m=+0.055990083 container remove f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.086 2 DEBUG nova.storage.rbd_utils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.089 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e909342-e882-4974-a724-1af4366c2a47]: (4, ('Thu Oct  2 08:34:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3 (f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9)\nf98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9\nThu Oct  2 08:34:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3 (f98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9)\nf98eff45a906a7a94cad2c6f3ce2c88d6eb0db3dafc6a55e6f8e3ebdc0bb78b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.090 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[81f1d4c9-f726-4a44-81c4-162409e437ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.091 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d77520f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:40 compute-0 kernel: tap9d77520f-20: left promiscuous mode
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.124 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ecdfb8d8-f31a-4487-a0dc-a0217932048c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.144 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[38b06632-d64b-4126-930e-6bfbc35e0667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.145 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd078e6-089f-45fd-8ffa-7885414cb704]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.161 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b96ec0d-b3bb-40ad-9e66-0dea72c7f199]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495500, 'reachable_time': 44401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341087, 'error': None, 'target': 'ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d77520f\x2d250a\x2d4dd0\x2d879a\x2df74e997e9ca3.mount: Deactivated successfully.
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.164 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d77520f-250a-4dd0-879a-f74e997e9ca3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:34:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:40.165 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2cce29-ec59-42bd-a367-b26dd538dd17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.176 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.177 2 DEBUG oslo_concurrency.lockutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.177 2 DEBUG oslo_concurrency.lockutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.178 2 DEBUG oslo_concurrency.lockutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.195 2 DEBUG nova.storage.rbd_utils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.198 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.305 2 INFO nova.virt.libvirt.driver [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Deleting instance files /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_del
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.306 2 INFO nova.virt.libvirt.driver [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Deletion of /var/lib/nova/instances/92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1_del complete
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.403 2 INFO nova.compute.manager [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.403 2 DEBUG oslo.service.loopingcall [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.403 2 DEBUG nova.compute.manager [-] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.404 2 DEBUG nova.network.neutron [-] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.456 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.457 2 DEBUG nova.objects.instance [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'migration_context' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.472 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.473 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Start _get_guest_xml network_info=[{"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2021168771-network", "vif_mac": "fa:16:3e:f5:ab:f3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.473 2 DEBUG nova.objects.instance [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'resources' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.486 2 WARNING nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.492 2 DEBUG nova.virt.libvirt.host [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.492 2 DEBUG nova.virt.libvirt.host [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.495 2 DEBUG nova.virt.libvirt.host [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.496 2 DEBUG nova.virt.libvirt.host [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.496 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.496 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.497 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.497 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.497 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.498 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.498 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.498 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.498 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.499 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.499 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.499 2 DEBUG nova.virt.hardware [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.499 2 DEBUG nova.objects.instance [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.513 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.550 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 62bd6fb5-9b1d-40a6-81da-a3d89d022346 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.550 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394080.5303762, 62bd6fb5-9b1d-40a6-81da-a3d89d022346 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.550 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] VM Resumed (Lifecycle Event)
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.553 2 DEBUG nova.compute.manager [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.553 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.561 2 INFO nova.virt.libvirt.driver [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance spawned successfully.
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.561 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.579 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.590 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.595 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.596 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.596 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.597 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.597 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.598 2 DEBUG nova.virt.libvirt.driver [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.609 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.609 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394080.5308301, 62bd6fb5-9b1d-40a6-81da-a3d89d022346 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] VM Started (Lifecycle Event)
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.631 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.634 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.660 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.674 2 DEBUG nova.compute.manager [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.720 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.720 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.720 2 DEBUG nova.objects.instance [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.771 2 DEBUG oslo_concurrency.lockutils [None req-f54e45aa-4629-4512-8a58-54b58d99bc2b c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/811457432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.932 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.933 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:40 compute-0 ceph-mon[74477]: pgmap v1640: 305 pgs: 305 active+clean; 324 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.9 MiB/s wr, 286 op/s
Oct 02 08:34:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/811457432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.969 2 DEBUG nova.network.neutron [-] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:40 compute-0 nova_compute[260603]: 2025-10-02 08:34:40.989 2 INFO nova.compute.manager [-] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Took 0.59 seconds to deallocate network for instance.
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.030 2 DEBUG nova.compute.manager [req-bcdedf80-4543-4684-87f1-427dd74cbdbb req-445529c8-8ea3-4998-afc0-4f0394283645 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received event network-vif-deleted-e20b81f7-8e61-43c9-8817-6dfc1361a4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.046 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.047 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 324 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 6.9 MiB/s wr, 222 op/s
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.161 2 DEBUG oslo_concurrency.processutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873472455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.402 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.404 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86502247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.713 2 DEBUG oslo_concurrency.processutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.722 2 DEBUG nova.compute.provider_tree [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.744 2 DEBUG nova.scheduler.client.report [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.771 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.820 2 INFO nova.scheduler.client.report [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Deleted allocations for instance 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1
Oct 02 08:34:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737306724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.862 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.864 2 DEBUG nova.virt.libvirt.vif [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-677598906',display_name='tempest-ServerRescueTestJSON-server-677598906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-677598906',id=82,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='629efd330be646b7a2941e0c83b86e0e',ramdisk_id='',reservation_id='r-j80za7bp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-846559261',owner_user_name='tempest-ServerRescueTestJSON-846559261-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:34:22Z,user_data=None,user_id='d2e113d3d74a43998ac8dbf246ae9095',uuid=5b5054c4-5d13-4ca5-8037-e0d93f21d9ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2021168771-network", "vif_mac": "fa:16:3e:f5:ab:f3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.864 2 DEBUG nova.network.os_vif_util [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converting VIF {"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2021168771-network", "vif_mac": "fa:16:3e:f5:ab:f3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.866 2 DEBUG nova.network.os_vif_util [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:ab:f3,bridge_name='br-int',has_traffic_filtering=True,id=597a75f0-6607-46bb-a380-b46a358c3bf7,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap597a75f0-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.867 2 DEBUG nova.objects.instance [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.892 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <uuid>5b5054c4-5d13-4ca5-8037-e0d93f21d9ea</uuid>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <name>instance-00000052</name>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueTestJSON-server-677598906</nova:name>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:34:40</nova:creationTime>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <nova:user uuid="d2e113d3d74a43998ac8dbf246ae9095">tempest-ServerRescueTestJSON-846559261-project-member</nova:user>
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <nova:project uuid="629efd330be646b7a2941e0c83b86e0e">tempest-ServerRescueTestJSON-846559261</nova:project>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <nova:port uuid="597a75f0-6607-46bb-a380-b46a358c3bf7">
Oct 02 08:34:41 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <system>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <entry name="serial">5b5054c4-5d13-4ca5-8037-e0d93f21d9ea</entry>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <entry name="uuid">5b5054c4-5d13-4ca5-8037-e0d93f21d9ea</entry>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </system>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <os>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   </os>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <features>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   </features>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.rescue">
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk">
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <target dev="vdb" bus="virtio"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config.rescue">
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:f5:ab:f3"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <target dev="tap597a75f0-66"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/console.log" append="off"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <video>
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </video>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:34:41 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:34:41 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:34:41 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:34:41 compute-0 nova_compute[260603]: </domain>
Oct 02 08:34:41 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.899 2 INFO nova.virt.libvirt.driver [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance destroyed successfully.
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.903 2 DEBUG oslo_concurrency.lockutils [None req-4d85bc61-1cd8-44df-9b9b-69ffcb73d237 a9af39d3a3a84355a9892112bc52b9ba 3282d5aea9414fb399ecefab98aaae1f - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1873472455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/86502247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/737306724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.963 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.964 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.964 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.965 2 DEBUG nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] No VIF found with MAC fa:16:3e:f5:ab:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:34:41 compute-0 nova_compute[260603]: 2025-10-02 08:34:41.966 2 INFO nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Using config drive
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.000 2 DEBUG nova.storage.rbd_utils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.023 2 DEBUG nova.objects.instance [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.034 2 DEBUG nova.compute.manager [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received event network-vif-unplugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.034 2 DEBUG oslo_concurrency.lockutils [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.034 2 DEBUG oslo_concurrency.lockutils [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.035 2 DEBUG oslo_concurrency.lockutils [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.035 2 DEBUG nova.compute.manager [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] No waiting events found dispatching network-vif-unplugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.035 2 WARNING nova.compute.manager [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received unexpected event network-vif-unplugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc for instance with vm_state deleted and task_state None.
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.035 2 DEBUG nova.compute.manager [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received event network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.036 2 DEBUG oslo_concurrency.lockutils [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.036 2 DEBUG oslo_concurrency.lockutils [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.036 2 DEBUG oslo_concurrency.lockutils [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.036 2 DEBUG nova.compute.manager [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] No waiting events found dispatching network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.036 2 WARNING nova.compute.manager [req-cdbfc253-e623-4eaf-af7b-7f64b8431c2a req-70527ecb-7a77-4bbc-91a7-31ef5fdc7f23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Received unexpected event network-vif-plugged-e20b81f7-8e61-43c9-8817-6dfc1361a4fc for instance with vm_state deleted and task_state None.
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.059 2 DEBUG nova.objects.instance [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'keypairs' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.499 2 INFO nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Creating config drive at /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config.rescue
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.507 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph6ipw3kr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.587 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.588 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.589 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.589 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.589 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.592 2 INFO nova.compute.manager [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Terminating instance
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.593 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "refresh_cache-62bd6fb5-9b1d-40a6-81da-a3d89d022346" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.593 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquired lock "refresh_cache-62bd6fb5-9b1d-40a6-81da-a3d89d022346" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.594 2 DEBUG nova.network.neutron [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.674 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph6ipw3kr" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.711 2 DEBUG nova.storage.rbd_utils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] rbd image 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.716 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config.rescue 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.771 2 DEBUG nova.network.neutron [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.927 2 DEBUG oslo_concurrency.processutils [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config.rescue 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:42 compute-0 nova_compute[260603]: 2025-10-02 08:34:42.928 2 INFO nova.virt.libvirt.driver [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Deleting local config drive /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea/disk.config.rescue because it was imported into RBD.
Oct 02 08:34:42 compute-0 ceph-mon[74477]: pgmap v1641: 305 pgs: 305 active+clean; 324 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 6.9 MiB/s wr, 222 op/s
Oct 02 08:34:43 compute-0 kernel: tap597a75f0-66: entered promiscuous mode
Oct 02 08:34:43 compute-0 NetworkManager[45129]: <info>  [1759394083.0195] manager: (tap597a75f0-66): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Oct 02 08:34:43 compute-0 ovn_controller[152344]: 2025-10-02T08:34:43Z|00794|binding|INFO|Claiming lport 597a75f0-6607-46bb-a380-b46a358c3bf7 for this chassis.
Oct 02 08:34:43 compute-0 ovn_controller[152344]: 2025-10-02T08:34:43Z|00795|binding|INFO|597a75f0-6607-46bb-a380-b46a358c3bf7: Claiming fa:16:3e:f5:ab:f3 10.100.0.7
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:43.033 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:ab:f3 10.100.0.7'], port_security=['fa:16:3e:f5:ab:f3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5b5054c4-5d13-4ca5-8037-e0d93f21d9ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=597a75f0-6607-46bb-a380-b46a358c3bf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:43.035 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 597a75f0-6607-46bb-a380-b46a358c3bf7 in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 bound to our chassis
Oct 02 08:34:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:43.036 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:34:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:43.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab60736-c669-4ebd-89f4-3e2e9c8f8a81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:43 compute-0 systemd-udevd[341285]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:34:43 compute-0 ovn_controller[152344]: 2025-10-02T08:34:43Z|00796|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 up in Southbound
Oct 02 08:34:43 compute-0 ovn_controller[152344]: 2025-10-02T08:34:43Z|00797|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 ovn-installed in OVS
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:43 compute-0 systemd-machined[214636]: New machine qemu-97-instance-00000052.
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:43 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-00000052.
Oct 02 08:34:43 compute-0 NetworkManager[45129]: <info>  [1759394083.0830] device (tap597a75f0-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:34:43 compute-0 NetworkManager[45129]: <info>  [1759394083.0840] device (tap597a75f0-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.083 2 DEBUG nova.network.neutron [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 341 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 404 op/s
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.099 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Releasing lock "refresh_cache-62bd6fb5-9b1d-40a6-81da-a3d89d022346" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.100 2 DEBUG nova.compute.manager [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.102 2 DEBUG nova.compute.manager [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.103 2 DEBUG oslo_concurrency.lockutils [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.103 2 DEBUG oslo_concurrency.lockutils [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.103 2 DEBUG oslo_concurrency.lockutils [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.103 2 DEBUG nova.compute.manager [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.103 2 WARNING nova.compute.manager [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state active and task_state rescuing.
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.104 2 DEBUG nova.compute.manager [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.104 2 DEBUG oslo_concurrency.lockutils [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.104 2 DEBUG oslo_concurrency.lockutils [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.104 2 DEBUG oslo_concurrency.lockutils [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.104 2 DEBUG nova.compute.manager [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.105 2 WARNING nova.compute.manager [req-61f00736-421d-4f00-85b7-4d4674315928 req-cf406247-a0a7-4af9-9fde-217286bb4a23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state active and task_state rescuing.
Oct 02 08:34:43 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000053.scope: Deactivated successfully.
Oct 02 08:34:43 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000053.scope: Consumed 3.490s CPU time.
Oct 02 08:34:43 compute-0 systemd-machined[214636]: Machine qemu-96-instance-00000053 terminated.
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.332 2 INFO nova.virt.libvirt.driver [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance destroyed successfully.
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.333 2 DEBUG nova.objects.instance [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lazy-loading 'resources' on Instance uuid 62bd6fb5-9b1d-40a6-81da-a3d89d022346 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.509 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.510 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.530 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.638 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.638 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.650 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.650 2 INFO nova.compute.claims [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.802 2 INFO nova.virt.libvirt.driver [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Deleting instance files /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346_del
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.803 2 INFO nova.virt.libvirt.driver [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Deletion of /var/lib/nova/instances/62bd6fb5-9b1d-40a6-81da-a3d89d022346_del complete
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.834 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.894 2 INFO nova.compute.manager [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.896 2 DEBUG oslo.service.loopingcall [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.897 2 DEBUG nova.compute.manager [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:34:43 compute-0 nova_compute[260603]: 2025-10-02 08:34:43.897 2 DEBUG nova.network.neutron [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.079 2 DEBUG nova.network.neutron [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.090 2 DEBUG nova.network.neutron [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.102 2 INFO nova.compute.manager [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Took 0.21 seconds to deallocate network for instance.
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.142 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627174804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.318 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.326 2 DEBUG nova.compute.provider_tree [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.340 2 DEBUG nova.scheduler.client.report [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.357 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.358 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.361 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.401 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.402 2 DEBUG nova.network.neutron [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.422 2 INFO nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.449 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.486 2 DEBUG oslo_concurrency.processutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.556 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.557 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394084.5564675, 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.557 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] VM Resumed (Lifecycle Event)
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.562 2 DEBUG nova.compute.manager [None req-638643c0-c148-4ac5-bb67-4cdd692bf9c4 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.578 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.580 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.580 2 INFO nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Creating image(s)
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.604 2 DEBUG nova.storage.rbd_utils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image ba2cf934-ce76-4de7-a495-285f144bdab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.626 2 DEBUG nova.storage.rbd_utils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image ba2cf934-ce76-4de7-a495-285f144bdab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.657 2 DEBUG nova.storage.rbd_utils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image ba2cf934-ce76-4de7-a495-285f144bdab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.660 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.697 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.705 2 DEBUG nova.policy [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb1b3a5ae9514259b27a0b7a28f23cda', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.712 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.732 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.733 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.734 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.735 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.762 2 DEBUG nova.storage.rbd_utils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image ba2cf934-ce76-4de7-a495-285f144bdab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.767 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ba2cf934-ce76-4de7-a495-285f144bdab7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.810 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394084.5599117, 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.811 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] VM Started (Lifecycle Event)
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.867 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.873 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610957812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:44 compute-0 ceph-mon[74477]: pgmap v1642: 305 pgs: 305 active+clean; 341 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 404 op/s
Oct 02 08:34:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2627174804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1610957812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.986 2 DEBUG oslo_concurrency.processutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:44 compute-0 nova_compute[260603]: 2025-10-02 08:34:44.991 2 DEBUG nova.compute.provider_tree [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.008 2 DEBUG nova.scheduler.client.report [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.039 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.041 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ba2cf934-ce76-4de7-a495-285f144bdab7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 341 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.8 MiB/s wr, 315 op/s
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.119 2 DEBUG nova.storage.rbd_utils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] resizing rbd image ba2cf934-ce76-4de7-a495-285f144bdab7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.151 2 INFO nova.scheduler.client.report [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Deleted allocations for instance 62bd6fb5-9b1d-40a6-81da-a3d89d022346
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.216 2 DEBUG nova.objects.instance [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'migration_context' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.235 2 DEBUG oslo_concurrency.lockutils [None req-705cbfca-99d5-4e12-bb41-394d2368d5bb c49315d68a544e0fb41f4e3ba5c71ba5 14d1e9bc40ee43a2bad5cc224c4ea79f - - default default] Lock "62bd6fb5-9b1d-40a6-81da-a3d89d022346" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.236 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.237 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Ensure instance console log exists: /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.237 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.238 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.238 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.267 2 DEBUG nova.compute.manager [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.267 2 DEBUG oslo_concurrency.lockutils [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.267 2 DEBUG oslo_concurrency.lockutils [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.268 2 DEBUG oslo_concurrency.lockutils [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.268 2 DEBUG nova.compute.manager [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.268 2 WARNING nova.compute.manager [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state rescued and task_state None.
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.268 2 DEBUG nova.compute.manager [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.268 2 DEBUG oslo_concurrency.lockutils [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.269 2 DEBUG oslo_concurrency.lockutils [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.269 2 DEBUG oslo_concurrency.lockutils [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.269 2 DEBUG nova.compute.manager [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.269 2 WARNING nova.compute.manager [req-aecfe316-dcad-4608-9518-33ae7b3d4eb1 req-2a99fefb-a275-4b99-b8c9-ed22281472bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state rescued and task_state None.
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.388 2 INFO nova.compute.manager [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Unrescuing
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.388 2 DEBUG oslo_concurrency.lockutils [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.389 2 DEBUG oslo_concurrency.lockutils [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquired lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.389 2 DEBUG nova.network.neutron [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:34:45 compute-0 nova_compute[260603]: 2025-10-02 08:34:45.762 2 DEBUG nova.network.neutron [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Successfully created port: 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:34:45 compute-0 ceph-mon[74477]: pgmap v1643: 305 pgs: 305 active+clean; 341 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.8 MiB/s wr, 315 op/s
Oct 02 08:34:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 341 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.8 MiB/s wr, 315 op/s
Oct 02 08:34:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.300 2 DEBUG nova.network.neutron [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Successfully updated port: 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.320 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.321 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.321 2 DEBUG nova.network.neutron [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.427 2 DEBUG nova.compute.manager [req-91aa17cc-fc5e-4e17-badb-ef09409e3577 req-22d4c2c2-35ac-40b6-b899-6ad284399dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-changed-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.430 2 DEBUG nova.compute.manager [req-91aa17cc-fc5e-4e17-badb-ef09409e3577 req-22d4c2c2-35ac-40b6-b899-6ad284399dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Refreshing instance network info cache due to event network-changed-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.430 2 DEBUG oslo_concurrency.lockutils [req-91aa17cc-fc5e-4e17-badb-ef09409e3577 req-22d4c2c2-35ac-40b6-b899-6ad284399dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.797 2 DEBUG nova.network.neutron [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.872 2 DEBUG nova.network.neutron [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Updating instance_info_cache with network_info: [{"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.896 2 DEBUG oslo_concurrency.lockutils [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Releasing lock "refresh_cache-5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:47 compute-0 nova_compute[260603]: 2025-10-02 08:34:47.897 2 DEBUG nova.objects.instance [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'flavor' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:47 compute-0 kernel: tap597a75f0-66 (unregistering): left promiscuous mode
Oct 02 08:34:47 compute-0 NetworkManager[45129]: <info>  [1759394087.9975] device (tap597a75f0-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:34:48 compute-0 ovn_controller[152344]: 2025-10-02T08:34:48Z|00798|binding|INFO|Releasing lport 597a75f0-6607-46bb-a380-b46a358c3bf7 from this chassis (sb_readonly=0)
Oct 02 08:34:48 compute-0 ovn_controller[152344]: 2025-10-02T08:34:48Z|00799|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 down in Southbound
Oct 02 08:34:48 compute-0 ovn_controller[152344]: 2025-10-02T08:34:48Z|00800|binding|INFO|Removing iface tap597a75f0-66 ovn-installed in OVS
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:48.020 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:ab:f3 10.100.0.7'], port_security=['fa:16:3e:f5:ab:f3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5b5054c4-5d13-4ca5-8037-e0d93f21d9ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=597a75f0-6607-46bb-a380-b46a358c3bf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:48.022 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 597a75f0-6607-46bb-a380-b46a358c3bf7 in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 unbound from our chassis
Oct 02 08:34:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:48.024 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:34:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:48.027 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6db4c193-c43f-49b1-b962-7fa2b6916c7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:48 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 02 08:34:48 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000052.scope: Consumed 4.787s CPU time.
Oct 02 08:34:48 compute-0 systemd-machined[214636]: Machine qemu-97-instance-00000052 terminated.
Oct 02 08:34:48 compute-0 ceph-mon[74477]: pgmap v1644: 305 pgs: 305 active+clean; 341 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.8 MiB/s wr, 315 op/s
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.172 2 INFO nova.virt.libvirt.driver [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance destroyed successfully.
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.173 2 DEBUG nova.objects.instance [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'numa_topology' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:48 compute-0 kernel: tap597a75f0-66: entered promiscuous mode
Oct 02 08:34:48 compute-0 NetworkManager[45129]: <info>  [1759394088.2846] manager: (tap597a75f0-66): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.284 2 DEBUG nova.compute.manager [req-25db632a-7c5c-4ec3-bf72-97724f0d00db req-689948a9-1aa3-4e18-98ba-dcd984b5eb9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.285 2 DEBUG oslo_concurrency.lockutils [req-25db632a-7c5c-4ec3-bf72-97724f0d00db req-689948a9-1aa3-4e18-98ba-dcd984b5eb9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.285 2 DEBUG oslo_concurrency.lockutils [req-25db632a-7c5c-4ec3-bf72-97724f0d00db req-689948a9-1aa3-4e18-98ba-dcd984b5eb9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:48 compute-0 systemd-udevd[341589]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.285 2 DEBUG oslo_concurrency.lockutils [req-25db632a-7c5c-4ec3-bf72-97724f0d00db req-689948a9-1aa3-4e18-98ba-dcd984b5eb9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.286 2 DEBUG nova.compute.manager [req-25db632a-7c5c-4ec3-bf72-97724f0d00db req-689948a9-1aa3-4e18-98ba-dcd984b5eb9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.287 2 WARNING nova.compute.manager [req-25db632a-7c5c-4ec3-bf72-97724f0d00db req-689948a9-1aa3-4e18-98ba-dcd984b5eb9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state rescued and task_state unrescuing.
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:48 compute-0 ovn_controller[152344]: 2025-10-02T08:34:48Z|00801|binding|INFO|Claiming lport 597a75f0-6607-46bb-a380-b46a358c3bf7 for this chassis.
Oct 02 08:34:48 compute-0 ovn_controller[152344]: 2025-10-02T08:34:48Z|00802|binding|INFO|597a75f0-6607-46bb-a380-b46a358c3bf7: Claiming fa:16:3e:f5:ab:f3 10.100.0.7
Oct 02 08:34:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:48.302 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:ab:f3 10.100.0.7'], port_security=['fa:16:3e:f5:ab:f3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5b5054c4-5d13-4ca5-8037-e0d93f21d9ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=597a75f0-6607-46bb-a380-b46a358c3bf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:48.304 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 597a75f0-6607-46bb-a380-b46a358c3bf7 in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 bound to our chassis
Oct 02 08:34:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:48.305 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:34:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:48.306 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[32ee639f-5e15-4d41-9a1c-e4f70694029b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:48 compute-0 NetworkManager[45129]: <info>  [1759394088.3073] device (tap597a75f0-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:34:48 compute-0 NetworkManager[45129]: <info>  [1759394088.3090] device (tap597a75f0-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:34:48 compute-0 ovn_controller[152344]: 2025-10-02T08:34:48Z|00803|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 ovn-installed in OVS
Oct 02 08:34:48 compute-0 ovn_controller[152344]: 2025-10-02T08:34:48Z|00804|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 up in Southbound
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:48 compute-0 systemd-machined[214636]: New machine qemu-98-instance-00000052.
Oct 02 08:34:48 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000052.
Oct 02 08:34:48 compute-0 nova_compute[260603]: 2025-10-02 08:34:48.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.042 2 DEBUG nova.network.neutron [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.078 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.078 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance network_info: |[{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.079 2 DEBUG oslo_concurrency.lockutils [req-91aa17cc-fc5e-4e17-badb-ef09409e3577 req-22d4c2c2-35ac-40b6-b899-6ad284399dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.080 2 DEBUG nova.network.neutron [req-91aa17cc-fc5e-4e17-badb-ef09409e3577 req-22d4c2c2-35ac-40b6-b899-6ad284399dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Refreshing network info cache for port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.087 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start _get_guest_xml network_info=[{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:34:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 341 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.6 MiB/s wr, 444 op/s
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.094 2 WARNING nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.100 2 DEBUG nova.virt.libvirt.host [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.101 2 DEBUG nova.virt.libvirt.host [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.111 2 DEBUG nova.virt.libvirt.host [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.111 2 DEBUG nova.virt.libvirt.host [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.112 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.113 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.113 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.114 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.114 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.115 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.115 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.115 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.116 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.116 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.117 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.117 2 DEBUG nova.virt.hardware [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.122 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408840123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.622 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.643 2 DEBUG nova.storage.rbd_utils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.646 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.996 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.997 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394089.9962785, 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:49 compute-0 nova_compute[260603]: 2025-10-02 08:34:49.997 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] VM Resumed (Lifecycle Event)
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.029 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.034 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.072 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.073 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394089.996441, 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.074 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] VM Started (Lifecycle Event)
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.095 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.101 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:34:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3832544124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.134 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.147 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.149 2 DEBUG nova.virt.libvirt.vif [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:34:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.150 2 DEBUG nova.network.os_vif_util [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.153 2 DEBUG nova.network.os_vif_util [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:50 compute-0 ceph-mon[74477]: pgmap v1645: 305 pgs: 305 active+clean; 341 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.6 MiB/s wr, 444 op/s
Oct 02 08:34:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/408840123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3832544124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.156 2 DEBUG nova.objects.instance [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.170 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <uuid>ba2cf934-ce76-4de7-a495-285f144bdab7</uuid>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <name>instance-00000055</name>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestJSON-server-276449458</nova:name>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:34:49</nova:creationTime>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <nova:port uuid="961da5ba-b0ac-4a87-a74c-26d0d2d2bf50">
Oct 02 08:34:50 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <system>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <entry name="serial">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <entry name="uuid">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </system>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <os>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   </os>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <features>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   </features>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk">
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config">
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       </source>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:34:50 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:bb:af:04"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <target dev="tap961da5ba-b0"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/console.log" append="off"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <video>
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </video>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:34:50 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:34:50 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:34:50 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:34:50 compute-0 nova_compute[260603]: </domain>
Oct 02 08:34:50 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.183 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Preparing to wait for external event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.183 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.184 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.184 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.186 2 DEBUG nova.virt.libvirt.vif [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:34:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.187 2 DEBUG nova.network.os_vif_util [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.189 2 DEBUG nova.network.os_vif_util [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.190 2 DEBUG os_vif [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.193 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap961da5ba-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap961da5ba-b0, col_values=(('external_ids', {'iface-id': '961da5ba-b0ac-4a87-a74c-26d0d2d2bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:af:04', 'vm-uuid': 'ba2cf934-ce76-4de7-a495-285f144bdab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:50 compute-0 NetworkManager[45129]: <info>  [1759394090.2025] manager: (tap961da5ba-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.207 2 INFO os_vif [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.262 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.263 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.263 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No VIF found with MAC fa:16:3e:bb:af:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.263 2 INFO nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Using config drive
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.282 2 DEBUG nova.storage.rbd_utils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.382 2 DEBUG nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.383 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.383 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.383 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.383 2 DEBUG nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.384 2 WARNING nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state rescued and task_state unrescuing.
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.384 2 DEBUG nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.384 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.384 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.385 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.385 2 DEBUG nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.385 2 WARNING nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state rescued and task_state unrescuing.
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.386 2 DEBUG nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.386 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.386 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.386 2 DEBUG oslo_concurrency.lockutils [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.386 2 DEBUG nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.387 2 WARNING nova.compute.manager [req-abc9ca9d-547d-4520-82ed-d25c16187d9e req-54766910-e24b-4a5d-b664-2b07217c330d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state rescued and task_state unrescuing.
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.395 2 DEBUG nova.compute.manager [None req-700c7bfb-26d8-4756-a69f-fe0424b74b81 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.783 2 INFO nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Creating config drive at /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/disk.config
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.796 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6erss0xp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.870 2 DEBUG nova.network.neutron [req-91aa17cc-fc5e-4e17-badb-ef09409e3577 req-22d4c2c2-35ac-40b6-b899-6ad284399dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updated VIF entry in instance network info cache for port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.871 2 DEBUG nova.network.neutron [req-91aa17cc-fc5e-4e17-badb-ef09409e3577 req-22d4c2c2-35ac-40b6-b899-6ad284399dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.888 2 DEBUG oslo_concurrency.lockutils [req-91aa17cc-fc5e-4e17-badb-ef09409e3577 req-22d4c2c2-35ac-40b6-b899-6ad284399dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.958 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6erss0xp" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.993 2 DEBUG nova.storage.rbd_utils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:34:50 compute-0 nova_compute[260603]: 2025-10-02 08:34:50.998 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/disk.config ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 341 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 312 op/s
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.171 2 DEBUG oslo_concurrency.processutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/disk.config ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.172 2 INFO nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Deleting local config drive /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/disk.config because it was imported into RBD.
Oct 02 08:34:51 compute-0 NetworkManager[45129]: <info>  [1759394091.2460] manager: (tap961da5ba-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Oct 02 08:34:51 compute-0 kernel: tap961da5ba-b0: entered promiscuous mode
Oct 02 08:34:51 compute-0 ovn_controller[152344]: 2025-10-02T08:34:51Z|00805|binding|INFO|Claiming lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for this chassis.
Oct 02 08:34:51 compute-0 ovn_controller[152344]: 2025-10-02T08:34:51Z|00806|binding|INFO|961da5ba-b0ac-4a87-a74c-26d0d2d2bf50: Claiming fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.267 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.269 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.272 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.285 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[98e15c46-298b-4839-b5b5-f86b0da2d355]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.286 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74f187c2-71 in ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.288 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74f187c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.288 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e67ef60a-c908-4488-afde-ad81206744db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22414433-71f8-4f42-bdc2-f841364f1900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 systemd-udevd[341830]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.304 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7a05613c-f925-4581-998c-b2afbf8cd325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 systemd-machined[214636]: New machine qemu-99-instance-00000055.
Oct 02 08:34:51 compute-0 NetworkManager[45129]: <info>  [1759394091.3131] device (tap961da5ba-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:34:51 compute-0 NetworkManager[45129]: <info>  [1759394091.3145] device (tap961da5ba-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:34:51 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000055.
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.323 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18f6f41f-27bc-44c8-a661-08bc96eda83d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_controller[152344]: 2025-10-02T08:34:51Z|00807|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 ovn-installed in OVS
Oct 02 08:34:51 compute-0 ovn_controller[152344]: 2025-10-02T08:34:51Z|00808|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 up in Southbound
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.360 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff30f75-5589-454f-8a52-6e147301a8a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 NetworkManager[45129]: <info>  [1759394091.3705] manager: (tap74f187c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.369 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[626b8b41-6fb9-4861-bece-3f04efea41d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.409 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc46cf1-f50e-4b21-8d98-656f2211dc33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.413 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ebc623-c539-4b49-9772-69b8f1ae2b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 podman[341818]: 2025-10-02 08:34:51.422478195 +0000 UTC m=+0.143166795 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 02 08:34:51 compute-0 podman[341816]: 2025-10-02 08:34:51.42966778 +0000 UTC m=+0.150435832 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 08:34:51 compute-0 NetworkManager[45129]: <info>  [1759394091.4467] device (tap74f187c2-70): carrier: link connected
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.455 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[22e19c98-8d97-4d3a-b7cc-0dec8dac536f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0635aa75-846e-4370-86af-8d2434f2f3f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497004, 'reachable_time': 36541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341895, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.488 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d91717-475d-4af5-857a-849b00c169e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7f62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497004, 'tstamp': 497004}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341896, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.506 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[077d0336-c579-4db5-a826-50cc212e7dd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497004, 'reachable_time': 36541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341897, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.544 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ea949cb1-7754-4dd0-9a83-ba3c61db5c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.600 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4779e8fe-c17b-4862-bdd0-39a68d9a6749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.601 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 NetworkManager[45129]: <info>  [1759394091.6044] manager: (tap74f187c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Oct 02 08:34:51 compute-0 kernel: tap74f187c2-70: entered promiscuous mode
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.606 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 ovn_controller[152344]: 2025-10-02T08:34:51Z|00809|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 nova_compute[260603]: 2025-10-02 08:34:51.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.635 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.635 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecc991c-1a2f-4a2a-af23-e8dc2f78404e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.636 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:34:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:51.637 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'env', 'PROCESS_TAG=haproxy-74f187c2-780c-418d-98eb-b25294872ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74f187c2-780c-418d-98eb-b25294872ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.047 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.049 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.050 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.050 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.050 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.052 2 INFO nova.compute.manager [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Terminating instance
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.054 2 DEBUG nova.compute.manager [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:34:52 compute-0 podman[341970]: 2025-10-02 08:34:52.076094859 +0000 UTC m=+0.079759658 container create 87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 08:34:52 compute-0 kernel: tap597a75f0-66 (unregistering): left promiscuous mode
Oct 02 08:34:52 compute-0 NetworkManager[45129]: <info>  [1759394092.1091] device (tap597a75f0-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:34:52 compute-0 podman[341970]: 2025-10-02 08:34:52.035706465 +0000 UTC m=+0.039371274 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:34:52 compute-0 ovn_controller[152344]: 2025-10-02T08:34:52Z|00810|binding|INFO|Releasing lport 597a75f0-6607-46bb-a380-b46a358c3bf7 from this chassis (sb_readonly=0)
Oct 02 08:34:52 compute-0 ovn_controller[152344]: 2025-10-02T08:34:52Z|00811|binding|INFO|Setting lport 597a75f0-6607-46bb-a380-b46a358c3bf7 down in Southbound
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:52 compute-0 ovn_controller[152344]: 2025-10-02T08:34:52Z|00812|binding|INFO|Removing iface tap597a75f0-66 ovn-installed in OVS
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:52 compute-0 ceph-mon[74477]: pgmap v1646: 305 pgs: 305 active+clean; 341 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 312 op/s
Oct 02 08:34:52 compute-0 systemd[1]: Started libpod-conmon-87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8.scope.
Oct 02 08:34:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:52.170 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:ab:f3 10.100.0.7'], port_security=['fa:16:3e:f5:ab:f3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5b5054c4-5d13-4ca5-8037-e0d93f21d9ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=597a75f0-6607-46bb-a380-b46a358c3bf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95041ea22d2a3b4d29d27a083aa55580a6144219edc06f57014b0f9c0abddb0e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:34:52 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 02 08:34:52 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000052.scope: Consumed 3.721s CPU time.
Oct 02 08:34:52 compute-0 systemd-machined[214636]: Machine qemu-98-instance-00000052 terminated.
Oct 02 08:34:52 compute-0 podman[341970]: 2025-10-02 08:34:52.227469569 +0000 UTC m=+0.231134358 container init 87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 08:34:52 compute-0 podman[341970]: 2025-10-02 08:34:52.232350315 +0000 UTC m=+0.236015084 container start 87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:34:52 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[341988]: [NOTICE]   (341992) : New worker (341994) forked
Oct 02 08:34:52 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[341988]: [NOTICE]   (341992) : Loading success.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.261 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394092.2614582, ba2cf934-ce76-4de7-a495-285f144bdab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.262 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Started (Lifecycle Event)
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.287 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.297 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394092.2615452, ba2cf934-ce76-4de7-a495-285f144bdab7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.298 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Paused (Lifecycle Event)
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.299 2 INFO nova.virt.libvirt.driver [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Instance destroyed successfully.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.300 2 DEBUG nova.objects.instance [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'resources' on Instance uuid 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.320 2 DEBUG nova.virt.libvirt.vif [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-677598906',display_name='tempest-ServerRescueTestJSON-server-677598906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-677598906',id=82,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='629efd330be646b7a2941e0c83b86e0e',ramdisk_id='',reservation_id='r-j80za7bp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-846559261',owner_user_name='tempest-ServerRescueTestJSON-846559261-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:34:50Z,user_data=None,user_id='d2e113d3d74a43998ac8dbf246ae9095',uuid=5b5054c4-5d13-4ca5-8037-e0d93f21d9ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.320 2 DEBUG nova.network.os_vif_util [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converting VIF {"id": "597a75f0-6607-46bb-a380-b46a358c3bf7", "address": "fa:16:3e:f5:ab:f3", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap597a75f0-66", "ovs_interfaceid": "597a75f0-6607-46bb-a380-b46a358c3bf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.321 2 DEBUG nova.network.os_vif_util [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:ab:f3,bridge_name='br-int',has_traffic_filtering=True,id=597a75f0-6607-46bb-a380-b46a358c3bf7,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap597a75f0-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.321 2 DEBUG os_vif [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:ab:f3,bridge_name='br-int',has_traffic_filtering=True,id=597a75f0-6607-46bb-a380-b46a358c3bf7,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap597a75f0-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap597a75f0-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:52.323 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 597a75f0-6607-46bb-a380-b46a358c3bf7 in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 unbound from our chassis
Oct 02 08:34:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:52.324 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:52.325 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a288ea-107a-4585-96b7-5acc99bcce1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.329 2 INFO os_vif [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:ab:f3,bridge_name='br-int',has_traffic_filtering=True,id=597a75f0-6607-46bb-a380-b46a358c3bf7,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap597a75f0-66')
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.345 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.349 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.384 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.541 2 DEBUG nova.compute.manager [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.542 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.542 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.543 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.543 2 DEBUG nova.compute.manager [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Processing event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.544 2 DEBUG nova.compute.manager [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.544 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.545 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.545 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.545 2 DEBUG nova.compute.manager [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.546 2 WARNING nova.compute.manager [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state building and task_state spawning.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.546 2 DEBUG nova.compute.manager [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.547 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.547 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.548 2 DEBUG oslo_concurrency.lockutils [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.548 2 DEBUG nova.compute.manager [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.549 2 DEBUG nova.compute.manager [req-6908d162-dd53-45a9-85fb-a8a94e42afc8 req-8fca5b3e-d613-4d97-9a49-09969626af2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-unplugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.550 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.560 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394092.5601768, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.561 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.565 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.571 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance spawned successfully.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.572 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.608 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.617 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.618 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.619 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.620 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.620 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.621 2 DEBUG nova.virt.libvirt.driver [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.633 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.697 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.725 2 INFO nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Took 8.15 seconds to spawn the instance on the hypervisor.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.725 2 DEBUG nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.754 2 INFO nova.virt.libvirt.driver [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Deleting instance files /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_del
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.755 2 INFO nova.virt.libvirt.driver [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Deletion of /var/lib/nova/instances/5b5054c4-5d13-4ca5-8037-e0d93f21d9ea_del complete
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.812 2 INFO nova.compute.manager [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Took 9.20 seconds to build instance.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.830 2 INFO nova.compute.manager [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.831 2 DEBUG oslo.service.loopingcall [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.832 2 DEBUG nova.compute.manager [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.832 2 DEBUG nova.network.neutron [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:34:52 compute-0 nova_compute[260603]: 2025-10-02 08:34:52.845 2 DEBUG oslo_concurrency.lockutils [None req-780773a1-37e4-4ff1-954c-7d87ed701555 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 249 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 4.5 MiB/s wr, 433 op/s
Oct 02 08:34:53 compute-0 nova_compute[260603]: 2025-10-02 08:34:53.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:53 compute-0 nova_compute[260603]: 2025-10-02 08:34:53.919 2 DEBUG nova.network.neutron [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:53 compute-0 nova_compute[260603]: 2025-10-02 08:34:53.951 2 INFO nova.compute.manager [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Took 1.12 seconds to deallocate network for instance.
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.010 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.011 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.107 2 DEBUG oslo_concurrency.processutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.167 2 DEBUG nova.compute.manager [req-3d6253c5-13f2-4064-875b-dbefddf27cf6 req-0c61a843-c300-4236-b282-b3b3254eebcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-deleted-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:54 compute-0 ceph-mon[74477]: pgmap v1647: 305 pgs: 305 active+clean; 249 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 4.5 MiB/s wr, 433 op/s
Oct 02 08:34:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3362700761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.538 2 DEBUG oslo_concurrency.processutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.546 2 DEBUG nova.compute.provider_tree [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.566 2 DEBUG nova.scheduler.client.report [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.621 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.651 2 INFO nova.scheduler.client.report [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Deleted allocations for instance 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.743 2 DEBUG oslo_concurrency.lockutils [None req-beaec659-d594-4fea-a3fc-90056b92a063 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.786 2 DEBUG nova.compute.manager [req-5165a563-ad8a-4ef8-a535-93a4f06a2581 req-b2c37254-6225-42bc-bd08-64ae849b41d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.786 2 DEBUG oslo_concurrency.lockutils [req-5165a563-ad8a-4ef8-a535-93a4f06a2581 req-b2c37254-6225-42bc-bd08-64ae849b41d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.787 2 DEBUG oslo_concurrency.lockutils [req-5165a563-ad8a-4ef8-a535-93a4f06a2581 req-b2c37254-6225-42bc-bd08-64ae849b41d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.787 2 DEBUG oslo_concurrency.lockutils [req-5165a563-ad8a-4ef8-a535-93a4f06a2581 req-b2c37254-6225-42bc-bd08-64ae849b41d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5b5054c4-5d13-4ca5-8037-e0d93f21d9ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.787 2 DEBUG nova.compute.manager [req-5165a563-ad8a-4ef8-a535-93a4f06a2581 req-b2c37254-6225-42bc-bd08-64ae849b41d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] No waiting events found dispatching network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.787 2 WARNING nova.compute.manager [req-5165a563-ad8a-4ef8-a535-93a4f06a2581 req-b2c37254-6225-42bc-bd08-64ae849b41d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Received unexpected event network-vif-plugged-597a75f0-6607-46bb-a380-b46a358c3bf7 for instance with vm_state deleted and task_state None.
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.850 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394079.8495226, 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.850 2 INFO nova.compute.manager [-] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] VM Stopped (Lifecycle Event)
Oct 02 08:34:54 compute-0 nova_compute[260603]: 2025-10-02 08:34:54.930 2 DEBUG nova.compute.manager [None req-506de3b6-672f-445d-be8a-d008c1d857a9 - - - - - -] [instance: 92ac5d7b-ea52-4ee5-ac56-e3caa5d723c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 249 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Oct 02 08:34:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3362700761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:56 compute-0 ceph-mon[74477]: pgmap v1648: 305 pgs: 305 active+clean; 249 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.241 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.242 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.243 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.243 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.244 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.246 2 INFO nova.compute.manager [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Terminating instance
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.249 2 DEBUG nova.compute.manager [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:34:56 compute-0 kernel: tap11350006-4c (unregistering): left promiscuous mode
Oct 02 08:34:56 compute-0 NetworkManager[45129]: <info>  [1759394096.3114] device (tap11350006-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:34:56 compute-0 ovn_controller[152344]: 2025-10-02T08:34:56Z|00813|binding|INFO|Releasing lport 11350006-4c80-4e3a-a271-5230799be1ba from this chassis (sb_readonly=0)
Oct 02 08:34:56 compute-0 ovn_controller[152344]: 2025-10-02T08:34:56Z|00814|binding|INFO|Setting lport 11350006-4c80-4e3a-a271-5230799be1ba down in Southbound
Oct 02 08:34:56 compute-0 ovn_controller[152344]: 2025-10-02T08:34:56Z|00815|binding|INFO|Removing iface tap11350006-4c ovn-installed in OVS
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:56.327 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:93:e4 10.100.0.12'], port_security=['fa:16:3e:68:93:e4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6df0942d-95db-4140-9c7b-b5c51ada92bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86059eb0-17b1-462f-a30f-1dfe95c50614', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '629efd330be646b7a2941e0c83b86e0e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4883fba0-ee44-4548-b479-1786f1cf77b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2e7de25-29e1-404b-a0d2-6f487c522884, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=11350006-4c80-4e3a-a271-5230799be1ba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:34:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:56.328 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 11350006-4c80-4e3a-a271-5230799be1ba in datapath 86059eb0-17b1-462f-a30f-1dfe95c50614 unbound from our chassis
Oct 02 08:34:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:56.329 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86059eb0-17b1-462f-a30f-1dfe95c50614 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:34:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:34:56.330 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc37a54-8442-47cf-ba6c-13e8bef7b955]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:56 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Oct 02 08:34:56 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004f.scope: Consumed 14.196s CPU time.
Oct 02 08:34:56 compute-0 systemd-machined[214636]: Machine qemu-92-instance-0000004f terminated.
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.496 2 INFO nova.virt.libvirt.driver [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Instance destroyed successfully.
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.496 2 DEBUG nova.objects.instance [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lazy-loading 'resources' on Instance uuid 6df0942d-95db-4140-9c7b-b5c51ada92bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.510 2 DEBUG nova.virt.libvirt.vif [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:33:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1363390342',display_name='tempest-ServerRescueTestJSON-server-1363390342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1363390342',id=79,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='629efd330be646b7a2941e0c83b86e0e',ramdisk_id='',reservation_id='r-gcsl0g66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-846559261',owner_user_name='tempest-ServerRescueTestJSON-846559261-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:34:04Z,user_data=None,user_id='d2e113d3d74a43998ac8dbf246ae9095',uuid=6df0942d-95db-4140-9c7b-b5c51ada92bd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.511 2 DEBUG nova.network.os_vif_util [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converting VIF {"id": "11350006-4c80-4e3a-a271-5230799be1ba", "address": "fa:16:3e:68:93:e4", "network": {"id": "86059eb0-17b1-462f-a30f-1dfe95c50614", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2021168771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "629efd330be646b7a2941e0c83b86e0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11350006-4c", "ovs_interfaceid": "11350006-4c80-4e3a-a271-5230799be1ba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.512 2 DEBUG nova.network.os_vif_util [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:93:e4,bridge_name='br-int',has_traffic_filtering=True,id=11350006-4c80-4e3a-a271-5230799be1ba,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11350006-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.513 2 DEBUG os_vif [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:93:e4,bridge_name='br-int',has_traffic_filtering=True,id=11350006-4c80-4e3a-a271-5230799be1ba,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11350006-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11350006-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.527 2 INFO os_vif [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:93:e4,bridge_name='br-int',has_traffic_filtering=True,id=11350006-4c80-4e3a-a271-5230799be1ba,network=Network(86059eb0-17b1-462f-a30f-1dfe95c50614),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11350006-4c')
Oct 02 08:34:56 compute-0 NetworkManager[45129]: <info>  [1759394096.6387] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Oct 02 08:34:56 compute-0 NetworkManager[45129]: <info>  [1759394096.6394] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:56 compute-0 ovn_controller[152344]: 2025-10-02T08:34:56Z|00816|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:56 compute-0 ovn_controller[152344]: 2025-10-02T08:34:56Z|00817|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.943 2 DEBUG nova.compute.manager [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-unplugged-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.944 2 DEBUG oslo_concurrency.lockutils [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.944 2 DEBUG oslo_concurrency.lockutils [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.948 2 DEBUG oslo_concurrency.lockutils [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.948 2 DEBUG nova.compute.manager [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] No waiting events found dispatching network-vif-unplugged-11350006-4c80-4e3a-a271-5230799be1ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.949 2 DEBUG nova.compute.manager [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-unplugged-11350006-4c80-4e3a-a271-5230799be1ba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.949 2 DEBUG nova.compute.manager [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.950 2 DEBUG oslo_concurrency.lockutils [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.951 2 DEBUG oslo_concurrency.lockutils [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.951 2 DEBUG oslo_concurrency.lockutils [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.952 2 DEBUG nova.compute.manager [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] No waiting events found dispatching network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:34:56 compute-0 nova_compute[260603]: 2025-10-02 08:34:56.952 2 WARNING nova.compute.manager [req-11246df6-9dd1-4798-9286-700367d177c6 req-f9dbce97-ef4f-44bf-b937-e33f4318969b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received unexpected event network-vif-plugged-11350006-4c80-4e3a-a271-5230799be1ba for instance with vm_state rescued and task_state deleting.
Oct 02 08:34:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 249 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 251 op/s
Oct 02 08:34:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.321 2 INFO nova.virt.libvirt.driver [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Deleting instance files /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd_del
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.322 2 INFO nova.virt.libvirt.driver [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Deletion of /var/lib/nova/instances/6df0942d-95db-4140-9c7b-b5c51ada92bd_del complete
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.379 2 INFO nova.compute.manager [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Took 1.13 seconds to destroy the instance on the hypervisor.
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.379 2 DEBUG oslo.service.loopingcall [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.380 2 DEBUG nova.compute.manager [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.381 2 DEBUG nova.network.neutron [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.691 2 DEBUG nova.compute.manager [req-f08d9793-45d8-4a5a-9a83-6a1b5bde6157 req-1e0de0ee-b855-4612-8349-b2452b3dff07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-changed-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.692 2 DEBUG nova.compute.manager [req-f08d9793-45d8-4a5a-9a83-6a1b5bde6157 req-1e0de0ee-b855-4612-8349-b2452b3dff07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Refreshing instance network info cache due to event network-changed-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.692 2 DEBUG oslo_concurrency.lockutils [req-f08d9793-45d8-4a5a-9a83-6a1b5bde6157 req-1e0de0ee-b855-4612-8349-b2452b3dff07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.693 2 DEBUG oslo_concurrency.lockutils [req-f08d9793-45d8-4a5a-9a83-6a1b5bde6157 req-1e0de0ee-b855-4612-8349-b2452b3dff07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:34:57 compute-0 nova_compute[260603]: 2025-10-02 08:34:57.693 2 DEBUG nova.network.neutron [req-f08d9793-45d8-4a5a-9a83-6a1b5bde6157 req-1e0de0ee-b855-4612-8349-b2452b3dff07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Refreshing network info cache for port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:34:58 compute-0 podman[342092]: 2025-10-02 08:34:58.045212816 +0000 UTC m=+0.096659876 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 08:34:58 compute-0 ceph-mon[74477]: pgmap v1649: 305 pgs: 305 active+clean; 249 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 251 op/s
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.329 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394083.327908, 62bd6fb5-9b1d-40a6-81da-a3d89d022346 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.331 2 INFO nova.compute.manager [-] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] VM Stopped (Lifecycle Event)
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.392 2 DEBUG nova.compute.manager [None req-c6a4134b-dfcb-4f13-9c15-35a364c179a9 - - - - - -] [instance: 62bd6fb5-9b1d-40a6-81da-a3d89d022346] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.453 2 DEBUG nova.network.neutron [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.479 2 INFO nova.compute.manager [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Took 1.10 seconds to deallocate network for instance.
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.551 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.552 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:34:58 compute-0 nova_compute[260603]: 2025-10-02 08:34:58.651 2 DEBUG oslo_concurrency.processutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:34:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:34:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877489337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 120 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.8 MiB/s wr, 378 op/s
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.116 2 DEBUG oslo_concurrency.processutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.122 2 DEBUG nova.compute.provider_tree [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.139 2 DEBUG nova.scheduler.client.report [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.165 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2877489337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.213 2 INFO nova.scheduler.client.report [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Deleted allocations for instance 6df0942d-95db-4140-9c7b-b5c51ada92bd
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.288 2 DEBUG nova.network.neutron [req-f08d9793-45d8-4a5a-9a83-6a1b5bde6157 req-1e0de0ee-b855-4612-8349-b2452b3dff07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updated VIF entry in instance network info cache for port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.290 2 DEBUG nova.network.neutron [req-f08d9793-45d8-4a5a-9a83-6a1b5bde6157 req-1e0de0ee-b855-4612-8349-b2452b3dff07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.322 2 DEBUG oslo_concurrency.lockutils [None req-d5dedfe0-ff59-44df-a381-baedce6c97e1 d2e113d3d74a43998ac8dbf246ae9095 629efd330be646b7a2941e0c83b86e0e - - default default] Lock "6df0942d-95db-4140-9c7b-b5c51ada92bd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.326 2 DEBUG oslo_concurrency.lockutils [req-f08d9793-45d8-4a5a-9a83-6a1b5bde6157 req-1e0de0ee-b855-4612-8349-b2452b3dff07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:34:59 compute-0 nova_compute[260603]: 2025-10-02 08:34:59.926 2 DEBUG nova.compute.manager [req-ba7de13a-89d7-4164-974c-d9ff94e19fd4 req-1541fb40-60a6-4d13-9d4e-a34196f3307a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Received event network-vif-deleted-11350006-4c80-4e3a-a271-5230799be1ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:00 compute-0 ceph-mon[74477]: pgmap v1650: 305 pgs: 305 active+clean; 120 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.8 MiB/s wr, 378 op/s
Oct 02 08:35:01 compute-0 podman[342134]: 2025-10-02 08:35:01.035966244 +0000 UTC m=+0.094113929 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:35:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 120 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 24 KiB/s wr, 249 op/s
Oct 02 08:35:01 compute-0 anacron[188284]: Job `cron.daily' started
Oct 02 08:35:01 compute-0 anacron[188284]: Job `cron.daily' terminated
Oct 02 08:35:01 compute-0 nova_compute[260603]: 2025-10-02 08:35:01.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:02 compute-0 ceph-mon[74477]: pgmap v1651: 305 pgs: 305 active+clean; 120 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 24 KiB/s wr, 249 op/s
Oct 02 08:35:02 compute-0 nova_compute[260603]: 2025-10-02 08:35:02.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:02.392 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:35:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:02.395 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:35:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 88 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 78 KiB/s wr, 257 op/s
Oct 02 08:35:03 compute-0 nova_compute[260603]: 2025-10-02 08:35:03.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:04 compute-0 ceph-mon[74477]: pgmap v1652: 305 pgs: 305 active+clean; 88 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 78 KiB/s wr, 257 op/s
Oct 02 08:35:05 compute-0 ovn_controller[152344]: 2025-10-02T08:35:05Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:35:05 compute-0 ovn_controller[152344]: 2025-10-02T08:35:05Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:35:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 88 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 135 op/s
Oct 02 08:35:06 compute-0 ovn_controller[152344]: 2025-10-02T08:35:06Z|00818|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:35:06 compute-0 ceph-mon[74477]: pgmap v1653: 305 pgs: 305 active+clean; 88 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 135 op/s
Oct 02 08:35:06 compute-0 nova_compute[260603]: 2025-10-02 08:35:06.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:06 compute-0 nova_compute[260603]: 2025-10-02 08:35:06.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 88 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 135 op/s
Oct 02 08:35:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:07 compute-0 nova_compute[260603]: 2025-10-02 08:35:07.385 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394092.295564, 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:07 compute-0 nova_compute[260603]: 2025-10-02 08:35:07.386 2 INFO nova.compute.manager [-] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] VM Stopped (Lifecycle Event)
Oct 02 08:35:07 compute-0 nova_compute[260603]: 2025-10-02 08:35:07.404 2 DEBUG nova.compute.manager [None req-a33938d1-8e13-4375-aa05-2e80381dc020 - - - - - -] [instance: 5b5054c4-5d13-4ca5-8037-e0d93f21d9ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:08 compute-0 ceph-mon[74477]: pgmap v1654: 305 pgs: 305 active+clean; 88 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 135 op/s
Oct 02 08:35:08 compute-0 nova_compute[260603]: 2025-10-02 08:35:08.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Oct 02 08:35:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:09.398 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:10 compute-0 sudo[342156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:10 compute-0 sudo[342156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:10 compute-0 sudo[342156]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:10 compute-0 sudo[342181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:35:10 compute-0 sudo[342181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:10 compute-0 sudo[342181]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:10 compute-0 ceph-mon[74477]: pgmap v1655: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Oct 02 08:35:10 compute-0 sudo[342206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:10 compute-0 sudo[342206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:10 compute-0 sudo[342206]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:10 compute-0 sudo[342231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:35:10 compute-0 sudo[342231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:10 compute-0 sudo[342231]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:35:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:35:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:35:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:35:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e996366f-1f5b-4013-879f-754b69049b02 does not exist
Oct 02 08:35:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0fa35763-b8cf-4d4e-aaf6-4362e4e4b9f3 does not exist
Oct 02 08:35:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e1e391b7-d132-483d-8b2a-e304e643b973 does not exist
Oct 02 08:35:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:35:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:35:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:35:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:35:11 compute-0 sudo[342289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:11 compute-0 sudo[342289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:11 compute-0 sudo[342289]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:11 compute-0 sudo[342314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:35:11 compute-0 sudo[342314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:11 compute-0 sudo[342314]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:35:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:35:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:35:11 compute-0 sudo[342339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:11 compute-0 sudo[342339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:11 compute-0 sudo[342339]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:11 compute-0 sudo[342364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:35:11 compute-0 sudo[342364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:11 compute-0 nova_compute[260603]: 2025-10-02 08:35:11.493 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394096.492934, 6df0942d-95db-4140-9c7b-b5c51ada92bd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:11 compute-0 nova_compute[260603]: 2025-10-02 08:35:11.494 2 INFO nova.compute.manager [-] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] VM Stopped (Lifecycle Event)
Oct 02 08:35:11 compute-0 nova_compute[260603]: 2025-10-02 08:35:11.518 2 DEBUG nova.compute.manager [None req-4b30e8f2-b018-46c1-9ee6-f247cc6aa3ca - - - - - -] [instance: 6df0942d-95db-4140-9c7b-b5c51ada92bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:11 compute-0 nova_compute[260603]: 2025-10-02 08:35:11.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:11 compute-0 podman[342431]: 2025-10-02 08:35:11.92089881 +0000 UTC m=+0.070855561 container create 57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 08:35:11 compute-0 systemd[1]: Started libpod-conmon-57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132.scope.
Oct 02 08:35:11 compute-0 podman[342431]: 2025-10-02 08:35:11.893822256 +0000 UTC m=+0.043779097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:35:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:12 compute-0 podman[342431]: 2025-10-02 08:35:12.037484984 +0000 UTC m=+0.187441805 container init 57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:35:12 compute-0 podman[342431]: 2025-10-02 08:35:12.048996079 +0000 UTC m=+0.198952850 container start 57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:35:12 compute-0 podman[342431]: 2025-10-02 08:35:12.053199716 +0000 UTC m=+0.203156547 container attach 57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 08:35:12 compute-0 systemd[1]: libpod-57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132.scope: Deactivated successfully.
Oct 02 08:35:12 compute-0 serene_almeida[342447]: 167 167
Oct 02 08:35:12 compute-0 conmon[342447]: conmon 57107a397cf276cf19f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132.scope/container/memory.events
Oct 02 08:35:12 compute-0 podman[342431]: 2025-10-02 08:35:12.062423953 +0000 UTC m=+0.212380784 container died 57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cea2df1cb8fc8f10db554bd0f7cceec9449f7de3b33641e5f2d89fb1fcdc6c9-merged.mount: Deactivated successfully.
Oct 02 08:35:12 compute-0 podman[342431]: 2025-10-02 08:35:12.107289851 +0000 UTC m=+0.257246592 container remove 57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 08:35:12 compute-0 systemd[1]: libpod-conmon-57107a397cf276cf19f41019afcc3579922b1a56694fe4b573037fa834deb132.scope: Deactivated successfully.
Oct 02 08:35:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:12 compute-0 ceph-mon[74477]: pgmap v1656: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:35:12 compute-0 podman[342471]: 2025-10-02 08:35:12.324620414 +0000 UTC m=+0.056540911 container create 160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 08:35:12 compute-0 systemd[1]: Started libpod-conmon-160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c.scope.
Oct 02 08:35:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:12 compute-0 podman[342471]: 2025-10-02 08:35:12.30820136 +0000 UTC m=+0.040121867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900858c4b82532a7bc6a3a5c4476cc0f8cd7d84735419d928b6d18b7c1c47fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900858c4b82532a7bc6a3a5c4476cc0f8cd7d84735419d928b6d18b7c1c47fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900858c4b82532a7bc6a3a5c4476cc0f8cd7d84735419d928b6d18b7c1c47fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900858c4b82532a7bc6a3a5c4476cc0f8cd7d84735419d928b6d18b7c1c47fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900858c4b82532a7bc6a3a5c4476cc0f8cd7d84735419d928b6d18b7c1c47fd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:12 compute-0 podman[342471]: 2025-10-02 08:35:12.431108384 +0000 UTC m=+0.163028881 container init 160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 08:35:12 compute-0 podman[342471]: 2025-10-02 08:35:12.442731063 +0000 UTC m=+0.174651580 container start 160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 08:35:12 compute-0 podman[342471]: 2025-10-02 08:35:12.446797255 +0000 UTC m=+0.178717752 container attach 160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:35:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.389 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.392 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.412 2 DEBUG nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.493 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.494 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.506 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.506 2 INFO nova.compute.claims [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:35:13 compute-0 admiring_hugle[342487]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:35:13 compute-0 admiring_hugle[342487]: --> relative data size: 1.0
Oct 02 08:35:13 compute-0 admiring_hugle[342487]: --> All data devices are unavailable
Oct 02 08:35:13 compute-0 systemd[1]: libpod-160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c.scope: Deactivated successfully.
Oct 02 08:35:13 compute-0 systemd[1]: libpod-160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c.scope: Consumed 1.076s CPU time.
Oct 02 08:35:13 compute-0 podman[342471]: 2025-10-02 08:35:13.576864051 +0000 UTC m=+1.308784628 container died 160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-900858c4b82532a7bc6a3a5c4476cc0f8cd7d84735419d928b6d18b7c1c47fd4-merged.mount: Deactivated successfully.
Oct 02 08:35:13 compute-0 nova_compute[260603]: 2025-10-02 08:35:13.636 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:13 compute-0 podman[342471]: 2025-10-02 08:35:13.658970968 +0000 UTC m=+1.390891505 container remove 160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:35:13 compute-0 systemd[1]: libpod-conmon-160b34480b0c26517070f1d82a6c445b00f972c0941ec45d3d5cc265838a165c.scope: Deactivated successfully.
Oct 02 08:35:13 compute-0 sudo[342364]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:13 compute-0 sudo[342531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:13 compute-0 sudo[342531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:13 compute-0 sudo[342531]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:13 compute-0 sudo[342563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:35:13 compute-0 sudo[342563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:13 compute-0 sudo[342563]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:13 compute-0 sudo[342600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:13 compute-0 sudo[342600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:13 compute-0 sudo[342600]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:13 compute-0 sudo[342625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:35:13 compute-0 sudo[342625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:35:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325066558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.104 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.110 2 DEBUG nova.compute.provider_tree [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.140 2 DEBUG nova.scheduler.client.report [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.168 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.169 2 DEBUG nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.226 2 DEBUG nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.248 2 INFO nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:14 compute-0 ceph-mon[74477]: pgmap v1657: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:35:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2325066558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.274 2 DEBUG nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:35:14 compute-0 podman[342694]: 2025-10-02 08:35:14.337251645 +0000 UTC m=+0.039671274 container create 89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:35:14 compute-0 systemd[1]: Started libpod-conmon-89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8.scope.
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.401 2 DEBUG nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.403 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.404 2 INFO nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Creating image(s)
Oct 02 08:35:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:14 compute-0 podman[342694]: 2025-10-02 08:35:14.319695597 +0000 UTC m=+0.022115256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:35:14 compute-0 podman[342694]: 2025-10-02 08:35:14.423339032 +0000 UTC m=+0.125758701 container init 89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.424 2 DEBUG nova.storage.rbd_utils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:14 compute-0 podman[342694]: 2025-10-02 08:35:14.432417365 +0000 UTC m=+0.134836984 container start 89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:35:14 compute-0 podman[342694]: 2025-10-02 08:35:14.435965712 +0000 UTC m=+0.138385381 container attach 89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:35:14 compute-0 angry_saha[342711]: 167 167
Oct 02 08:35:14 compute-0 systemd[1]: libpod-89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8.scope: Deactivated successfully.
Oct 02 08:35:14 compute-0 podman[342694]: 2025-10-02 08:35:14.437846768 +0000 UTC m=+0.140266387 container died 89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.450 2 DEBUG nova.storage.rbd_utils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c9f71134d704838de70300bf7916246914dfbe6790ad520b493e7262870fbc0-merged.mount: Deactivated successfully.
Oct 02 08:35:14 compute-0 podman[342694]: 2025-10-02 08:35:14.473221821 +0000 UTC m=+0.175641440 container remove 89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.479 2 DEBUG nova.storage.rbd_utils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.496 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:14 compute-0 systemd[1]: libpod-conmon-89a748f3926f1dc50e20552623faf178f477670435e0c631e0b591bf935a7da8.scope: Deactivated successfully.
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.542 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.546 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "330c4cea-9fec-4ab5-8ce1-3820232464cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.546 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "330c4cea-9fec-4ab5-8ce1-3820232464cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.564 2 DEBUG nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.586 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.587 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.587 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.588 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.604 2 DEBUG nova.storage.rbd_utils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.607 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:14 compute-0 podman[342792]: 2025-10-02 08:35:14.647727626 +0000 UTC m=+0.045530849 container create 70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.671 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.672 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.681 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.681 2 INFO nova.compute.claims [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:35:14 compute-0 systemd[1]: Started libpod-conmon-70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17.scope.
Oct 02 08:35:14 compute-0 podman[342792]: 2025-10-02 08:35:14.62257137 +0000 UTC m=+0.020374603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:35:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3721758bf5514676ea53ead266cab8ebe26e90f3f7bb899314a6d8cbd3cc4bbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3721758bf5514676ea53ead266cab8ebe26e90f3f7bb899314a6d8cbd3cc4bbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3721758bf5514676ea53ead266cab8ebe26e90f3f7bb899314a6d8cbd3cc4bbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3721758bf5514676ea53ead266cab8ebe26e90f3f7bb899314a6d8cbd3cc4bbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:14 compute-0 podman[342792]: 2025-10-02 08:35:14.745906597 +0000 UTC m=+0.143709810 container init 70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:35:14 compute-0 podman[342792]: 2025-10-02 08:35:14.758380922 +0000 UTC m=+0.156184135 container start 70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:35:14 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 02 08:35:14 compute-0 podman[342792]: 2025-10-02 08:35:14.766417223 +0000 UTC m=+0.164220456 container attach 70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.853 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.897 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:14 compute-0 nova_compute[260603]: 2025-10-02 08:35:14.966 2 DEBUG nova.storage.rbd_utils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] resizing rbd image 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.023 2 DEBUG oslo_concurrency.lockutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.024 2 DEBUG oslo_concurrency.lockutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.024 2 INFO nova.compute.manager [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Rebooting instance
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.061 2 DEBUG oslo_concurrency.lockutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.062 2 DEBUG oslo_concurrency.lockutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.062 2 DEBUG nova.network.neutron [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.069 2 DEBUG nova.objects.instance [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'migration_context' on Instance uuid 5cc7f7cd-c574-48a6-be3a-07161f94bd7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.087 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.087 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Ensure instance console log exists: /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.088 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.088 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.089 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.090 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.096 2 WARNING nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.100 2 DEBUG nova.virt.libvirt.host [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.101 2 DEBUG nova.virt.libvirt.host [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:35:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.107 2 DEBUG nova.virt.libvirt.host [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.108 2 DEBUG nova.virt.libvirt.host [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.108 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.109 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.109 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.110 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.110 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.110 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.111 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.111 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.112 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.112 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.112 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.113 2 DEBUG nova.virt.hardware [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.116 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:35:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016871713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.345 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.352 2 DEBUG nova.compute.provider_tree [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.371 2 DEBUG nova.scheduler.client.report [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.394 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.395 2 DEBUG nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.454 2 DEBUG nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.475 2 INFO nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]: {
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:     "0": [
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:         {
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "devices": [
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "/dev/loop3"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             ],
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_name": "ceph_lv0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_size": "21470642176",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "name": "ceph_lv0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "tags": {
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cluster_name": "ceph",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.crush_device_class": "",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.encrypted": "0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osd_id": "0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.type": "block",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.vdo": "0"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             },
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "type": "block",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "vg_name": "ceph_vg0"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:         }
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:     ],
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:     "1": [
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:         {
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "devices": [
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "/dev/loop4"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             ],
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_name": "ceph_lv1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_size": "21470642176",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "name": "ceph_lv1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "tags": {
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cluster_name": "ceph",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.crush_device_class": "",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.encrypted": "0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osd_id": "1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.type": "block",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.vdo": "0"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             },
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "type": "block",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "vg_name": "ceph_vg1"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:         }
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:     ],
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:     "2": [
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:         {
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "devices": [
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "/dev/loop5"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             ],
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_name": "ceph_lv2",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_size": "21470642176",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "name": "ceph_lv2",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "tags": {
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.cluster_name": "ceph",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.crush_device_class": "",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.encrypted": "0",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osd_id": "2",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.type": "block",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:                 "ceph.vdo": "0"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             },
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "type": "block",
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:             "vg_name": "ceph_vg2"
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:         }
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]:     ]
Oct 02 08:35:15 compute-0 dreamy_matsumoto[342842]: }
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.494 2 DEBUG nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:35:15 compute-0 systemd[1]: libpod-70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17.scope: Deactivated successfully.
Oct 02 08:35:15 compute-0 podman[342792]: 2025-10-02 08:35:15.525067765 +0000 UTC m=+0.922870978 container died 70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 08:35:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765302471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3721758bf5514676ea53ead266cab8ebe26e90f3f7bb899314a6d8cbd3cc4bbe-merged.mount: Deactivated successfully.
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.555 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:15 compute-0 podman[342792]: 2025-10-02 08:35:15.574714708 +0000 UTC m=+0.972517921 container remove 70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 08:35:15 compute-0 systemd[1]: libpod-conmon-70961a6812e67922e7a8352d2b654e2431e28462b4519af89a0ff4fb00c88a17.scope: Deactivated successfully.
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.592 2 DEBUG nova.storage.rbd_utils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.596 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:15 compute-0 sudo[342625]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.643 2 DEBUG nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.646 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.646 2 INFO nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Creating image(s)
Oct 02 08:35:15 compute-0 sudo[343003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.671 2 DEBUG nova.storage.rbd_utils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:15 compute-0 sudo[343003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:15 compute-0 sudo[343003]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.698 2 DEBUG nova.storage.rbd_utils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.725 2 DEBUG nova.storage.rbd_utils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.731 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:15 compute-0 sudo[343047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:35:15 compute-0 sudo[343047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:15 compute-0 sudo[343047]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:15 compute-0 sudo[343127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:15 compute-0 sudo[343127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:15 compute-0 sudo[343127]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.824 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.825 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.826 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.827 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.853 2 DEBUG nova.storage.rbd_utils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:15 compute-0 nova_compute[260603]: 2025-10-02 08:35:15.856 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:15 compute-0 sudo[343152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:35:15 compute-0 sudo[343152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264225920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.079 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.081 2 DEBUG nova.objects.instance [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cc7f7cd-c574-48a6-be3a-07161f94bd7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.096 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <uuid>5cc7f7cd-c574-48a6-be3a-07161f94bd7f</uuid>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <name>instance-00000056</name>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerShowV247Test-server-424278229</nova:name>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:35:15</nova:creationTime>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <nova:user uuid="d3671400936c4ef08f70278919c780ad">tempest-ServerShowV247Test-914637317-project-member</nova:user>
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <nova:project uuid="c3f40887ae8b47f58d2d6d9acd3ae2e3">tempest-ServerShowV247Test-914637317</nova:project>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <system>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <entry name="serial">5cc7f7cd-c574-48a6-be3a-07161f94bd7f</entry>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <entry name="uuid">5cc7f7cd-c574-48a6-be3a-07161f94bd7f</entry>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     </system>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <os>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   </os>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <features>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   </features>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk">
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk.config">
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:16 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f/console.log" append="off"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <video>
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     </video>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:35:16 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:35:16 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:35:16 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:35:16 compute-0 nova_compute[260603]: </domain>
Oct 02 08:35:16 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.250 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:16 compute-0 podman[343256]: 2025-10-02 08:35:16.269859501 +0000 UTC m=+0.057766807 container create 079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brown, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:35:16 compute-0 ceph-mon[74477]: pgmap v1658: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 08:35:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3016871713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2765302471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1264225920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.290 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.290 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.290 2 INFO nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Using config drive
Oct 02 08:35:16 compute-0 systemd[1]: Started libpod-conmon-079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8.scope.
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.315 2 DEBUG nova.storage.rbd_utils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:16 compute-0 podman[343256]: 2025-10-02 08:35:16.235984023 +0000 UTC m=+0.023891429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:35:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:16 compute-0 podman[343256]: 2025-10-02 08:35:16.398543128 +0000 UTC m=+0.186450464 container init 079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brown, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:35:16 compute-0 podman[343256]: 2025-10-02 08:35:16.406119816 +0000 UTC m=+0.194027172 container start 079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:35:16 compute-0 podman[343256]: 2025-10-02 08:35:16.410005313 +0000 UTC m=+0.197912629 container attach 079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brown, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:35:16 compute-0 brave_brown[343306]: 167 167
Oct 02 08:35:16 compute-0 systemd[1]: libpod-079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8.scope: Deactivated successfully.
Oct 02 08:35:16 compute-0 conmon[343306]: conmon 079ea2d6988bc63cd70a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8.scope/container/memory.events
Oct 02 08:35:16 compute-0 podman[343256]: 2025-10-02 08:35:16.412475856 +0000 UTC m=+0.200383172 container died 079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.426 2 DEBUG nova.storage.rbd_utils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] resizing rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-af5ffac3e9094633e73577cc0642a3b07b2736b1eb7c11e5213de1e21de5bc9c-merged.mount: Deactivated successfully.
Oct 02 08:35:16 compute-0 podman[343256]: 2025-10-02 08:35:16.453656834 +0000 UTC m=+0.241564190 container remove 079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brown, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:35:16 compute-0 systemd[1]: libpod-conmon-079ea2d6988bc63cd70a13bd1e0df3bd0fa9eae4b40cf46d505fe05b4a027bc8.scope: Deactivated successfully.
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.580 2 DEBUG nova.objects.instance [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'migration_context' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.598 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.599 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Ensure instance console log exists: /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.600 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.601 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.601 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.605 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.611 2 WARNING nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.616 2 DEBUG nova.virt.libvirt.host [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.618 2 DEBUG nova.virt.libvirt.host [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.621 2 DEBUG nova.virt.libvirt.host [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.622 2 DEBUG nova.virt.libvirt.host [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.622 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.623 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.623 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.624 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.625 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.625 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.625 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.626 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.626 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.627 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.628 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.628 2 DEBUG nova.virt.hardware [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.633 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:16 compute-0 podman[343386]: 2025-10-02 08:35:16.672975957 +0000 UTC m=+0.044367055 container create 2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:35:16 compute-0 systemd[1]: Started libpod-conmon-2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2.scope.
Oct 02 08:35:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:16 compute-0 podman[343386]: 2025-10-02 08:35:16.652604724 +0000 UTC m=+0.023995852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5049ea030638055d0753110da930d732340f20d98c3101ea74e1b45375c58372/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5049ea030638055d0753110da930d732340f20d98c3101ea74e1b45375c58372/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5049ea030638055d0753110da930d732340f20d98c3101ea74e1b45375c58372/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5049ea030638055d0753110da930d732340f20d98c3101ea74e1b45375c58372/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:16 compute-0 podman[343386]: 2025-10-02 08:35:16.768527918 +0000 UTC m=+0.139919026 container init 2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 08:35:16 compute-0 podman[343386]: 2025-10-02 08:35:16.774984692 +0000 UTC m=+0.146375770 container start 2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:35:16 compute-0 podman[343386]: 2025-10-02 08:35:16.778665273 +0000 UTC m=+0.150056371 container attach 2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.818 2 INFO nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Creating config drive at /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f/disk.config
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.827 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmporo5exl1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:16 compute-0 nova_compute[260603]: 2025-10-02 08:35:16.994 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmporo5exl1" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.024 2 DEBUG nova.storage.rbd_utils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.029 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f/disk.config 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376734829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.096 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.125 2 DEBUG nova.storage.rbd_utils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.129 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.235 2 DEBUG oslo_concurrency.processutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f/disk.config 5cc7f7cd-c574-48a6-be3a-07161f94bd7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.236 2 INFO nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Deleting local config drive /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f/disk.config because it was imported into RBD.
Oct 02 08:35:17 compute-0 virtqemud[260328]: End of file while reading data: Input/output error
Oct 02 08:35:17 compute-0 virtqemud[260328]: End of file while reading data: Input/output error
Oct 02 08:35:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/376734829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:17 compute-0 systemd-machined[214636]: New machine qemu-100-instance-00000056.
Oct 02 08:35:17 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000056.
Oct 02 08:35:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1136169005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.609 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.612 2 DEBUG nova.objects.instance [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.630 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <uuid>330c4cea-9fec-4ab5-8ce1-3820232464cb</uuid>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <name>instance-00000057</name>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerShowV247Test-server-1285796490</nova:name>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:35:16</nova:creationTime>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <nova:user uuid="d3671400936c4ef08f70278919c780ad">tempest-ServerShowV247Test-914637317-project-member</nova:user>
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <nova:project uuid="c3f40887ae8b47f58d2d6d9acd3ae2e3">tempest-ServerShowV247Test-914637317</nova:project>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <system>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <entry name="serial">330c4cea-9fec-4ab5-8ce1-3820232464cb</entry>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <entry name="uuid">330c4cea-9fec-4ab5-8ce1-3820232464cb</entry>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     </system>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <os>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   </os>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <features>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   </features>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/330c4cea-9fec-4ab5-8ce1-3820232464cb_disk">
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config">
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/console.log" append="off"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <video>
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     </video>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:35:17 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:35:17 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:35:17 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:35:17 compute-0 nova_compute[260603]: </domain>
Oct 02 08:35:17 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.690 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.691 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.691 2 INFO nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Using config drive
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.711 2 DEBUG nova.storage.rbd_utils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]: {
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "osd_id": 2,
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "type": "bluestore"
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:     },
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "osd_id": 1,
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "type": "bluestore"
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:     },
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "osd_id": 0,
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:         "type": "bluestore"
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]:     }
Oct 02 08:35:17 compute-0 sweet_bhabha[343404]: }
Oct 02 08:35:17 compute-0 systemd[1]: libpod-2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2.scope: Deactivated successfully.
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.792 2 DEBUG nova.network.neutron [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.808 2 DEBUG oslo_concurrency.lockutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.809 2 DEBUG nova.compute.manager [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:17 compute-0 podman[343571]: 2025-10-02 08:35:17.811200136 +0000 UTC m=+0.031597940 container died 2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5049ea030638055d0753110da930d732340f20d98c3101ea74e1b45375c58372-merged.mount: Deactivated successfully.
Oct 02 08:35:17 compute-0 podman[343571]: 2025-10-02 08:35:17.877149479 +0000 UTC m=+0.097547273 container remove 2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:35:17 compute-0 systemd[1]: libpod-conmon-2de3dbc3d82b0bd86cb9fc56092aed476fdf1d24b343dbbf8c08171160f0e7d2.scope: Deactivated successfully.
Oct 02 08:35:17 compute-0 sudo[343152]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:35:17 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:35:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:35:17 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:35:17 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ae54d8b1-4370-4f15-a697-c884283a8354 does not exist
Oct 02 08:35:17 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 34ff5730-b91e-4f74-a3d2-5a381dc4017a does not exist
Oct 02 08:35:17 compute-0 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.989 2 INFO nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Creating config drive at /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config
Oct 02 08:35:17 compute-0 nova_compute[260603]: 2025-10-02 08:35:17.996 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplm54syi1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:17 compute-0 NetworkManager[45129]: <info>  [1759394117.9980] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:35:18 compute-0 ovn_controller[152344]: 2025-10-02T08:35:18Z|00819|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct 02 08:35:18 compute-0 ovn_controller[152344]: 2025-10-02T08:35:18Z|00820|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct 02 08:35:18 compute-0 ovn_controller[152344]: 2025-10-02T08:35:18Z|00821|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct 02 08:35:18 compute-0 sudo[343588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.021 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.023 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:35:18 compute-0 sudo[343588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.025 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.027 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8fecd4-1ce7-47df-999e-3f951c110ea1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.028 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore
Oct 02 08:35:18 compute-0 sudo[343588]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:18 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 08:35:18 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000055.scope: Consumed 13.150s CPU time.
Oct 02 08:35:18 compute-0 systemd-machined[214636]: Machine qemu-99-instance-00000055 terminated.
Oct 02 08:35:18 compute-0 sudo[343617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:35:18 compute-0 sudo[343617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:35:18 compute-0 sudo[343617]: pam_unix(sudo:session): session closed for user root
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.147 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplm54syi1" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.189 2 DEBUG nova.storage.rbd_utils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.195 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:18 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[341988]: [NOTICE]   (341992) : haproxy version is 2.8.14-c23fe91
Oct 02 08:35:18 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[341988]: [NOTICE]   (341992) : path to executable is /usr/sbin/haproxy
Oct 02 08:35:18 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[341988]: [WARNING]  (341992) : Exiting Master process...
Oct 02 08:35:18 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[341988]: [ALERT]    (341992) : Current worker (341994) exited with code 143 (Terminated)
Oct 02 08:35:18 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[341988]: [WARNING]  (341992) : All workers exited. Exiting... (0)
Oct 02 08:35:18 compute-0 systemd[1]: libpod-87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8.scope: Deactivated successfully.
Oct 02 08:35:18 compute-0 podman[343665]: 2025-10-02 08:35:18.218537639 +0000 UTC m=+0.070249862 container died 87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:35:18 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8-userdata-shm.mount: Deactivated successfully.
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.255 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.
Oct 02 08:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-95041ea22d2a3b4d29d27a083aa55580a6144219edc06f57014b0f9c0abddb0e-merged.mount: Deactivated successfully.
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.257 2 DEBUG nova.objects.instance [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:18 compute-0 podman[343665]: 2025-10-02 08:35:18.271234973 +0000 UTC m=+0.122947196 container cleanup 87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:35:18 compute-0 systemd[1]: libpod-conmon-87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8.scope: Deactivated successfully.
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.296 2 DEBUG nova.virt.libvirt.vif [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:35:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.297 2 DEBUG nova.network.os_vif_util [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.298 2 DEBUG nova.network.os_vif_util [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.299 2 DEBUG os_vif [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap961da5ba-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.306 2 INFO os_vif [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.316 2 DEBUG nova.virt.libvirt.driver [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start _get_guest_xml network_info=[{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:35:18 compute-0 ceph-mon[74477]: pgmap v1659: 305 pgs: 305 active+clean; 121 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 08:35:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1136169005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:18 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:35:18 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.321 2 WARNING nova.virt.libvirt.driver [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.330 2 DEBUG nova.virt.libvirt.host [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.331 2 DEBUG nova.virt.libvirt.host [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.334 2 DEBUG nova.virt.libvirt.host [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.335 2 DEBUG nova.virt.libvirt.host [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.335 2 DEBUG nova.virt.libvirt.driver [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.335 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.336 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.336 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.336 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.336 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.337 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.337 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.337 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.337 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.337 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.338 2 DEBUG nova.virt.hardware [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.338 2 DEBUG nova.objects.instance [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:18 compute-0 podman[343736]: 2025-10-02 08:35:18.362255889 +0000 UTC m=+0.052032035 container remove 87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.365 2 DEBUG oslo_concurrency.processutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.368 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15a61ea1-2973-441f-9ae5-031c9874cdba]: (4, ('Thu Oct  2 08:35:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8)\n87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8\nThu Oct  2 08:35:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8)\n87f30b1156cfffd57e50a82365421e4d326965dba057f46b500e75ad45dcccc8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.369 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aae135ce-0696-44ba-a816-954d3e8e1ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.370 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:18 compute-0 kernel: tap74f187c2-70: left promiscuous mode
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.403 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[41adcfef-6faa-4b76-9688-70fc3971eb44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.411 2 DEBUG oslo_concurrency.processutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.412 2 INFO nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Deleting local config drive /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config because it was imported into RBD.
Oct 02 08:35:18 compute-0 virtqemud[260328]: End of file while reading data: Input/output error
Oct 02 08:35:18 compute-0 virtqemud[260328]: End of file while reading data: Input/output error
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.432 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7485630-aff5-4197-8a3a-453967b1307e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[67f98be4-990d-4fb4-9b16-cb62d3ccaf1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.449 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dca6c1d2-54dc-40a4-bb14-d31e2d0d53ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496995, 'reachable_time': 31328, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343801, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.454 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:35:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:18.455 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fc754e5a-fc3f-4461-8c4c-b13ca96f7d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:18 compute-0 systemd-machined[214636]: New machine qemu-101-instance-00000057.
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:18 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000057.
Oct 02 08:35:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143411548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.805 2 DEBUG oslo_concurrency.processutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.837 2 DEBUG oslo_concurrency.processutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.885 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394118.8681533, 5cc7f7cd-c574-48a6-be3a-07161f94bd7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.885 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] VM Resumed (Lifecycle Event)
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.888 2 DEBUG nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.888 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.893 2 INFO nova.virt.libvirt.driver [-] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Instance spawned successfully.
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.894 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.912 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.920 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.925 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.925 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.926 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.926 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.927 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.927 2 DEBUG nova.virt.libvirt.driver [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.950 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.951 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394118.8688827, 5cc7f7cd-c574-48a6-be3a-07161f94bd7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.951 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] VM Started (Lifecycle Event)
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.977 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.980 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.984 2 INFO nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Took 4.58 seconds to spawn the instance on the hypervisor.
Oct 02 08:35:18 compute-0 nova_compute[260603]: 2025-10-02 08:35:18.985 2 DEBUG nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.001 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.044 2 INFO nova.compute.manager [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Took 5.59 seconds to build instance.
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.066 2 DEBUG oslo_concurrency.lockutils [None req-52eab809-306e-4d69-b889-d4dfc5000b3a d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 213 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 5.6 MiB/s wr, 117 op/s
Oct 02 08:35:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542017258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1143411548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2542017258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.329 2 DEBUG oslo_concurrency.processutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.330 2 DEBUG nova.virt.libvirt.vif [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:35:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.331 2 DEBUG nova.network.os_vif_util [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.333 2 DEBUG nova.network.os_vif_util [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.335 2 DEBUG nova.objects.instance [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.359 2 DEBUG nova.virt.libvirt.driver [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <uuid>ba2cf934-ce76-4de7-a495-285f144bdab7</uuid>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <name>instance-00000055</name>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestJSON-server-276449458</nova:name>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:35:18</nova:creationTime>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <nova:port uuid="961da5ba-b0ac-4a87-a74c-26d0d2d2bf50">
Oct 02 08:35:19 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <system>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <entry name="serial">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <entry name="uuid">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </system>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <os>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   </os>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <features>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   </features>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk">
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config">
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:bb:af:04"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <target dev="tap961da5ba-b0"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/console.log" append="off"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <video>
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </video>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:35:19 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:35:19 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:35:19 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:35:19 compute-0 nova_compute[260603]: </domain>
Oct 02 08:35:19 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.360 2 DEBUG nova.virt.libvirt.driver [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.360 2 DEBUG nova.virt.libvirt.driver [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.362 2 DEBUG nova.virt.libvirt.vif [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:35:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.362 2 DEBUG nova.network.os_vif_util [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.363 2 DEBUG nova.network.os_vif_util [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.364 2 DEBUG os_vif [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap961da5ba-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap961da5ba-b0, col_values=(('external_ids', {'iface-id': '961da5ba-b0ac-4a87-a74c-26d0d2d2bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:af:04', 'vm-uuid': 'ba2cf934-ce76-4de7-a495-285f144bdab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 NetworkManager[45129]: <info>  [1759394119.3749] manager: (tap961da5ba-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.384 2 INFO os_vif [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:35:19 compute-0 NetworkManager[45129]: <info>  [1759394119.4624] manager: (tap961da5ba-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Oct 02 08:35:19 compute-0 kernel: tap961da5ba-b0: entered promiscuous mode
Oct 02 08:35:19 compute-0 systemd-udevd[343632]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 ovn_controller[152344]: 2025-10-02T08:35:19Z|00822|binding|INFO|Claiming lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for this chassis.
Oct 02 08:35:19 compute-0 ovn_controller[152344]: 2025-10-02T08:35:19Z|00823|binding|INFO|961da5ba-b0ac-4a87-a74c-26d0d2d2bf50: Claiming fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.474 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.475 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.476 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:35:19 compute-0 NetworkManager[45129]: <info>  [1759394119.4997] device (tap961da5ba-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:35:19 compute-0 NetworkManager[45129]: <info>  [1759394119.5010] device (tap961da5ba-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:35:19 compute-0 systemd-machined[214636]: New machine qemu-102-instance-00000055.
Oct 02 08:35:19 compute-0 ovn_controller[152344]: 2025-10-02T08:35:19Z|00824|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 ovn-installed in OVS
Oct 02 08:35:19 compute-0 ovn_controller[152344]: 2025-10-02T08:35:19Z|00825|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 up in Southbound
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000055.
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.514 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[659e171c-a889-4b53-8d83-7cf1e468e2e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.515 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74f187c2-71 in ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.516 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74f187c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.517 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1f763896-28e5-4aaf-a197-8c7726e4cfa4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.518 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[745d84ee-99e1-425d-a673-804705f40653]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.523 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.539 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb3846c-ff6f-48c7-b797-ddcc09da706a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.569 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[40f2adde-fb15-4d35-9104-cc8a5f3207d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.603 2 DEBUG nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.604 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.609 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394119.609159, 330c4cea-9fec-4ab5-8ce1-3820232464cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] VM Resumed (Lifecycle Event)
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.613 2 INFO nova.virt.libvirt.driver [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance spawned successfully.
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.613 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.623 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4a211164-5b53-4531-af9d-b1fe75a2f0a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 NetworkManager[45129]: <info>  [1759394119.6289] manager: (tap74f187c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/335)
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.627 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63fa1f70-829c-42c0-bbb8-c43d24b5a4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.653 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.659 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.661 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.661 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.662 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.662 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.662 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.663 2 DEBUG nova.virt.libvirt.driver [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.671 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e188cf23-03ca-4c8a-a7ef-9ba4a5537ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.674 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b27a46-1e1d-4aee-8950-aaa99a8d10f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 NetworkManager[45129]: <info>  [1759394119.7075] device (tap74f187c2-70): carrier: link connected
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.708 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.708 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394119.6112652, 330c4cea-9fec-4ab5-8ce1-3820232464cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.709 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] VM Started (Lifecycle Event)
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.720 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8c622761-0482-428d-b8e0-efe4124b1a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.739 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5362e423-4537-4a09-8579-d899535b14d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499831, 'reachable_time': 24683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343963, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.750 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.754 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.764 2 INFO nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Took 4.12 seconds to spawn the instance on the hypervisor.
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.764 2 DEBUG nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.766 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c75a6d1-ecee-4d63-b287-305873084e5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7f62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499831, 'tstamp': 499831}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343964, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.779 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.787 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8725b9-793a-4167-977c-2ca7bbe53e1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499831, 'reachable_time': 24683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343965, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.823 2 INFO nova.compute.manager [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Took 5.18 seconds to build instance.
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.829 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9720d500-a5f0-4f27-b3cb-ed87e8a48d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.840 2 DEBUG oslo_concurrency.lockutils [None req-52d9ef0b-7075-4ade-a651-dabc760c5047 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "330c4cea-9fec-4ab5-8ce1-3820232464cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.904 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8f146286-8329-47cf-b9b4-def780fe5863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.905 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.906 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.906 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 NetworkManager[45129]: <info>  [1759394119.9086] manager: (tap74f187c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Oct 02 08:35:19 compute-0 kernel: tap74f187c2-70: entered promiscuous mode
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.912 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 ovn_controller[152344]: 2025-10-02T08:35:19Z|00826|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.914 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.915 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14da7cdb-11c7-4893-af6c-d9258d67bd5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.915 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:35:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:19.916 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'env', 'PROCESS_TAG=haproxy-74f187c2-780c-418d-98eb-b25294872ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74f187c2-780c-418d-98eb-b25294872ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:35:19 compute-0 nova_compute[260603]: 2025-10-02 08:35:19.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:20 compute-0 ceph-mon[74477]: pgmap v1660: 305 pgs: 305 active+clean; 213 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 5.6 MiB/s wr, 117 op/s
Oct 02 08:35:20 compute-0 podman[344039]: 2025-10-02 08:35:20.413137629 +0000 UTC m=+0.057960043 container create 344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:35:20 compute-0 systemd[1]: Started libpod-conmon-344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842.scope.
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.456 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ba2cf934-ce76-4de7-a495-285f144bdab7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.457 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394120.455863, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.457 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.459 2 DEBUG nova.compute.manager [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.462 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance rebooted successfully.
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.463 2 DEBUG nova.compute.manager [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8475d22b21271e45292b359d590037976a0dad60191ce705500bd7e5483420/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:20 compute-0 podman[344039]: 2025-10-02 08:35:20.381366445 +0000 UTC m=+0.026188879 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:35:20 compute-0 podman[344039]: 2025-10-02 08:35:20.492863196 +0000 UTC m=+0.137685650 container init 344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:35:20 compute-0 podman[344039]: 2025-10-02 08:35:20.500880526 +0000 UTC m=+0.145702930 container start 344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.503 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.506 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:20 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[344054]: [NOTICE]   (344058) : New worker (344060) forked
Oct 02 08:35:20 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[344054]: [NOTICE]   (344058) : Loading success.
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.541 2 DEBUG oslo_concurrency.lockutils [None req-b8f46803-e42e-441c-8f61-a6a3f47983f8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.542 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.542 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394120.457287, ba2cf934-ce76-4de7-a495-285f144bdab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.542 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Started (Lifecycle Event)
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.568 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:20 compute-0 nova_compute[260603]: 2025-10-02 08:35:20.572 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 213 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Oct 02 08:35:21 compute-0 nova_compute[260603]: 2025-10-02 08:35:21.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:21 compute-0 nova_compute[260603]: 2025-10-02 08:35:21.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:35:21 compute-0 nova_compute[260603]: 2025-10-02 08:35:21.554 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:35:21 compute-0 nova_compute[260603]: 2025-10-02 08:35:21.554 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:22 compute-0 podman[344070]: 2025-10-02 08:35:22.025578493 +0000 UTC m=+0.087648645 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:35:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:35:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/805001866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:35:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:35:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/805001866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:35:22 compute-0 podman[344069]: 2025-10-02 08:35:22.075227965 +0000 UTC m=+0.130350709 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:35:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:22 compute-0 ceph-mon[74477]: pgmap v1661: 305 pgs: 305 active+clean; 213 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Oct 02 08:35:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/805001866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:35:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/805001866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:35:22 compute-0 nova_compute[260603]: 2025-10-02 08:35:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.101 2 INFO nova.compute.manager [None req-5ed7f211-4a86-463a-84ae-1dbc43f246b0 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Get console output
Oct 02 08:35:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 214 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.353 2 INFO nova.compute.manager [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Rebuilding instance
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.568 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.584 2 DEBUG nova.compute.manager [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.630 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.674 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.686 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'resources' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.698 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'migration_context' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.711 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:35:23 compute-0 nova_compute[260603]: 2025-10-02 08:35:23.715 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.045 2 DEBUG nova.compute.manager [req-e29c22cd-8cd7-43e7-ac68-ff02db2d87da req-c99e53b6-7037-4154-bcaa-ca41e488bdfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.046 2 DEBUG oslo_concurrency.lockutils [req-e29c22cd-8cd7-43e7-ac68-ff02db2d87da req-c99e53b6-7037-4154-bcaa-ca41e488bdfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.046 2 DEBUG oslo_concurrency.lockutils [req-e29c22cd-8cd7-43e7-ac68-ff02db2d87da req-c99e53b6-7037-4154-bcaa-ca41e488bdfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.047 2 DEBUG oslo_concurrency.lockutils [req-e29c22cd-8cd7-43e7-ac68-ff02db2d87da req-c99e53b6-7037-4154-bcaa-ca41e488bdfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.047 2 DEBUG nova.compute.manager [req-e29c22cd-8cd7-43e7-ac68-ff02db2d87da req-c99e53b6-7037-4154-bcaa-ca41e488bdfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.048 2 WARNING nova.compute.manager [req-e29c22cd-8cd7-43e7-ac68-ff02db2d87da req-c99e53b6-7037-4154-bcaa-ca41e488bdfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.180 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.182 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.196 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.291 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.291 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.299 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.299 2 INFO nova.compute.claims [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:35:24 compute-0 ceph-mon[74477]: pgmap v1662: 305 pgs: 305 active+clean; 214 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.478 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:35:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997927061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.908 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.914 2 DEBUG nova.compute.provider_tree [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.932 2 DEBUG nova.scheduler.client.report [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.957 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:24 compute-0 nova_compute[260603]: 2025-10-02 08:35:24.958 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.017 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.018 2 DEBUG nova.network.neutron [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.041 2 INFO nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.055 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:35:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 214 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.138 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.139 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.140 2 INFO nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Creating image(s)
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.167 2 DEBUG nova.storage.rbd_utils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image c20fff86-9831-43b7-b579-48efe60be24b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.195 2 DEBUG nova.storage.rbd_utils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image c20fff86-9831-43b7-b579-48efe60be24b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.223 2 DEBUG nova.storage.rbd_utils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image c20fff86-9831-43b7-b579-48efe60be24b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.227 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.267 2 DEBUG nova.policy [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2fa0bf72f6f34b41ac7942357b5d7851', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.311 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.312 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.313 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.313 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.339 2 DEBUG nova.storage.rbd_utils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image c20fff86-9831-43b7-b579-48efe60be24b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.343 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c20fff86-9831-43b7-b579-48efe60be24b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1997927061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.617 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c20fff86-9831-43b7-b579-48efe60be24b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.675 2 DEBUG nova.storage.rbd_utils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] resizing rbd image c20fff86-9831-43b7-b579-48efe60be24b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.744 2 DEBUG nova.objects.instance [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'migration_context' on Instance uuid c20fff86-9831-43b7-b579-48efe60be24b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.765 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.765 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Ensure instance console log exists: /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.766 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.766 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.766 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:25 compute-0 nova_compute[260603]: 2025-10-02 08:35:25.971 2 DEBUG nova.network.neutron [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Successfully created port: 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.316 2 DEBUG nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.317 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.318 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.318 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.318 2 DEBUG nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.319 2 WARNING nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.319 2 DEBUG nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.319 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.320 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.320 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.320 2 DEBUG nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.321 2 WARNING nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.321 2 DEBUG nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.321 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.322 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.322 2 DEBUG oslo_concurrency.lockutils [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.322 2 DEBUG nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.323 2 WARNING nova.compute.manager [req-023ff809-33e8-4d08-9e63-31d228cf90ef req-7beaab7e-ccb6-4dc9-a7d9-1b3c395fd912 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.
Oct 02 08:35:26 compute-0 ceph-mon[74477]: pgmap v1663: 305 pgs: 305 active+clean; 214 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.541 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.542 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.559 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.655 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.656 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.662 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.662 2 INFO nova.compute.claims [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:35:26 compute-0 nova_compute[260603]: 2025-10-02 08:35:26.847 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 214 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Oct 02 08:35:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:35:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3567241420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.316 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.323 2 DEBUG nova.compute.provider_tree [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.345 2 DEBUG nova.scheduler.client.report [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.370 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.371 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:35:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3567241420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.403 2 DEBUG nova.network.neutron [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Successfully updated port: 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.424 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "refresh_cache-c20fff86-9831-43b7-b579-48efe60be24b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.425 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquired lock "refresh_cache-c20fff86-9831-43b7-b579-48efe60be24b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.425 2 DEBUG nova.network.neutron [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.431 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.432 2 DEBUG nova.network.neutron [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.453 2 INFO nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.470 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.537 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.537 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.538 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.538 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.538 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.594 2 DEBUG nova.compute.manager [req-92a19c3a-0050-4b80-a3ed-80cf7429646e req-2998f8da-9ed6-4cb1-8342-4436c8196cca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-changed-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.595 2 DEBUG nova.compute.manager [req-92a19c3a-0050-4b80-a3ed-80cf7429646e req-2998f8da-9ed6-4cb1-8342-4436c8196cca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Refreshing instance network info cache due to event network-changed-03538fdf-ba28-4b62-a592-dbf3a07dcfb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.595 2 DEBUG oslo_concurrency.lockutils [req-92a19c3a-0050-4b80-a3ed-80cf7429646e req-2998f8da-9ed6-4cb1-8342-4436c8196cca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c20fff86-9831-43b7-b579-48efe60be24b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.597 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.598 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.599 2 INFO nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Creating image(s)
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.627 2 DEBUG nova.storage.rbd_utils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.655 2 DEBUG nova.storage.rbd_utils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.683 2 DEBUG nova.storage.rbd_utils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.687 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.737 2 DEBUG nova.policy [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2fa0bf72f6f34b41ac7942357b5d7851', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.740 2 DEBUG nova.network.neutron [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.792 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.793 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.794 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.794 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.818 2 DEBUG nova.storage.rbd_utils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.822 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.880 2 DEBUG oslo_concurrency.lockutils [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.881 2 DEBUG oslo_concurrency.lockutils [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.882 2 DEBUG nova.compute.manager [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.888 2 DEBUG nova.compute.manager [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.892 2 DEBUG nova.objects.instance [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:35:27 compute-0 nova_compute[260603]: 2025-10-02 08:35:27.921 2 DEBUG nova.virt.libvirt.driver [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:35:27
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta']
Oct 02 08:35:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:35:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:35:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655311275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.041 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.136 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:28 compute-0 podman[344437]: 2025-10-02 08:35:28.173331348 +0000 UTC m=+0.074599173 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.191 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.191 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.231 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.231 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.235 2 DEBUG nova.storage.rbd_utils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] resizing rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.274 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.274 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.281 2 DEBUG nova.network.neutron [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Successfully created port: bf1797c8-0b08-41b3-a518-4cbacdbac9a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.352 2 DEBUG nova.objects.instance [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'migration_context' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.367 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.368 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Ensure instance console log exists: /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.368 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.368 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.369 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:28 compute-0 ceph-mon[74477]: pgmap v1664: 305 pgs: 305 active+clean; 214 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Oct 02 08:35:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3655311275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.512 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.513 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3369MB free_disk=59.90092468261719GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.513 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.514 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.601 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ba2cf934-ce76-4de7-a495-285f144bdab7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.602 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5cc7f7cd-c574-48a6-be3a-07161f94bd7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.602 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 330c4cea-9fec-4ab5-8ce1-3820232464cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.602 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance c20fff86-9831-43b7-b579-48efe60be24b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.602 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.603 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.603 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.736 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.930 2 DEBUG nova.network.neutron [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Updating instance_info_cache with network_info: [{"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.955 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Releasing lock "refresh_cache-c20fff86-9831-43b7-b579-48efe60be24b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.956 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Instance network_info: |[{"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.957 2 DEBUG oslo_concurrency.lockutils [req-92a19c3a-0050-4b80-a3ed-80cf7429646e req-2998f8da-9ed6-4cb1-8342-4436c8196cca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c20fff86-9831-43b7-b579-48efe60be24b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.957 2 DEBUG nova.network.neutron [req-92a19c3a-0050-4b80-a3ed-80cf7429646e req-2998f8da-9ed6-4cb1-8342-4436c8196cca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Refreshing network info cache for port 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.962 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Start _get_guest_xml network_info=[{"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.967 2 WARNING nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.972 2 DEBUG nova.virt.libvirt.host [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.972 2 DEBUG nova.virt.libvirt.host [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.981 2 DEBUG nova.virt.libvirt.host [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.982 2 DEBUG nova.virt.libvirt.host [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.982 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.983 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.984 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.984 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.984 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.984 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.985 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.985 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.985 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.986 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.986 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.986 2 DEBUG nova.virt.hardware [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:35:28 compute-0 nova_compute[260603]: 2025-10-02 08:35:28.994 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 260 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 300 op/s
Oct 02 08:35:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:35:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372761315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.259 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.267 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.286 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.311 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.311 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:29 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2372761315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3878684582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.548 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.577 2 DEBUG nova.storage.rbd_utils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image c20fff86-9831-43b7-b579-48efe60be24b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:29 compute-0 nova_compute[260603]: 2025-10-02 08:35:29.582 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/873973882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.151 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.153 2 DEBUG nova.virt.libvirt.vif [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:35:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-966219500',display_name='tempest-ServerRescueNegativeTestJSON-server-966219500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-966219500',id=88,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44f1ad17ce794fbdbf606f465f6ab7ec',ramdisk_id='',reservation_id='r-2tkk3ni7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1454054093',owner_user_name='tempest-ServerRescueNegativeTestJSON-1454054093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:35:25Z,user_data=None,user_id='2fa0bf72f6f34b41ac7942357b5d7851',uuid=c20fff86-9831-43b7-b579-48efe60be24b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.154 2 DEBUG nova.network.os_vif_util [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converting VIF {"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.154 2 DEBUG nova.network.os_vif_util [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:d9:a2,bridge_name='br-int',has_traffic_filtering=True,id=03538fdf-ba28-4b62-a592-dbf3a07dcfb9,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03538fdf-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.156 2 DEBUG nova.objects.instance [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'pci_devices' on Instance uuid c20fff86-9831-43b7-b579-48efe60be24b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.171 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <uuid>c20fff86-9831-43b7-b579-48efe60be24b</uuid>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <name>instance-00000058</name>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-966219500</nova:name>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:35:28</nova:creationTime>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <nova:user uuid="2fa0bf72f6f34b41ac7942357b5d7851">tempest-ServerRescueNegativeTestJSON-1454054093-project-member</nova:user>
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <nova:project uuid="44f1ad17ce794fbdbf606f465f6ab7ec">tempest-ServerRescueNegativeTestJSON-1454054093</nova:project>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <nova:port uuid="03538fdf-ba28-4b62-a592-dbf3a07dcfb9">
Oct 02 08:35:30 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <system>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <entry name="serial">c20fff86-9831-43b7-b579-48efe60be24b</entry>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <entry name="uuid">c20fff86-9831-43b7-b579-48efe60be24b</entry>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </system>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <os>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   </os>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <features>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   </features>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/c20fff86-9831-43b7-b579-48efe60be24b_disk">
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/c20fff86-9831-43b7-b579-48efe60be24b_disk.config">
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:30 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:1c:d9:a2"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <target dev="tap03538fdf-ba"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b/console.log" append="off"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <video>
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </video>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:35:30 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:35:30 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:35:30 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:35:30 compute-0 nova_compute[260603]: </domain>
Oct 02 08:35:30 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.172 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Preparing to wait for external event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.173 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.173 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.173 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.174 2 DEBUG nova.virt.libvirt.vif [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:35:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-966219500',display_name='tempest-ServerRescueNegativeTestJSON-server-966219500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-966219500',id=88,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44f1ad17ce794fbdbf606f465f6ab7ec',ramdisk_id='',reservation_id='r-2tkk3ni7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1454054093',owner_user_name='
tempest-ServerRescueNegativeTestJSON-1454054093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:35:25Z,user_data=None,user_id='2fa0bf72f6f34b41ac7942357b5d7851',uuid=c20fff86-9831-43b7-b579-48efe60be24b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.174 2 DEBUG nova.network.os_vif_util [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converting VIF {"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.175 2 DEBUG nova.network.os_vif_util [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:d9:a2,bridge_name='br-int',has_traffic_filtering=True,id=03538fdf-ba28-4b62-a592-dbf3a07dcfb9,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03538fdf-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.175 2 DEBUG os_vif [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:d9:a2,bridge_name='br-int',has_traffic_filtering=True,id=03538fdf-ba28-4b62-a592-dbf3a07dcfb9,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03538fdf-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03538fdf-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap03538fdf-ba, col_values=(('external_ids', {'iface-id': '03538fdf-ba28-4b62-a592-dbf3a07dcfb9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:d9:a2', 'vm-uuid': 'c20fff86-9831-43b7-b579-48efe60be24b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:30 compute-0 NetworkManager[45129]: <info>  [1759394130.1836] manager: (tap03538fdf-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.192 2 INFO os_vif [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:d9:a2,bridge_name='br-int',has_traffic_filtering=True,id=03538fdf-ba28-4b62-a592-dbf3a07dcfb9,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03538fdf-ba')
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.244 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.245 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.245 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No VIF found with MAC fa:16:3e:1c:d9:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.246 2 INFO nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Using config drive
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.275 2 DEBUG nova.storage.rbd_utils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image c20fff86-9831-43b7-b579-48efe60be24b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.307 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.308 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:30 compute-0 nova_compute[260603]: 2025-10-02 08:35:30.308 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:35:30 compute-0 ceph-mon[74477]: pgmap v1665: 305 pgs: 305 active+clean; 260 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 300 op/s
Oct 02 08:35:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3878684582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/873973882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 260 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.119 2 DEBUG nova.network.neutron [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Successfully updated port: bf1797c8-0b08-41b3-a518-4cbacdbac9a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.140 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.140 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquired lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.140 2 DEBUG nova.network.neutron [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.161 2 INFO nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Creating config drive at /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b/disk.config
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.169 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd3g7_7mz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.227 2 DEBUG nova.compute.manager [req-149463f0-f1cb-4038-9660-87923033f7ab req-681459af-5d51-4a57-9ed7-6f66d7813eef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-changed-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.227 2 DEBUG nova.compute.manager [req-149463f0-f1cb-4038-9660-87923033f7ab req-681459af-5d51-4a57-9ed7-6f66d7813eef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Refreshing instance network info cache due to event network-changed-bf1797c8-0b08-41b3-a518-4cbacdbac9a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.228 2 DEBUG oslo_concurrency.lockutils [req-149463f0-f1cb-4038-9660-87923033f7ab req-681459af-5d51-4a57-9ed7-6f66d7813eef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.327 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd3g7_7mz" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.363 2 DEBUG nova.storage.rbd_utils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image c20fff86-9831-43b7-b579-48efe60be24b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.374 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b/disk.config c20fff86-9831-43b7-b579-48efe60be24b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.476 2 DEBUG nova.network.neutron [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.553 2 DEBUG oslo_concurrency.processutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b/disk.config c20fff86-9831-43b7-b579-48efe60be24b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.554 2 INFO nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Deleting local config drive /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b/disk.config because it was imported into RBD.
Oct 02 08:35:31 compute-0 kernel: tap03538fdf-ba: entered promiscuous mode
Oct 02 08:35:31 compute-0 NetworkManager[45129]: <info>  [1759394131.6070] manager: (tap03538fdf-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:31 compute-0 ovn_controller[152344]: 2025-10-02T08:35:31Z|00827|binding|INFO|Claiming lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for this chassis.
Oct 02 08:35:31 compute-0 ovn_controller[152344]: 2025-10-02T08:35:31Z|00828|binding|INFO|03538fdf-ba28-4b62-a592-dbf3a07dcfb9: Claiming fa:16:3e:1c:d9:a2 10.100.0.13
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.620 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:d9:a2 10.100.0.13'], port_security=['fa:16:3e:1c:d9:a2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c20fff86-9831-43b7-b579-48efe60be24b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17cfbd28-1f2b-4eb6-ab31-e8960544023c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af1b7ebd-469f-49ea-a9a9-ba2e42baffc7, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=03538fdf-ba28-4b62-a592-dbf3a07dcfb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.621 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 in datapath 4aed206b-8fa2-4bbf-a202-0f7b63ace915 bound to our chassis
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.622 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4aed206b-8fa2-4bbf-a202-0f7b63ace915
Oct 02 08:35:31 compute-0 ovn_controller[152344]: 2025-10-02T08:35:31Z|00829|binding|INFO|Setting lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 ovn-installed in OVS
Oct 02 08:35:31 compute-0 ovn_controller[152344]: 2025-10-02T08:35:31Z|00830|binding|INFO|Setting lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 up in Southbound
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.634 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48e3815e-2157-4511-af9a-a9c4643b9c29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.635 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4aed206b-81 in ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.637 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4aed206b-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.637 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd27c2ce-f0eb-4eae-a8ca-26d51777fd6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.638 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e6905ebd-873b-4a39-9ed4-9a9ffdf05e65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.649 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[81c8fa46-adb6-4bdd-be5c-46dd66894109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 systemd-machined[214636]: New machine qemu-103-instance-00000058.
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.662 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[67b27a4c-98ec-4a31-bbed-0a349ed10128]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000058.
Oct 02 08:35:31 compute-0 systemd-udevd[344699]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:35:31 compute-0 NetworkManager[45129]: <info>  [1759394131.6902] device (tap03538fdf-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:35:31 compute-0 NetworkManager[45129]: <info>  [1759394131.6910] device (tap03538fdf-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.707 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[caeeeff5-c717-4de2-b434-a57cab46cc84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 NetworkManager[45129]: <info>  [1759394131.7136] manager: (tap4aed206b-80): new Veth device (/org/freedesktop/NetworkManager/Devices/339)
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.712 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3aa3d9-5f0b-482e-bf80-91211e6d7c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.727 2 DEBUG nova.network.neutron [req-92a19c3a-0050-4b80-a3ed-80cf7429646e req-2998f8da-9ed6-4cb1-8342-4436c8196cca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Updated VIF entry in instance network info cache for port 03538fdf-ba28-4b62-a592-dbf3a07dcfb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.727 2 DEBUG nova.network.neutron [req-92a19c3a-0050-4b80-a3ed-80cf7429646e req-2998f8da-9ed6-4cb1-8342-4436c8196cca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Updating instance_info_cache with network_info: [{"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.747 2 DEBUG oslo_concurrency.lockutils [req-92a19c3a-0050-4b80-a3ed-80cf7429646e req-2998f8da-9ed6-4cb1-8342-4436c8196cca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c20fff86-9831-43b7-b579-48efe60be24b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.751 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[de152e78-a0c5-4bba-a53e-cc590701266e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 podman[344684]: 2025-10-02 08:35:31.753945775 +0000 UTC m=+0.104986926 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.754 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bc422f0a-a8de-4946-9d34-339e68dab43d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 NetworkManager[45129]: <info>  [1759394131.7792] device (tap4aed206b-80): carrier: link connected
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.783 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b89966-a2b2-4c63-8e6a-40f4558585d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[82c9c5cb-2b42-4fe3-b1fa-114b2f29f139]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4aed206b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:78:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501038, 'reachable_time': 34238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344737, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.818 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e22eed79-349c-459f-85b3-71a730579512]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:78ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501038, 'tstamp': 501038}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344738, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.835 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a3a8d3-fe01-438a-aa7f-74d36fc60646]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4aed206b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:78:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501038, 'reachable_time': 34238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344739, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.862 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e06a5b-72b9-4b12-8e45-152db681f08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.947 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b597d628-cf83-4eed-ae0e-57574ce7bf47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.949 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4aed206b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.949 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.949 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4aed206b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:31 compute-0 NetworkManager[45129]: <info>  [1759394131.9518] manager: (tap4aed206b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:31 compute-0 kernel: tap4aed206b-80: entered promiscuous mode
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.955 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4aed206b-80, col_values=(('external_ids', {'iface-id': '2156689f-0588-4bb0-992e-a8d6a79bb745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:31 compute-0 ovn_controller[152344]: 2025-10-02T08:35:31Z|00831|binding|INFO|Releasing lport 2156689f-0588-4bb0-992e-a8d6a79bb745 from this chassis (sb_readonly=0)
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:31 compute-0 nova_compute[260603]: 2025-10-02 08:35:31.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.975 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4aed206b-8fa2-4bbf-a202-0f7b63ace915.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4aed206b-8fa2-4bbf-a202-0f7b63ace915.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.976 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a846136a-dc43-4066-8b9e-cf0a48ce43d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.977 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-4aed206b-8fa2-4bbf-a202-0f7b63ace915
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/4aed206b-8fa2-4bbf-a202-0f7b63ace915.pid.haproxy
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 4aed206b-8fa2-4bbf-a202-0f7b63ace915
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:35:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:31.978 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'env', 'PROCESS_TAG=haproxy-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4aed206b-8fa2-4bbf-a202-0f7b63ace915.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:35:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:32 compute-0 ovn_controller[152344]: 2025-10-02T08:35:32Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:35:32 compute-0 podman[344813]: 2025-10-02 08:35:32.403667733 +0000 UTC m=+0.057019374 container create 7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:35:32 compute-0 ceph-mon[74477]: pgmap v1666: 305 pgs: 305 active+clean; 260 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Oct 02 08:35:32 compute-0 systemd[1]: Started libpod-conmon-7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1.scope.
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.442 2 DEBUG nova.network.neutron [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Updating instance_info_cache with network_info: [{"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:32 compute-0 podman[344813]: 2025-10-02 08:35:32.373407144 +0000 UTC m=+0.026758825 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.470 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Releasing lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.470 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance network_info: |[{"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.470 2 DEBUG oslo_concurrency.lockutils [req-149463f0-f1cb-4038-9660-87923033f7ab req-681459af-5d51-4a57-9ed7-6f66d7813eef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.471 2 DEBUG nova.network.neutron [req-149463f0-f1cb-4038-9660-87923033f7ab req-681459af-5d51-4a57-9ed7-6f66d7813eef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Refreshing network info cache for port bf1797c8-0b08-41b3-a518-4cbacdbac9a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.473 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Start _get_guest_xml network_info=[{"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:35:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.478 2 WARNING nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99ea9d6bf9f213c3e40c001481d9887f6952603e36e6b65a533836bdc10da90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.486 2 DEBUG nova.virt.libvirt.host [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.486 2 DEBUG nova.virt.libvirt.host [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.494 2 DEBUG nova.virt.libvirt.host [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.494 2 DEBUG nova.virt.libvirt.host [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:35:32 compute-0 podman[344813]: 2025-10-02 08:35:32.495272657 +0000 UTC m=+0.148624298 container init 7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.495 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.495 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.495 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.495 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.496 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.496 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.496 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.496 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.496 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.496 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.497 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.497 2 DEBUG nova.virt.hardware [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.500 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:32 compute-0 podman[344813]: 2025-10-02 08:35:32.502727911 +0000 UTC m=+0.156079532 container start 7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:35:32 compute-0 neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915[344828]: [NOTICE]   (344832) : New worker (344835) forked
Oct 02 08:35:32 compute-0 neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915[344828]: [NOTICE]   (344832) : Loading success.
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.537 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394132.5161493, c20fff86-9831-43b7-b579-48efe60be24b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.537 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] VM Started (Lifecycle Event)
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.560 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.563 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394132.5165205, c20fff86-9831-43b7-b579-48efe60be24b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.563 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] VM Paused (Lifecycle Event)
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.582 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.584 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:35:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258506211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:32 compute-0 nova_compute[260603]: 2025-10-02 08:35:32.983 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.006 2 DEBUG nova.storage.rbd_utils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.010 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 372 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.8 MiB/s wr, 446 op/s
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.384 2 DEBUG nova.compute.manager [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.384 2 DEBUG oslo_concurrency.lockutils [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.385 2 DEBUG oslo_concurrency.lockutils [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.385 2 DEBUG oslo_concurrency.lockutils [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.385 2 DEBUG nova.compute.manager [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Processing event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.385 2 DEBUG nova.compute.manager [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.386 2 DEBUG oslo_concurrency.lockutils [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.386 2 DEBUG oslo_concurrency.lockutils [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.386 2 DEBUG oslo_concurrency.lockutils [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.386 2 DEBUG nova.compute.manager [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] No waiting events found dispatching network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.387 2 WARNING nova.compute.manager [req-0d4100d9-35c0-484b-84fd-90ed5f89e46e req-04ed321a-008f-4656-894a-974c4d4fb566 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received unexpected event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for instance with vm_state building and task_state spawning.
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.387 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.392 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394133.3921137, c20fff86-9831-43b7-b579-48efe60be24b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.392 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] VM Resumed (Lifecycle Event)
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.395 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.406 2 INFO nova.virt.libvirt.driver [-] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Instance spawned successfully.
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.406 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.419 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4258506211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.431 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.441 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.442 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.442 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.442 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.443 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.443 2 DEBUG nova.virt.libvirt.driver [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.448 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:35:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785545755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.470 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.471 2 DEBUG nova.virt.libvirt.vif [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:35:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1207156767',display_name='tempest-ServerRescueNegativeTestJSON-server-1207156767',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1207156767',id=89,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44f1ad17ce794fbdbf606f465f6ab7ec',ramdisk_id='',reservation_id='r-k7oyv205',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1454054093',owner_user_name='tempest-ServerRescueNegativeTestJSON-1454054093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:35:27Z,user_data=None,user_id='2fa0bf72f6f34b41ac7942357b5d7851',uuid=5f692626-ba1b-4b0c-9046-4cd9ef3c25cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.471 2 DEBUG nova.network.os_vif_util [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converting VIF {"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.472 2 DEBUG nova.network.os_vif_util [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=bf1797c8-0b08-41b3-a518-4cbacdbac9a0,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1797c8-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.472 2 DEBUG nova.objects.instance [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.497 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <uuid>5f692626-ba1b-4b0c-9046-4cd9ef3c25cd</uuid>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <name>instance-00000059</name>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1207156767</nova:name>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:35:32</nova:creationTime>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <nova:user uuid="2fa0bf72f6f34b41ac7942357b5d7851">tempest-ServerRescueNegativeTestJSON-1454054093-project-member</nova:user>
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <nova:project uuid="44f1ad17ce794fbdbf606f465f6ab7ec">tempest-ServerRescueNegativeTestJSON-1454054093</nova:project>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <nova:port uuid="bf1797c8-0b08-41b3-a518-4cbacdbac9a0">
Oct 02 08:35:33 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <system>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <entry name="serial">5f692626-ba1b-4b0c-9046-4cd9ef3c25cd</entry>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <entry name="uuid">5f692626-ba1b-4b0c-9046-4cd9ef3c25cd</entry>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </system>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <os>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   </os>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <features>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   </features>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk">
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config">
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:62:1b:d0"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <target dev="tapbf1797c8-0b"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/console.log" append="off"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <video>
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </video>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:35:33 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:35:33 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:35:33 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:35:33 compute-0 nova_compute[260603]: </domain>
Oct 02 08:35:33 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.497 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Preparing to wait for external event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.497 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.497 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.498 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.498 2 DEBUG nova.virt.libvirt.vif [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:35:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1207156767',display_name='tempest-ServerRescueNegativeTestJSON-server-1207156767',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1207156767',id=89,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44f1ad17ce794fbdbf606f465f6ab7ec',ramdisk_id='',reservation_id='r-k7oyv205',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1454054093',owner_user_name='tempest-ServerRescueNegativeTestJSON-1454054093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:35:27Z,user_data=None,user_id='2fa0bf72f6f34b41ac7942357b5d7851',uuid=5f692626-ba1b-4b0c-9046-4cd9ef3c25cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.498 2 DEBUG nova.network.os_vif_util [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converting VIF {"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.499 2 DEBUG nova.network.os_vif_util [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=bf1797c8-0b08-41b3-a518-4cbacdbac9a0,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1797c8-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.499 2 DEBUG os_vif [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=bf1797c8-0b08-41b3-a518-4cbacdbac9a0,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1797c8-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf1797c8-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.503 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf1797c8-0b, col_values=(('external_ids', {'iface-id': 'bf1797c8-0b08-41b3-a518-4cbacdbac9a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:1b:d0', 'vm-uuid': '5f692626-ba1b-4b0c-9046-4cd9ef3c25cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:33 compute-0 NetworkManager[45129]: <info>  [1759394133.5047] manager: (tapbf1797c8-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.507 2 INFO nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Took 8.37 seconds to spawn the instance on the hypervisor.
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.507 2 DEBUG nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.511 2 INFO os_vif [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=bf1797c8-0b08-41b3-a518-4cbacdbac9a0,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1797c8-0b')
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.588 2 INFO nova.compute.manager [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Took 9.32 seconds to build instance.
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.591 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.591 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.591 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No VIF found with MAC fa:16:3e:62:1b:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.591 2 INFO nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Using config drive
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.609 2 DEBUG nova.storage.rbd_utils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.614 2 DEBUG oslo_concurrency.lockutils [None req-93f5aa1f-e52a-4090-a9dd-bde0df417b71 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.818 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:35:33 compute-0 nova_compute[260603]: 2025-10-02 08:35:33.998 2 INFO nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Creating config drive at /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.003 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxgwjuff9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.153 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxgwjuff9" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.191 2 DEBUG nova.storage.rbd_utils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.196 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.395 2 DEBUG oslo_concurrency.processutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.397 2 INFO nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Deleting local config drive /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config because it was imported into RBD.
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.410 2 DEBUG nova.network.neutron [req-149463f0-f1cb-4038-9660-87923033f7ab req-681459af-5d51-4a57-9ed7-6f66d7813eef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Updated VIF entry in instance network info cache for port bf1797c8-0b08-41b3-a518-4cbacdbac9a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.412 2 DEBUG nova.network.neutron [req-149463f0-f1cb-4038-9660-87923033f7ab req-681459af-5d51-4a57-9ed7-6f66d7813eef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Updating instance_info_cache with network_info: [{"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:34 compute-0 ceph-mon[74477]: pgmap v1667: 305 pgs: 305 active+clean; 372 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.8 MiB/s wr, 446 op/s
Oct 02 08:35:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/785545755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.453 2 DEBUG oslo_concurrency.lockutils [req-149463f0-f1cb-4038-9660-87923033f7ab req-681459af-5d51-4a57-9ed7-6f66d7813eef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:34 compute-0 kernel: tapbf1797c8-0b: entered promiscuous mode
Oct 02 08:35:34 compute-0 NetworkManager[45129]: <info>  [1759394134.4780] manager: (tapbf1797c8-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/342)
Oct 02 08:35:34 compute-0 ovn_controller[152344]: 2025-10-02T08:35:34Z|00832|binding|INFO|Claiming lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for this chassis.
Oct 02 08:35:34 compute-0 ovn_controller[152344]: 2025-10-02T08:35:34Z|00833|binding|INFO|bf1797c8-0b08-41b3-a518-4cbacdbac9a0: Claiming fa:16:3e:62:1b:d0 10.100.0.5
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.498 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:1b:d0 10.100.0.5'], port_security=['fa:16:3e:62:1b:d0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5f692626-ba1b-4b0c-9046-4cd9ef3c25cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17cfbd28-1f2b-4eb6-ab31-e8960544023c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af1b7ebd-469f-49ea-a9a9-ba2e42baffc7, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf1797c8-0b08-41b3-a518-4cbacdbac9a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:35:34 compute-0 NetworkManager[45129]: <info>  [1759394134.5041] device (tapbf1797c8-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.500 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf1797c8-0b08-41b3-a518-4cbacdbac9a0 in datapath 4aed206b-8fa2-4bbf-a202-0f7b63ace915 bound to our chassis
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.503 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4aed206b-8fa2-4bbf-a202-0f7b63ace915
Oct 02 08:35:34 compute-0 NetworkManager[45129]: <info>  [1759394134.5108] device (tapbf1797c8-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.522 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[578ac449-f546-4090-9431-356ab2f0823c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:34 compute-0 ovn_controller[152344]: 2025-10-02T08:35:34Z|00834|binding|INFO|Setting lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 ovn-installed in OVS
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:34 compute-0 ovn_controller[152344]: 2025-10-02T08:35:34Z|00835|binding|INFO|Setting lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 up in Southbound
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:34 compute-0 systemd-machined[214636]: New machine qemu-104-instance-00000059.
Oct 02 08:35:34 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000059.
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.566 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f688f005-32da-423e-adea-9f5d4cec3ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.571 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[11117644-bf2f-46f4-ae9b-a0883cac38c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.611 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0adf339d-84c2-4976-8da1-8404ef06f7fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.635 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[58dd7401-9db9-4a64-8a7d-a0e31ec5f37d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4aed206b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:78:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501038, 'reachable_time': 34238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344989, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.658 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[acc5f583-b770-4da0-9eb1-7864d7fb3be1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4aed206b-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501050, 'tstamp': 501050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344991, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4aed206b-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501054, 'tstamp': 501054}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344991, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.660 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4aed206b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.667 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4aed206b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.668 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.668 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4aed206b-80, col_values=(('external_ids', {'iface-id': '2156689f-0588-4bb0-992e-a8d6a79bb745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.669 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.746 2 DEBUG nova.compute.manager [req-2e385770-cee7-4a20-84c2-9f8ff9243422 req-0384a840-0a96-411f-9245-ad8b5eb0e675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.746 2 DEBUG oslo_concurrency.lockutils [req-2e385770-cee7-4a20-84c2-9f8ff9243422 req-0384a840-0a96-411f-9245-ad8b5eb0e675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.747 2 DEBUG oslo_concurrency.lockutils [req-2e385770-cee7-4a20-84c2-9f8ff9243422 req-0384a840-0a96-411f-9245-ad8b5eb0e675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.747 2 DEBUG oslo_concurrency.lockutils [req-2e385770-cee7-4a20-84c2-9f8ff9243422 req-0384a840-0a96-411f-9245-ad8b5eb0e675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:34 compute-0 nova_compute[260603]: 2025-10-02 08:35:34.747 2 DEBUG nova.compute.manager [req-2e385770-cee7-4a20-84c2-9f8ff9243422 req-0384a840-0a96-411f-9245-ad8b5eb0e675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Processing event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.820 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.821 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 372 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.8 MiB/s wr, 233 op/s
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.877 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394135.8765073, 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.877 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] VM Started (Lifecycle Event)
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.880 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.884 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.888 2 INFO nova.virt.libvirt.driver [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance spawned successfully.
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.889 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.901 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.910 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.916 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.917 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.917 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.918 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.919 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.919 2 DEBUG nova.virt.libvirt.driver [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.956 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.957 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394135.8768318, 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.957 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] VM Paused (Lifecycle Event)
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.988 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.994 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394135.883295, 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:35 compute-0 nova_compute[260603]: 2025-10-02 08:35:35.994 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] VM Resumed (Lifecycle Event)
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.000 2 INFO nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Took 8.40 seconds to spawn the instance on the hypervisor.
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.000 2 DEBUG nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.013 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.017 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.046 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.091 2 INFO nova.compute.manager [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Took 9.45 seconds to build instance.
Oct 02 08:35:36 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 02 08:35:36 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000057.scope: Consumed 12.104s CPU time.
Oct 02 08:35:36 compute-0 systemd-machined[214636]: Machine qemu-101-instance-00000057 terminated.
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.277 2 DEBUG oslo_concurrency.lockutils [None req-9fa8ebc5-c07b-44d3-904f-29fe83878d20 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:36 compute-0 ceph-mon[74477]: pgmap v1668: 305 pgs: 305 active+clean; 372 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.8 MiB/s wr, 233 op/s
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.831 2 INFO nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance shutdown successfully after 13 seconds.
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.837 2 INFO nova.virt.libvirt.driver [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance destroyed successfully.
Oct 02 08:35:36 compute-0 nova_compute[260603]: 2025-10-02 08:35:36.843 2 INFO nova.virt.libvirt.driver [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance destroyed successfully.
Oct 02 08:35:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 372 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.8 MiB/s wr, 233 op/s
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.200 2 INFO nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Deleting instance files /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb_del
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.201 2 INFO nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Deletion of /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb_del complete
Oct 02 08:35:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.387 2 DEBUG nova.compute.manager [req-11fb4555-2d21-48af-83e8-5536fcb50b24 req-1347b097-0146-4ed3-89bb-1cff48e0dc52 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.387 2 DEBUG oslo_concurrency.lockutils [req-11fb4555-2d21-48af-83e8-5536fcb50b24 req-1347b097-0146-4ed3-89bb-1cff48e0dc52 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.387 2 DEBUG oslo_concurrency.lockutils [req-11fb4555-2d21-48af-83e8-5536fcb50b24 req-1347b097-0146-4ed3-89bb-1cff48e0dc52 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.387 2 DEBUG oslo_concurrency.lockutils [req-11fb4555-2d21-48af-83e8-5536fcb50b24 req-1347b097-0146-4ed3-89bb-1cff48e0dc52 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.387 2 DEBUG nova.compute.manager [req-11fb4555-2d21-48af-83e8-5536fcb50b24 req-1347b097-0146-4ed3-89bb-1cff48e0dc52 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] No waiting events found dispatching network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.388 2 WARNING nova.compute.manager [req-11fb4555-2d21-48af-83e8-5536fcb50b24 req-1347b097-0146-4ed3-89bb-1cff48e0dc52 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received unexpected event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for instance with vm_state active and task_state None.
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.471 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.472 2 INFO nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Creating image(s)
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.510 2 DEBUG nova.storage.rbd_utils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.547 2 DEBUG nova.storage.rbd_utils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.579 2 DEBUG nova.storage.rbd_utils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.584 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.692 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.694 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.695 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.695 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.730 2 DEBUG nova.storage.rbd_utils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.734 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:37 compute-0 nova_compute[260603]: 2025-10-02 08:35:37.977 2 DEBUG nova.virt.libvirt.driver [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.088 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.151 2 DEBUG nova.storage.rbd_utils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] resizing rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.258 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.260 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Ensure instance console log exists: /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.261 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.261 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.262 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.265 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.271 2 WARNING nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.279 2 DEBUG nova.virt.libvirt.host [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.280 2 DEBUG nova.virt.libvirt.host [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.283 2 DEBUG nova.virt.libvirt.host [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.283 2 DEBUG nova.virt.libvirt.host [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.284 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.284 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.285 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.285 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.285 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.285 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.286 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.286 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.286 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.287 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.287 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.287 2 DEBUG nova.virt.hardware [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.287 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.312 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:38 compute-0 ceph-mon[74477]: pgmap v1669: 305 pgs: 305 active+clean; 372 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.8 MiB/s wr, 233 op/s
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002965616249918604 of space, bias 1.0, pg target 0.8896848749755812 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:35:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:35:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936503610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.773 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.800 2 DEBUG nova.storage.rbd_utils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:38 compute-0 nova_compute[260603]: 2025-10-02 08:35:38.805 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 305 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.2 MiB/s wr, 406 op/s
Oct 02 08:35:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153063173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.242 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.248 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <uuid>330c4cea-9fec-4ab5-8ce1-3820232464cb</uuid>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <name>instance-00000057</name>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerShowV247Test-server-1285796490</nova:name>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:35:38</nova:creationTime>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <nova:user uuid="d3671400936c4ef08f70278919c780ad">tempest-ServerShowV247Test-914637317-project-member</nova:user>
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <nova:project uuid="c3f40887ae8b47f58d2d6d9acd3ae2e3">tempest-ServerShowV247Test-914637317</nova:project>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <system>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <entry name="serial">330c4cea-9fec-4ab5-8ce1-3820232464cb</entry>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <entry name="uuid">330c4cea-9fec-4ab5-8ce1-3820232464cb</entry>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     </system>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <os>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   </os>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <features>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   </features>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/330c4cea-9fec-4ab5-8ce1-3820232464cb_disk">
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config">
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/console.log" append="off"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <video>
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     </video>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:35:39 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:35:39 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:35:39 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:35:39 compute-0 nova_compute[260603]: </domain>
Oct 02 08:35:39 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.360 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.361 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.373 2 INFO nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Using config drive
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.412 2 DEBUG nova.storage.rbd_utils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.452 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.458 2 INFO nova.compute.manager [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Rescuing
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.458 2 DEBUG oslo_concurrency.lockutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.459 2 DEBUG oslo_concurrency.lockutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquired lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.459 2 DEBUG nova.network.neutron [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:35:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2936503610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2153063173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.484337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394139484402, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1741, "num_deletes": 254, "total_data_size": 2499066, "memory_usage": 2539112, "flush_reason": "Manual Compaction"}
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.499 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'keypairs' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394139500724, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2460195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33590, "largest_seqno": 35330, "table_properties": {"data_size": 2452334, "index_size": 4611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17903, "raw_average_key_size": 20, "raw_value_size": 2435939, "raw_average_value_size": 2819, "num_data_blocks": 204, "num_entries": 864, "num_filter_entries": 864, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393987, "oldest_key_time": 1759393987, "file_creation_time": 1759394139, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 16703 microseconds, and 5486 cpu microseconds.
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.501054) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2460195 bytes OK
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.501123) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.503319) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.503337) EVENT_LOG_v1 {"time_micros": 1759394139503331, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.503352) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2491413, prev total WAL file size 2491413, number of live WAL files 2.
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.504212) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2402KB)], [74(8742KB)]
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394139504260, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11412280, "oldest_snapshot_seqno": -1}
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6017 keys, 9753021 bytes, temperature: kUnknown
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394139546396, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9753021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9710535, "index_size": 26312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 152080, "raw_average_key_size": 25, "raw_value_size": 9600428, "raw_average_value_size": 1595, "num_data_blocks": 1066, "num_entries": 6017, "num_filter_entries": 6017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394139, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.546579) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9753021 bytes
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.547635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 270.5 rd, 231.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 8.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 6543, records dropped: 526 output_compression: NoCompression
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.547649) EVENT_LOG_v1 {"time_micros": 1759394139547642, "job": 42, "event": "compaction_finished", "compaction_time_micros": 42191, "compaction_time_cpu_micros": 22809, "output_level": 6, "num_output_files": 1, "total_output_size": 9753021, "num_input_records": 6543, "num_output_records": 6017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394139548064, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394139549239, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.504156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.549261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.549266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.549267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.549268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:35:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:35:39.549270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.690 2 INFO nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Creating config drive at /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.694 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprbsqrpbg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.855 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprbsqrpbg" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.904 2 DEBUG nova.storage.rbd_utils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] rbd image 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:39 compute-0 nova_compute[260603]: 2025-10-02 08:35:39.914 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.118 2 DEBUG oslo_concurrency.processutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config 330c4cea-9fec-4ab5-8ce1-3820232464cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.122 2 INFO nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Deleting local config drive /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb/disk.config because it was imported into RBD.
Oct 02 08:35:40 compute-0 systemd-machined[214636]: New machine qemu-105-instance-00000057.
Oct 02 08:35:40 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000057.
Oct 02 08:35:40 compute-0 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct 02 08:35:40 compute-0 NetworkManager[45129]: <info>  [1759394140.3243] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:35:40 compute-0 ovn_controller[152344]: 2025-10-02T08:35:40Z|00836|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct 02 08:35:40 compute-0 ovn_controller[152344]: 2025-10-02T08:35:40Z|00837|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct 02 08:35:40 compute-0 ovn_controller[152344]: 2025-10-02T08:35:40Z|00838|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.364 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.368 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.370 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[23cb01c4-6219-4263-bf66-d621421c12fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.374 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:40 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 08:35:40 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000055.scope: Consumed 12.990s CPU time.
Oct 02 08:35:40 compute-0 systemd-machined[214636]: Machine qemu-102-instance-00000055 terminated.
Oct 02 08:35:40 compute-0 ceph-mon[74477]: pgmap v1670: 305 pgs: 305 active+clean; 305 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.2 MiB/s wr, 406 op/s
Oct 02 08:35:40 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[344054]: [NOTICE]   (344058) : haproxy version is 2.8.14-c23fe91
Oct 02 08:35:40 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[344054]: [NOTICE]   (344058) : path to executable is /usr/sbin/haproxy
Oct 02 08:35:40 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[344054]: [WARNING]  (344058) : Exiting Master process...
Oct 02 08:35:40 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[344054]: [ALERT]    (344058) : Current worker (344060) exited with code 143 (Terminated)
Oct 02 08:35:40 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[344054]: [WARNING]  (344058) : All workers exited. Exiting... (0)
Oct 02 08:35:40 compute-0 systemd[1]: libpod-344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842.scope: Deactivated successfully.
Oct 02 08:35:40 compute-0 podman[345381]: 2025-10-02 08:35:40.53610109 +0000 UTC m=+0.055103658 container died 344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.576 2 DEBUG nova.compute.manager [req-8436f733-250c-456f-9acc-18bfca6be207 req-19dece45-e3ee-4783-af92-1f9de4b226e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.577 2 DEBUG oslo_concurrency.lockutils [req-8436f733-250c-456f-9acc-18bfca6be207 req-19dece45-e3ee-4783-af92-1f9de4b226e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.577 2 DEBUG oslo_concurrency.lockutils [req-8436f733-250c-456f-9acc-18bfca6be207 req-19dece45-e3ee-4783-af92-1f9de4b226e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.577 2 DEBUG oslo_concurrency.lockutils [req-8436f733-250c-456f-9acc-18bfca6be207 req-19dece45-e3ee-4783-af92-1f9de4b226e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.577 2 DEBUG nova.compute.manager [req-8436f733-250c-456f-9acc-18bfca6be207 req-19dece45-e3ee-4783-af92-1f9de4b226e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.578 2 WARNING nova.compute.manager [req-8436f733-250c-456f-9acc-18bfca6be207 req-19dece45-e3ee-4783-af92-1f9de4b226e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state powering-off.
Oct 02 08:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842-userdata-shm.mount: Deactivated successfully.
Oct 02 08:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8475d22b21271e45292b359d590037976a0dad60191ce705500bd7e5483420-merged.mount: Deactivated successfully.
Oct 02 08:35:40 compute-0 podman[345381]: 2025-10-02 08:35:40.598545086 +0000 UTC m=+0.117547614 container cleanup 344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:35:40 compute-0 systemd[1]: libpod-conmon-344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842.scope: Deactivated successfully.
Oct 02 08:35:40 compute-0 podman[345450]: 2025-10-02 08:35:40.657560961 +0000 UTC m=+0.036788788 container remove 344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.662 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c989faa-8708-4290-9a82-9911addf013c]: (4, ('Thu Oct  2 08:35:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842)\n344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842\nThu Oct  2 08:35:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842)\n344f7d3fd08381c463cb8dcdcd1cb628ac7dd44917c7f9a40e01d802dd938842\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.664 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[87f477ec-927a-4144-b087-1a97501d7bba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.665 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:40 compute-0 kernel: tap74f187c2-70: left promiscuous mode
Oct 02 08:35:40 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.723 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2080310a-14d6-4a8e-9aaf-f1d9a3e55794]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.743 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d55d6f-71f3-4845-9f7d-0d1862d43fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1da34f6c-369a-4b79-a497-0120aea39577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc60d64-d57f-459d-95c7-e74220e951d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499821, 'reachable_time': 32154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345478, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.769 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:35:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:40.769 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6ba581-38ba-4543-b6d9-aac6cabf29a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:40.999 2 INFO nova.virt.libvirt.driver [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance shutdown successfully after 13 seconds.
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.007 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.008 2 DEBUG nova.objects.instance [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'numa_topology' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.024 2 DEBUG nova.compute.manager [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.083 2 DEBUG oslo_concurrency.lockutils [None req-ff4681aa-575f-4c22-a683-7a94b8ecfc78 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 305 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.5 MiB/s wr, 378 op/s
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.295 2 DEBUG nova.compute.manager [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.296 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.296 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 330c4cea-9fec-4ab5-8ce1-3820232464cb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.297 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394141.2944968, 330c4cea-9fec-4ab5-8ce1-3820232464cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.297 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] VM Resumed (Lifecycle Event)
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.304 2 INFO nova.virt.libvirt.driver [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance spawned successfully.
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.304 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.338 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.344 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.348 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.349 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.349 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.349 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.350 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.350 2 DEBUG nova.virt.libvirt.driver [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.389 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.389 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394141.2947142, 330c4cea-9fec-4ab5-8ce1-3820232464cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.390 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] VM Started (Lifecycle Event)
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.415 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.419 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.423 2 DEBUG nova.compute.manager [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.443 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.466 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.467 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.467 2 DEBUG nova.objects.instance [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.512 2 DEBUG oslo_concurrency.lockutils [None req-0637ab63-7b7b-4e34-b663-be76792b2318 d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.717 2 DEBUG nova.network.neutron [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Updating instance_info_cache with network_info: [{"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:41 compute-0 nova_compute[260603]: 2025-10-02 08:35:41.740 2 DEBUG oslo_concurrency.lockutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Releasing lock "refresh_cache-5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:42 compute-0 ceph-mon[74477]: pgmap v1671: 305 pgs: 305 active+clean; 305 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.5 MiB/s wr, 378 op/s
Oct 02 08:35:42 compute-0 nova_compute[260603]: 2025-10-02 08:35:42.614 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:35:42 compute-0 nova_compute[260603]: 2025-10-02 08:35:42.928 2 DEBUG nova.compute.manager [req-e3f05321-685e-45f4-8e50-23e48c30a083 req-7737d41a-4c93-4abc-843f-388094e3c3ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:42 compute-0 nova_compute[260603]: 2025-10-02 08:35:42.929 2 DEBUG oslo_concurrency.lockutils [req-e3f05321-685e-45f4-8e50-23e48c30a083 req-7737d41a-4c93-4abc-843f-388094e3c3ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:42 compute-0 nova_compute[260603]: 2025-10-02 08:35:42.930 2 DEBUG oslo_concurrency.lockutils [req-e3f05321-685e-45f4-8e50-23e48c30a083 req-7737d41a-4c93-4abc-843f-388094e3c3ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:42 compute-0 nova_compute[260603]: 2025-10-02 08:35:42.930 2 DEBUG oslo_concurrency.lockutils [req-e3f05321-685e-45f4-8e50-23e48c30a083 req-7737d41a-4c93-4abc-843f-388094e3c3ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:42 compute-0 nova_compute[260603]: 2025-10-02 08:35:42.930 2 DEBUG nova.compute.manager [req-e3f05321-685e-45f4-8e50-23e48c30a083 req-7737d41a-4c93-4abc-843f-388094e3c3ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:42 compute-0 nova_compute[260603]: 2025-10-02 08:35:42.931 2 WARNING nova.compute.manager [req-e3f05321-685e-45f4-8e50-23e48c30a083 req-7737d41a-4c93-4abc-843f-388094e3c3ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state stopped and task_state None.
Oct 02 08:35:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 341 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 7.9 MiB/s wr, 472 op/s
Oct 02 08:35:43 compute-0 nova_compute[260603]: 2025-10-02 08:35:43.216 2 DEBUG nova.objects.instance [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:43 compute-0 nova_compute[260603]: 2025-10-02 08:35:43.246 2 DEBUG oslo_concurrency.lockutils [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:43 compute-0 nova_compute[260603]: 2025-10-02 08:35:43.246 2 DEBUG oslo_concurrency.lockutils [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:43 compute-0 nova_compute[260603]: 2025-10-02 08:35:43.247 2 DEBUG nova.network.neutron [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:35:43 compute-0 nova_compute[260603]: 2025-10-02 08:35:43.247 2 DEBUG nova.objects.instance [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'info_cache' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:43 compute-0 nova_compute[260603]: 2025-10-02 08:35:43.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:43 compute-0 nova_compute[260603]: 2025-10-02 08:35:43.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.431 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "330c4cea-9fec-4ab5-8ce1-3820232464cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.431 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "330c4cea-9fec-4ab5-8ce1-3820232464cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.431 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "330c4cea-9fec-4ab5-8ce1-3820232464cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.431 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "330c4cea-9fec-4ab5-8ce1-3820232464cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.432 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "330c4cea-9fec-4ab5-8ce1-3820232464cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.432 2 INFO nova.compute.manager [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Terminating instance
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.433 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "refresh_cache-330c4cea-9fec-4ab5-8ce1-3820232464cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.433 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquired lock "refresh_cache-330c4cea-9fec-4ab5-8ce1-3820232464cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.433 2 DEBUG nova.network.neutron [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:35:44 compute-0 ceph-mon[74477]: pgmap v1672: 305 pgs: 305 active+clean; 341 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 7.9 MiB/s wr, 472 op/s
Oct 02 08:35:44 compute-0 nova_compute[260603]: 2025-10-02 08:35:44.764 2 DEBUG nova.network.neutron [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:35:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 341 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.9 MiB/s wr, 266 op/s
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.513 2 DEBUG nova.network.neutron [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.542 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Releasing lock "refresh_cache-330c4cea-9fec-4ab5-8ce1-3820232464cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.542 2 DEBUG nova.compute.manager [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:35:45 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 02 08:35:45 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000057.scope: Consumed 5.002s CPU time.
Oct 02 08:35:45 compute-0 systemd-machined[214636]: Machine qemu-105-instance-00000057 terminated.
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.650 2 DEBUG nova.network.neutron [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.684 2 DEBUG oslo_concurrency.lockutils [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.714 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.714 2 DEBUG nova.objects.instance [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'numa_topology' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.732 2 DEBUG nova.objects.instance [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.751 2 DEBUG nova.virt.libvirt.vif [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:35:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.752 2 DEBUG nova.network.os_vif_util [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.752 2 DEBUG nova.network.os_vif_util [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.755 2 DEBUG os_vif [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap961da5ba-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.765 2 INFO os_vif [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.769 2 INFO nova.virt.libvirt.driver [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance destroyed successfully.
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.770 2 DEBUG nova.objects.instance [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'resources' on Instance uuid 330c4cea-9fec-4ab5-8ce1-3820232464cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.776 2 DEBUG nova.virt.libvirt.driver [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start _get_guest_xml network_info=[{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.780 2 WARNING nova.virt.libvirt.driver [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.785 2 DEBUG nova.virt.libvirt.host [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.786 2 DEBUG nova.virt.libvirt.host [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.802 2 DEBUG nova.virt.libvirt.host [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.803 2 DEBUG nova.virt.libvirt.host [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.803 2 DEBUG nova.virt.libvirt.driver [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.803 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.804 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.804 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.804 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.804 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.805 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.805 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.806 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.806 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.806 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.806 2 DEBUG nova.virt.hardware [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.807 2 DEBUG nova.objects.instance [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:45 compute-0 nova_compute[260603]: 2025-10-02 08:35:45.826 2 DEBUG oslo_concurrency.processutils [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.181 2 INFO nova.virt.libvirt.driver [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Deleting instance files /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb_del
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.182 2 INFO nova.virt.libvirt.driver [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Deletion of /var/lib/nova/instances/330c4cea-9fec-4ab5-8ce1-3820232464cb_del complete
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.234 2 INFO nova.compute.manager [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.234 2 DEBUG oslo.service.loopingcall [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.235 2 DEBUG nova.compute.manager [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.235 2 DEBUG nova.network.neutron [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:35:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559914036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.300 2 DEBUG oslo_concurrency.processutils [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.329 2 DEBUG oslo_concurrency.processutils [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:46 compute-0 ovn_controller[152344]: 2025-10-02T08:35:46Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:d9:a2 10.100.0.13
Oct 02 08:35:46 compute-0 ovn_controller[152344]: 2025-10-02T08:35:46Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:d9:a2 10.100.0.13
Oct 02 08:35:46 compute-0 ceph-mon[74477]: pgmap v1673: 305 pgs: 305 active+clean; 341 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.9 MiB/s wr, 266 op/s
Oct 02 08:35:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2559914036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.793 2 DEBUG nova.network.neutron [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:35:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020052789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.809 2 DEBUG nova.network.neutron [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.825 2 DEBUG oslo_concurrency.processutils [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.827 2 DEBUG nova.virt.libvirt.vif [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:35:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.828 2 DEBUG nova.network.os_vif_util [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.829 2 DEBUG nova.network.os_vif_util [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.831 2 DEBUG nova.objects.instance [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.834 2 INFO nova.compute.manager [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Took 0.60 seconds to deallocate network for instance.
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.846 2 DEBUG nova.virt.libvirt.driver [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <uuid>ba2cf934-ce76-4de7-a495-285f144bdab7</uuid>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <name>instance-00000055</name>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestJSON-server-276449458</nova:name>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:35:45</nova:creationTime>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <nova:port uuid="961da5ba-b0ac-4a87-a74c-26d0d2d2bf50">
Oct 02 08:35:46 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <system>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <entry name="serial">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <entry name="uuid">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </system>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <os>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   </os>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <features>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   </features>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk">
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config">
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:46 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:bb:af:04"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <target dev="tap961da5ba-b0"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/console.log" append="off"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <video>
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </video>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:35:46 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:35:46 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:35:46 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:35:46 compute-0 nova_compute[260603]: </domain>
Oct 02 08:35:46 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.848 2 DEBUG nova.virt.libvirt.driver [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.849 2 DEBUG nova.virt.libvirt.driver [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.850 2 DEBUG nova.virt.libvirt.vif [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:35:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.850 2 DEBUG nova.network.os_vif_util [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.852 2 DEBUG nova.network.os_vif_util [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.852 2 DEBUG os_vif [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap961da5ba-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap961da5ba-b0, col_values=(('external_ids', {'iface-id': '961da5ba-b0ac-4a87-a74c-26d0d2d2bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:af:04', 'vm-uuid': 'ba2cf934-ce76-4de7-a495-285f144bdab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:46 compute-0 NetworkManager[45129]: <info>  [1759394146.8628] manager: (tap961da5ba-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.869 2 INFO os_vif [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.879 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.880 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:46 compute-0 kernel: tap961da5ba-b0: entered promiscuous mode
Oct 02 08:35:46 compute-0 systemd-udevd[345482]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:35:46 compute-0 NetworkManager[45129]: <info>  [1759394146.9614] manager: (tap961da5ba-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Oct 02 08:35:46 compute-0 ovn_controller[152344]: 2025-10-02T08:35:46Z|00839|binding|INFO|Claiming lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for this chassis.
Oct 02 08:35:46 compute-0 ovn_controller[152344]: 2025-10-02T08:35:46Z|00840|binding|INFO|961da5ba-b0ac-4a87-a74c-26d0d2d2bf50: Claiming fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:35:46 compute-0 nova_compute[260603]: 2025-10-02 08:35:46.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:46.973 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:35:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:46.975 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis
Oct 02 08:35:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:46.977 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:35:46 compute-0 NetworkManager[45129]: <info>  [1759394146.9874] device (tap961da5ba-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:35:46 compute-0 NetworkManager[45129]: <info>  [1759394146.9886] device (tap961da5ba-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:35:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:46.994 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46018912-9e06-459e-8cf8-fb65e348975e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:46.995 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74f187c2-71 in ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:35:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:46.998 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74f187c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:35:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:46.998 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2c45cab0-be37-47af-9d2e-b8e7dd9dc798]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.000 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f5a65c-d08e-40f6-9b18-b14928005938]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_controller[152344]: 2025-10-02T08:35:47Z|00841|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 ovn-installed in OVS
Oct 02 08:35:47 compute-0 ovn_controller[152344]: 2025-10-02T08:35:47Z|00842|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 up in Southbound
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.023 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a4057a8a-6657-40a6-a543-c12e1e9b198c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 systemd-machined[214636]: New machine qemu-106-instance-00000055.
Oct 02 08:35:47 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-00000055.
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.057 2 DEBUG oslo_concurrency.processutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.065 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b68de00-f86d-4f02-9ddd-618b6eb36c36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 341 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.9 MiB/s wr, 266 op/s
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.119 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[92b1bff6-f204-4937-8eac-1af07c17976a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 NetworkManager[45129]: <info>  [1759394147.1376] manager: (tap74f187c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/345)
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.136 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[342a5bdc-21b7-466f-924c-7afbd166f08b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.176 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[938e884e-9ef1-4f22-9f64-ffe3416791f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.181 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cc914a76-53c1-4ad4-a177-b577d9fe718a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 NetworkManager[45129]: <info>  [1759394147.2076] device (tap74f187c2-70): carrier: link connected
Oct 02 08:35:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.215 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d5921345-8d01-4eb9-8303-bfc0bff85ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.218 2 DEBUG nova.compute.manager [req-7808fc99-ad0e-4d8b-9697-a4dcdac1535a req-68303770-0910-4102-9308-a431ed0d27d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.218 2 DEBUG oslo_concurrency.lockutils [req-7808fc99-ad0e-4d8b-9697-a4dcdac1535a req-68303770-0910-4102-9308-a431ed0d27d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.219 2 DEBUG oslo_concurrency.lockutils [req-7808fc99-ad0e-4d8b-9697-a4dcdac1535a req-68303770-0910-4102-9308-a431ed0d27d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.219 2 DEBUG oslo_concurrency.lockutils [req-7808fc99-ad0e-4d8b-9697-a4dcdac1535a req-68303770-0910-4102-9308-a431ed0d27d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.219 2 DEBUG nova.compute.manager [req-7808fc99-ad0e-4d8b-9697-a4dcdac1535a req-68303770-0910-4102-9308-a431ed0d27d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.219 2 WARNING nova.compute.manager [req-7808fc99-ad0e-4d8b-9697-a4dcdac1535a req-68303770-0910-4102-9308-a431ed0d27d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state stopped and task_state powering-on.
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.237 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a867848-cbfe-4731-a45e-cf9f647f1259]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502581, 'reachable_time': 38617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345635, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f8aae875-6a1a-474a-9912-b845873b7929]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7f62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502581, 'tstamp': 502581}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345636, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.278 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d21e245d-abd3-4f1f-9db6-46a6770008fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502581, 'reachable_time': 38617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345637, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[177e728f-048e-4350-86e7-a54bf71024c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.372 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6512b37-fc56-454f-9181-5168b4782b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.374 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.374 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.374 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:47 compute-0 kernel: tap74f187c2-70: entered promiscuous mode
Oct 02 08:35:47 compute-0 NetworkManager[45129]: <info>  [1759394147.3767] manager: (tap74f187c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.378 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:47 compute-0 ovn_controller[152344]: 2025-10-02T08:35:47Z|00843|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.398 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.399 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe968ac-aaea-4b17-95cf-49ab97539f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.400 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:35:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:47.400 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'env', 'PROCESS_TAG=haproxy-74f187c2-780c-418d-98eb-b25294872ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74f187c2-780c-418d-98eb-b25294872ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:35:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:35:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610840911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4020052789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2610840911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.521 2 DEBUG oslo_concurrency.processutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.530 2 DEBUG nova.compute.provider_tree [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.556 2 DEBUG nova.scheduler.client.report [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.590 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.631 2 INFO nova.scheduler.client.report [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Deleted allocations for instance 330c4cea-9fec-4ab5-8ce1-3820232464cb
Oct 02 08:35:47 compute-0 nova_compute[260603]: 2025-10-02 08:35:47.727 2 DEBUG oslo_concurrency.lockutils [None req-bdf70103-8b30-4807-82b0-d93ad3c64bde d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "330c4cea-9fec-4ab5-8ce1-3820232464cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:47 compute-0 podman[345713]: 2025-10-02 08:35:47.787060223 +0000 UTC m=+0.061476799 container create baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:35:47 compute-0 systemd[1]: Started libpod-conmon-baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255.scope.
Oct 02 08:35:47 compute-0 podman[345713]: 2025-10-02 08:35:47.751938338 +0000 UTC m=+0.026354994 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:35:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:35:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ccdd452e351449e79cd5b4758f9f640bab42757ae6e60bc029173c5d4ae257/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:35:47 compute-0 podman[345713]: 2025-10-02 08:35:47.887074379 +0000 UTC m=+0.161491025 container init baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:35:47 compute-0 podman[345713]: 2025-10-02 08:35:47.893948766 +0000 UTC m=+0.168365372 container start baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 08:35:47 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[345729]: [NOTICE]   (345733) : New worker (345735) forked
Oct 02 08:35:47 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[345729]: [NOTICE]   (345733) : Loading success.
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.009 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ba2cf934-ce76-4de7-a495-285f144bdab7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.009 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394148.0083442, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.010 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.014 2 DEBUG nova.compute.manager [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.018 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance rebooted successfully.
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.018 2 DEBUG nova.compute.manager [None req-820fcde0-c951-40da-a00b-1b27297a8395 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.047 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.051 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.084 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.085 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394148.0130026, ba2cf934-ce76-4de7-a495-285f144bdab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.085 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Started (Lifecycle Event)
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.104 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.107 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.229 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.230 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.230 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.231 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.231 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.232 2 INFO nova.compute.manager [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Terminating instance
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.233 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "refresh_cache-5cc7f7cd-c574-48a6-be3a-07161f94bd7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.233 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquired lock "refresh_cache-5cc7f7cd-c574-48a6-be3a-07161f94bd7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.234 2 DEBUG nova.network.neutron [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:48 compute-0 ovn_controller[152344]: 2025-10-02T08:35:48Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:1b:d0 10.100.0.5
Oct 02 08:35:48 compute-0 ovn_controller[152344]: 2025-10-02T08:35:48Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:1b:d0 10.100.0.5
Oct 02 08:35:48 compute-0 ceph-mon[74477]: pgmap v1674: 305 pgs: 305 active+clean; 341 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.9 MiB/s wr, 266 op/s
Oct 02 08:35:48 compute-0 nova_compute[260603]: 2025-10-02 08:35:48.819 2 DEBUG nova.network.neutron [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:35:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 353 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.1 MiB/s wr, 407 op/s
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.196 2 DEBUG nova.network.neutron [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.218 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Releasing lock "refresh_cache-5cc7f7cd-c574-48a6-be3a-07161f94bd7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.218 2 DEBUG nova.compute.manager [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:35:49 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 02 08:35:49 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000056.scope: Consumed 13.372s CPU time.
Oct 02 08:35:49 compute-0 systemd-machined[214636]: Machine qemu-100-instance-00000056 terminated.
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.399 2 DEBUG nova.compute.manager [req-30459006-18d5-4f2f-8c3f-70acf54297c0 req-56dffee9-8525-42ff-9531-83885ddf4d0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.400 2 DEBUG oslo_concurrency.lockutils [req-30459006-18d5-4f2f-8c3f-70acf54297c0 req-56dffee9-8525-42ff-9531-83885ddf4d0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.400 2 DEBUG oslo_concurrency.lockutils [req-30459006-18d5-4f2f-8c3f-70acf54297c0 req-56dffee9-8525-42ff-9531-83885ddf4d0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.400 2 DEBUG oslo_concurrency.lockutils [req-30459006-18d5-4f2f-8c3f-70acf54297c0 req-56dffee9-8525-42ff-9531-83885ddf4d0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.400 2 DEBUG nova.compute.manager [req-30459006-18d5-4f2f-8c3f-70acf54297c0 req-56dffee9-8525-42ff-9531-83885ddf4d0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.401 2 WARNING nova.compute.manager [req-30459006-18d5-4f2f-8c3f-70acf54297c0 req-56dffee9-8525-42ff-9531-83885ddf4d0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.446 2 INFO nova.virt.libvirt.driver [-] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Instance destroyed successfully.
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.446 2 DEBUG nova.objects.instance [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lazy-loading 'resources' on Instance uuid 5cc7f7cd-c574-48a6-be3a-07161f94bd7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.829 2 INFO nova.virt.libvirt.driver [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Deleting instance files /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f_del
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.830 2 INFO nova.virt.libvirt.driver [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Deletion of /var/lib/nova/instances/5cc7f7cd-c574-48a6-be3a-07161f94bd7f_del complete
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.879 2 INFO nova.compute.manager [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Took 0.66 seconds to destroy the instance on the hypervisor.
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.880 2 DEBUG oslo.service.loopingcall [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.880 2 DEBUG nova.compute.manager [-] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:35:49 compute-0 nova_compute[260603]: 2025-10-02 08:35:49.880 2 DEBUG nova.network.neutron [-] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.137 2 DEBUG nova.network.neutron [-] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.150 2 DEBUG nova.network.neutron [-] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.163 2 INFO nova.compute.manager [-] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Took 0.28 seconds to deallocate network for instance.
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.208 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.209 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.316 2 DEBUG oslo_concurrency.processutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:50 compute-0 ceph-mon[74477]: pgmap v1675: 305 pgs: 305 active+clean; 353 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.1 MiB/s wr, 407 op/s
Oct 02 08:35:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:35:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236621207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.783 2 DEBUG oslo_concurrency.processutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.791 2 DEBUG nova.compute.provider_tree [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.830 2 DEBUG nova.scheduler.client.report [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.864 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.890 2 INFO nova.scheduler.client.report [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Deleted allocations for instance 5cc7f7cd-c574-48a6-be3a-07161f94bd7f
Oct 02 08:35:50 compute-0 nova_compute[260603]: 2025-10-02 08:35:50.971 2 DEBUG oslo_concurrency.lockutils [None req-8b2bb1c7-d578-4703-aadc-2a20d1bc48ba d3671400936c4ef08f70278919c780ad c3f40887ae8b47f58d2d6d9acd3ae2e3 - - default default] Lock "5cc7f7cd-c574-48a6-be3a-07161f94bd7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 353 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.6 MiB/s wr, 234 op/s
Oct 02 08:35:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4236621207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:35:51 compute-0 nova_compute[260603]: 2025-10-02 08:35:51.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:52 compute-0 ceph-mon[74477]: pgmap v1676: 305 pgs: 305 active+clean; 353 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.6 MiB/s wr, 234 op/s
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.658 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.719 2 INFO nova.compute.manager [None req-bf4e0555-d3cc-485b-8570-80dfdbe26e49 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Pausing
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.720 2 DEBUG nova.objects.instance [None req-bf4e0555-d3cc-485b-8570-80dfdbe26e49 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.753 2 DEBUG nova.compute.manager [None req-bf4e0555-d3cc-485b-8570-80dfdbe26e49 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.755 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394152.7550783, ba2cf934-ce76-4de7-a495-285f144bdab7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.755 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Paused (Lifecycle Event)
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.778 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.781 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:52 compute-0 nova_compute[260603]: 2025-10-02 08:35:52.809 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 08:35:53 compute-0 podman[345790]: 2025-10-02 08:35:53.043546741 +0000 UTC m=+0.095866883 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:35:53 compute-0 podman[345789]: 2025-10-02 08:35:53.094966286 +0000 UTC m=+0.149295638 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:35:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 281 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 353 op/s
Oct 02 08:35:53 compute-0 nova_compute[260603]: 2025-10-02 08:35:53.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:54 compute-0 ceph-mon[74477]: pgmap v1677: 305 pgs: 305 active+clean; 281 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 353 op/s
Oct 02 08:35:54 compute-0 kernel: tapbf1797c8-0b (unregistering): left promiscuous mode
Oct 02 08:35:54 compute-0 NetworkManager[45129]: <info>  [1759394154.9876] device (tapbf1797c8-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:35:54 compute-0 nova_compute[260603]: 2025-10-02 08:35:54.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:54 compute-0 ovn_controller[152344]: 2025-10-02T08:35:54Z|00844|binding|INFO|Releasing lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 from this chassis (sb_readonly=0)
Oct 02 08:35:54 compute-0 ovn_controller[152344]: 2025-10-02T08:35:54Z|00845|binding|INFO|Setting lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 down in Southbound
Oct 02 08:35:54 compute-0 ovn_controller[152344]: 2025-10-02T08:35:54Z|00846|binding|INFO|Removing iface tapbf1797c8-0b ovn-installed in OVS
Oct 02 08:35:54 compute-0 nova_compute[260603]: 2025-10-02 08:35:54.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:55 compute-0 nova_compute[260603]: 2025-10-02 08:35:55.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:55 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct 02 08:35:55 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000059.scope: Consumed 13.207s CPU time.
Oct 02 08:35:55 compute-0 systemd-machined[214636]: Machine qemu-104-instance-00000059 terminated.
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.069 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:1b:d0 10.100.0.5'], port_security=['fa:16:3e:62:1b:d0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5f692626-ba1b-4b0c-9046-4cd9ef3c25cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17cfbd28-1f2b-4eb6-ab31-e8960544023c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af1b7ebd-469f-49ea-a9a9-ba2e42baffc7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf1797c8-0b08-41b3-a518-4cbacdbac9a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.071 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf1797c8-0b08-41b3-a518-4cbacdbac9a0 in datapath 4aed206b-8fa2-4bbf-a202-0f7b63ace915 unbound from our chassis
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.072 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4aed206b-8fa2-4bbf-a202-0f7b63ace915
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.093 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5e1a8d-d9b9-43be-8792-b1dacece6497]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 281 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 259 op/s
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.124 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[35c306a1-5c38-43c0-aca4-3d765621d9a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.127 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5f0285-b989-4d8c-91e6-5d77e36a625e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.159 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0f65c521-c171-43df-abf5-4bae15047dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.180 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b9ff61-b435-49f7-8fc3-1fbf25bcd1a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4aed206b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:78:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501038, 'reachable_time': 34238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345846, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b62d45-4e00-40cd-aadf-7221f296dfaa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4aed206b-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501050, 'tstamp': 501050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345847, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4aed206b-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501054, 'tstamp': 501054}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345847, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.201 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4aed206b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:55 compute-0 nova_compute[260603]: 2025-10-02 08:35:55.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:55 compute-0 nova_compute[260603]: 2025-10-02 08:35:55.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.207 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4aed206b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.207 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.208 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4aed206b-80, col_values=(('external_ids', {'iface-id': '2156689f-0588-4bb0-992e-a8d6a79bb745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:35:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:35:55.208 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:35:55 compute-0 nova_compute[260603]: 2025-10-02 08:35:55.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:55 compute-0 nova_compute[260603]: 2025-10-02 08:35:55.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:55 compute-0 nova_compute[260603]: 2025-10-02 08:35:55.674 2 INFO nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance shutdown successfully after 13 seconds.
Oct 02 08:35:55 compute-0 nova_compute[260603]: 2025-10-02 08:35:55.683 2 INFO nova.virt.libvirt.driver [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance destroyed successfully.
Oct 02 08:35:55 compute-0 nova_compute[260603]: 2025-10-02 08:35:55.683 2 DEBUG nova.objects.instance [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'numa_topology' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.562 2 INFO nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Attempting rescue
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.563 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 08:35:56 compute-0 ceph-mon[74477]: pgmap v1678: 305 pgs: 305 active+clean; 281 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 259 op/s
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.570 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.571 2 INFO nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Creating image(s)
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.606 2 DEBUG nova.storage.rbd_utils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.612 2 DEBUG nova.objects.instance [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.663 2 DEBUG nova.storage.rbd_utils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.702 2 DEBUG nova.storage.rbd_utils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.709 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.827 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.828 2 DEBUG oslo_concurrency.lockutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.828 2 DEBUG oslo_concurrency.lockutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.829 2 DEBUG oslo_concurrency.lockutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.855 2 DEBUG nova.storage.rbd_utils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.859 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:56 compute-0 nova_compute[260603]: 2025-10-02 08:35:56.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.035 2 DEBUG nova.compute.manager [req-c2f6afd5-8247-4588-96f0-df773ec78d49 req-1d459106-a270-403b-a991-1c69e103324a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-unplugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.036 2 DEBUG oslo_concurrency.lockutils [req-c2f6afd5-8247-4588-96f0-df773ec78d49 req-1d459106-a270-403b-a991-1c69e103324a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.036 2 DEBUG oslo_concurrency.lockutils [req-c2f6afd5-8247-4588-96f0-df773ec78d49 req-1d459106-a270-403b-a991-1c69e103324a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.036 2 DEBUG oslo_concurrency.lockutils [req-c2f6afd5-8247-4588-96f0-df773ec78d49 req-1d459106-a270-403b-a991-1c69e103324a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.037 2 DEBUG nova.compute.manager [req-c2f6afd5-8247-4588-96f0-df773ec78d49 req-1d459106-a270-403b-a991-1c69e103324a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] No waiting events found dispatching network-vif-unplugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.037 2 WARNING nova.compute.manager [req-c2f6afd5-8247-4588-96f0-df773ec78d49 req-1d459106-a270-403b-a991-1c69e103324a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received unexpected event network-vif-unplugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for instance with vm_state active and task_state rescuing.
Oct 02 08:35:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 281 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 259 op/s
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.140 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.141 2 DEBUG nova.objects.instance [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'migration_context' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.171 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.172 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Start _get_guest_xml network_info=[{"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "vif_mac": "fa:16:3e:62:1b:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.172 2 DEBUG nova.objects.instance [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'resources' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.194 2 WARNING nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.200 2 DEBUG nova.virt.libvirt.host [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.201 2 DEBUG nova.virt.libvirt.host [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.204 2 DEBUG nova.virt.libvirt.host [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.204 2 DEBUG nova.virt.libvirt.host [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.204 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.205 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.205 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.205 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.205 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.205 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.206 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.206 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.206 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.206 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.206 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.207 2 DEBUG nova.virt.hardware [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.207 2 DEBUG nova.objects.instance [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.226 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.367 2 INFO nova.compute.manager [None req-39b62796-7872-46cd-9b79-12256cd6e6a3 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Unpausing
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.369 2 DEBUG nova.objects.instance [None req-39b62796-7872-46cd-9b79-12256cd6e6a3 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.401 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394157.401182, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.402 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)
Oct 02 08:35:57 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.413 2 DEBUG nova.virt.libvirt.guest [None req-39b62796-7872-46cd-9b79-12256cd6e6a3 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.414 2 DEBUG nova.compute.manager [None req-39b62796-7872-46cd-9b79-12256cd6e6a3 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.421 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.428 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.471 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 02 08:35:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/691307269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.746 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:57 compute-0 nova_compute[260603]: 2025-10-02 08:35:57.747 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:35:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128147564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.211 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.213 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:35:58 compute-0 ceph-mon[74477]: pgmap v1679: 305 pgs: 305 active+clean; 281 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 259 op/s
Oct 02 08:35:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/691307269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/128147564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:35:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2290276532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.699 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.702 2 DEBUG nova.virt.libvirt.vif [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:35:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1207156767',display_name='tempest-ServerRescueNegativeTestJSON-server-1207156767',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1207156767',id=89,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:35:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44f1ad17ce794fbdbf606f465f6ab7ec',ramdisk_id='',reservation_id='r-k7oyv205',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1454054093',owner_user_name='tempest-ServerRescueNegativeTestJSON-1454054093-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:35:36Z,user_data=None,user_id='2fa0bf72f6f34b41ac7942357b5d7851',uuid=5f692626-ba1b-4b0c-9046-4cd9ef3c25cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "vif_mac": "fa:16:3e:62:1b:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.703 2 DEBUG nova.network.os_vif_util [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converting VIF {"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "vif_mac": "fa:16:3e:62:1b:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.705 2 DEBUG nova.network.os_vif_util [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:62:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=bf1797c8-0b08-41b3-a518-4cbacdbac9a0,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1797c8-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.707 2 DEBUG nova.objects.instance [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.736 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <uuid>5f692626-ba1b-4b0c-9046-4cd9ef3c25cd</uuid>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <name>instance-00000059</name>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1207156767</nova:name>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:35:57</nova:creationTime>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <nova:user uuid="2fa0bf72f6f34b41ac7942357b5d7851">tempest-ServerRescueNegativeTestJSON-1454054093-project-member</nova:user>
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <nova:project uuid="44f1ad17ce794fbdbf606f465f6ab7ec">tempest-ServerRescueNegativeTestJSON-1454054093</nova:project>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <nova:port uuid="bf1797c8-0b08-41b3-a518-4cbacdbac9a0">
Oct 02 08:35:58 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <system>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <entry name="serial">5f692626-ba1b-4b0c-9046-4cd9ef3c25cd</entry>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <entry name="uuid">5f692626-ba1b-4b0c-9046-4cd9ef3c25cd</entry>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </system>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <os>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   </os>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <features>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   </features>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.rescue">
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk">
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <target dev="vdb" bus="virtio"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config.rescue">
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </source>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:35:58 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:62:1b:d0"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <target dev="tapbf1797c8-0b"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/console.log" append="off"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <video>
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </video>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:35:58 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:35:58 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:35:58 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:35:58 compute-0 nova_compute[260603]: </domain>
Oct 02 08:35:58 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.759 2 INFO nova.virt.libvirt.driver [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance destroyed successfully.
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.858 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.859 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.861 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.861 2 DEBUG nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] No VIF found with MAC fa:16:3e:62:1b:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.862 2 INFO nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Using config drive
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.893 2 DEBUG nova.storage.rbd_utils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:35:58 compute-0 podman[346020]: 2025-10-02 08:35:58.917743673 +0000 UTC m=+0.103017906 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.922 2 DEBUG nova.objects.instance [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:58 compute-0 nova_compute[260603]: 2025-10-02 08:35:58.955 2 DEBUG nova.objects.instance [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'keypairs' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:35:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 327 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.1 MiB/s wr, 278 op/s
Oct 02 08:35:59 compute-0 nova_compute[260603]: 2025-10-02 08:35:59.296 2 DEBUG nova.compute.manager [req-906929b4-9b23-4e9f-a824-d5c2dab3e96f req-d29ec057-2683-4fac-9354-cd5449132ee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:35:59 compute-0 nova_compute[260603]: 2025-10-02 08:35:59.297 2 DEBUG oslo_concurrency.lockutils [req-906929b4-9b23-4e9f-a824-d5c2dab3e96f req-d29ec057-2683-4fac-9354-cd5449132ee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:35:59 compute-0 nova_compute[260603]: 2025-10-02 08:35:59.298 2 DEBUG oslo_concurrency.lockutils [req-906929b4-9b23-4e9f-a824-d5c2dab3e96f req-d29ec057-2683-4fac-9354-cd5449132ee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:35:59 compute-0 nova_compute[260603]: 2025-10-02 08:35:59.298 2 DEBUG oslo_concurrency.lockutils [req-906929b4-9b23-4e9f-a824-d5c2dab3e96f req-d29ec057-2683-4fac-9354-cd5449132ee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:35:59 compute-0 nova_compute[260603]: 2025-10-02 08:35:59.298 2 DEBUG nova.compute.manager [req-906929b4-9b23-4e9f-a824-d5c2dab3e96f req-d29ec057-2683-4fac-9354-cd5449132ee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] No waiting events found dispatching network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:35:59 compute-0 nova_compute[260603]: 2025-10-02 08:35:59.299 2 WARNING nova.compute.manager [req-906929b4-9b23-4e9f-a824-d5c2dab3e96f req-d29ec057-2683-4fac-9354-cd5449132ee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received unexpected event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for instance with vm_state active and task_state rescuing.
Oct 02 08:35:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2290276532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.028 2 INFO nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Creating config drive at /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config.rescue
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.035 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuw5omtda execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.203 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuw5omtda" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.247 2 DEBUG nova.storage.rbd_utils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] rbd image 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.253 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config.rescue 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.490 2 DEBUG oslo_concurrency.processutils [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config.rescue 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.492 2 INFO nova.virt.libvirt.driver [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Deleting local config drive /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd/disk.config.rescue because it was imported into RBD.
Oct 02 08:36:00 compute-0 kernel: tapbf1797c8-0b: entered promiscuous mode
Oct 02 08:36:00 compute-0 NetworkManager[45129]: <info>  [1759394160.5835] manager: (tapbf1797c8-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Oct 02 08:36:00 compute-0 ovn_controller[152344]: 2025-10-02T08:36:00Z|00847|binding|INFO|Claiming lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for this chassis.
Oct 02 08:36:00 compute-0 ovn_controller[152344]: 2025-10-02T08:36:00Z|00848|binding|INFO|bf1797c8-0b08-41b3-a518-4cbacdbac9a0: Claiming fa:16:3e:62:1b:d0 10.100.0.5
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:00 compute-0 ceph-mon[74477]: pgmap v1680: 305 pgs: 305 active+clean; 327 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.1 MiB/s wr, 278 op/s
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.596 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:1b:d0 10.100.0.5'], port_security=['fa:16:3e:62:1b:d0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5f692626-ba1b-4b0c-9046-4cd9ef3c25cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'neutron:revision_number': '5', 'neutron:security_group_ids': '17cfbd28-1f2b-4eb6-ab31-e8960544023c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af1b7ebd-469f-49ea-a9a9-ba2e42baffc7, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf1797c8-0b08-41b3-a518-4cbacdbac9a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.597 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf1797c8-0b08-41b3-a518-4cbacdbac9a0 in datapath 4aed206b-8fa2-4bbf-a202-0f7b63ace915 bound to our chassis
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.598 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4aed206b-8fa2-4bbf-a202-0f7b63ace915
Oct 02 08:36:00 compute-0 systemd-udevd[346107]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:36:00 compute-0 NetworkManager[45129]: <info>  [1759394160.6296] device (tapbf1797c8-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:36:00 compute-0 NetworkManager[45129]: <info>  [1759394160.6305] device (tapbf1797c8-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:36:00 compute-0 ovn_controller[152344]: 2025-10-02T08:36:00Z|00849|binding|INFO|Setting lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 ovn-installed in OVS
Oct 02 08:36:00 compute-0 ovn_controller[152344]: 2025-10-02T08:36:00Z|00850|binding|INFO|Setting lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 up in Southbound
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.638 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19ff4dd7-bc53-4aa3-a301-bc60418943ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:00 compute-0 systemd-machined[214636]: New machine qemu-107-instance-00000059.
Oct 02 08:36:00 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-00000059.
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.683 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac95b16-b69f-4a32-a9ef-dff906247edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.688 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c9ed4c-9861-4641-bf19-650a460a079c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.739 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[833205e5-135b-447b-822a-2f531fa803b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.764 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394145.7622533, 330c4cea-9fec-4ab5-8ce1-3820232464cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.765 2 INFO nova.compute.manager [-] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] VM Stopped (Lifecycle Event)
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.770 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3582f9f2-26ae-43ef-bf17-3566aa76b9ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4aed206b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:78:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501038, 'reachable_time': 34238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346123, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.790 2 DEBUG nova.compute.manager [None req-b0c7e654-cdfe-422c-b88f-7ab53af11a38 - - - - - -] [instance: 330c4cea-9fec-4ab5-8ce1-3820232464cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.794 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72bcc834-c84f-43c7-a74f-0efdf20013cc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4aed206b-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501050, 'tstamp': 501050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346124, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4aed206b-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501054, 'tstamp': 501054}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346124, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.795 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4aed206b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.798 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4aed206b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.799 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.799 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4aed206b-80, col_values=(('external_ids', {'iface-id': '2156689f-0588-4bb0-992e-a8d6a79bb745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:00.799 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:36:00 compute-0 nova_compute[260603]: 2025-10-02 08:36:00.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 327 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 137 op/s
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.395 2 DEBUG nova.compute.manager [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.396 2 DEBUG oslo_concurrency.lockutils [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.396 2 DEBUG oslo_concurrency.lockutils [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.396 2 DEBUG oslo_concurrency.lockutils [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.397 2 DEBUG nova.compute.manager [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] No waiting events found dispatching network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.397 2 WARNING nova.compute.manager [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received unexpected event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for instance with vm_state active and task_state rescuing.
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.397 2 DEBUG nova.compute.manager [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.397 2 DEBUG oslo_concurrency.lockutils [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.398 2 DEBUG oslo_concurrency.lockutils [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.398 2 DEBUG oslo_concurrency.lockutils [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.398 2 DEBUG nova.compute.manager [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] No waiting events found dispatching network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.398 2 WARNING nova.compute.manager [req-9f3b51e5-0a50-43fc-9530-c0a073be93e8 req-649f7e6f-214c-48b5-ba69-e6ebc68ea549 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received unexpected event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for instance with vm_state active and task_state rescuing.
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.686 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.687 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394161.6843903, 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.687 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] VM Resumed (Lifecycle Event)
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.694 2 DEBUG nova.compute.manager [None req-ed57061e-6589-4d08-91f3-6ed7fc6a521f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.739 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.745 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.789 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.790 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394161.6849306, 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.790 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] VM Started (Lifecycle Event)
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.813 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.817 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:36:01 compute-0 nova_compute[260603]: 2025-10-02 08:36:01.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:02 compute-0 podman[346185]: 2025-10-02 08:36:02.047571224 +0000 UTC m=+0.104926506 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 02 08:36:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:02 compute-0 ceph-mon[74477]: pgmap v1681: 305 pgs: 305 active+clean; 327 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 137 op/s
Oct 02 08:36:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.9 MiB/s wr, 180 op/s
Oct 02 08:36:03 compute-0 nova_compute[260603]: 2025-10-02 08:36:03.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.255 2 INFO nova.compute.manager [None req-a26434cf-1441-46e3-a820-b24df15f5d9c 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Pausing
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.256 2 DEBUG nova.objects.instance [None req-a26434cf-1441-46e3-a820-b24df15f5d9c 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'flavor' on Instance uuid c20fff86-9831-43b7-b579-48efe60be24b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.285 2 DEBUG nova.compute.manager [None req-a26434cf-1441-46e3-a820-b24df15f5d9c 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.286 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394164.2853577, c20fff86-9831-43b7-b579-48efe60be24b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.287 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] VM Paused (Lifecycle Event)
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.313 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.316 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.337 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 08:36:04 compute-0 ovn_controller[152344]: 2025-10-02T08:36:04Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.443 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394149.4430323, 5cc7f7cd-c574-48a6-be3a-07161f94bd7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.444 2 INFO nova.compute.manager [-] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] VM Stopped (Lifecycle Event)
Oct 02 08:36:04 compute-0 nova_compute[260603]: 2025-10-02 08:36:04.464 2 DEBUG nova.compute.manager [None req-3408ac6e-3eae-415a-80cf-80447f60c46b - - - - - -] [instance: 5cc7f7cd-c574-48a6-be3a-07161f94bd7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:04 compute-0 ceph-mon[74477]: pgmap v1682: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.9 MiB/s wr, 180 op/s
Oct 02 08:36:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 951 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 08:36:06 compute-0 ceph-mon[74477]: pgmap v1683: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 951 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 08:36:06 compute-0 nova_compute[260603]: 2025-10-02 08:36:06.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:07.063 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:07.065 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:36:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 08:36:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.360 2 INFO nova.compute.manager [None req-3dc1deb0-45b7-4982-80f9-cfb40cb0fc25 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Unpausing
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.362 2 DEBUG nova.objects.instance [None req-3dc1deb0-45b7-4982-80f9-cfb40cb0fc25 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'flavor' on Instance uuid c20fff86-9831-43b7-b579-48efe60be24b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.387 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394167.3869793, c20fff86-9831-43b7-b579-48efe60be24b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.387 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] VM Resumed (Lifecycle Event)
Oct 02 08:36:07 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.394 2 DEBUG nova.virt.libvirt.guest [None req-3dc1deb0-45b7-4982-80f9-cfb40cb0fc25 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.395 2 DEBUG nova.compute.manager [None req-3dc1deb0-45b7-4982-80f9-cfb40cb0fc25 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.406 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.409 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:36:07 compute-0 nova_compute[260603]: 2025-10-02 08:36:07.441 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 02 08:36:08 compute-0 nova_compute[260603]: 2025-10-02 08:36:08.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:08 compute-0 ceph-mon[74477]: pgmap v1684: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.110 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.111 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.111 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.112 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.112 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.114 2 INFO nova.compute.manager [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Terminating instance
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.116 2 DEBUG nova.compute.manager [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:36:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 02 08:36:09 compute-0 kernel: tapbf1797c8-0b (unregistering): left promiscuous mode
Oct 02 08:36:09 compute-0 NetworkManager[45129]: <info>  [1759394169.1821] device (tapbf1797c8-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:36:09 compute-0 ovn_controller[152344]: 2025-10-02T08:36:09Z|00851|binding|INFO|Releasing lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 from this chassis (sb_readonly=0)
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 ovn_controller[152344]: 2025-10-02T08:36:09Z|00852|binding|INFO|Setting lport bf1797c8-0b08-41b3-a518-4cbacdbac9a0 down in Southbound
Oct 02 08:36:09 compute-0 ovn_controller[152344]: 2025-10-02T08:36:09Z|00853|binding|INFO|Removing iface tapbf1797c8-0b ovn-installed in OVS
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.212 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:1b:d0 10.100.0.5'], port_security=['fa:16:3e:62:1b:d0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5f692626-ba1b-4b0c-9046-4cd9ef3c25cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'neutron:revision_number': '6', 'neutron:security_group_ids': '17cfbd28-1f2b-4eb6-ab31-e8960544023c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af1b7ebd-469f-49ea-a9a9-ba2e42baffc7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf1797c8-0b08-41b3-a518-4cbacdbac9a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.212 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf1797c8-0b08-41b3-a518-4cbacdbac9a0 in datapath 4aed206b-8fa2-4bbf-a202-0f7b63ace915 unbound from our chassis
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.213 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4aed206b-8fa2-4bbf-a202-0f7b63ace915
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.234 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7852afdc-9542-4291-825a-865b29248333]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:09 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct 02 08:36:09 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000059.scope: Consumed 8.562s CPU time.
Oct 02 08:36:09 compute-0 systemd-machined[214636]: Machine qemu-107-instance-00000059 terminated.
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.266 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1614e6c4-b928-4fa5-898b-4a47d0a90951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.270 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f14ab56f-79c1-4f30-989e-f352d52d4ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.302 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[12842c2c-733e-4233-addb-ad1447904e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.332 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6323b01a-0d86-4b2b-a67a-3b6440984ebb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4aed206b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:78:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501038, 'reachable_time': 34238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346216, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.351 2 INFO nova.virt.libvirt.driver [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Instance destroyed successfully.
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.351 2 DEBUG nova.objects.instance [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'resources' on Instance uuid 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.355 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1c50cd-098a-4a35-82d5-63cbf4532c57]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4aed206b-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501050, 'tstamp': 501050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346220, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4aed206b-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501054, 'tstamp': 501054}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346220, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.357 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4aed206b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.364 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4aed206b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.366 2 DEBUG nova.virt.libvirt.vif [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:35:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1207156767',display_name='tempest-ServerRescueNegativeTestJSON-server-1207156767',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1207156767',id=89,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:36:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44f1ad17ce794fbdbf606f465f6ab7ec',ramdisk_id='',reservation_id='r-k7oyv205',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1454054093',owner_user_name='tempest-ServerRescueNegativeTestJSON-1454054093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:36:01Z,user_data=None,user_id='2fa0bf72f6f34b41ac7942357b5d7851',uuid=5f692626-ba1b-4b0c-9046-4cd9ef3c25cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.364 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.365 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4aed206b-80, col_values=(('external_ids', {'iface-id': '2156689f-0588-4bb0-992e-a8d6a79bb745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:09.365 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.366 2 DEBUG nova.network.os_vif_util [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converting VIF {"id": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "address": "fa:16:3e:62:1b:d0", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1797c8-0b", "ovs_interfaceid": "bf1797c8-0b08-41b3-a518-4cbacdbac9a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.367 2 DEBUG nova.network.os_vif_util [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:62:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=bf1797c8-0b08-41b3-a518-4cbacdbac9a0,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1797c8-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.367 2 DEBUG os_vif [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=bf1797c8-0b08-41b3-a518-4cbacdbac9a0,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1797c8-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.369 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf1797c8-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.373 2 INFO os_vif [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=bf1797c8-0b08-41b3-a518-4cbacdbac9a0,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1797c8-0b')
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.906 2 DEBUG nova.compute.manager [req-33f3516d-f61f-4abc-a478-74c272789fe3 req-7dbc709a-26d2-434e-a190-0940ace6dc8e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-unplugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.908 2 DEBUG oslo_concurrency.lockutils [req-33f3516d-f61f-4abc-a478-74c272789fe3 req-7dbc709a-26d2-434e-a190-0940ace6dc8e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.909 2 DEBUG oslo_concurrency.lockutils [req-33f3516d-f61f-4abc-a478-74c272789fe3 req-7dbc709a-26d2-434e-a190-0940ace6dc8e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.909 2 DEBUG oslo_concurrency.lockutils [req-33f3516d-f61f-4abc-a478-74c272789fe3 req-7dbc709a-26d2-434e-a190-0940ace6dc8e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.910 2 DEBUG nova.compute.manager [req-33f3516d-f61f-4abc-a478-74c272789fe3 req-7dbc709a-26d2-434e-a190-0940ace6dc8e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] No waiting events found dispatching network-vif-unplugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:09 compute-0 nova_compute[260603]: 2025-10-02 08:36:09.911 2 DEBUG nova.compute.manager [req-33f3516d-f61f-4abc-a478-74c272789fe3 req-7dbc709a-26d2-434e-a190-0940ace6dc8e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-unplugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:36:10 compute-0 nova_compute[260603]: 2025-10-02 08:36:10.205 2 INFO nova.virt.libvirt.driver [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Deleting instance files /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_del
Oct 02 08:36:10 compute-0 nova_compute[260603]: 2025-10-02 08:36:10.207 2 INFO nova.virt.libvirt.driver [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Deletion of /var/lib/nova/instances/5f692626-ba1b-4b0c-9046-4cd9ef3c25cd_del complete
Oct 02 08:36:10 compute-0 nova_compute[260603]: 2025-10-02 08:36:10.268 2 INFO nova.compute.manager [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Took 1.15 seconds to destroy the instance on the hypervisor.
Oct 02 08:36:10 compute-0 nova_compute[260603]: 2025-10-02 08:36:10.269 2 DEBUG oslo.service.loopingcall [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:36:10 compute-0 nova_compute[260603]: 2025-10-02 08:36:10.270 2 DEBUG nova.compute.manager [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:36:10 compute-0 nova_compute[260603]: 2025-10-02 08:36:10.270 2 DEBUG nova.network.neutron [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:36:10 compute-0 ceph-mon[74477]: pgmap v1685: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 02 08:36:10 compute-0 nova_compute[260603]: 2025-10-02 08:36:10.990 2 DEBUG nova.network.neutron [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.014 2 INFO nova.compute.manager [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Took 0.74 seconds to deallocate network for instance.
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.062 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.063 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:11.067 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 23 KiB/s wr, 120 op/s
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.175 2 DEBUG oslo_concurrency.processutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.320 2 DEBUG nova.compute.manager [req-3b75a97f-8bff-46cd-be54-b53373a59710 req-957db9de-1425-4cf1-ae8c-154428740972 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-deleted-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:36:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279373243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.626 2 DEBUG oslo_concurrency.processutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.635 2 DEBUG nova.compute.provider_tree [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:36:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4279373243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.663 2 DEBUG nova.scheduler.client.report [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.703 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.739 2 INFO nova.scheduler.client.report [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Deleted allocations for instance 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd
Oct 02 08:36:11 compute-0 nova_compute[260603]: 2025-10-02 08:36:11.850 2 DEBUG oslo_concurrency.lockutils [None req-3fd1b911-21e4-4554-a1fe-fcc0083c484f 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.171 2 DEBUG nova.compute.manager [req-18e9624a-585d-45e1-bbcd-a177bb7b9835 req-a56c0097-f0c2-40c3-98ad-71597de8338f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.172 2 DEBUG oslo_concurrency.lockutils [req-18e9624a-585d-45e1-bbcd-a177bb7b9835 req-a56c0097-f0c2-40c3-98ad-71597de8338f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.172 2 DEBUG oslo_concurrency.lockutils [req-18e9624a-585d-45e1-bbcd-a177bb7b9835 req-a56c0097-f0c2-40c3-98ad-71597de8338f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.172 2 DEBUG oslo_concurrency.lockutils [req-18e9624a-585d-45e1-bbcd-a177bb7b9835 req-a56c0097-f0c2-40c3-98ad-71597de8338f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5f692626-ba1b-4b0c-9046-4cd9ef3c25cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.173 2 DEBUG nova.compute.manager [req-18e9624a-585d-45e1-bbcd-a177bb7b9835 req-a56c0097-f0c2-40c3-98ad-71597de8338f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] No waiting events found dispatching network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.173 2 WARNING nova.compute.manager [req-18e9624a-585d-45e1-bbcd-a177bb7b9835 req-a56c0097-f0c2-40c3-98ad-71597de8338f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Received unexpected event network-vif-plugged-bf1797c8-0b08-41b3-a518-4cbacdbac9a0 for instance with vm_state deleted and task_state None.
Oct 02 08:36:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:12 compute-0 ceph-mon[74477]: pgmap v1686: 305 pgs: 305 active+clean; 328 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 23 KiB/s wr, 120 op/s
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.818 2 DEBUG oslo_concurrency.lockutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.819 2 DEBUG oslo_concurrency.lockutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.820 2 INFO nova.compute.manager [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Rebooting instance
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.837 2 DEBUG oslo_concurrency.lockutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.837 2 DEBUG oslo_concurrency.lockutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:36:12 compute-0 nova_compute[260603]: 2025-10-02 08:36:12.838 2 DEBUG nova.network.neutron [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.106 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.107 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.107 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.108 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.108 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.110 2 INFO nova.compute.manager [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Terminating instance
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.112 2 DEBUG nova.compute.manager [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:36:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 202 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 175 op/s
Oct 02 08:36:13 compute-0 kernel: tap03538fdf-ba (unregistering): left promiscuous mode
Oct 02 08:36:13 compute-0 NetworkManager[45129]: <info>  [1759394173.1686] device (tap03538fdf-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00854|binding|INFO|Releasing lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 from this chassis (sb_readonly=0)
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00855|binding|INFO|Setting lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 down in Southbound
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00856|binding|INFO|Removing iface tap03538fdf-ba ovn-installed in OVS
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.182 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:d9:a2 10.100.0.13'], port_security=['fa:16:3e:1c:d9:a2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c20fff86-9831-43b7-b579-48efe60be24b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17cfbd28-1f2b-4eb6-ab31-e8960544023c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af1b7ebd-469f-49ea-a9a9-ba2e42baffc7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=03538fdf-ba28-4b62-a592-dbf3a07dcfb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.183 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 in datapath 4aed206b-8fa2-4bbf-a202-0f7b63ace915 unbound from our chassis
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.185 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4aed206b-8fa2-4bbf-a202-0f7b63ace915, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[db2a5798-1ad1-4e4c-8f0d-5bff0efcf2b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.187 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915 namespace which is not needed anymore
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct 02 08:36:13 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000058.scope: Consumed 13.722s CPU time.
Oct 02 08:36:13 compute-0 systemd-machined[214636]: Machine qemu-103-instance-00000058 terminated.
Oct 02 08:36:13 compute-0 kernel: tap03538fdf-ba: entered promiscuous mode
Oct 02 08:36:13 compute-0 NetworkManager[45129]: <info>  [1759394173.3365] manager: (tap03538fdf-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Oct 02 08:36:13 compute-0 systemd-udevd[346274]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00857|binding|INFO|Claiming lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for this chassis.
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00858|binding|INFO|03538fdf-ba28-4b62-a592-dbf3a07dcfb9: Claiming fa:16:3e:1c:d9:a2 10.100.0.13
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 kernel: tap03538fdf-ba (unregistering): left promiscuous mode
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.348 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:d9:a2 10.100.0.13'], port_security=['fa:16:3e:1c:d9:a2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c20fff86-9831-43b7-b579-48efe60be24b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17cfbd28-1f2b-4eb6-ab31-e8960544023c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af1b7ebd-469f-49ea-a9a9-ba2e42baffc7, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=03538fdf-ba28-4b62-a592-dbf3a07dcfb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:13 compute-0 neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915[344828]: [NOTICE]   (344832) : haproxy version is 2.8.14-c23fe91
Oct 02 08:36:13 compute-0 neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915[344828]: [NOTICE]   (344832) : path to executable is /usr/sbin/haproxy
Oct 02 08:36:13 compute-0 neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915[344828]: [WARNING]  (344832) : Exiting Master process...
Oct 02 08:36:13 compute-0 neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915[344828]: [WARNING]  (344832) : Exiting Master process...
Oct 02 08:36:13 compute-0 neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915[344828]: [ALERT]    (344832) : Current worker (344835) exited with code 143 (Terminated)
Oct 02 08:36:13 compute-0 neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915[344828]: [WARNING]  (344832) : All workers exited. Exiting... (0)
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.380 2 INFO nova.virt.libvirt.driver [-] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Instance destroyed successfully.
Oct 02 08:36:13 compute-0 systemd[1]: libpod-7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1.scope: Deactivated successfully.
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.380 2 DEBUG nova.objects.instance [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lazy-loading 'resources' on Instance uuid c20fff86-9831-43b7-b579-48efe60be24b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:13 compute-0 podman[346292]: 2025-10-02 08:36:13.384197542 +0000 UTC m=+0.063644553 container died 7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00859|binding|INFO|Setting lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 ovn-installed in OVS
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00860|binding|INFO|Setting lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 up in Southbound
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00861|binding|INFO|Releasing lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 from this chassis (sb_readonly=1)
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00862|if_status|INFO|Dropped 2 log messages in last 357 seconds (most recently, 357 seconds ago) due to excessive rate
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00863|if_status|INFO|Not setting lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 down as sb is readonly
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00864|binding|INFO|Removing iface tap03538fdf-ba ovn-installed in OVS
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00865|binding|INFO|Releasing lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 from this chassis (sb_readonly=0)
Oct 02 08:36:13 compute-0 ovn_controller[152344]: 2025-10-02T08:36:13Z|00866|binding|INFO|Setting lport 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 down in Southbound
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.400 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:d9:a2 10.100.0.13'], port_security=['fa:16:3e:1c:d9:a2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c20fff86-9831-43b7-b579-48efe60be24b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44f1ad17ce794fbdbf606f465f6ab7ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17cfbd28-1f2b-4eb6-ab31-e8960544023c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af1b7ebd-469f-49ea-a9a9-ba2e42baffc7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=03538fdf-ba28-4b62-a592-dbf3a07dcfb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.400 2 DEBUG nova.virt.libvirt.vif [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:35:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-966219500',display_name='tempest-ServerRescueNegativeTestJSON-server-966219500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-966219500',id=88,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:35:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44f1ad17ce794fbdbf606f465f6ab7ec',ramdisk_id='',reservation_id='r-2tkk3ni7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1454054093',owner_user_name='tempest-ServerRescueNegativeTestJSON-1454054093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:36:07Z,user_data=None,user_id='2fa0bf72f6f34b41ac7942357b5d7851',uuid=c20fff86-9831-43b7-b579-48efe60be24b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.401 2 DEBUG nova.network.os_vif_util [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converting VIF {"id": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "address": "fa:16:3e:1c:d9:a2", "network": {"id": "4aed206b-8fa2-4bbf-a202-0f7b63ace915", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1536701548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44f1ad17ce794fbdbf606f465f6ab7ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03538fdf-ba", "ovs_interfaceid": "03538fdf-ba28-4b62-a592-dbf3a07dcfb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.402 2 DEBUG nova.network.os_vif_util [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:d9:a2,bridge_name='br-int',has_traffic_filtering=True,id=03538fdf-ba28-4b62-a592-dbf3a07dcfb9,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03538fdf-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.403 2 DEBUG os_vif [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:d9:a2,bridge_name='br-int',has_traffic_filtering=True,id=03538fdf-ba28-4b62-a592-dbf3a07dcfb9,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03538fdf-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.405 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03538fdf-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.415 2 INFO os_vif [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:d9:a2,bridge_name='br-int',has_traffic_filtering=True,id=03538fdf-ba28-4b62-a592-dbf3a07dcfb9,network=Network(4aed206b-8fa2-4bbf-a202-0f7b63ace915),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03538fdf-ba')
Oct 02 08:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1-userdata-shm.mount: Deactivated successfully.
Oct 02 08:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99ea9d6bf9f213c3e40c001481d9887f6952603e36e6b65a533836bdc10da90-merged.mount: Deactivated successfully.
Oct 02 08:36:13 compute-0 podman[346292]: 2025-10-02 08:36:13.452175115 +0000 UTC m=+0.131622156 container cleanup 7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:36:13 compute-0 systemd[1]: libpod-conmon-7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1.scope: Deactivated successfully.
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 podman[346345]: 2025-10-02 08:36:13.574001307 +0000 UTC m=+0.074171800 container remove 7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.583 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39d826bb-a8ef-40db-baa4-7686ff3f0d17]: (4, ('Thu Oct  2 08:36:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915 (7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1)\n7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1\nThu Oct  2 08:36:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915 (7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1)\n7dc63ec967131903c4f2c4eca3a3f4ccefdfcadaf6e33db65e1f321815d624d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[108180a5-3edd-4cab-8729-499438b35964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.586 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4aed206b-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 kernel: tap4aed206b-80: left promiscuous mode
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.629 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eef38758-e183-418b-8b68-f9661e893dbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.654 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84ad0997-738a-48da-9fa6-b41d06efc28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.655 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e35ad336-991b-4833-8f33-1c99252d0ca4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.682 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1b02a0de-70e7-4e06-8788-e5ac88977d13]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501030, 'reachable_time': 21047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346362, 'error': None, 'target': 'ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d4aed206b\x2d8fa2\x2d4bbf\x2da202\x2d0f7b63ace915.mount: Deactivated successfully.
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.687 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4aed206b-8fa2-4bbf-a202-0f7b63ace915 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.687 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8a255782-543e-453c-92bd-cf448bc58d4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.688 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 in datapath 4aed206b-8fa2-4bbf-a202-0f7b63ace915 unbound from our chassis
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.689 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4aed206b-8fa2-4bbf-a202-0f7b63ace915, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.690 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac69243-e2bd-4f7b-b1f6-a664f51baaf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.690 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 03538fdf-ba28-4b62-a592-dbf3a07dcfb9 in datapath 4aed206b-8fa2-4bbf-a202-0f7b63ace915 unbound from our chassis
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.692 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4aed206b-8fa2-4bbf-a202-0f7b63ace915, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:36:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:13.692 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e54f6023-763b-42a0-9166-434c2c2199c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.783 2 DEBUG nova.compute.manager [req-bd6e8b6b-7f0e-4fa6-90aa-4fa0555c93e4 req-76997967-7e83-45b7-9206-11d9ddc2c6ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-unplugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.784 2 DEBUG oslo_concurrency.lockutils [req-bd6e8b6b-7f0e-4fa6-90aa-4fa0555c93e4 req-76997967-7e83-45b7-9206-11d9ddc2c6ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.785 2 DEBUG oslo_concurrency.lockutils [req-bd6e8b6b-7f0e-4fa6-90aa-4fa0555c93e4 req-76997967-7e83-45b7-9206-11d9ddc2c6ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.785 2 DEBUG oslo_concurrency.lockutils [req-bd6e8b6b-7f0e-4fa6-90aa-4fa0555c93e4 req-76997967-7e83-45b7-9206-11d9ddc2c6ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.786 2 DEBUG nova.compute.manager [req-bd6e8b6b-7f0e-4fa6-90aa-4fa0555c93e4 req-76997967-7e83-45b7-9206-11d9ddc2c6ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] No waiting events found dispatching network-vif-unplugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.786 2 DEBUG nova.compute.manager [req-bd6e8b6b-7f0e-4fa6-90aa-4fa0555c93e4 req-76997967-7e83-45b7-9206-11d9ddc2c6ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-unplugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.851 2 INFO nova.virt.libvirt.driver [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Deleting instance files /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b_del
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.852 2 INFO nova.virt.libvirt.driver [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Deletion of /var/lib/nova/instances/c20fff86-9831-43b7-b579-48efe60be24b_del complete
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.927 2 INFO nova.compute.manager [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.928 2 DEBUG oslo.service.loopingcall [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.929 2 DEBUG nova.compute.manager [-] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:36:13 compute-0 nova_compute[260603]: 2025-10-02 08:36:13.930 2 DEBUG nova.network.neutron [-] [instance: c20fff86-9831-43b7-b579-48efe60be24b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:36:14 compute-0 ceph-mon[74477]: pgmap v1687: 305 pgs: 305 active+clean; 202 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 175 op/s
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.031 2 DEBUG nova.network.neutron [-] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.068 2 INFO nova.compute.manager [-] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Took 1.14 seconds to deallocate network for instance.
Oct 02 08:36:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 202 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 132 op/s
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.141 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.142 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.219 2 DEBUG oslo_concurrency.processutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.376 2 DEBUG nova.network.neutron [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.405 2 DEBUG oslo_concurrency.lockutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.408 2 DEBUG nova.compute.manager [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:36:15 compute-0 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct 02 08:36:15 compute-0 NetworkManager[45129]: <info>  [1759394175.5872] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:15 compute-0 ovn_controller[152344]: 2025-10-02T08:36:15Z|00867|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct 02 08:36:15 compute-0 ovn_controller[152344]: 2025-10-02T08:36:15Z|00868|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct 02 08:36:15 compute-0 ovn_controller[152344]: 2025-10-02T08:36:15Z|00869|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.600 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.601 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.602 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.603 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf25467-7077-4ce8-9588-9d151d81d442]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.603 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:15 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 08:36:15 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000055.scope: Consumed 13.303s CPU time.
Oct 02 08:36:15 compute-0 systemd-machined[214636]: Machine qemu-106-instance-00000055 terminated.
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.776 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.779 2 DEBUG nova.objects.instance [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:15 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[345729]: [NOTICE]   (345733) : haproxy version is 2.8.14-c23fe91
Oct 02 08:36:15 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[345729]: [NOTICE]   (345733) : path to executable is /usr/sbin/haproxy
Oct 02 08:36:15 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[345729]: [WARNING]  (345733) : Exiting Master process...
Oct 02 08:36:15 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[345729]: [ALERT]    (345733) : Current worker (345735) exited with code 143 (Terminated)
Oct 02 08:36:15 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[345729]: [WARNING]  (345733) : All workers exited. Exiting... (0)
Oct 02 08:36:15 compute-0 systemd[1]: libpod-baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255.scope: Deactivated successfully.
Oct 02 08:36:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:36:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1493682980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:15 compute-0 podman[346405]: 2025-10-02 08:36:15.792565198 +0000 UTC m=+0.066711886 container died baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.798 2 DEBUG nova.virt.libvirt.vif [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:36:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.799 2 DEBUG nova.network.os_vif_util [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.800 2 DEBUG nova.network.os_vif_util [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.801 2 DEBUG os_vif [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap961da5ba-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.819 2 DEBUG oslo_concurrency.processutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255-userdata-shm.mount: Deactivated successfully.
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.822 2 INFO os_vif [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-42ccdd452e351449e79cd5b4758f9f640bab42757ae6e60bc029173c5d4ae257-merged.mount: Deactivated successfully.
Oct 02 08:36:15 compute-0 podman[346405]: 2025-10-02 08:36:15.831936331 +0000 UTC m=+0.106082989 container cleanup baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.836 2 DEBUG nova.virt.libvirt.driver [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start _get_guest_xml network_info=[{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.838 2 DEBUG nova.compute.provider_tree [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.845 2 WARNING nova.virt.libvirt.driver [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.852 2 DEBUG nova.virt.libvirt.host [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.852 2 DEBUG nova.virt.libvirt.host [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:36:15 compute-0 systemd[1]: libpod-conmon-baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255.scope: Deactivated successfully.
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.857 2 DEBUG nova.scheduler.client.report [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.862 2 DEBUG nova.virt.libvirt.host [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.862 2 DEBUG nova.virt.libvirt.host [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.863 2 DEBUG nova.virt.libvirt.driver [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.863 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.864 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.864 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.865 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.865 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.866 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.866 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.866 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.867 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.867 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.867 2 DEBUG nova.virt.hardware [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.868 2 DEBUG nova.objects.instance [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.886 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.893 2 DEBUG oslo_concurrency.processutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:15 compute-0 podman[346449]: 2025-10-02 08:36:15.913822572 +0000 UTC m=+0.053605962 container remove baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.922 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[693d7a6e-be3d-4ecf-a4a1-6f7282c1fdae]: (4, ('Thu Oct  2 08:36:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255)\nbaa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255\nThu Oct  2 08:36:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (baa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255)\nbaa11b67b2eb1beda6ae4edc358deeba930aafb50d8a2e966120506b2b259255\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.924 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d06e5e71-be60-4e0a-9a56-2576d4b2d3c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.925 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:15 compute-0 kernel: tap74f187c2-70: left promiscuous mode
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.952 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6df0255-dbe9-49a3-80e3-5aecb55414e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:15 compute-0 nova_compute[260603]: 2025-10-02 08:36:15.983 2 INFO nova.scheduler.client.report [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Deleted allocations for instance c20fff86-9831-43b7-b579-48efe60be24b
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.984 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c24fb6-1dfd-4350-b3ae-ee108ac51750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:15.985 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a98e27-72fc-4da7-9c42-621dd7e48c53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:16.007 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99ae2f96-b4c8-4f7a-966f-ea0f33e24dde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502571, 'reachable_time': 24911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346463, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct 02 08:36:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:16.013 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:36:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:16.013 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8c7e21-ca71-4649-beae-e75ce4af1187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.118 2 DEBUG nova.compute.manager [req-870a5eab-4a8b-48f7-90d0-a7918721d81c req-a752ae04-0c93-40e0-b133-b4a132d6b0b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.120 2 DEBUG oslo_concurrency.lockutils [req-870a5eab-4a8b-48f7-90d0-a7918721d81c req-a752ae04-0c93-40e0-b133-b4a132d6b0b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.120 2 DEBUG oslo_concurrency.lockutils [req-870a5eab-4a8b-48f7-90d0-a7918721d81c req-a752ae04-0c93-40e0-b133-b4a132d6b0b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.121 2 DEBUG oslo_concurrency.lockutils [req-870a5eab-4a8b-48f7-90d0-a7918721d81c req-a752ae04-0c93-40e0-b133-b4a132d6b0b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.121 2 DEBUG nova.compute.manager [req-870a5eab-4a8b-48f7-90d0-a7918721d81c req-a752ae04-0c93-40e0-b133-b4a132d6b0b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.122 2 WARNING nova.compute.manager [req-870a5eab-4a8b-48f7-90d0-a7918721d81c req-a752ae04-0c93-40e0-b133-b4a132d6b0b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.126 2 DEBUG oslo_concurrency.lockutils [None req-82a1f66f-3c64-4da4-9864-4171acd69133 2fa0bf72f6f34b41ac7942357b5d7851 44f1ad17ce794fbdbf606f465f6ab7ec - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.133 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.134 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.134 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.135 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.135 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] No waiting events found dispatching network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.136 2 WARNING nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received unexpected event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for instance with vm_state deleted and task_state None.
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.136 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.137 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.137 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.138 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.138 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] No waiting events found dispatching network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.139 2 WARNING nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received unexpected event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for instance with vm_state deleted and task_state None.
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.139 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.140 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.141 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.141 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.142 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] No waiting events found dispatching network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.143 2 WARNING nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received unexpected event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for instance with vm_state deleted and task_state None.
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.143 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-unplugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.144 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.145 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.145 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.146 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] No waiting events found dispatching network-vif-unplugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.146 2 WARNING nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received unexpected event network-vif-unplugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for instance with vm_state deleted and task_state None.
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.147 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.147 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c20fff86-9831-43b7-b579-48efe60be24b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.148 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.149 2 DEBUG oslo_concurrency.lockutils [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c20fff86-9831-43b7-b579-48efe60be24b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.149 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] No waiting events found dispatching network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.150 2 WARNING nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received unexpected event network-vif-plugged-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 for instance with vm_state deleted and task_state None.
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.150 2 DEBUG nova.compute.manager [req-dff3b5bd-d2bd-466d-946b-3df5876a0ac4 req-ceb56efc-1cff-409c-a961-adb57e45d229 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Received event network-vif-deleted-03538fdf-ba28-4b62-a592-dbf3a07dcfb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:36:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647791893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.400 2 DEBUG oslo_concurrency.processutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.437 2 DEBUG oslo_concurrency.processutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:16 compute-0 ceph-mon[74477]: pgmap v1688: 305 pgs: 305 active+clean; 202 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 132 op/s
Oct 02 08:36:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1493682980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1647791893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:36:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174771647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.887 2 DEBUG oslo_concurrency.processutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.890 2 DEBUG nova.virt.libvirt.vif [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:36:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.891 2 DEBUG nova.network.os_vif_util [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.893 2 DEBUG nova.network.os_vif_util [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.895 2 DEBUG nova.objects.instance [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.923 2 DEBUG nova.virt.libvirt.driver [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <uuid>ba2cf934-ce76-4de7-a495-285f144bdab7</uuid>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <name>instance-00000055</name>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestJSON-server-276449458</nova:name>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:36:15</nova:creationTime>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <nova:port uuid="961da5ba-b0ac-4a87-a74c-26d0d2d2bf50">
Oct 02 08:36:16 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <system>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <entry name="serial">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <entry name="uuid">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </system>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <os>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   </os>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <features>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   </features>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk">
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       </source>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config">
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       </source>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:36:16 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:bb:af:04"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <target dev="tap961da5ba-b0"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/console.log" append="off"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <video>
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </video>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:36:16 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:36:16 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:36:16 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:36:16 compute-0 nova_compute[260603]: </domain>
Oct 02 08:36:16 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.925 2 DEBUG nova.virt.libvirt.driver [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.926 2 DEBUG nova.virt.libvirt.driver [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.927 2 DEBUG nova.virt.libvirt.vif [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:36:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.927 2 DEBUG nova.network.os_vif_util [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.928 2 DEBUG nova.network.os_vif_util [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.928 2 DEBUG os_vif [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.929 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap961da5ba-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap961da5ba-b0, col_values=(('external_ids', {'iface-id': '961da5ba-b0ac-4a87-a74c-26d0d2d2bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:af:04', 'vm-uuid': 'ba2cf934-ce76-4de7-a495-285f144bdab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:16 compute-0 NetworkManager[45129]: <info>  [1759394176.9406] manager: (tap961da5ba-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:16 compute-0 nova_compute[260603]: 2025-10-02 08:36:16.945 2 INFO os_vif [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:36:17 compute-0 kernel: tap961da5ba-b0: entered promiscuous mode
Oct 02 08:36:17 compute-0 ovn_controller[152344]: 2025-10-02T08:36:17Z|00870|binding|INFO|Claiming lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for this chassis.
Oct 02 08:36:17 compute-0 ovn_controller[152344]: 2025-10-02T08:36:17Z|00871|binding|INFO|961da5ba-b0ac-4a87-a74c-26d0d2d2bf50: Claiming fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:36:17 compute-0 NetworkManager[45129]: <info>  [1759394177.0216] manager: (tap961da5ba-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.029 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:17 compute-0 nova_compute[260603]: 2025-10-02 08:36:17.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.032 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.034 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:36:17 compute-0 ovn_controller[152344]: 2025-10-02T08:36:17Z|00872|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 ovn-installed in OVS
Oct 02 08:36:17 compute-0 ovn_controller[152344]: 2025-10-02T08:36:17Z|00873|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 up in Southbound
Oct 02 08:36:17 compute-0 nova_compute[260603]: 2025-10-02 08:36:17.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:17 compute-0 nova_compute[260603]: 2025-10-02 08:36:17.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.047 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46374bcc-a792-4e8f-8f98-e9027ba10151]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.048 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74f187c2-71 in ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.050 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74f187c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.050 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1e819e-29d7-442d-af67-73bdda5bb58a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.052 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4bf815-0729-4c98-973e-7d45180b3ff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 systemd-machined[214636]: New machine qemu-108-instance-00000055.
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.064 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6e2eac-d2fa-49de-a6fd-626a87f6c9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000055.
Oct 02 08:36:17 compute-0 systemd-udevd[346542]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.091 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01f6c159-ba65-4663-9c9c-fc75f658b736]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 NetworkManager[45129]: <info>  [1759394177.1030] device (tap961da5ba-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:36:17 compute-0 NetworkManager[45129]: <info>  [1759394177.1042] device (tap961da5ba-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.120 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0583eadd-d719-406a-8b66-c7b964cd7af5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 systemd-udevd[346547]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:36:17 compute-0 NetworkManager[45129]: <info>  [1759394177.1267] manager: (tap74f187c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/351)
Oct 02 08:36:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 177 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 37 KiB/s wr, 136 op/s
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.126 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90570c61-5a37-4f6e-96df-3ae50d47241a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.178 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa9772d-02a1-4f34-8bc1-c858df3936c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.184 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3984ed-d119-4d09-93a3-e0cb83c08745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 NetworkManager[45129]: <info>  [1759394177.2105] device (tap74f187c2-70): carrier: link connected
Oct 02 08:36:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.220 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[839b4c64-4ea5-493a-bdbc-ce771b781d51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.239 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[399c0436-6b9b-45a3-b42d-65963cedc5a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346572, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.264 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[524adcce-f607-47df-bda1-612ff265f437]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7f62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505581, 'tstamp': 505581}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346573, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13188935-0c0a-42f3-b545-47ea04f2ec7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346574, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.325 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6d890e-a350-4f81-8df2-d5aa38b232e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.386 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c01e962-df87-42ae-9ad9-da6b26eaa412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.388 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.388 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.389 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:17 compute-0 nova_compute[260603]: 2025-10-02 08:36:17.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:17 compute-0 kernel: tap74f187c2-70: entered promiscuous mode
Oct 02 08:36:17 compute-0 NetworkManager[45129]: <info>  [1759394177.4361] manager: (tap74f187c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.439 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:17 compute-0 ovn_controller[152344]: 2025-10-02T08:36:17Z|00874|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.441 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.442 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a31ccd7-5a2b-424e-9eaa-5e519bfc8743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.443 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:36:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:17.444 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'env', 'PROCESS_TAG=haproxy-74f187c2-780c-418d-98eb-b25294872ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74f187c2-780c-418d-98eb-b25294872ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:36:17 compute-0 nova_compute[260603]: 2025-10-02 08:36:17.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4174771647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:17 compute-0 podman[346606]: 2025-10-02 08:36:17.82756298 +0000 UTC m=+0.052591731 container create 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 08:36:17 compute-0 systemd[1]: Started libpod-conmon-7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd.scope.
Oct 02 08:36:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:36:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ae6c2f20a3b91c5f145f721853a053d0238e20c8cff6c32cd1844fef5ff94f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:17 compute-0 podman[346606]: 2025-10-02 08:36:17.801248919 +0000 UTC m=+0.026277720 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:36:17 compute-0 podman[346606]: 2025-10-02 08:36:17.906658098 +0000 UTC m=+0.131686869 container init 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:36:17 compute-0 podman[346606]: 2025-10-02 08:36:17.911726021 +0000 UTC m=+0.136754782 container start 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 08:36:17 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [NOTICE]   (346641) : New worker (346652) forked
Oct 02 08:36:17 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [NOTICE]   (346641) : Loading success.
Oct 02 08:36:18 compute-0 sudo[346678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:18 compute-0 sudo[346678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:18 compute-0 sudo[346678]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:18 compute-0 sudo[346703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:36:18 compute-0 sudo[346703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:18 compute-0 sudo[346703]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:18 compute-0 sudo[346728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:18 compute-0 sudo[346728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:18 compute-0 sudo[346728]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.342 2 DEBUG nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:18 compute-0 sudo[346753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.344 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.345 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.345 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.346 2 DEBUG nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.346 2 WARNING nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.346 2 DEBUG nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.347 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.347 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:18 compute-0 sudo[346753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.347 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.347 2 DEBUG nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.348 2 WARNING nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.348 2 DEBUG nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.348 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.349 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.349 2 DEBUG oslo_concurrency.lockutils [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.349 2 DEBUG nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.350 2 WARNING nova.compute.manager [req-7555b0aa-f305-459f-b213-6b8d34e355f0 req-b719379b-17b2-4a41-8855-75e5a3975fbd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.433 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ba2cf934-ce76-4de7-a495-285f144bdab7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.436 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394178.4330134, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.437 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.438 2 DEBUG nova.compute.manager [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.445 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance rebooted successfully.
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.446 2 DEBUG nova.compute.manager [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.478 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.481 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.511 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.512 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394178.4332159, ba2cf934-ce76-4de7-a495-285f144bdab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.512 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Started (Lifecycle Event)
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.518 2 DEBUG oslo_concurrency.lockutils [None req-6c07f506-630b-47b5-8ed0-e6a231f02ed8 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.531 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.534 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:36:18 compute-0 nova_compute[260603]: 2025-10-02 08:36:18.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:18 compute-0 ceph-mon[74477]: pgmap v1689: 305 pgs: 305 active+clean; 177 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 37 KiB/s wr, 136 op/s
Oct 02 08:36:18 compute-0 sudo[346753]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:36:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:36:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:36:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:36:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:36:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:36:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev fd8f3ae3-1ccb-4cf7-8631-ba6a9b0c0648 does not exist
Oct 02 08:36:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 67dbbf40-e947-4252-bdc9-c9e4a8f12eec does not exist
Oct 02 08:36:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8f36ae75-120e-468d-b4be-4e00328b04ed does not exist
Oct 02 08:36:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:36:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:36:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:36:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:36:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:36:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:36:18 compute-0 sudo[346810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:18 compute-0 sudo[346810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:18 compute-0 sudo[346810]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:19 compute-0 sudo[346835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:36:19 compute-0 sudo[346835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:19 compute-0 sudo[346835]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:19 compute-0 sudo[346860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:19 compute-0 sudo[346860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:19 compute-0 sudo[346860]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 36 KiB/s wr, 116 op/s
Oct 02 08:36:19 compute-0 sudo[346885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:36:19 compute-0 sudo[346885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:19 compute-0 podman[346952]: 2025-10-02 08:36:19.594362172 +0000 UTC m=+0.041631032 container create c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:36:19 compute-0 systemd[1]: Started libpod-conmon-c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0.scope.
Oct 02 08:36:19 compute-0 podman[346952]: 2025-10-02 08:36:19.572894567 +0000 UTC m=+0.020163447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:36:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:36:19 compute-0 podman[346952]: 2025-10-02 08:36:19.688479392 +0000 UTC m=+0.135748292 container init c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 08:36:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:36:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:36:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:36:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:36:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:36:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:36:19 compute-0 podman[346952]: 2025-10-02 08:36:19.695802521 +0000 UTC m=+0.143071381 container start c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:36:19 compute-0 podman[346952]: 2025-10-02 08:36:19.698565875 +0000 UTC m=+0.145834735 container attach c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:36:19 compute-0 infallible_curie[346969]: 167 167
Oct 02 08:36:19 compute-0 systemd[1]: libpod-c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0.scope: Deactivated successfully.
Oct 02 08:36:19 compute-0 podman[346952]: 2025-10-02 08:36:19.7027247 +0000 UTC m=+0.149993570 container died c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c97cbbe68c606a112666310697fe665b42e0f538d087dc77ad0900b9a419b6f-merged.mount: Deactivated successfully.
Oct 02 08:36:19 compute-0 podman[346952]: 2025-10-02 08:36:19.745903337 +0000 UTC m=+0.193172197 container remove c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 08:36:19 compute-0 systemd[1]: libpod-conmon-c79ae303a885f3ae00676a63ca68a3df83db8d52849f478db0d942f2efa251d0.scope: Deactivated successfully.
Oct 02 08:36:19 compute-0 podman[346992]: 2025-10-02 08:36:19.927162925 +0000 UTC m=+0.042936321 container create 49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:36:19 compute-0 systemd[1]: Started libpod-conmon-49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82.scope.
Oct 02 08:36:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:36:20 compute-0 podman[346992]: 2025-10-02 08:36:19.906398131 +0000 UTC m=+0.022171547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b38d5a3001303b856f0e6308f772a999ff5bf95dbe58190eff571b8f69829c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b38d5a3001303b856f0e6308f772a999ff5bf95dbe58190eff571b8f69829c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b38d5a3001303b856f0e6308f772a999ff5bf95dbe58190eff571b8f69829c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b38d5a3001303b856f0e6308f772a999ff5bf95dbe58190eff571b8f69829c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b38d5a3001303b856f0e6308f772a999ff5bf95dbe58190eff571b8f69829c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:20 compute-0 podman[346992]: 2025-10-02 08:36:20.054023858 +0000 UTC m=+0.169797334 container init 49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 08:36:20 compute-0 podman[346992]: 2025-10-02 08:36:20.062341698 +0000 UTC m=+0.178115134 container start 49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 08:36:20 compute-0 podman[346992]: 2025-10-02 08:36:20.065655558 +0000 UTC m=+0.181429044 container attach 49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:36:20 compute-0 ovn_controller[152344]: 2025-10-02T08:36:20Z|00875|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:36:20 compute-0 nova_compute[260603]: 2025-10-02 08:36:20.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:20 compute-0 nova_compute[260603]: 2025-10-02 08:36:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:20 compute-0 ceph-mon[74477]: pgmap v1690: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 36 KiB/s wr, 116 op/s
Oct 02 08:36:21 compute-0 wizardly_keller[347009]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:36:21 compute-0 wizardly_keller[347009]: --> relative data size: 1.0
Oct 02 08:36:21 compute-0 wizardly_keller[347009]: --> All data devices are unavailable
Oct 02 08:36:21 compute-0 systemd[1]: libpod-49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82.scope: Deactivated successfully.
Oct 02 08:36:21 compute-0 podman[346992]: 2025-10-02 08:36:21.120565844 +0000 UTC m=+1.236339240 container died 49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 08:36:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 27 KiB/s wr, 89 op/s
Oct 02 08:36:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-49b38d5a3001303b856f0e6308f772a999ff5bf95dbe58190eff571b8f69829c-merged.mount: Deactivated successfully.
Oct 02 08:36:21 compute-0 podman[346992]: 2025-10-02 08:36:21.175815885 +0000 UTC m=+1.291589291 container remove 49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 08:36:21 compute-0 systemd[1]: libpod-conmon-49a3d41e2b0dc075a42a920bdd409879591bacf340bdf68630d0ebcf5b3d9d82.scope: Deactivated successfully.
Oct 02 08:36:21 compute-0 sudo[346885]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:21 compute-0 sudo[347050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:21 compute-0 sudo[347050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:21 compute-0 sudo[347050]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:21 compute-0 sudo[347075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:36:21 compute-0 sudo[347075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:21 compute-0 sudo[347075]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:21 compute-0 sudo[347100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:21 compute-0 sudo[347100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:21 compute-0 sudo[347100]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:21 compute-0 sudo[347125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:36:21 compute-0 sudo[347125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:21 compute-0 nova_compute[260603]: 2025-10-02 08:36:21.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:22 compute-0 podman[347189]: 2025-10-02 08:36:22.026169563 +0000 UTC m=+0.060117119 container create 0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 08:36:22 compute-0 systemd[1]: Started libpod-conmon-0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e.scope.
Oct 02 08:36:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:36:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/895544682' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:36:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:36:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/895544682' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:36:22 compute-0 podman[347189]: 2025-10-02 08:36:21.999191832 +0000 UTC m=+0.033139468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:36:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:36:22 compute-0 podman[347189]: 2025-10-02 08:36:22.127363623 +0000 UTC m=+0.161311189 container init 0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 08:36:22 compute-0 podman[347189]: 2025-10-02 08:36:22.135985213 +0000 UTC m=+0.169932779 container start 0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:36:22 compute-0 podman[347189]: 2025-10-02 08:36:22.13923047 +0000 UTC m=+0.173178076 container attach 0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:36:22 compute-0 zen_elion[347205]: 167 167
Oct 02 08:36:22 compute-0 systemd[1]: libpod-0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e.scope: Deactivated successfully.
Oct 02 08:36:22 compute-0 podman[347189]: 2025-10-02 08:36:22.146402556 +0000 UTC m=+0.180350122 container died 0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 08:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b995924059c42ac4c350d572e4a6854b064accd64c25982a710d82debd7d21c-merged.mount: Deactivated successfully.
Oct 02 08:36:22 compute-0 podman[347189]: 2025-10-02 08:36:22.185892943 +0000 UTC m=+0.219840509 container remove 0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:36:22 compute-0 systemd[1]: libpod-conmon-0cad87ac7917d92e93368e991d6d56d98d27c010bbbc7b77a431f2ffd783b16e.scope: Deactivated successfully.
Oct 02 08:36:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:22 compute-0 podman[347230]: 2025-10-02 08:36:22.387343238 +0000 UTC m=+0.051361165 container create 0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 08:36:22 compute-0 systemd[1]: Started libpod-conmon-0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c.scope.
Oct 02 08:36:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:36:22 compute-0 podman[347230]: 2025-10-02 08:36:22.369154401 +0000 UTC m=+0.033172358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff440ce497f524c84b4e53bc2d4d05558dae20c09a9df03d91b368e1ac8ab0f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff440ce497f524c84b4e53bc2d4d05558dae20c09a9df03d91b368e1ac8ab0f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff440ce497f524c84b4e53bc2d4d05558dae20c09a9df03d91b368e1ac8ab0f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff440ce497f524c84b4e53bc2d4d05558dae20c09a9df03d91b368e1ac8ab0f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:22 compute-0 podman[347230]: 2025-10-02 08:36:22.491878629 +0000 UTC m=+0.155896576 container init 0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 08:36:22 compute-0 podman[347230]: 2025-10-02 08:36:22.502435016 +0000 UTC m=+0.166452943 container start 0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:36:22 compute-0 podman[347230]: 2025-10-02 08:36:22.507064886 +0000 UTC m=+0.171082823 container attach 0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 08:36:22 compute-0 nova_compute[260603]: 2025-10-02 08:36:22.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:22 compute-0 nova_compute[260603]: 2025-10-02 08:36:22.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:36:22 compute-0 nova_compute[260603]: 2025-10-02 08:36:22.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:36:22 compute-0 ceph-mon[74477]: pgmap v1691: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 27 KiB/s wr, 89 op/s
Oct 02 08:36:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/895544682' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:36:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/895544682' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:36:23 compute-0 nova_compute[260603]: 2025-10-02 08:36:23.060 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:36:23 compute-0 nova_compute[260603]: 2025-10-02 08:36:23.061 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:36:23 compute-0 nova_compute[260603]: 2025-10-02 08:36:23.062 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:36:23 compute-0 nova_compute[260603]: 2025-10-02 08:36:23.064 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 155 op/s
Oct 02 08:36:23 compute-0 magical_cartwright[347247]: {
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:     "0": [
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:         {
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "devices": [
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "/dev/loop3"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             ],
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_name": "ceph_lv0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_size": "21470642176",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "name": "ceph_lv0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "tags": {
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cluster_name": "ceph",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.crush_device_class": "",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.encrypted": "0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osd_id": "0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.type": "block",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.vdo": "0"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             },
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "type": "block",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "vg_name": "ceph_vg0"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:         }
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:     ],
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:     "1": [
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:         {
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "devices": [
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "/dev/loop4"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             ],
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_name": "ceph_lv1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_size": "21470642176",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "name": "ceph_lv1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "tags": {
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cluster_name": "ceph",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.crush_device_class": "",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.encrypted": "0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osd_id": "1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.type": "block",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.vdo": "0"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             },
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "type": "block",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "vg_name": "ceph_vg1"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:         }
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:     ],
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:     "2": [
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:         {
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "devices": [
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "/dev/loop5"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             ],
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_name": "ceph_lv2",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_size": "21470642176",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "name": "ceph_lv2",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "tags": {
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.cluster_name": "ceph",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.crush_device_class": "",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.encrypted": "0",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osd_id": "2",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.type": "block",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:                 "ceph.vdo": "0"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             },
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "type": "block",
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:             "vg_name": "ceph_vg2"
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:         }
Oct 02 08:36:23 compute-0 magical_cartwright[347247]:     ]
Oct 02 08:36:23 compute-0 magical_cartwright[347247]: }
Oct 02 08:36:23 compute-0 systemd[1]: libpod-0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c.scope: Deactivated successfully.
Oct 02 08:36:23 compute-0 podman[347257]: 2025-10-02 08:36:23.3792401 +0000 UTC m=+0.052932752 container died 0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 08:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff440ce497f524c84b4e53bc2d4d05558dae20c09a9df03d91b368e1ac8ab0f5-merged.mount: Deactivated successfully.
Oct 02 08:36:23 compute-0 podman[347257]: 2025-10-02 08:36:23.467740129 +0000 UTC m=+0.141432781 container remove 0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:36:23 compute-0 systemd[1]: libpod-conmon-0dcebcc6423de52fba0cc5aa0f940584df9e776dd82f63a08256911fa310518c.scope: Deactivated successfully.
Oct 02 08:36:23 compute-0 podman[347258]: 2025-10-02 08:36:23.479295827 +0000 UTC m=+0.139120423 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 08:36:23 compute-0 podman[347256]: 2025-10-02 08:36:23.507549856 +0000 UTC m=+0.166282639 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:36:23 compute-0 sudo[347125]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:23 compute-0 nova_compute[260603]: 2025-10-02 08:36:23.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:23 compute-0 sudo[347318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:23 compute-0 sudo[347318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:23 compute-0 sudo[347318]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:23 compute-0 sudo[347343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:36:23 compute-0 sudo[347343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:23 compute-0 sudo[347343]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:23 compute-0 sudo[347368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:23 compute-0 sudo[347368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:23 compute-0 sudo[347368]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:23 compute-0 sudo[347393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:36:23 compute-0 sudo[347393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:24 compute-0 nova_compute[260603]: 2025-10-02 08:36:24.350 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394169.3496919, 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:24 compute-0 nova_compute[260603]: 2025-10-02 08:36:24.351 2 INFO nova.compute.manager [-] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] VM Stopped (Lifecycle Event)
Oct 02 08:36:24 compute-0 nova_compute[260603]: 2025-10-02 08:36:24.390 2 DEBUG nova.compute.manager [None req-bfee394d-10d3-4fc7-933f-922fbc4ec7fa - - - - - -] [instance: 5f692626-ba1b-4b0c-9046-4cd9ef3c25cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:24 compute-0 podman[347458]: 2025-10-02 08:36:24.404243067 +0000 UTC m=+0.067577212 container create cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:36:24 compute-0 systemd[1]: Started libpod-conmon-cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848.scope.
Oct 02 08:36:24 compute-0 podman[347458]: 2025-10-02 08:36:24.36671558 +0000 UTC m=+0.030049805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:36:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:36:24 compute-0 podman[347458]: 2025-10-02 08:36:24.499371166 +0000 UTC m=+0.162705391 container init cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 08:36:24 compute-0 podman[347458]: 2025-10-02 08:36:24.511416738 +0000 UTC m=+0.174750863 container start cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 08:36:24 compute-0 podman[347458]: 2025-10-02 08:36:24.515322306 +0000 UTC m=+0.178656471 container attach cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 08:36:24 compute-0 blissful_wing[347476]: 167 167
Oct 02 08:36:24 compute-0 systemd[1]: libpod-cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848.scope: Deactivated successfully.
Oct 02 08:36:24 compute-0 podman[347458]: 2025-10-02 08:36:24.518850421 +0000 UTC m=+0.182184586 container died cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 08:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dd467769738084c2954d380b5eda11a6436d5d788947bba9e2b0ae868f48445-merged.mount: Deactivated successfully.
Oct 02 08:36:24 compute-0 podman[347458]: 2025-10-02 08:36:24.569576287 +0000 UTC m=+0.232910422 container remove cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 08:36:24 compute-0 systemd[1]: libpod-conmon-cf6ec1ce072dd17b62936e17ecfc62e6089b7d50d4bf33e308e53c4128d2b848.scope: Deactivated successfully.
Oct 02 08:36:24 compute-0 podman[347500]: 2025-10-02 08:36:24.767941718 +0000 UTC m=+0.044064235 container create e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:36:24 compute-0 systemd[1]: Started libpod-conmon-e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1.scope.
Oct 02 08:36:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec427f64ae6871d0e58a8ee9a9bf770539c48e9ea2f366d4fecfd66b00cc8e7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:24 compute-0 podman[347500]: 2025-10-02 08:36:24.751345319 +0000 UTC m=+0.027467846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec427f64ae6871d0e58a8ee9a9bf770539c48e9ea2f366d4fecfd66b00cc8e7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec427f64ae6871d0e58a8ee9a9bf770539c48e9ea2f366d4fecfd66b00cc8e7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec427f64ae6871d0e58a8ee9a9bf770539c48e9ea2f366d4fecfd66b00cc8e7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:24 compute-0 podman[347500]: 2025-10-02 08:36:24.872724878 +0000 UTC m=+0.148847485 container init e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:36:24 compute-0 podman[347500]: 2025-10-02 08:36:24.879397288 +0000 UTC m=+0.155519835 container start e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 08:36:24 compute-0 podman[347500]: 2025-10-02 08:36:24.883642385 +0000 UTC m=+0.159765002 container attach e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:36:24 compute-0 ceph-mon[74477]: pgmap v1692: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 155 op/s
Oct 02 08:36:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.2 KiB/s wr, 100 op/s
Oct 02 08:36:25 compute-0 nova_compute[260603]: 2025-10-02 08:36:25.465 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:36:25 compute-0 nova_compute[260603]: 2025-10-02 08:36:25.499 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:36:25 compute-0 nova_compute[260603]: 2025-10-02 08:36:25.500 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:36:25 compute-0 nova_compute[260603]: 2025-10-02 08:36:25.500 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:25 compute-0 nova_compute[260603]: 2025-10-02 08:36:25.501 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]: {
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "osd_id": 2,
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "type": "bluestore"
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:     },
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "osd_id": 1,
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "type": "bluestore"
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:     },
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "osd_id": 0,
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:         "type": "bluestore"
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]:     }
Oct 02 08:36:25 compute-0 friendly_agnesi[347517]: }
Oct 02 08:36:25 compute-0 systemd[1]: libpod-e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1.scope: Deactivated successfully.
Oct 02 08:36:25 compute-0 systemd[1]: libpod-e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1.scope: Consumed 1.085s CPU time.
Oct 02 08:36:25 compute-0 conmon[347517]: conmon e0882f003e7ee82cca11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1.scope/container/memory.events
Oct 02 08:36:26 compute-0 podman[347550]: 2025-10-02 08:36:26.034571098 +0000 UTC m=+0.043434056 container died e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 08:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec427f64ae6871d0e58a8ee9a9bf770539c48e9ea2f366d4fecfd66b00cc8e7a-merged.mount: Deactivated successfully.
Oct 02 08:36:26 compute-0 podman[347550]: 2025-10-02 08:36:26.09388306 +0000 UTC m=+0.102745978 container remove e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:36:26 compute-0 systemd[1]: libpod-conmon-e0882f003e7ee82cca111feb5e05c9f80935af7bbdc5a051f59520ad931ab4b1.scope: Deactivated successfully.
Oct 02 08:36:26 compute-0 sudo[347393]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:36:26 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:36:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:36:26 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:36:26 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev adc1b9e2-28cb-44f3-b48d-19b2624013d8 does not exist
Oct 02 08:36:26 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e6d36601-ea4e-4174-8b24-7aa5ef5dfe39 does not exist
Oct 02 08:36:26 compute-0 sudo[347566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:36:26 compute-0 sudo[347566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:26 compute-0 sudo[347566]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:26 compute-0 sudo[347591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:36:26 compute-0 sudo[347591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:36:26 compute-0 sudo[347591]: pam_unix(sudo:session): session closed for user root
Oct 02 08:36:26 compute-0 ceph-mon[74477]: pgmap v1693: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.2 KiB/s wr, 100 op/s
Oct 02 08:36:26 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:36:26 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:36:26 compute-0 nova_compute[260603]: 2025-10-02 08:36:26.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.2 KiB/s wr, 100 op/s
Oct 02 08:36:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:27 compute-0 nova_compute[260603]: 2025-10-02 08:36:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:27 compute-0 nova_compute[260603]: 2025-10-02 08:36:27.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:27 compute-0 nova_compute[260603]: 2025-10-02 08:36:27.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:27 compute-0 nova_compute[260603]: 2025-10-02 08:36:27.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:27 compute-0 nova_compute[260603]: 2025-10-02 08:36:27.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:36:27 compute-0 nova_compute[260603]: 2025-10-02 08:36:27.553 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:36:27
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 02 08:36:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:36:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:36:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/899898039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:27 compute-0 nova_compute[260603]: 2025-10-02 08:36:27.983 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.055 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.055 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.267 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.268 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3723MB free_disk=59.94264602661133GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.268 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.269 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.367 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ba2cf934-ce76-4de7-a495-285f144bdab7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.368 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.369 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.379 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394173.378292, c20fff86-9831-43b7-b579-48efe60be24b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.379 2 INFO nova.compute.manager [-] [instance: c20fff86-9831-43b7-b579-48efe60be24b] VM Stopped (Lifecycle Event)
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.405 2 DEBUG nova.compute.manager [None req-31946fe0-77bd-4bf2-b9df-d4346ac62b58 - - - - - -] [instance: c20fff86-9831-43b7-b579-48efe60be24b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.438 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:28 compute-0 ceph-mon[74477]: pgmap v1694: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.2 KiB/s wr, 100 op/s
Oct 02 08:36:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/899898039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:36:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398107989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.990 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:28 compute-0 nova_compute[260603]: 2025-10-02 08:36:28.997 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:36:29 compute-0 nova_compute[260603]: 2025-10-02 08:36:29.017 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:36:29 compute-0 nova_compute[260603]: 2025-10-02 08:36:29.042 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:36:29 compute-0 nova_compute[260603]: 2025-10-02 08:36:29.043 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 96 op/s
Oct 02 08:36:29 compute-0 rsyslogd[1004]: imjournal: 16821 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 02 08:36:29 compute-0 ovn_controller[152344]: 2025-10-02T08:36:29Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:36:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3398107989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:30 compute-0 podman[347661]: 2025-10-02 08:36:30.006469707 +0000 UTC m=+0.066323455 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 08:36:30 compute-0 nova_compute[260603]: 2025-10-02 08:36:30.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:30 compute-0 ceph-mon[74477]: pgmap v1695: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 96 op/s
Oct 02 08:36:31 compute-0 nova_compute[260603]: 2025-10-02 08:36:31.043 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:31 compute-0 nova_compute[260603]: 2025-10-02 08:36:31.044 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 02 08:36:31 compute-0 nova_compute[260603]: 2025-10-02 08:36:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:32 compute-0 nova_compute[260603]: 2025-10-02 08:36:32.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:32 compute-0 nova_compute[260603]: 2025-10-02 08:36:32.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:36:32 compute-0 ceph-mon[74477]: pgmap v1696: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 02 08:36:33 compute-0 podman[347683]: 2025-10-02 08:36:33.038725674 +0000 UTC m=+0.102635936 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:36:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 123 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 9.7 KiB/s wr, 109 op/s
Oct 02 08:36:33 compute-0 nova_compute[260603]: 2025-10-02 08:36:33.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:34 compute-0 nova_compute[260603]: 2025-10-02 08:36:34.679 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:34 compute-0 nova_compute[260603]: 2025-10-02 08:36:34.680 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:34 compute-0 nova_compute[260603]: 2025-10-02 08:36:34.723 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:36:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:34.821 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:34 compute-0 nova_compute[260603]: 2025-10-02 08:36:34.859 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:34 compute-0 nova_compute[260603]: 2025-10-02 08:36:34.860 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:34 compute-0 nova_compute[260603]: 2025-10-02 08:36:34.868 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:36:34 compute-0 nova_compute[260603]: 2025-10-02 08:36:34.869 2 INFO nova.compute.claims [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:36:34 compute-0 ceph-mon[74477]: pgmap v1697: 305 pgs: 305 active+clean; 123 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 9.7 KiB/s wr, 109 op/s
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.025 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 123 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 9.7 KiB/s wr, 43 op/s
Oct 02 08:36:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:36:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2537084274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.495 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.504 2 DEBUG nova.compute.provider_tree [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.530 2 DEBUG nova.scheduler.client.report [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.562 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.564 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.846 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.847 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.872 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.890 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:36:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2537084274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.977 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.979 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:36:35 compute-0 nova_compute[260603]: 2025-10-02 08:36:35.980 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Creating image(s)
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.020 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.060 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.114 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.121 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.192 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.193 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.194 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.194 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.220 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.225 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.474 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.525 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] resizing rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.621 2 DEBUG nova.policy [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cbbd2bff5ed749af8443f40670db21e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0fb52febddf74ed0a2a55eb14b67cd8f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.633 2 DEBUG nova.objects.instance [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lazy-loading 'migration_context' on Instance uuid 92c7bc30-8758-43e7-804c-9e0ef04a4a77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.661 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.661 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Ensure instance console log exists: /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.662 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.663 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:36 compute-0 nova_compute[260603]: 2025-10-02 08:36:36.664 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:36 compute-0 ceph-mon[74477]: pgmap v1698: 305 pgs: 305 active+clean; 123 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 9.7 KiB/s wr, 43 op/s
Oct 02 08:36:37 compute-0 nova_compute[260603]: 2025-10-02 08:36:37.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 137 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 486 KiB/s wr, 57 op/s
Oct 02 08:36:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:37 compute-0 nova_compute[260603]: 2025-10-02 08:36:37.448 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Successfully created port: bac9c5c0-ed2b-471f-b459-79324411faf4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.287 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Successfully updated port: bac9c5c0-ed2b-471f-b459-79324411faf4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.311 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.311 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquired lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.312 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.512 2 DEBUG nova.compute.manager [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-changed-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.513 2 DEBUG nova.compute.manager [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Refreshing instance network info cache due to event network-changed-bac9c5c0-ed2b-471f-b459-79324411faf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.514 2 DEBUG oslo_concurrency.lockutils [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008500169914371296 of space, bias 1.0, pg target 0.2550050974311389 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:36:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:36:38 compute-0 nova_compute[260603]: 2025-10-02 08:36:38.855 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:36:38 compute-0 ceph-mon[74477]: pgmap v1699: 305 pgs: 305 active+clean; 137 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 486 KiB/s wr, 57 op/s
Oct 02 08:36:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.468 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Updating instance_info_cache with network_info: [{"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.496 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Releasing lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.497 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance network_info: |[{"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.497 2 DEBUG oslo_concurrency.lockutils [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.498 2 DEBUG nova.network.neutron [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Refreshing network info cache for port bac9c5c0-ed2b-471f-b459-79324411faf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.503 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start _get_guest_xml network_info=[{"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.509 2 WARNING nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.514 2 DEBUG nova.virt.libvirt.host [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.515 2 DEBUG nova.virt.libvirt.host [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.519 2 DEBUG nova.virt.libvirt.host [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.520 2 DEBUG nova.virt.libvirt.host [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.522 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.522 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.523 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.524 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.524 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.525 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.525 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.526 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.527 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.527 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.528 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.528 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:36:40 compute-0 nova_compute[260603]: 2025-10-02 08:36:40.534 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:36:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4158205129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:40 compute-0 ceph-mon[74477]: pgmap v1700: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.012 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.034 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.038 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 02 08:36:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:36:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901209340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.470 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.473 2 DEBUG nova.virt.libvirt.vif [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:36:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-527322298',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-527322298',id=90,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fb52febddf74ed0a2a55eb14b67cd8f',ramdisk_id='',reservation_id='r-epjg0x69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1766965232',owner_user_name='tempest-ServerTagsTestJSON-1766965232-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:36:35Z,user_data=None,user_id='cbbd2bff5ed749af8443f40670db21e1',uuid=92c7bc30-8758-43e7-804c-9e0ef04a4a77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.474 2 DEBUG nova.network.os_vif_util [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converting VIF {"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.476 2 DEBUG nova.network.os_vif_util [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.478 2 DEBUG nova.objects.instance [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lazy-loading 'pci_devices' on Instance uuid 92c7bc30-8758-43e7-804c-9e0ef04a4a77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:36:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7895 writes, 35K keys, 7895 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7895 writes, 7895 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1573 writes, 7355 keys, 1573 commit groups, 1.0 writes per commit group, ingest: 9.40 MB, 0.02 MB/s
                                           Interval WAL: 1573 writes, 1573 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    145.6      0.29              0.16        21    0.014       0      0       0.0       0.0
                                             L6      1/0    9.30 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.6    201.9    165.8      0.93              0.52        20    0.046    101K    11K       0.0       0.0
                                            Sum      1/0    9.30 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.6    153.3    161.0      1.22              0.68        41    0.030    101K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   5.7    176.1    179.7      0.29              0.17        10    0.029     31K   3133       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    201.9    165.8      0.93              0.52        20    0.046    101K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    148.1      0.29              0.16        20    0.014       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.042, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.07 MB/s write, 0.18 GB read, 0.06 MB/s read, 1.2 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 22.05 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000222 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1439,21.26 MB,6.99214%) FilterBlock(42,291.92 KB,0.0937763%) IndexBlock(42,522.70 KB,0.167912%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.509 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <uuid>92c7bc30-8758-43e7-804c-9e0ef04a4a77</uuid>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <name>instance-0000005a</name>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerTagsTestJSON-server-527322298</nova:name>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:36:40</nova:creationTime>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <nova:user uuid="cbbd2bff5ed749af8443f40670db21e1">tempest-ServerTagsTestJSON-1766965232-project-member</nova:user>
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <nova:project uuid="0fb52febddf74ed0a2a55eb14b67cd8f">tempest-ServerTagsTestJSON-1766965232</nova:project>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <nova:port uuid="bac9c5c0-ed2b-471f-b459-79324411faf4">
Oct 02 08:36:41 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <system>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <entry name="serial">92c7bc30-8758-43e7-804c-9e0ef04a4a77</entry>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <entry name="uuid">92c7bc30-8758-43e7-804c-9e0ef04a4a77</entry>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </system>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <os>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   </os>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <features>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   </features>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk">
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config">
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:36:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:2a:8c:25"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <target dev="tapbac9c5c0-ed"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/console.log" append="off"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <video>
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </video>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:36:41 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:36:41 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:36:41 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:36:41 compute-0 nova_compute[260603]: </domain>
Oct 02 08:36:41 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.510 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Preparing to wait for external event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.511 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.511 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.512 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.513 2 DEBUG nova.virt.libvirt.vif [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:36:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-527322298',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-527322298',id=90,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fb52febddf74ed0a2a55eb14b67cd8f',ramdisk_id='',reservation_id='r-epjg0x69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1766965232',owner_user_name='tempest-ServerTagsTestJSON-1766965232-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:36:35Z,user_data=None,user_id='cbbd2bff5ed749af8443f40670db21e1',uuid=92c7bc30-8758-43e7-804c-9e0ef04a4a77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.514 2 DEBUG nova.network.os_vif_util [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converting VIF {"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.515 2 DEBUG nova.network.os_vif_util [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.516 2 DEBUG os_vif [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbac9c5c0-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbac9c5c0-ed, col_values=(('external_ids', {'iface-id': 'bac9c5c0-ed2b-471f-b459-79324411faf4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:8c:25', 'vm-uuid': '92c7bc30-8758-43e7-804c-9e0ef04a4a77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:41 compute-0 NetworkManager[45129]: <info>  [1759394201.5283] manager: (tapbac9c5c0-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.541 2 INFO os_vif [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed')
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.608 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.609 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.609 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] No VIF found with MAC fa:16:3e:2a:8c:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.610 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Using config drive
Oct 02 08:36:41 compute-0 nova_compute[260603]: 2025-10-02 08:36:41.645 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4158205129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3901209340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:36:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:42 compute-0 nova_compute[260603]: 2025-10-02 08:36:42.719 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Creating config drive at /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config
Oct 02 08:36:42 compute-0 nova_compute[260603]: 2025-10-02 08:36:42.726 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6knqjm7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:42 compute-0 nova_compute[260603]: 2025-10-02 08:36:42.894 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6knqjm7" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:42 compute-0 nova_compute[260603]: 2025-10-02 08:36:42.922 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:42 compute-0 nova_compute[260603]: 2025-10-02 08:36:42.926 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:43 compute-0 ceph-mon[74477]: pgmap v1701: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.129 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.131 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Deleting local config drive /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config because it was imported into RBD.
Oct 02 08:36:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 02 08:36:43 compute-0 kernel: tapbac9c5c0-ed: entered promiscuous mode
Oct 02 08:36:43 compute-0 NetworkManager[45129]: <info>  [1759394203.2059] manager: (tapbac9c5c0-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:43 compute-0 ovn_controller[152344]: 2025-10-02T08:36:43Z|00876|binding|INFO|Claiming lport bac9c5c0-ed2b-471f-b459-79324411faf4 for this chassis.
Oct 02 08:36:43 compute-0 ovn_controller[152344]: 2025-10-02T08:36:43Z|00877|binding|INFO|bac9c5c0-ed2b-471f-b459-79324411faf4: Claiming fa:16:3e:2a:8c:25 10.100.0.7
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.217 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:8c:25 10.100.0.7'], port_security=['fa:16:3e:2a:8c:25 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '92c7bc30-8758-43e7-804c-9e0ef04a4a77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fb52febddf74ed0a2a55eb14b67cd8f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f2de6c7-a80e-44b4-89d4-b4ed8f0a0be3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0e85455-5dea-43e0-94fe-8e8ae5134610, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bac9c5c0-ed2b-471f-b459-79324411faf4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.220 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bac9c5c0-ed2b-471f-b459-79324411faf4 in datapath 7312fb31-cf2e-459a-94af-ddc1a56ed03f bound to our chassis
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.223 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7312fb31-cf2e-459a-94af-ddc1a56ed03f
Oct 02 08:36:43 compute-0 ovn_controller[152344]: 2025-10-02T08:36:43Z|00878|binding|INFO|Setting lport bac9c5c0-ed2b-471f-b459-79324411faf4 up in Southbound
Oct 02 08:36:43 compute-0 ovn_controller[152344]: 2025-10-02T08:36:43Z|00879|binding|INFO|Setting lport bac9c5c0-ed2b-471f-b459-79324411faf4 ovn-installed in OVS
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.243 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[860363f6-69c0-4cca-97e1-e7a96b42a399]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.244 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7312fb31-c1 in ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.247 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7312fb31-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.247 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29cae41a-2c83-40dd-9361-b1acfe5989ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.248 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a119b67-9c13-4c23-be75-1d2d91703081]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.262 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[475d6f72-16d5-4a86-8b51-3802686333f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 systemd-machined[214636]: New machine qemu-109-instance-0000005a.
Oct 02 08:36:43 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-0000005a.
Oct 02 08:36:43 compute-0 systemd-udevd[348030]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.296 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9f34ef4a-e641-47ec-bca7-debd3d0bf75d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 NetworkManager[45129]: <info>  [1759394203.3083] device (tapbac9c5c0-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:36:43 compute-0 NetworkManager[45129]: <info>  [1759394203.3111] device (tapbac9c5c0-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.337 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a9b863-1541-4217-9961-c952cf9b0e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 NetworkManager[45129]: <info>  [1759394203.3479] manager: (tap7312fb31-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.346 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6dae86-5348-4717-b013-c60951920944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.390 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[377a79e0-64a6-42fc-a9ad-bcc99f554326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.392 2 DEBUG nova.network.neutron [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Updated VIF entry in instance network info cache for port bac9c5c0-ed2b-471f-b459-79324411faf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.393 2 DEBUG nova.network.neutron [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Updating instance_info_cache with network_info: [{"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.394 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[24fcb211-8b8d-4c55-808c-83846a068422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.421 2 DEBUG oslo_concurrency.lockutils [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:36:43 compute-0 NetworkManager[45129]: <info>  [1759394203.4233] device (tap7312fb31-c0): carrier: link connected
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.430 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef715973-4371-4ee9-ae66-d4ddb7afa7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.456 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b6f3df-da36-4afb-95e3-f15ae9bd0a4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7312fb31-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:f5:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508202, 'reachable_time': 17939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348060, 'error': None, 'target': 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df787fb8-f28d-40a0-8918-8ea9ec2bc1d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:f5ef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508202, 'tstamp': 508202}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348061, 'error': None, 'target': 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.500 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1b54c5-b6f8-4bcf-b469-da4459fd2cbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7312fb31-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:f5:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508202, 'reachable_time': 17939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348062, 'error': None, 'target': 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.542 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5b1794-e6cb-4d3a-ad66-df01d31c2f20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.621 2 DEBUG nova.compute.manager [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.622 2 DEBUG oslo_concurrency.lockutils [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.622 2 DEBUG oslo_concurrency.lockutils [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.622 2 DEBUG oslo_concurrency.lockutils [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.623 2 DEBUG nova.compute.manager [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Processing event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.643 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55fb9a1b-644d-4d59-be6e-fad5f8abe05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.646 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7312fb31-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.647 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.648 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7312fb31-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:43 compute-0 kernel: tap7312fb31-c0: entered promiscuous mode
Oct 02 08:36:43 compute-0 NetworkManager[45129]: <info>  [1759394203.6519] manager: (tap7312fb31-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.660 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7312fb31-c0, col_values=(('external_ids', {'iface-id': 'c49b84dc-2380-45f9-ba6d-c1552a7a185c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:43 compute-0 ovn_controller[152344]: 2025-10-02T08:36:43Z|00880|binding|INFO|Releasing lport c49b84dc-2380-45f9-ba6d-c1552a7a185c from this chassis (sb_readonly=0)
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.665 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7312fb31-cf2e-459a-94af-ddc1a56ed03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7312fb31-cf2e-459a-94af-ddc1a56ed03f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d856b31d-ca54-400c-bfe8-0cc0a6ff08c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.668 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-7312fb31-cf2e-459a-94af-ddc1a56ed03f
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/7312fb31-cf2e-459a-94af-ddc1a56ed03f.pid.haproxy
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 7312fb31-cf2e-459a-94af-ddc1a56ed03f
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:36:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.670 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'env', 'PROCESS_TAG=haproxy-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7312fb31-cf2e-459a-94af-ddc1a56ed03f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:36:43 compute-0 nova_compute[260603]: 2025-10-02 08:36:43.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:44 compute-0 podman[348137]: 2025-10-02 08:36:44.137903029 +0000 UTC m=+0.077365486 container create fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 08:36:44 compute-0 systemd[1]: Started libpod-conmon-fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda.scope.
Oct 02 08:36:44 compute-0 podman[348137]: 2025-10-02 08:36:44.093663459 +0000 UTC m=+0.033126006 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:36:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:36:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d296f85a914a5844763b98d0871ccd31024d21220b2eea41f5c25dcb94cf2e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:36:44 compute-0 podman[348137]: 2025-10-02 08:36:44.229752269 +0000 UTC m=+0.169214756 container init fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:36:44 compute-0 podman[348137]: 2025-10-02 08:36:44.235636426 +0000 UTC m=+0.175098873 container start fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 08:36:44 compute-0 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [NOTICE]   (348156) : New worker (348158) forked
Oct 02 08:36:44 compute-0 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [NOTICE]   (348156) : Loading success.
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.387 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.389 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394204.3862288, 92c7bc30-8758-43e7-804c-9e0ef04a4a77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.390 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] VM Started (Lifecycle Event)
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.397 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.404 2 INFO nova.virt.libvirt.driver [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance spawned successfully.
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.405 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.417 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.431 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.435 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.436 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.436 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.436 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.437 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.437 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.465 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.466 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394204.3867424, 92c7bc30-8758-43e7-804c-9e0ef04a4a77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.466 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] VM Paused (Lifecycle Event)
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.501 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.506 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394204.3959439, 92c7bc30-8758-43e7-804c-9e0ef04a4a77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.506 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] VM Resumed (Lifecycle Event)
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.516 2 INFO nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Took 8.54 seconds to spawn the instance on the hypervisor.
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.516 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.525 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.529 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.552 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.578 2 INFO nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Took 9.76 seconds to build instance.
Oct 02 08:36:44 compute-0 nova_compute[260603]: 2025-10-02 08:36:44.597 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:45 compute-0 ceph-mon[74477]: pgmap v1702: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 02 08:36:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:36:45 compute-0 nova_compute[260603]: 2025-10-02 08:36:45.774 2 DEBUG nova.compute.manager [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:45 compute-0 nova_compute[260603]: 2025-10-02 08:36:45.774 2 DEBUG oslo_concurrency.lockutils [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:45 compute-0 nova_compute[260603]: 2025-10-02 08:36:45.774 2 DEBUG oslo_concurrency.lockutils [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:45 compute-0 nova_compute[260603]: 2025-10-02 08:36:45.774 2 DEBUG oslo_concurrency.lockutils [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:45 compute-0 nova_compute[260603]: 2025-10-02 08:36:45.775 2 DEBUG nova.compute.manager [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] No waiting events found dispatching network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:45 compute-0 nova_compute[260603]: 2025-10-02 08:36:45.775 2 WARNING nova.compute.manager [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received unexpected event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 for instance with vm_state active and task_state None.
Oct 02 08:36:46 compute-0 ceph-mon[74477]: pgmap v1703: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:36:46 compute-0 nova_compute[260603]: 2025-10-02 08:36:46.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 557 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 02 08:36:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:48 compute-0 ceph-mon[74477]: pgmap v1704: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 557 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 02 08:36:48 compute-0 nova_compute[260603]: 2025-10-02 08:36:48.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.433 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.434 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.434 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.435 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.435 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.437 2 INFO nova.compute.manager [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Terminating instance
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.439 2 DEBUG nova.compute.manager [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:36:49 compute-0 kernel: tapbac9c5c0-ed (unregistering): left promiscuous mode
Oct 02 08:36:49 compute-0 NetworkManager[45129]: <info>  [1759394209.4999] device (tapbac9c5c0-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:49 compute-0 ovn_controller[152344]: 2025-10-02T08:36:49Z|00881|binding|INFO|Releasing lport bac9c5c0-ed2b-471f-b459-79324411faf4 from this chassis (sb_readonly=0)
Oct 02 08:36:49 compute-0 ovn_controller[152344]: 2025-10-02T08:36:49Z|00882|binding|INFO|Setting lport bac9c5c0-ed2b-471f-b459-79324411faf4 down in Southbound
Oct 02 08:36:49 compute-0 ovn_controller[152344]: 2025-10-02T08:36:49Z|00883|binding|INFO|Removing iface tapbac9c5c0-ed ovn-installed in OVS
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.523 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:8c:25 10.100.0.7'], port_security=['fa:16:3e:2a:8c:25 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '92c7bc30-8758-43e7-804c-9e0ef04a4a77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fb52febddf74ed0a2a55eb14b67cd8f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f2de6c7-a80e-44b4-89d4-b4ed8f0a0be3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0e85455-5dea-43e0-94fe-8e8ae5134610, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bac9c5c0-ed2b-471f-b459-79324411faf4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.526 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bac9c5c0-ed2b-471f-b459-79324411faf4 in datapath 7312fb31-cf2e-459a-94af-ddc1a56ed03f unbound from our chassis
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.528 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7312fb31-cf2e-459a-94af-ddc1a56ed03f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.529 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[716f8a5b-7d08-4da0-96e1-3468f37306d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.530 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f namespace which is not needed anymore
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:49 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct 02 08:36:49 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005a.scope: Consumed 6.206s CPU time.
Oct 02 08:36:49 compute-0 systemd-machined[214636]: Machine qemu-109-instance-0000005a terminated.
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.673 2 INFO nova.virt.libvirt.driver [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance destroyed successfully.
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.673 2 DEBUG nova.objects.instance [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lazy-loading 'resources' on Instance uuid 92c7bc30-8758-43e7-804c-9e0ef04a4a77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.687 2 DEBUG nova.virt.libvirt.vif [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:36:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-527322298',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-527322298',id=90,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:36:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0fb52febddf74ed0a2a55eb14b67cd8f',ramdisk_id='',reservation_id='r-epjg0x69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1766965232',owner_user_name='tempest-ServerTagsTestJSON-1766965232-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:36:44Z,user_data=None,user_id='cbbd2bff5ed749af8443f40670db21e1',uuid=92c7bc30-8758-43e7-804c-9e0ef04a4a77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.687 2 DEBUG nova.network.os_vif_util [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converting VIF {"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.688 2 DEBUG nova.network.os_vif_util [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.688 2 DEBUG os_vif [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbac9c5c0-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.735 2 INFO os_vif [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed')
Oct 02 08:36:49 compute-0 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [NOTICE]   (348156) : haproxy version is 2.8.14-c23fe91
Oct 02 08:36:49 compute-0 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [NOTICE]   (348156) : path to executable is /usr/sbin/haproxy
Oct 02 08:36:49 compute-0 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [WARNING]  (348156) : Exiting Master process...
Oct 02 08:36:49 compute-0 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [ALERT]    (348156) : Current worker (348158) exited with code 143 (Terminated)
Oct 02 08:36:49 compute-0 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [WARNING]  (348156) : All workers exited. Exiting... (0)
Oct 02 08:36:49 compute-0 systemd[1]: libpod-fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda.scope: Deactivated successfully.
Oct 02 08:36:49 compute-0 podman[348194]: 2025-10-02 08:36:49.777389207 +0000 UTC m=+0.097138990 container died fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda-userdata-shm.mount: Deactivated successfully.
Oct 02 08:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d296f85a914a5844763b98d0871ccd31024d21220b2eea41f5c25dcb94cf2e1-merged.mount: Deactivated successfully.
Oct 02 08:36:49 compute-0 podman[348194]: 2025-10-02 08:36:49.836228416 +0000 UTC m=+0.155978179 container cleanup fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:36:49 compute-0 systemd[1]: libpod-conmon-fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda.scope: Deactivated successfully.
Oct 02 08:36:49 compute-0 podman[348248]: 2025-10-02 08:36:49.943346496 +0000 UTC m=+0.076156481 container remove fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.957 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[845bd698-aa58-456a-9ce7-e871f57122bb]: (4, ('Thu Oct  2 08:36:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f (fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda)\nfb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda\nThu Oct  2 08:36:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f (fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda)\nfb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.960 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[298418ab-ba39-4827-bf07-c5a816097b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.962 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7312fb31-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:49 compute-0 kernel: tap7312fb31-c0: left promiscuous mode
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.969 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7ce7d0-0fd4-450f-8c8e-3e77a154f5a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:49 compute-0 nova_compute[260603]: 2025-10-02 08:36:49.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.000 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba7d1e0-e0d0-4c63-b9cd-b3c282d9da09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.002 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8362bafd-29e6-4d4e-84cc-f56cf3ad53bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.024 2 DEBUG nova.compute.manager [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-unplugged-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.024 2 DEBUG oslo_concurrency.lockutils [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.024 2 DEBUG oslo_concurrency.lockutils [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.024 2 DEBUG oslo_concurrency.lockutils [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.025 2 DEBUG nova.compute.manager [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] No waiting events found dispatching network-vif-unplugged-bac9c5c0-ed2b-471f-b459-79324411faf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.025 2 DEBUG nova.compute.manager [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-unplugged-bac9c5c0-ed2b-471f-b459-79324411faf4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:36:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.028 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[415d8758-84a4-44e9-bc6d-daa4f1f0d051]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508193, 'reachable_time': 25566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348264, 'error': None, 'target': 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.031 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:36:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.031 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[319e5e74-594c-4822-849a-23587d15bf11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:36:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d7312fb31\x2dcf2e\x2d459a\x2d94af\x2dddc1a56ed03f.mount: Deactivated successfully.
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.152 2 INFO nova.virt.libvirt.driver [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Deleting instance files /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77_del
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.153 2 INFO nova.virt.libvirt.driver [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Deletion of /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77_del complete
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.206 2 INFO nova.compute.manager [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.206 2 DEBUG oslo.service.loopingcall [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.206 2 DEBUG nova.compute.manager [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.206 2 DEBUG nova.network.neutron [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:36:50 compute-0 ceph-mon[74477]: pgmap v1705: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.929 2 DEBUG nova.network.neutron [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.944 2 INFO nova.compute.manager [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Took 0.74 seconds to deallocate network for instance.
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.988 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:50 compute-0 nova_compute[260603]: 2025-10-02 08:36:50.988 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:51 compute-0 nova_compute[260603]: 2025-10-02 08:36:51.067 2 DEBUG oslo_concurrency.processutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:51 compute-0 nova_compute[260603]: 2025-10-02 08:36:51.112 2 DEBUG nova.compute.manager [req-c7eb78dc-54df-4c70-8bd4-33dd4a5d7159 req-025ce5ed-df52-430a-b11f-d3709e7c3e63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-deleted-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Oct 02 08:36:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:36:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448324435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:51 compute-0 nova_compute[260603]: 2025-10-02 08:36:51.531 2 DEBUG oslo_concurrency.processutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:51 compute-0 nova_compute[260603]: 2025-10-02 08:36:51.545 2 DEBUG nova.compute.provider_tree [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:36:51 compute-0 nova_compute[260603]: 2025-10-02 08:36:51.574 2 DEBUG nova.scheduler.client.report [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:36:51 compute-0 nova_compute[260603]: 2025-10-02 08:36:51.604 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:51 compute-0 nova_compute[260603]: 2025-10-02 08:36:51.644 2 INFO nova.scheduler.client.report [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Deleted allocations for instance 92c7bc30-8758-43e7-804c-9e0ef04a4a77
Oct 02 08:36:51 compute-0 nova_compute[260603]: 2025-10-02 08:36:51.729 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:52 compute-0 ceph-mon[74477]: pgmap v1706: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Oct 02 08:36:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/448324435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:52 compute-0 nova_compute[260603]: 2025-10-02 08:36:52.238 2 DEBUG nova.compute.manager [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:36:52 compute-0 nova_compute[260603]: 2025-10-02 08:36:52.239 2 DEBUG oslo_concurrency.lockutils [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:52 compute-0 nova_compute[260603]: 2025-10-02 08:36:52.239 2 DEBUG oslo_concurrency.lockutils [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:52 compute-0 nova_compute[260603]: 2025-10-02 08:36:52.240 2 DEBUG oslo_concurrency.lockutils [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:52 compute-0 nova_compute[260603]: 2025-10-02 08:36:52.240 2 DEBUG nova.compute.manager [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] No waiting events found dispatching network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:36:52 compute-0 nova_compute[260603]: 2025-10-02 08:36:52.241 2 WARNING nova.compute.manager [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received unexpected event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 for instance with vm_state deleted and task_state None.
Oct 02 08:36:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 101 op/s
Oct 02 08:36:53 compute-0 nova_compute[260603]: 2025-10-02 08:36:53.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:54 compute-0 podman[348288]: 2025-10-02 08:36:54.048275771 +0000 UTC m=+0.098974216 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:36:54 compute-0 podman[348287]: 2025-10-02 08:36:54.137378429 +0000 UTC m=+0.190679392 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 08:36:54 compute-0 ceph-mon[74477]: pgmap v1707: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 101 op/s
Oct 02 08:36:54 compute-0 nova_compute[260603]: 2025-10-02 08:36:54.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:55 compute-0 ovn_controller[152344]: 2025-10-02T08:36:55Z|00884|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:36:55 compute-0 nova_compute[260603]: 2025-10-02 08:36:55.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Oct 02 08:36:56 compute-0 ceph-mon[74477]: pgmap v1708: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Oct 02 08:36:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Oct 02 08:36:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:36:57 compute-0 nova_compute[260603]: 2025-10-02 08:36:57.528 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:57 compute-0 nova_compute[260603]: 2025-10-02 08:36:57.529 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:57 compute-0 nova_compute[260603]: 2025-10-02 08:36:57.549 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:36:57 compute-0 nova_compute[260603]: 2025-10-02 08:36:57.620 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:57 compute-0 nova_compute[260603]: 2025-10-02 08:36:57.621 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:57 compute-0 nova_compute[260603]: 2025-10-02 08:36:57.632 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:36:57 compute-0 nova_compute[260603]: 2025-10-02 08:36:57.632 2 INFO nova.compute.claims [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:36:57 compute-0 nova_compute[260603]: 2025-10-02 08:36:57.749 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:36:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:36:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3238598947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.237 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.244 2 DEBUG nova.compute.provider_tree [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:36:58 compute-0 ceph-mon[74477]: pgmap v1709: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Oct 02 08:36:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3238598947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.275 2 DEBUG nova.scheduler.client.report [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.297 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.297 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.351 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.352 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.386 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.408 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.501 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.503 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.503 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating image(s)
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.525 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.548 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.572 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.575 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.677 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.679 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.680 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.680 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.715 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.720 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7392e1c1-40db-4b57-8ed8-278f89402f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.771 2 DEBUG nova.policy [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb1b3a5ae9514259b27a0b7a28f23cda', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:36:58 compute-0 nova_compute[260603]: 2025-10-02 08:36:58.985 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7392e1c1-40db-4b57-8ed8-278f89402f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.042 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] resizing rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:36:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 79 op/s
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.151 2 DEBUG nova.objects.instance [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'migration_context' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.200 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.201 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Ensure instance console log exists: /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.201 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.202 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.202 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:36:59 compute-0 nova_compute[260603]: 2025-10-02 08:36:59.875 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Successfully created port: d38707db-5f0b-4ebe-80b9-ed84810b2c21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:37:00 compute-0 ceph-mon[74477]: pgmap v1710: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 79 op/s
Oct 02 08:37:01 compute-0 podman[348523]: 2025-10-02 08:37:01.033165077 +0000 UTC m=+0.085573403 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 08:37:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 08:37:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:02 compute-0 ceph-mon[74477]: pgmap v1711: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.326 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Successfully updated port: d38707db-5f0b-4ebe-80b9-ed84810b2c21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.348 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.348 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.349 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.431 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.431 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.456 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.567 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.568 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.576 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.576 2 INFO nova.compute.claims [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.596 2 DEBUG nova.compute.manager [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-changed-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.597 2 DEBUG nova.compute.manager [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Refreshing instance network info cache due to event network-changed-d38707db-5f0b-4ebe-80b9-ed84810b2c21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.597 2 DEBUG oslo_concurrency.lockutils [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.760 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:02 compute-0 nova_compute[260603]: 2025-10-02 08:37:02.861 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:37:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 02 08:37:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:37:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/343413180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.248 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.256 2 DEBUG nova.compute.provider_tree [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:37:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 02 08:37:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 02 08:37:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/343413180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:03 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.316 2 DEBUG nova.scheduler.client.report [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.353 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.354 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.413 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.438 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.460 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.576 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.578 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.579 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating image(s)
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.615 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.645 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.678 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.685 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.795 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.797 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.797 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.798 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.825 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.829 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.922 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Updating instance_info_cache with network_info: [{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.979 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.980 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance network_info: |[{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.981 2 DEBUG oslo_concurrency.lockutils [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.982 2 DEBUG nova.network.neutron [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Refreshing network info cache for port d38707db-5f0b-4ebe-80b9-ed84810b2c21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.988 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start _get_guest_xml network_info=[{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:37:03 compute-0 nova_compute[260603]: 2025-10-02 08:37:03.999 2 WARNING nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.006 2 DEBUG nova.virt.libvirt.host [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.007 2 DEBUG nova.virt.libvirt.host [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.017 2 DEBUG nova.virt.libvirt.host [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.017 2 DEBUG nova.virt.libvirt.host [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.018 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.018 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.019 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.019 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.019 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.020 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.020 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.020 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.021 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.021 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.021 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.021 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.025 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:04 compute-0 podman[348656]: 2025-10-02 08:37:04.039499483 +0000 UTC m=+0.107490882 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.118 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.195 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] resizing rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:37:04 compute-0 ceph-mon[74477]: pgmap v1712: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 02 08:37:04 compute-0 ceph-mon[74477]: osdmap e235: 3 total, 3 up, 3 in
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.294 2 DEBUG nova.objects.instance [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'migration_context' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.314 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.314 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Ensure instance console log exists: /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.315 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.315 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.315 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.317 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.322 2 WARNING nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.330 2 DEBUG nova.virt.libvirt.host [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.330 2 DEBUG nova.virt.libvirt.host [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.333 2 DEBUG nova.virt.libvirt.host [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.334 2 DEBUG nova.virt.libvirt.host [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.334 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.334 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.335 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.335 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.336 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.336 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.336 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.336 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.337 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.337 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.337 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.337 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.341 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287600729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.514 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.552 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.563 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.672 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394209.6714199, 92c7bc30-8758-43e7-804c-9e0ef04a4a77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.673 2 INFO nova.compute.manager [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] VM Stopped (Lifecycle Event)
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.712 2 DEBUG nova.compute.manager [None req-f53031e8-781b-44a6-b839-468693ff08d0 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2191010326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.771 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.793 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:04 compute-0 nova_compute[260603]: 2025-10-02 08:37:04.796 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706953851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.025 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.028 2 DEBUG nova.virt.libvirt.vif [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-tempest.common.compute-instance-1274366950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:36:58Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.029 2 DEBUG nova.network.os_vif_util [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.030 2 DEBUG nova.network.os_vif_util [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.032 2 DEBUG nova.objects.instance [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Oct 02 08:37:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/248856262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.203 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.205 2 DEBUG nova.objects.instance [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'pci_devices' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.255 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <uuid>496c1bb3-a098-41ab-ac67-5a6a89a0de53</uuid>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <name>instance-0000005c</name>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerShowV254Test-server-2038843268</nova:name>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:37:04</nova:creationTime>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:user uuid="9aaa9a9ec2564ed3a346216a96231feb">tempest-ServerShowV254Test-348861658-project-member</nova:user>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:project uuid="850f950eaa1d49239f8913dfee5dc44e">tempest-ServerShowV254Test-348861658</nova:project>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <system>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="serial">496c1bb3-a098-41ab-ac67-5a6a89a0de53</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="uuid">496c1bb3-a098-41ab-ac67-5a6a89a0de53</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </system>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <os>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </os>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <features>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </features>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/console.log" append="off"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <video>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </video>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:37:05 compute-0 nova_compute[260603]: </domain>
Oct 02 08:37:05 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.260 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <uuid>7392e1c1-40db-4b57-8ed8-278f89402f65</uuid>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <name>instance-0000005b</name>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:name>tempest-tempest.common.compute-instance-1274366950</nova:name>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:37:04</nova:creationTime>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <nova:port uuid="d38707db-5f0b-4ebe-80b9-ed84810b2c21">
Oct 02 08:37:05 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <system>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="serial">7392e1c1-40db-4b57-8ed8-278f89402f65</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="uuid">7392e1c1-40db-4b57-8ed8-278f89402f65</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </system>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <os>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </os>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <features>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </features>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7392e1c1-40db-4b57-8ed8-278f89402f65_disk">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:05 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:fe:e4:85"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <target dev="tapd38707db-5f"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/console.log" append="off"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <video>
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </video>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:37:05 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:37:05 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:37:05 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:37:05 compute-0 nova_compute[260603]: </domain>
Oct 02 08:37:05 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.263 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Preparing to wait for external event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.264 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.265 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.265 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.266 2 DEBUG nova.virt.libvirt.vif [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-tempest.common.compute-instance-1274366950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerAc
tionsTestJSON-1407264397-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:36:58Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.267 2 DEBUG nova.network.os_vif_util [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.268 2 DEBUG nova.network.os_vif_util [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.269 2 DEBUG os_vif [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.276 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd38707db-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3287600729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2191010326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3706953851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/248856262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd38707db-5f, col_values=(('external_ids', {'iface-id': 'd38707db-5f0b-4ebe-80b9-ed84810b2c21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:e4:85', 'vm-uuid': '7392e1c1-40db-4b57-8ed8-278f89402f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:05 compute-0 NetworkManager[45129]: <info>  [1759394225.2911] manager: (tapd38707db-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.302 2 INFO os_vif [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f')
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.369 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.370 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.370 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Using config drive
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.388 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.398 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.399 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.399 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No VIF found with MAC fa:16:3e:fe:e4:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.399 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Using config drive
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.418 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.742 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating config drive at /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.750 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnid9j9j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.883 2 DEBUG nova.network.neutron [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Updated VIF entry in instance network info cache for port d38707db-5f0b-4ebe-80b9-ed84810b2c21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.884 2 DEBUG nova.network.neutron [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Updating instance_info_cache with network_info: [{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.921 2 DEBUG oslo_concurrency.lockutils [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.923 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnid9j9j" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.951 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:05 compute-0 nova_compute[260603]: 2025-10-02 08:37:05.955 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.117 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating config drive at /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.127 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptcupm8gt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.171 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.173 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deleting local config drive /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config because it was imported into RBD.
Oct 02 08:37:06 compute-0 systemd-machined[214636]: New machine qemu-110-instance-0000005c.
Oct 02 08:37:06 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-0000005c.
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.279 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptcupm8gt" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 02 08:37:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 02 08:37:06 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 02 08:37:06 compute-0 ceph-mon[74477]: pgmap v1714: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.347 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.352 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.532 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.534 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deleting local config drive /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config because it was imported into RBD.
Oct 02 08:37:06 compute-0 kernel: tapd38707db-5f: entered promiscuous mode
Oct 02 08:37:06 compute-0 NetworkManager[45129]: <info>  [1759394226.6193] manager: (tapd38707db-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Oct 02 08:37:06 compute-0 ovn_controller[152344]: 2025-10-02T08:37:06Z|00885|binding|INFO|Claiming lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 for this chassis.
Oct 02 08:37:06 compute-0 ovn_controller[152344]: 2025-10-02T08:37:06Z|00886|binding|INFO|d38707db-5f0b-4ebe-80b9-ed84810b2c21: Claiming fa:16:3e:fe:e4:85 10.100.0.10
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.639 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.641 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.642 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:06 compute-0 ovn_controller[152344]: 2025-10-02T08:37:06Z|00887|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 ovn-installed in OVS
Oct 02 08:37:06 compute-0 ovn_controller[152344]: 2025-10-02T08:37:06Z|00888|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 up in Southbound
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:06 compute-0 systemd-machined[214636]: New machine qemu-111-instance-0000005b.
Oct 02 08:37:06 compute-0 systemd-udevd[349023]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.676 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19f01d48-7bc0-4926-becb-7d469fa4c194]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:06 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-0000005b.
Oct 02 08:37:06 compute-0 NetworkManager[45129]: <info>  [1759394226.6869] device (tapd38707db-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:37:06 compute-0 NetworkManager[45129]: <info>  [1759394226.6877] device (tapd38707db-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.725 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b43cd2-3b34-4ebe-8db4-2a046aeb1162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.730 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[20125f4c-bf96-4edb-b75a-62c68b5e261f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.767 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c51c3f7e-e353-438e-b531-5d780cb1fd45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.792 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[925d3d1c-3713-41e2-ba2a-f6242add3954]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349036, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[374c748a-7c42-4dda-8239-bc9f4d6af9c9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349037, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349037, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.816 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.820 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.820 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.821 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.822 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.979 2 DEBUG nova.compute.manager [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.980 2 DEBUG oslo_concurrency.lockutils [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.980 2 DEBUG oslo_concurrency.lockutils [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.981 2 DEBUG oslo_concurrency.lockutils [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:06 compute-0 nova_compute[260603]: 2025-10-02 08:37:06.981 2 DEBUG nova.compute.manager [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Processing event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:37:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 186 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 4.1 MiB/s wr, 101 op/s
Oct 02 08:37:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:07 compute-0 ceph-mon[74477]: osdmap e236: 3 total, 3 up, 3 in
Oct 02 08:37:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:07.329 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:07.331 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.594 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.5941772, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.595 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Resumed (Lifecycle Event)
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.597 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.597 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.598 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.601 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance spawned successfully.
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.601 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.603 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.606 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance spawned successfully.
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.606 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.620 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.625 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.634 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.634 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.635 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.636 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.637 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.638 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.647 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.647 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.5954683, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.648 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Started (Lifecycle Event)
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.651 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.652 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.653 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.653 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.654 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.655 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.685 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.689 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.719 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.719 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.5955641, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.720 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Paused (Lifecycle Event)
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.730 2 INFO nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Took 4.15 seconds to spawn the instance on the hypervisor.
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.731 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.739 2 INFO nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Took 9.24 seconds to spawn the instance on the hypervisor.
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.739 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.766 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.770 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.5955925, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.771 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Started (Lifecycle Event)
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.792 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.796 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.812 2 INFO nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Took 5.27 seconds to build instance.
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.817 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.6017065, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.818 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Resumed (Lifecycle Event)
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.820 2 INFO nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Took 10.23 seconds to build instance.
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.834 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.836 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.837 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:07 compute-0 nova_compute[260603]: 2025-10-02 08:37:07.839 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 02 08:37:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 02 08:37:08 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 02 08:37:08 compute-0 ceph-mon[74477]: pgmap v1716: 305 pgs: 305 active+clean; 186 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 4.1 MiB/s wr, 101 op/s
Oct 02 08:37:08 compute-0 nova_compute[260603]: 2025-10-02 08:37:08.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 08:37:09 compute-0 nova_compute[260603]: 2025-10-02 08:37:09.296 2 DEBUG nova.compute.manager [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:09 compute-0 nova_compute[260603]: 2025-10-02 08:37:09.297 2 DEBUG oslo_concurrency.lockutils [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:09 compute-0 nova_compute[260603]: 2025-10-02 08:37:09.297 2 DEBUG oslo_concurrency.lockutils [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:09 compute-0 nova_compute[260603]: 2025-10-02 08:37:09.297 2 DEBUG oslo_concurrency.lockutils [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:09 compute-0 nova_compute[260603]: 2025-10-02 08:37:09.297 2 DEBUG nova.compute.manager [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:09 compute-0 nova_compute[260603]: 2025-10-02 08:37:09.298 2 WARNING nova.compute.manager [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state None.
Oct 02 08:37:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:09.333 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:09 compute-0 ceph-mon[74477]: osdmap e237: 3 total, 3 up, 3 in
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 02 08:37:10 compute-0 ceph-mon[74477]: pgmap v1718: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 08:37:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 02 08:37:10 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.530 2 INFO nova.compute.manager [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Rebuilding instance
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.824 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.841 2 DEBUG nova.compute.manager [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.891 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'pci_requests' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.904 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'pci_devices' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.921 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'resources' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.923 2 INFO nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Rebuilding instance
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.932 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'migration_context' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.943 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:37:10 compute-0 nova_compute[260603]: 2025-10-02 08:37:10.945 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:37:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 08:37:11 compute-0 ceph-mon[74477]: osdmap e238: 3 total, 3 up, 3 in
Oct 02 08:37:11 compute-0 nova_compute[260603]: 2025-10-02 08:37:11.576 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:11 compute-0 nova_compute[260603]: 2025-10-02 08:37:11.599 2 DEBUG nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:11 compute-0 nova_compute[260603]: 2025-10-02 08:37:11.665 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_requests' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:11 compute-0 nova_compute[260603]: 2025-10-02 08:37:11.679 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:11 compute-0 nova_compute[260603]: 2025-10-02 08:37:11.693 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:11 compute-0 nova_compute[260603]: 2025-10-02 08:37:11.704 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'migration_context' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:11 compute-0 nova_compute[260603]: 2025-10-02 08:37:11.714 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:37:11 compute-0 nova_compute[260603]: 2025-10-02 08:37:11.716 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:37:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:12 compute-0 ceph-mon[74477]: pgmap v1720: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 08:37:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 435 op/s
Oct 02 08:37:13 compute-0 nova_compute[260603]: 2025-10-02 08:37:13.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:14 compute-0 ceph-mon[74477]: pgmap v1721: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 435 op/s
Oct 02 08:37:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.3 MiB/s wr, 321 op/s
Oct 02 08:37:15 compute-0 nova_compute[260603]: 2025-10-02 08:37:15.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:16 compute-0 ceph-mon[74477]: pgmap v1722: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.3 MiB/s wr, 321 op/s
Oct 02 08:37:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.6 KiB/s wr, 242 op/s
Oct 02 08:37:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 02 08:37:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 02 08:37:17 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 02 08:37:17 compute-0 nova_compute[260603]: 2025-10-02 08:37:17.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:17 compute-0 nova_compute[260603]: 2025-10-02 08:37:17.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:37:18 compute-0 ceph-mon[74477]: pgmap v1723: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.6 KiB/s wr, 242 op/s
Oct 02 08:37:18 compute-0 ceph-mon[74477]: osdmap e239: 3 total, 3 up, 3 in
Oct 02 08:37:18 compute-0 nova_compute[260603]: 2025-10-02 08:37:18.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 221 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.2 MiB/s wr, 266 op/s
Oct 02 08:37:20 compute-0 ovn_controller[152344]: 2025-10-02T08:37:20Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:e4:85 10.100.0.10
Oct 02 08:37:20 compute-0 ovn_controller[152344]: 2025-10-02T08:37:20Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:e4:85 10.100.0.10
Oct 02 08:37:20 compute-0 ceph-mon[74477]: pgmap v1725: 305 pgs: 305 active+clean; 221 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.2 MiB/s wr, 266 op/s
Oct 02 08:37:20 compute-0 nova_compute[260603]: 2025-10-02 08:37:20.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:20 compute-0 nova_compute[260603]: 2025-10-02 08:37:20.983 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:37:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 221 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 234 op/s
Oct 02 08:37:21 compute-0 nova_compute[260603]: 2025-10-02 08:37:21.760 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:37:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:37:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2474133518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:37:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:37:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2474133518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:37:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct 02 08:37:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct 02 08:37:22 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct 02 08:37:22 compute-0 ceph-mon[74477]: pgmap v1726: 305 pgs: 305 active+clean; 221 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 234 op/s
Oct 02 08:37:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2474133518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:37:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2474133518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:37:22 compute-0 nova_compute[260603]: 2025-10-02 08:37:22.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:22 compute-0 nova_compute[260603]: 2025-10-02 08:37:22.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:37:22 compute-0 nova_compute[260603]: 2025-10-02 08:37:22.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:37:22 compute-0 nova_compute[260603]: 2025-10-02 08:37:22.720 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:37:22 compute-0 nova_compute[260603]: 2025-10-02 08:37:22.721 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:37:22 compute-0 nova_compute[260603]: 2025-10-02 08:37:22.721 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:37:22 compute-0 nova_compute[260603]: 2025-10-02 08:37:22.721 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 393 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 948 KiB/s rd, 20 MiB/s wr, 248 op/s
Oct 02 08:37:23 compute-0 ceph-mon[74477]: osdmap e240: 3 total, 3 up, 3 in
Oct 02 08:37:23 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 02 08:37:23 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005c.scope: Consumed 12.882s CPU time.
Oct 02 08:37:23 compute-0 systemd-machined[214636]: Machine qemu-110-instance-0000005c terminated.
Oct 02 08:37:23 compute-0 nova_compute[260603]: 2025-10-02 08:37:23.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.002 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance shutdown successfully after 13 seconds.
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.010 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance destroyed successfully.
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.018 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance destroyed successfully.
Oct 02 08:37:24 compute-0 kernel: tapd38707db-5f (unregistering): left promiscuous mode
Oct 02 08:37:24 compute-0 NetworkManager[45129]: <info>  [1759394244.1034] device (tapd38707db-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00889|binding|INFO|Releasing lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 from this chassis (sb_readonly=0)
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00890|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 down in Southbound
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00891|binding|INFO|Removing iface tapd38707db-5f ovn-installed in OVS
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 02 08:37:24 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005b.scope: Consumed 12.644s CPU time.
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.211 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.213 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.214 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:24 compute-0 systemd-machined[214636]: Machine qemu-111-instance-0000005b terminated.
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.235 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fb2c20-8481-49f6-8141-f651bf8d72e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 podman[349145]: 2025-10-02 08:37:24.268232941 +0000 UTC m=+0.104565482 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.282 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[be09bfdf-c164-47e1-9b51-204ddee4b6e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.286 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9811c8a9-5393-4eca-b1b5-317539df905e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct 02 08:37:24 compute-0 ceph-mon[74477]: pgmap v1728: 305 pgs: 305 active+clean; 393 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 948 KiB/s rd, 20 MiB/s wr, 248 op/s
Oct 02 08:37:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct 02 08:37:24 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.323 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e2882a7c-6336-4838-ac1e-4188ab9db160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 kernel: tapd38707db-5f: entered promiscuous mode
Oct 02 08:37:24 compute-0 NetworkManager[45129]: <info>  [1759394244.3314] manager: (tapd38707db-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Oct 02 08:37:24 compute-0 systemd-udevd[349124]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:37:24 compute-0 kernel: tapd38707db-5f (unregistering): left promiscuous mode
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00892|binding|INFO|Claiming lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 for this chassis.
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00893|binding|INFO|d38707db-5f0b-4ebe-80b9-ed84810b2c21: Claiming fa:16:3e:fe:e4:85 10.100.0.10
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.345 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.353 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e472d6-362b-4188-ba6c-9682cd4dbcd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349196, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00894|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 ovn-installed in OVS
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00895|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 up in Southbound
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00896|binding|INFO|Releasing lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 from this chassis (sb_readonly=1)
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00897|if_status|INFO|Dropped 2 log messages in last 71 seconds (most recently, 71 seconds ago) due to excessive rate
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00898|if_status|INFO|Not setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 down as sb is readonly
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00899|binding|INFO|Removing iface tapd38707db-5f ovn-installed in OVS
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00900|binding|INFO|Releasing lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 from this chassis (sb_readonly=1)
Oct 02 08:37:24 compute-0 ovn_controller[152344]: 2025-10-02T08:37:24Z|00901|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 down in Southbound
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.371 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86b66655-1442-48b1-8e5c-79fea0d55621]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349204, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349204, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.379 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.385 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.389 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.390 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.391 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.391 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.393 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.395 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:24 compute-0 podman[349163]: 2025-10-02 08:37:24.415470957 +0000 UTC m=+0.172856777 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.419 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[118f5336-aa4c-45b3-a985-3ad205d3f6dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.453 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ca634f86-2450-4ae4-a6d2-4b1d2fcc3dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.457 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3a701505-ab06-4bf4-b671-97ec9617d183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.495 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4e9757-49fd-4185-827f-a2cacbb84804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[299a06c3-830b-4f35-b8c1-a7d0f61881a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349211, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.547 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39f40b36-a98b-4318-98c7-8da2b7162665]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349212, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349212, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.550 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.558 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.558 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.559 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.560 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.561 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.563 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.589 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[06dc09ef-93cc-4c59-9c7f-a04b02bd4210]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.597 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deleting instance files /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53_del
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.598 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deletion of /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53_del complete
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.642 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[45225baa-7fa1-459a-9a92-8cb65d9f32fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.650 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[abc87c89-bb6a-49f4-b4c5-b1504ed121b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.700 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[36c4d5a1-d49a-42d3-a510-66e248f8fad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.729 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0113ecf4-5817-4557-817e-7738cd31af6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349219, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.755 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[736d6ba7-ffec-41c9-ba34-e5a699f14fe8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349220, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349220, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.757 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.767 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.777 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance shutdown successfully after 13 seconds.
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.788 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.789 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating image(s)
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.820 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.854 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.887 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.892 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.952 2 DEBUG nova.compute.manager [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.953 2 DEBUG oslo_concurrency.lockutils [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.954 2 DEBUG oslo_concurrency.lockutils [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.955 2 DEBUG oslo_concurrency.lockutils [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.955 2 DEBUG nova.compute.manager [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.956 2 WARNING nova.compute.manager [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuilding.
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.974 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance destroyed successfully.
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.984 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance destroyed successfully.
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.986 2 DEBUG nova.virt.libvirt.vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-ServerActionsTestJSON-server-35828613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:37:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:10Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.987 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.989 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.990 2 DEBUG os_vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd38707db-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:24 compute-0 nova_compute[260603]: 2025-10-02 08:37:24.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.001 2 INFO os_vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f')
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.026 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.027 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.028 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.028 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.055 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.058 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 393 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 950 KiB/s rd, 21 MiB/s wr, 241 op/s
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.160 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.192 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.192 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.193 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.193 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.193 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct 02 08:37:25 compute-0 ceph-mon[74477]: osdmap e241: 3 total, 3 up, 3 in
Oct 02 08:37:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct 02 08:37:25 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.345 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.419 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] resizing rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.520 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.521 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Ensure instance console log exists: /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.522 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.522 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.522 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.524 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.532 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deleting instance files /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65_del
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.533 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deletion of /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65_del complete
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.539 2 WARNING nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.547 2 DEBUG nova.virt.libvirt.host [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.548 2 DEBUG nova.virt.libvirt.host [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.552 2 DEBUG nova.virt.libvirt.host [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.553 2 DEBUG nova.virt.libvirt.host [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.553 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.553 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.554 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.554 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.554 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.555 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.555 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.555 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.555 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.556 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.556 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.556 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.557 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.586 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.760 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.761 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating image(s)
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.800 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.836 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.873 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.878 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.985 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.987 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.988 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:25 compute-0 nova_compute[260603]: 2025-10-02 08:37:25.988 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.019 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.024 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 7392e1c1-40db-4b57-8ed8-278f89402f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644899086' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.078 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.100 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.106 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:26 compute-0 ceph-mon[74477]: pgmap v1730: 305 pgs: 305 active+clean; 393 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 950 KiB/s rd, 21 MiB/s wr, 241 op/s
Oct 02 08:37:26 compute-0 ceph-mon[74477]: osdmap e242: 3 total, 3 up, 3 in
Oct 02 08:37:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/644899086' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.336 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 7392e1c1-40db-4b57-8ed8-278f89402f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:26 compute-0 sudo[349561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:26 compute-0 sudo[349561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:26 compute-0 sudo[349561]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.395 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] resizing rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:37:26 compute-0 sudo[349604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:37:26 compute-0 sudo[349604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:26 compute-0 sudo[349604]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:26 compute-0 sudo[349665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.480 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.481 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Ensure instance console log exists: /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.481 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.482 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.482 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:26 compute-0 sudo[349665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.484 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start _get_guest_xml network_info=[{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:37:26 compute-0 sudo[349665]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.487 2 WARNING nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.492 2 DEBUG nova.virt.libvirt.host [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.493 2 DEBUG nova.virt.libvirt.host [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.497 2 DEBUG nova.virt.libvirt.host [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.498 2 DEBUG nova.virt.libvirt.host [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.498 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.498 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.501 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.523 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:26 compute-0 sudo[349708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:37:26 compute-0 sudo[349708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317023788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.610 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.614 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <uuid>496c1bb3-a098-41ab-ac67-5a6a89a0de53</uuid>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <name>instance-0000005c</name>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerShowV254Test-server-2038843268</nova:name>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:37:25</nova:creationTime>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <nova:user uuid="9aaa9a9ec2564ed3a346216a96231feb">tempest-ServerShowV254Test-348861658-project-member</nova:user>
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <nova:project uuid="850f950eaa1d49239f8913dfee5dc44e">tempest-ServerShowV254Test-348861658</nova:project>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <system>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <entry name="serial">496c1bb3-a098-41ab-ac67-5a6a89a0de53</entry>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <entry name="uuid">496c1bb3-a098-41ab-ac67-5a6a89a0de53</entry>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     </system>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <os>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   </os>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <features>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   </features>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk">
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config">
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:26 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/console.log" append="off"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <video>
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     </video>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:37:26 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:37:26 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:37:26 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:37:26 compute-0 nova_compute[260603]: </domain>
Oct 02 08:37:26 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.681 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.682 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.683 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Using config drive
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.710 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.744 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4056181963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.956 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.981 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:26 compute-0 nova_compute[260603]: 2025-10-02 08:37:26.987 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:27 compute-0 sudo[349708]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.023 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating config drive at /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.028 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ac04uqg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:37:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:37:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:37:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 27727d66-c55f-47a8-9684-365a6cf14b9c does not exist
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e2769864-10ed-43ce-822c-7e6767a4bfe5 does not exist
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a5d5f412-b811-46fb-87de-661777682ee6 does not exist
Oct 02 08:37:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:37:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:37:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:37:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.072 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.073 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.073 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.073 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.074 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.074 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.074 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.074 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.075 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.075 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.076 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.076 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.076 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.076 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.077 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.077 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.077 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.078 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.078 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.078 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.078 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.079 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.079 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.079 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.079 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.080 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.080 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.080 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.081 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.081 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:37:27 compute-0 sudo[349825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:27 compute-0 sudo[349825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:27 compute-0 sudo[349825]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 332 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 28 MiB/s wr, 390 op/s
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.177 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ac04uqg" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.199 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:27 compute-0 sudo[349871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:37:27 compute-0 sudo[349871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.216 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:27 compute-0 sudo[349871]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:27 compute-0 sudo[349914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:27 compute-0 sudo[349914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:27 compute-0 sudo[349914]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/317023788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4056181963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:37:27 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:37:27 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:37:27 compute-0 sudo[349947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:37:27 compute-0 sudo[349947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.394 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.397 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deleting local config drive /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config because it was imported into RBD.
Oct 02 08:37:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2706149099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.459 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.460 2 DEBUG nova.virt.libvirt.vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-ServerActionsTestJSON-server-35828613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:37:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:25Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.461 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.462 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.464 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <uuid>7392e1c1-40db-4b57-8ed8-278f89402f65</uuid>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <name>instance-0000005b</name>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestJSON-server-35828613</nova:name>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:37:26</nova:creationTime>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <nova:port uuid="d38707db-5f0b-4ebe-80b9-ed84810b2c21">
Oct 02 08:37:27 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <system>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <entry name="serial">7392e1c1-40db-4b57-8ed8-278f89402f65</entry>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <entry name="uuid">7392e1c1-40db-4b57-8ed8-278f89402f65</entry>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </system>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <os>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   </os>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <features>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   </features>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7392e1c1-40db-4b57-8ed8-278f89402f65_disk">
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config">
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:27 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:fe:e4:85"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <target dev="tapd38707db-5f"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/console.log" append="off"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <video>
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </video>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:37:27 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:37:27 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:37:27 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:37:27 compute-0 nova_compute[260603]: </domain>
Oct 02 08:37:27 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.465 2 DEBUG nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Preparing to wait for external event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.465 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.465 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.465 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.466 2 DEBUG nova.virt.libvirt.vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-ServerActionsTestJSON-server-35828613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:37:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='temp
est-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:25Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.466 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.466 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.467 2 DEBUG os_vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd38707db-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd38707db-5f, col_values=(('external_ids', {'iface-id': 'd38707db-5f0b-4ebe-80b9-ed84810b2c21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:e4:85', 'vm-uuid': '7392e1c1-40db-4b57-8ed8-278f89402f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:27 compute-0 NetworkManager[45129]: <info>  [1759394247.4752] manager: (tapd38707db-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.479 2 INFO os_vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f')
Oct 02 08:37:27 compute-0 systemd-machined[214636]: New machine qemu-112-instance-0000005c.
Oct 02 08:37:27 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005c.
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.526 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.526 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.526 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No VIF found with MAC fa:16:3e:fe:e4:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.527 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Using config drive
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.562 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.592 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.598 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.600 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.600 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:27 compute-0 nova_compute[260603]: 2025-10-02 08:37:27.651 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'keypairs' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:27 compute-0 podman[350078]: 2025-10-02 08:37:27.869372136 +0000 UTC m=+0.070219761 container create 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 08:37:27 compute-0 systemd[1]: Started libpod-conmon-7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9.scope.
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:37:27 compute-0 podman[350078]: 2025-10-02 08:37:27.841111926 +0000 UTC m=+0.041959621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:37:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:37:27
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct 02 08:37:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:37:27 compute-0 podman[350078]: 2025-10-02 08:37:27.973560468 +0000 UTC m=+0.174408173 container init 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 08:37:27 compute-0 podman[350078]: 2025-10-02 08:37:27.985778335 +0000 UTC m=+0.186625950 container start 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:37:27 compute-0 podman[350078]: 2025-10-02 08:37:27.989106445 +0000 UTC m=+0.189954140 container attach 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:37:27 compute-0 gifted_montalcini[350092]: 167 167
Oct 02 08:37:27 compute-0 systemd[1]: libpod-7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9.scope: Deactivated successfully.
Oct 02 08:37:27 compute-0 podman[350078]: 2025-10-02 08:37:27.998327032 +0000 UTC m=+0.199174677 container died 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-56a8e728249b23ef4eec9e44572f8a2d6f70da0dab88b3f9087941af4bb8a191-merged.mount: Deactivated successfully.
Oct 02 08:37:28 compute-0 podman[350078]: 2025-10-02 08:37:28.049587683 +0000 UTC m=+0.250435298 container remove 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:37:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:37:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831022876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:28 compute-0 systemd[1]: libpod-conmon-7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9.scope: Deactivated successfully.
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.097 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.170 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.170 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.174 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.175 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.182 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.183 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.191 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating config drive at /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.197 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0iw9s75 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:37:28 compute-0 podman[350157]: 2025-10-02 08:37:28.345162837 +0000 UTC m=+0.110955256 container create 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.355 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0iw9s75" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:28 compute-0 podman[350157]: 2025-10-02 08:37:28.272677997 +0000 UTC m=+0.038470456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.384 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.386 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:28 compute-0 ceph-mon[74477]: pgmap v1732: 305 pgs: 305 active+clean; 332 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 28 MiB/s wr, 390 op/s
Oct 02 08:37:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2706149099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/831022876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:28 compute-0 systemd[1]: Started libpod-conmon-39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c.scope.
Oct 02 08:37:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.533 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.535 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3605MB free_disk=59.872135162353516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.536 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.536 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:28 compute-0 podman[350157]: 2025-10-02 08:37:28.552853248 +0000 UTC m=+0.318645707 container init 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:37:28 compute-0 podman[350157]: 2025-10-02 08:37:28.559854079 +0000 UTC m=+0.325646488 container start 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:37:28 compute-0 podman[350157]: 2025-10-02 08:37:28.57089231 +0000 UTC m=+0.336684769 container attach 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.622 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ba2cf934-ce76-4de7-a495-285f144bdab7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.622 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7392e1c1-40db-4b57-8ed8-278f89402f65 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.622 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 496c1bb3-a098-41ab-ac67-5a6a89a0de53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.689 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.743 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.744 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deleting local config drive /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config because it was imported into RBD.
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.802 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 496c1bb3-a098-41ab-ac67-5a6a89a0de53 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.803 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394248.8022242, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.804 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Resumed (Lifecycle Event)
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.807 2 DEBUG nova.compute.manager [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:37:28 compute-0 kernel: tapd38707db-5f: entered promiscuous mode
Oct 02 08:37:28 compute-0 NetworkManager[45129]: <info>  [1759394248.8106] manager: (tapd38707db-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Oct 02 08:37:28 compute-0 ovn_controller[152344]: 2025-10-02T08:37:28Z|00902|binding|INFO|Claiming lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 for this chassis.
Oct 02 08:37:28 compute-0 ovn_controller[152344]: 2025-10-02T08:37:28Z|00903|binding|INFO|d38707db-5f0b-4ebe-80b9-ed84810b2c21: Claiming fa:16:3e:fe:e4:85 10.100.0.10
Oct 02 08:37:28 compute-0 systemd-udevd[350175]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.808 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.821 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.824 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis
Oct 02 08:37:28 compute-0 NetworkManager[45129]: <info>  [1759394248.8263] device (tapd38707db-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:37:28 compute-0 NetworkManager[45129]: <info>  [1759394248.8269] device (tapd38707db-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.828 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.834 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:28 compute-0 ovn_controller[152344]: 2025-10-02T08:37:28Z|00904|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 ovn-installed in OVS
Oct 02 08:37:28 compute-0 ovn_controller[152344]: 2025-10-02T08:37:28Z|00905|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 up in Southbound
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.846 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance spawned successfully.
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.847 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.849 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b39923-3e9d-41e7-8e57-ccf24a708536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.853 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:28 compute-0 systemd-machined[214636]: New machine qemu-113-instance-0000005b.
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.872 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.872 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394248.8034272, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.873 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Started (Lifecycle Event)
Oct 02 08:37:28 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005b.
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.881 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.881 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.882 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.883 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.884 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.885 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.887 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[901289eb-b05d-4225-870e-46b30eb461c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.890 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6a8b9f-2892-4f71-854b-dcbd6ee9525a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.893 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.921 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.928 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4a413369-fbe4-41c3-a412-cb99b97714c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.929 2 DEBUG nova.compute.manager [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.941 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.944 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c98053e-b8e7-4c0b-92c3-ef799171b223]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350269, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.960 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b03b36-5135-43a3-a1aa-b01c0fcb5479]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350271, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350271, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.961 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:28 compute-0 nova_compute[260603]: 2025-10-02 08:37:28.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.964 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.964 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.965 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.965 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.052 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.129 2 DEBUG nova.compute.manager [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.129 2 DEBUG oslo_concurrency.lockutils [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.130 2 DEBUG oslo_concurrency.lockutils [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.130 2 DEBUG oslo_concurrency.lockutils [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.131 2 DEBUG nova.compute.manager [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Processing event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:37:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 215 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 931 KiB/s rd, 22 MiB/s wr, 453 op/s
Oct 02 08:37:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:37:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410219014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.192 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.197 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.220 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.242 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.242 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.242 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:37:29 compute-0 nova_compute[260603]: 2025-10-02 08:37:29.330 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/410219014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:29 compute-0 magical_bhabha[350200]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:37:29 compute-0 magical_bhabha[350200]: --> relative data size: 1.0
Oct 02 08:37:29 compute-0 magical_bhabha[350200]: --> All data devices are unavailable
Oct 02 08:37:29 compute-0 systemd[1]: libpod-39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c.scope: Deactivated successfully.
Oct 02 08:37:29 compute-0 systemd[1]: libpod-39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c.scope: Consumed 1.073s CPU time.
Oct 02 08:37:29 compute-0 podman[350157]: 2025-10-02 08:37:29.781767685 +0000 UTC m=+1.547560094 container died 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:37:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94-merged.mount: Deactivated successfully.
Oct 02 08:37:29 compute-0 podman[350157]: 2025-10-02 08:37:29.832029395 +0000 UTC m=+1.597821804 container remove 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:37:29 compute-0 systemd[1]: libpod-conmon-39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c.scope: Deactivated successfully.
Oct 02 08:37:29 compute-0 sudo[349947]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:29 compute-0 sudo[350351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:29 compute-0 sudo[350351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:29 compute-0 sudo[350351]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:29 compute-0 sudo[350376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:37:29 compute-0 sudo[350376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:29 compute-0 sudo[350376]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:30 compute-0 sudo[350401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:30 compute-0 sudo[350401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:30 compute-0 sudo[350401]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:30 compute-0 sudo[350426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:37:30 compute-0 sudo[350426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.127 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 7392e1c1-40db-4b57-8ed8-278f89402f65 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.128 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394250.1268086, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.128 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Started (Lifecycle Event)
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.130 2 DEBUG nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.133 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.135 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance spawned successfully.
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.136 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.159 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.164 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.168 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.169 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.169 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.169 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.169 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.170 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.211 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.211 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394250.127031, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.212 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Paused (Lifecycle Event)
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.253 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.258 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394250.1326697, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.258 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Resumed (Lifecycle Event)
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.259 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.259 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.259 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.260 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.260 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.261 2 INFO nova.compute.manager [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Terminating instance
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.261 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "refresh_cache-496c1bb3-a098-41ab-ac67-5a6a89a0de53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.261 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquired lock "refresh_cache-496c1bb3-a098-41ab-ac67-5a6a89a0de53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.262 2 DEBUG nova.network.neutron [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.264 2 DEBUG nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.276 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.283 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.311 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.327 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.327 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.328 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.394 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:30 compute-0 ceph-mon[74477]: pgmap v1733: 305 pgs: 305 active+clean; 215 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 931 KiB/s rd, 22 MiB/s wr, 453 op/s
Oct 02 08:37:30 compute-0 podman[350491]: 2025-10-02 08:37:30.410399159 +0000 UTC m=+0.039225500 container create 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.436 2 DEBUG nova.network.neutron [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:37:30 compute-0 systemd[1]: Started libpod-conmon-7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572.scope.
Oct 02 08:37:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:37:30 compute-0 podman[350491]: 2025-10-02 08:37:30.49030642 +0000 UTC m=+0.119132781 container init 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 08:37:30 compute-0 podman[350491]: 2025-10-02 08:37:30.396069788 +0000 UTC m=+0.024896149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:37:30 compute-0 podman[350491]: 2025-10-02 08:37:30.497163937 +0000 UTC m=+0.125990278 container start 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:37:30 compute-0 podman[350491]: 2025-10-02 08:37:30.500078434 +0000 UTC m=+0.128904785 container attach 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:37:30 compute-0 magical_darwin[350508]: 167 167
Oct 02 08:37:30 compute-0 systemd[1]: libpod-7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572.scope: Deactivated successfully.
Oct 02 08:37:30 compute-0 podman[350491]: 2025-10-02 08:37:30.505229049 +0000 UTC m=+0.134055390 container died 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:37:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bab54815eddc5b1bff8affe604d76688e211f71ca5efc053ca0cad5190539de-merged.mount: Deactivated successfully.
Oct 02 08:37:30 compute-0 podman[350491]: 2025-10-02 08:37:30.539903531 +0000 UTC m=+0.168729872 container remove 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:37:30 compute-0 systemd[1]: libpod-conmon-7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572.scope: Deactivated successfully.
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.750 2 DEBUG nova.network.neutron [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.768 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Releasing lock "refresh_cache-496c1bb3-a098-41ab-ac67-5a6a89a0de53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.768 2 DEBUG nova.compute.manager [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:37:30 compute-0 podman[350530]: 2025-10-02 08:37:30.794623407 +0000 UTC m=+0.053928992 container create 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:37:30 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 02 08:37:30 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005c.scope: Consumed 3.148s CPU time.
Oct 02 08:37:30 compute-0 systemd-machined[214636]: Machine qemu-112-instance-0000005c terminated.
Oct 02 08:37:30 compute-0 systemd[1]: Started libpod-conmon-7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade.scope.
Oct 02 08:37:30 compute-0 podman[350530]: 2025-10-02 08:37:30.776635406 +0000 UTC m=+0.035941021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:37:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:30 compute-0 podman[350530]: 2025-10-02 08:37:30.949838002 +0000 UTC m=+0.209143667 container init 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 08:37:30 compute-0 podman[350530]: 2025-10-02 08:37:30.959313427 +0000 UTC m=+0.218619042 container start 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:37:30 compute-0 podman[350530]: 2025-10-02 08:37:30.965277616 +0000 UTC m=+0.224583241 container attach 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.995 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance destroyed successfully.
Oct 02 08:37:30 compute-0 nova_compute[260603]: 2025-10-02 08:37:30.996 2 DEBUG nova.objects.instance [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'resources' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 215 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 5.3 MiB/s wr, 240 op/s
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.205 2 DEBUG nova.compute.manager [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.207 2 DEBUG oslo_concurrency.lockutils [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.207 2 DEBUG oslo_concurrency.lockutils [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.207 2 DEBUG oslo_concurrency.lockutils [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.208 2 DEBUG nova.compute.manager [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.208 2 WARNING nova.compute.manager [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state None.
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.379 2 INFO nova.virt.libvirt.driver [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deleting instance files /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53_del
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.380 2 INFO nova.virt.libvirt.driver [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deletion of /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53_del complete
Oct 02 08:37:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 02 08:37:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 02 08:37:31 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.473 2 INFO nova.compute.manager [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.475 2 DEBUG oslo.service.loopingcall [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.475 2 DEBUG nova.compute.manager [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.476 2 DEBUG nova.network.neutron [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.683 2 DEBUG nova.network.neutron [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.711 2 DEBUG nova.network.neutron [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.737 2 INFO nova.compute.manager [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Took 0.26 seconds to deallocate network for instance.
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.801 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.801 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:31 compute-0 interesting_clarke[350546]: {
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:     "0": [
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:         {
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "devices": [
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "/dev/loop3"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             ],
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_name": "ceph_lv0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_size": "21470642176",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "name": "ceph_lv0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "tags": {
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cluster_name": "ceph",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.crush_device_class": "",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.encrypted": "0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osd_id": "0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.type": "block",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.vdo": "0"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             },
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "type": "block",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "vg_name": "ceph_vg0"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:         }
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:     ],
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:     "1": [
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:         {
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "devices": [
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "/dev/loop4"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             ],
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_name": "ceph_lv1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_size": "21470642176",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "name": "ceph_lv1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "tags": {
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cluster_name": "ceph",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.crush_device_class": "",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.encrypted": "0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osd_id": "1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.type": "block",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.vdo": "0"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             },
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "type": "block",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "vg_name": "ceph_vg1"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:         }
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:     ],
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:     "2": [
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:         {
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "devices": [
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "/dev/loop5"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             ],
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_name": "ceph_lv2",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_size": "21470642176",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "name": "ceph_lv2",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "tags": {
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.cluster_name": "ceph",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.crush_device_class": "",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.encrypted": "0",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osd_id": "2",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.type": "block",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:                 "ceph.vdo": "0"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             },
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "type": "block",
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:             "vg_name": "ceph_vg2"
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:         }
Oct 02 08:37:31 compute-0 interesting_clarke[350546]:     ]
Oct 02 08:37:31 compute-0 interesting_clarke[350546]: }
Oct 02 08:37:31 compute-0 systemd[1]: libpod-7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade.scope: Deactivated successfully.
Oct 02 08:37:31 compute-0 podman[350530]: 2025-10-02 08:37:31.884673769 +0000 UTC m=+1.143979364 container died 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:37:31 compute-0 nova_compute[260603]: 2025-10-02 08:37:31.889 2 DEBUG oslo_concurrency.processutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6-merged.mount: Deactivated successfully.
Oct 02 08:37:31 compute-0 podman[350530]: 2025-10-02 08:37:31.975653223 +0000 UTC m=+1.234958798 container remove 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 08:37:31 compute-0 systemd[1]: libpod-conmon-7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade.scope: Deactivated successfully.
Oct 02 08:37:32 compute-0 sudo[350426]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:32 compute-0 podman[350576]: 2025-10-02 08:37:32.051032589 +0000 UTC m=+0.124101691 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:37:32 compute-0 sudo[350609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:32 compute-0 sudo[350609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:32 compute-0 sudo[350609]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:32 compute-0 sudo[350653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:37:32 compute-0 sudo[350653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:32 compute-0 sudo[350653]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:32 compute-0 sudo[350678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 02 08:37:32 compute-0 sudo[350678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 02 08:37:32 compute-0 sudo[350678]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:32 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.246 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.247 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:32 compute-0 sudo[350703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:37:32 compute-0 sudo[350703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:37:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389264728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.357 2 DEBUG oslo_concurrency.processutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.368 2 DEBUG nova.compute.provider_tree [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.393 2 DEBUG nova.scheduler.client.report [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:37:32 compute-0 ceph-mon[74477]: pgmap v1734: 305 pgs: 305 active+clean; 215 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 5.3 MiB/s wr, 240 op/s
Oct 02 08:37:32 compute-0 ceph-mon[74477]: osdmap e243: 3 total, 3 up, 3 in
Oct 02 08:37:32 compute-0 ceph-mon[74477]: osdmap e244: 3 total, 3 up, 3 in
Oct 02 08:37:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2389264728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.423 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.450 2 INFO nova.scheduler.client.report [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Deleted allocations for instance 496c1bb3-a098-41ab-ac67-5a6a89a0de53
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:32 compute-0 nova_compute[260603]: 2025-10-02 08:37:32.551 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:32 compute-0 podman[350769]: 2025-10-02 08:37:32.752951766 +0000 UTC m=+0.058610392 container create 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:37:32 compute-0 systemd[1]: Started libpod-conmon-2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2.scope.
Oct 02 08:37:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:37:32 compute-0 podman[350769]: 2025-10-02 08:37:32.725440359 +0000 UTC m=+0.031099015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:37:32 compute-0 podman[350769]: 2025-10-02 08:37:32.831051123 +0000 UTC m=+0.136709759 container init 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 08:37:32 compute-0 podman[350769]: 2025-10-02 08:37:32.843429625 +0000 UTC m=+0.149088271 container start 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 08:37:32 compute-0 podman[350769]: 2025-10-02 08:37:32.847350313 +0000 UTC m=+0.153008969 container attach 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:37:32 compute-0 serene_satoshi[350785]: 167 167
Oct 02 08:37:32 compute-0 systemd[1]: libpod-2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2.scope: Deactivated successfully.
Oct 02 08:37:32 compute-0 podman[350769]: 2025-10-02 08:37:32.854074375 +0000 UTC m=+0.159733041 container died 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:37:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-dca145ce764075954a7b3b63b2154e4b1287168fae6fac3162a2877c75f72546-merged.mount: Deactivated successfully.
Oct 02 08:37:32 compute-0 podman[350769]: 2025-10-02 08:37:32.893222342 +0000 UTC m=+0.198880968 container remove 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:37:32 compute-0 systemd[1]: libpod-conmon-2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2.scope: Deactivated successfully.
Oct 02 08:37:33 compute-0 podman[350811]: 2025-10-02 08:37:33.095389098 +0000 UTC m=+0.048194599 container create d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 08:37:33 compute-0 systemd[1]: Started libpod-conmon-d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6.scope.
Oct 02 08:37:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:37:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 169 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 5.5 MiB/s wr, 544 op/s
Oct 02 08:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:33 compute-0 podman[350811]: 2025-10-02 08:37:33.073392747 +0000 UTC m=+0.026198278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:37:33 compute-0 podman[350811]: 2025-10-02 08:37:33.185088184 +0000 UTC m=+0.137893705 container init d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 08:37:33 compute-0 podman[350811]: 2025-10-02 08:37:33.196007472 +0000 UTC m=+0.148812973 container start d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 08:37:33 compute-0 podman[350811]: 2025-10-02 08:37:33.199164827 +0000 UTC m=+0.151970348 container attach d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:37:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 02 08:37:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 02 08:37:33 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.570 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.571 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.572 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.572 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.573 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.575 2 INFO nova.compute.manager [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Terminating instance
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.577 2 DEBUG nova.compute.manager [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:37:33 compute-0 kernel: tapd38707db-5f (unregistering): left promiscuous mode
Oct 02 08:37:33 compute-0 NetworkManager[45129]: <info>  [1759394253.6201] device (tapd38707db-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:33 compute-0 ovn_controller[152344]: 2025-10-02T08:37:33Z|00906|binding|INFO|Releasing lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 from this chassis (sb_readonly=0)
Oct 02 08:37:33 compute-0 ovn_controller[152344]: 2025-10-02T08:37:33Z|00907|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 down in Southbound
Oct 02 08:37:33 compute-0 ovn_controller[152344]: 2025-10-02T08:37:33Z|00908|binding|INFO|Removing iface tapd38707db-5f ovn-installed in OVS
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.690 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.691 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.692 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.708 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2162cd-54fb-414b-b845-adb864aedb78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:33 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 02 08:37:33 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005b.scope: Consumed 4.606s CPU time.
Oct 02 08:37:33 compute-0 systemd-machined[214636]: Machine qemu-113-instance-0000005b terminated.
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.733 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[821c5697-508e-453a-aad0-a904712eb3a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.735 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[beccf5e2-3c1c-43df-b1fd-a53055d96599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.760 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f072fe4-65d0-4b98-863d-2edbb07064a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.775 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d577d7a8-5d99-435d-9360-e00813d2759a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350844, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.789 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2874b4-58d8-419e-8fa8-0da56b42371f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350845, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350845, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.790 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:33 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.797 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.821 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance destroyed successfully.
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.821 2 DEBUG nova.objects.instance [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.838 2 DEBUG nova.virt.libvirt.vif [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-ServerActionsTestJSON-server-35828613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:37:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:30Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.838 2 DEBUG nova.network.os_vif_util [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.839 2 DEBUG nova.network.os_vif_util [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.840 2 DEBUG os_vif [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd38707db-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:37:33 compute-0 nova_compute[260603]: 2025-10-02 08:37:33.848 2 INFO os_vif [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f')
Oct 02 08:37:34 compute-0 nova_compute[260603]: 2025-10-02 08:37:34.149 2 INFO nova.virt.libvirt.driver [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deleting instance files /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65_del
Oct 02 08:37:34 compute-0 nova_compute[260603]: 2025-10-02 08:37:34.150 2 INFO nova.virt.libvirt.driver [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deletion of /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65_del complete
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]: {
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "osd_id": 2,
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "type": "bluestore"
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:     },
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "osd_id": 1,
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "type": "bluestore"
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:     },
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "osd_id": 0,
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:         "type": "bluestore"
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]:     }
Oct 02 08:37:34 compute-0 gracious_rhodes[350828]: }
Oct 02 08:37:34 compute-0 nova_compute[260603]: 2025-10-02 08:37:34.235 2 INFO nova.compute.manager [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Took 0.66 seconds to destroy the instance on the hypervisor.
Oct 02 08:37:34 compute-0 nova_compute[260603]: 2025-10-02 08:37:34.235 2 DEBUG oslo.service.loopingcall [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:37:34 compute-0 nova_compute[260603]: 2025-10-02 08:37:34.236 2 DEBUG nova.compute.manager [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:37:34 compute-0 nova_compute[260603]: 2025-10-02 08:37:34.237 2 DEBUG nova.network.neutron [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:37:34 compute-0 systemd[1]: libpod-d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6.scope: Deactivated successfully.
Oct 02 08:37:34 compute-0 systemd[1]: libpod-d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6.scope: Consumed 1.005s CPU time.
Oct 02 08:37:34 compute-0 podman[350906]: 2025-10-02 08:37:34.297161298 +0000 UTC m=+0.034565170 container died d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 02 08:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587-merged.mount: Deactivated successfully.
Oct 02 08:37:34 compute-0 podman[350905]: 2025-10-02 08:37:34.350256164 +0000 UTC m=+0.072009865 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:37:34 compute-0 podman[350906]: 2025-10-02 08:37:34.357302306 +0000 UTC m=+0.094706098 container remove d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:37:34 compute-0 systemd[1]: libpod-conmon-d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6.scope: Deactivated successfully.
Oct 02 08:37:34 compute-0 sudo[350703]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:37:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 02 08:37:34 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:37:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:37:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 02 08:37:34 compute-0 ceph-mon[74477]: pgmap v1737: 305 pgs: 305 active+clean; 169 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 5.5 MiB/s wr, 544 op/s
Oct 02 08:37:34 compute-0 ceph-mon[74477]: osdmap e245: 3 total, 3 up, 3 in
Oct 02 08:37:34 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 02 08:37:34 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:37:34 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 443f6d8c-4c0f-4150-b827-96866203a4f6 does not exist
Oct 02 08:37:34 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a3478dbf-b48e-46e5-a8d7-4f200d03c11b does not exist
Oct 02 08:37:34 compute-0 sudo[350940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:37:34 compute-0 sudo[350940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:34 compute-0 sudo[350940]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:34 compute-0 sudo[350965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:37:34 compute-0 sudo[350965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:37:34 compute-0 sudo[350965]: pam_unix(sudo:session): session closed for user root
Oct 02 08:37:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:34 compute-0 nova_compute[260603]: 2025-10-02 08:37:34.966 2 DEBUG nova.network.neutron [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:37:34 compute-0 nova_compute[260603]: 2025-10-02 08:37:34.999 2 INFO nova.compute.manager [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Took 0.76 seconds to deallocate network for instance.
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.075 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.076 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.150 2 DEBUG nova.compute.manager [req-6a0314f8-4df6-4ee6-9e40-975e2bb4bc36 req-14a01032-6603-41d6-84ac-adb778c87965 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-deleted-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.154 2 DEBUG oslo_concurrency.processutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 169 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 73 KiB/s wr, 584 op/s
Oct 02 08:37:35 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:37:35 compute-0 ceph-mon[74477]: osdmap e246: 3 total, 3 up, 3 in
Oct 02 08:37:35 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:37:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 02 08:37:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 02 08:37:35 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 02 08:37:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:37:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336699256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.629 2 DEBUG oslo_concurrency.processutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.639 2 DEBUG nova.compute.provider_tree [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.664 2 DEBUG nova.scheduler.client.report [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.692 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.715 2 INFO nova.scheduler.client.report [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Deleted allocations for instance 7392e1c1-40db-4b57-8ed8-278f89402f65
Oct 02 08:37:35 compute-0 nova_compute[260603]: 2025-10-02 08:37:35.806 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 02 08:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 02 08:37:36 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 02 08:37:36 compute-0 ceph-mon[74477]: pgmap v1740: 305 pgs: 305 active+clean; 169 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 73 KiB/s wr, 584 op/s
Oct 02 08:37:36 compute-0 ceph-mon[74477]: osdmap e247: 3 total, 3 up, 3 in
Oct 02 08:37:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3336699256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 163 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 11 KiB/s wr, 130 op/s
Oct 02 08:37:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:37 compute-0 ceph-mon[74477]: osdmap e248: 3 total, 3 up, 3 in
Oct 02 08:37:38 compute-0 ceph-mon[74477]: pgmap v1743: 305 pgs: 305 active+clean; 163 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 11 KiB/s wr, 130 op/s
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001020440088396171 of space, bias 1.0, pg target 0.30613202651885135 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670665250374422 of space, bias 1.0, pg target 0.20011995751123268 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:37:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:37:38 compute-0 nova_compute[260603]: 2025-10-02 08:37:38.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:38 compute-0 nova_compute[260603]: 2025-10-02 08:37:38.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:39 compute-0 nova_compute[260603]: 2025-10-02 08:37:39.142 2 DEBUG oslo_concurrency.lockutils [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:39 compute-0 nova_compute[260603]: 2025-10-02 08:37:39.142 2 DEBUG oslo_concurrency.lockutils [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:39 compute-0 nova_compute[260603]: 2025-10-02 08:37:39.143 2 DEBUG nova.compute.manager [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:39 compute-0 nova_compute[260603]: 2025-10-02 08:37:39.148 2 DEBUG nova.compute.manager [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:37:39 compute-0 nova_compute[260603]: 2025-10-02 08:37:39.149 2 DEBUG nova.objects.instance [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 15 KiB/s wr, 160 op/s
Oct 02 08:37:39 compute-0 nova_compute[260603]: 2025-10-02 08:37:39.178 2 DEBUG nova.virt.libvirt.driver [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:37:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 02 08:37:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 02 08:37:39 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 02 08:37:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 02 08:37:40 compute-0 ceph-mon[74477]: pgmap v1744: 305 pgs: 305 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 15 KiB/s wr, 160 op/s
Oct 02 08:37:40 compute-0 ceph-mon[74477]: osdmap e249: 3 total, 3 up, 3 in
Oct 02 08:37:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 02 08:37:40 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 02 08:37:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 15 KiB/s wr, 161 op/s
Oct 02 08:37:41 compute-0 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct 02 08:37:41 compute-0 NetworkManager[45129]: <info>  [1759394261.4106] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:37:41 compute-0 ovn_controller[152344]: 2025-10-02T08:37:41Z|00909|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct 02 08:37:41 compute-0 ovn_controller[152344]: 2025-10-02T08:37:41Z|00910|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct 02 08:37:41 compute-0 ovn_controller[152344]: 2025-10-02T08:37:41Z|00911|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct 02 08:37:41 compute-0 nova_compute[260603]: 2025-10-02 08:37:41.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.425 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.427 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.428 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.430 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a15901dc-8a63-400b-bc92-e63989db388f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.430 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore
Oct 02 08:37:41 compute-0 nova_compute[260603]: 2025-10-02 08:37:41.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:41 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 08:37:41 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000055.scope: Consumed 16.521s CPU time.
Oct 02 08:37:41 compute-0 systemd-machined[214636]: Machine qemu-108-instance-00000055 terminated.
Oct 02 08:37:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 02 08:37:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 02 08:37:41 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 02 08:37:41 compute-0 ceph-mon[74477]: osdmap e250: 3 total, 3 up, 3 in
Oct 02 08:37:41 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [NOTICE]   (346641) : haproxy version is 2.8.14-c23fe91
Oct 02 08:37:41 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [NOTICE]   (346641) : path to executable is /usr/sbin/haproxy
Oct 02 08:37:41 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [WARNING]  (346641) : Exiting Master process...
Oct 02 08:37:41 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [ALERT]    (346641) : Current worker (346652) exited with code 143 (Terminated)
Oct 02 08:37:41 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [WARNING]  (346641) : All workers exited. Exiting... (0)
Oct 02 08:37:41 compute-0 systemd[1]: libpod-7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd.scope: Deactivated successfully.
Oct 02 08:37:41 compute-0 podman[351037]: 2025-10-02 08:37:41.621502308 +0000 UTC m=+0.055106148 container died 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:37:41 compute-0 nova_compute[260603]: 2025-10-02 08:37:41.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:41 compute-0 nova_compute[260603]: 2025-10-02 08:37:41.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd-userdata-shm.mount: Deactivated successfully.
Oct 02 08:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3ae6c2f20a3b91c5f145f721853a053d0238e20c8cff6c32cd1844fef5ff94f-merged.mount: Deactivated successfully.
Oct 02 08:37:41 compute-0 podman[351037]: 2025-10-02 08:37:41.668655394 +0000 UTC m=+0.102259234 container cleanup 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:37:41 compute-0 systemd[1]: libpod-conmon-7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd.scope: Deactivated successfully.
Oct 02 08:37:41 compute-0 podman[351075]: 2025-10-02 08:37:41.750851405 +0000 UTC m=+0.049929622 container remove 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[73ba634b-aea0-4a72-827d-351270be1954]: (4, ('Thu Oct  2 08:37:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd)\n7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd\nThu Oct  2 08:37:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd)\n7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.766 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a28dd97-c062-4f93-934f-d22f71a2d170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.767 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:41 compute-0 nova_compute[260603]: 2025-10-02 08:37:41.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:41 compute-0 kernel: tap74f187c2-70: left promiscuous mode
Oct 02 08:37:41 compute-0 nova_compute[260603]: 2025-10-02 08:37:41.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.797 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[402a0f93-d8c6-4972-8d00-fcb4b77edaa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.829 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[527c3bde-5aaa-43f0-a002-3cd8e00884b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.830 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3d19bb-f061-4e92-88a0-b54e739825d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.847 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00cd76f0-47f0-4082-b4a1-23a1803c804b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505571, 'reachable_time': 20585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351095, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.850 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:37:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.850 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb67c22-d2f9-4f89-8d36-d10602c1c1f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.045 2 DEBUG nova.compute.manager [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.046 2 DEBUG oslo_concurrency.lockutils [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.046 2 DEBUG oslo_concurrency.lockutils [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.046 2 DEBUG oslo_concurrency.lockutils [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.046 2 DEBUG nova.compute.manager [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.046 2 WARNING nova.compute.manager [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state powering-off.
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.198 2 INFO nova.virt.libvirt.driver [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance shutdown successfully after 3 seconds.
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.203 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.203 2 DEBUG nova.objects.instance [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'numa_topology' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.222 2 DEBUG nova.compute.manager [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.279 2 DEBUG oslo_concurrency.lockutils [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 02 08:37:42 compute-0 ceph-mon[74477]: pgmap v1747: 305 pgs: 305 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 15 KiB/s wr, 161 op/s
Oct 02 08:37:42 compute-0 ceph-mon[74477]: osdmap e251: 3 total, 3 up, 3 in
Oct 02 08:37:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 02 08:37:42 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 02 08:37:42 compute-0 nova_compute[260603]: 2025-10-02 08:37:42.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 123 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 30 KiB/s wr, 217 op/s
Oct 02 08:37:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 02 08:37:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 02 08:37:43 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 02 08:37:43 compute-0 ceph-mon[74477]: osdmap e252: 3 total, 3 up, 3 in
Oct 02 08:37:43 compute-0 nova_compute[260603]: 2025-10-02 08:37:43.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:43 compute-0 nova_compute[260603]: 2025-10-02 08:37:43.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:43 compute-0 nova_compute[260603]: 2025-10-02 08:37:43.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:43 compute-0 nova_compute[260603]: 2025-10-02 08:37:43.923 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:43 compute-0 nova_compute[260603]: 2025-10-02 08:37:43.955 2 DEBUG oslo_concurrency.lockutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:37:43 compute-0 nova_compute[260603]: 2025-10-02 08:37:43.956 2 DEBUG oslo_concurrency.lockutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:37:43 compute-0 nova_compute[260603]: 2025-10-02 08:37:43.957 2 DEBUG nova.network.neutron [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:37:43 compute-0 nova_compute[260603]: 2025-10-02 08:37:43.957 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'info_cache' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:44 compute-0 nova_compute[260603]: 2025-10-02 08:37:44.141 2 DEBUG nova.compute.manager [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:44 compute-0 nova_compute[260603]: 2025-10-02 08:37:44.142 2 DEBUG oslo_concurrency.lockutils [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:44 compute-0 nova_compute[260603]: 2025-10-02 08:37:44.142 2 DEBUG oslo_concurrency.lockutils [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:44 compute-0 nova_compute[260603]: 2025-10-02 08:37:44.142 2 DEBUG oslo_concurrency.lockutils [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:44 compute-0 nova_compute[260603]: 2025-10-02 08:37:44.142 2 DEBUG nova.compute.manager [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:44 compute-0 nova_compute[260603]: 2025-10-02 08:37:44.143 2 WARNING nova.compute.manager [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state stopped and task_state powering-on.
Oct 02 08:37:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 02 08:37:44 compute-0 ceph-mon[74477]: pgmap v1750: 305 pgs: 305 active+clean; 123 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 30 KiB/s wr, 217 op/s
Oct 02 08:37:44 compute-0 ceph-mon[74477]: osdmap e253: 3 total, 3 up, 3 in
Oct 02 08:37:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 02 08:37:44 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 02 08:37:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 123 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 30 KiB/s wr, 217 op/s
Oct 02 08:37:45 compute-0 ceph-mon[74477]: osdmap e254: 3 total, 3 up, 3 in
Oct 02 08:37:45 compute-0 nova_compute[260603]: 2025-10-02 08:37:45.993 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394250.9924216, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:45 compute-0 nova_compute[260603]: 2025-10-02 08:37:45.994 2 INFO nova.compute.manager [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Stopped (Lifecycle Event)
Oct 02 08:37:46 compute-0 nova_compute[260603]: 2025-10-02 08:37:46.040 2 DEBUG nova.compute.manager [None req-5593822c-8a4d-4219-b08f-e6bf1280780f - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:46 compute-0 ceph-mon[74477]: pgmap v1753: 305 pgs: 305 active+clean; 123 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 30 KiB/s wr, 217 op/s
Oct 02 08:37:46 compute-0 nova_compute[260603]: 2025-10-02 08:37:46.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:46 compute-0 nova_compute[260603]: 2025-10-02 08:37:46.988 2 DEBUG nova.network.neutron [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.013 2 DEBUG oslo_concurrency.lockutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.042 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.043 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'numa_topology' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.060 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.077 2 DEBUG nova.virt.libvirt.vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.078 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.079 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.080 2 DEBUG os_vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap961da5ba-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.091 2 INFO os_vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.101 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start _get_guest_xml network_info=[{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.111 2 WARNING nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.118 2 DEBUG nova.virt.libvirt.host [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.119 2 DEBUG nova.virt.libvirt.host [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.123 2 DEBUG nova.virt.libvirt.host [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.124 2 DEBUG nova.virt.libvirt.host [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.125 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.125 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.126 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.127 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.127 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.128 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.129 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.129 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.130 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.131 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.131 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.132 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.133 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.150 2 DEBUG oslo_concurrency.processutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 21 KiB/s wr, 176 op/s
Oct 02 08:37:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 02 08:37:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 02 08:37:47 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 02 08:37:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3736315215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.651 2 DEBUG oslo_concurrency.processutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:47 compute-0 nova_compute[260603]: 2025-10-02 08:37:47.696 2 DEBUG oslo_concurrency.processutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:37:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532120028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.147 2 DEBUG oslo_concurrency.processutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.150 2 DEBUG nova.virt.libvirt.vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.150 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.151 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.153 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.176 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <uuid>ba2cf934-ce76-4de7-a495-285f144bdab7</uuid>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <name>instance-00000055</name>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestJSON-server-276449458</nova:name>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:37:47</nova:creationTime>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <nova:port uuid="961da5ba-b0ac-4a87-a74c-26d0d2d2bf50">
Oct 02 08:37:48 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <system>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <entry name="serial">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <entry name="uuid">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </system>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <os>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   </os>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <features>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   </features>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk">
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config">
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       </source>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:37:48 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:bb:af:04"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <target dev="tap961da5ba-b0"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/console.log" append="off"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <video>
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </video>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:37:48 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:37:48 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:37:48 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:37:48 compute-0 nova_compute[260603]: </domain>
Oct 02 08:37:48 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.178 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.178 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.179 2 DEBUG nova.virt.libvirt.vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.180 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.180 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.181 2 DEBUG os_vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap961da5ba-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap961da5ba-b0, col_values=(('external_ids', {'iface-id': '961da5ba-b0ac-4a87-a74c-26d0d2d2bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:af:04', 'vm-uuid': 'ba2cf934-ce76-4de7-a495-285f144bdab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 NetworkManager[45129]: <info>  [1759394268.1897] manager: (tap961da5ba-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.194 2 INFO os_vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:37:48 compute-0 ceph-mon[74477]: pgmap v1754: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 21 KiB/s wr, 176 op/s
Oct 02 08:37:48 compute-0 ceph-mon[74477]: osdmap e255: 3 total, 3 up, 3 in
Oct 02 08:37:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3736315215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3532120028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:37:48 compute-0 kernel: tap961da5ba-b0: entered promiscuous mode
Oct 02 08:37:48 compute-0 NetworkManager[45129]: <info>  [1759394268.2807] manager: (tap961da5ba-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Oct 02 08:37:48 compute-0 ovn_controller[152344]: 2025-10-02T08:37:48Z|00912|binding|INFO|Claiming lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for this chassis.
Oct 02 08:37:48 compute-0 ovn_controller[152344]: 2025-10-02T08:37:48Z|00913|binding|INFO|961da5ba-b0ac-4a87-a74c-26d0d2d2bf50: Claiming fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.291 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.292 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.294 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:48 compute-0 ovn_controller[152344]: 2025-10-02T08:37:48Z|00914|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 ovn-installed in OVS
Oct 02 08:37:48 compute-0 ovn_controller[152344]: 2025-10-02T08:37:48Z|00915|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 up in Southbound
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[374c73d4-b76f-418e-a729-834eab29eea6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.306 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74f187c2-71 in ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.310 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74f187c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.310 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[291ace97-7082-4582-b631-16d5255f400f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7d8223-b469-477c-9e89-f89716179c65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 systemd-udevd[351175]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.322 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1a219caf-1341-4325-88da-5fb0e2648cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 systemd-machined[214636]: New machine qemu-114-instance-00000055.
Oct 02 08:37:48 compute-0 NetworkManager[45129]: <info>  [1759394268.3322] device (tap961da5ba-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:37:48 compute-0 NetworkManager[45129]: <info>  [1759394268.3330] device (tap961da5ba-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:37:48 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-00000055.
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.345 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e7c006-e16a-40b9-9e84-24ca0ccee3c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.373 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d73d2187-c9db-4805-8c13-17b7404a04f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 systemd-udevd[351178]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:37:48 compute-0 NetworkManager[45129]: <info>  [1759394268.3856] manager: (tap74f187c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/364)
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.386 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd06fdd-16cb-4eac-815c-6c0a21834df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.416 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[85c5a2d2-d263-4a25-84da-b8b9a9ac471f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.420 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a837c333-959f-4c34-99e8-52fa48e3f127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 NetworkManager[45129]: <info>  [1759394268.4399] device (tap74f187c2-70): carrier: link connected
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.444 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4eeadc-dfa8-4c30-ab2c-ba124cec1f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6995bb1e-4913-439d-8a44-04f6977554d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514704, 'reachable_time': 30969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351206, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[81fc2d1c-3651-4c44-bd2a-7d0d8ed92268]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7f62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514704, 'tstamp': 514704}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351207, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.498 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f525473-097e-472b-bce4-b5e23e0c9e12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514704, 'reachable_time': 30969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351208, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.535 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[985fe9cd-4ff9-4d9d-a93c-6a3bbc25d3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.615 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ef73a8-c506-48cc-b3d0-a32803da2500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.616 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:48 compute-0 NetworkManager[45129]: <info>  [1759394268.6608] manager: (tap74f187c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Oct 02 08:37:48 compute-0 kernel: tap74f187c2-70: entered promiscuous mode
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.670 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:48 compute-0 ovn_controller[152344]: 2025-10-02T08:37:48Z|00916|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.674 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.675 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5f655b-0b5f-4650-bbc4-abadffb92f7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.676 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:37:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.677 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'env', 'PROCESS_TAG=haproxy-74f187c2-780c-418d-98eb-b25294872ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74f187c2-780c-418d-98eb-b25294872ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.820 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394253.8185918, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.820 2 INFO nova.compute.manager [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Stopped (Lifecycle Event)
Oct 02 08:37:48 compute-0 nova_compute[260603]: 2025-10-02 08:37:48.847 2 DEBUG nova.compute.manager [None req-e2513301-b215-46df-b754-814b56162aa1 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:49 compute-0 podman[351241]: 2025-10-02 08:37:49.032938703 +0000 UTC m=+0.064723206 container create 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:37:49 compute-0 systemd[1]: Started libpod-conmon-694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d.scope.
Oct 02 08:37:49 compute-0 podman[351241]: 2025-10-02 08:37:49.002159178 +0000 UTC m=+0.033943751 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:37:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12325eae80981c183471de2cbf2c2a6d257a59fa83ba595ae5891a5266d01cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:37:49 compute-0 podman[351241]: 2025-10-02 08:37:49.126252688 +0000 UTC m=+0.158037231 container init 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 08:37:49 compute-0 podman[351241]: 2025-10-02 08:37:49.131553857 +0000 UTC m=+0.163338360 container start 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 08:37:49 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [NOTICE]   (351303) : New worker (351305) forked
Oct 02 08:37:49 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [NOTICE]   (351303) : Loading success.
Oct 02 08:37:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 112 op/s
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.284 2 DEBUG nova.compute.manager [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.286 2 DEBUG oslo_concurrency.lockutils [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.286 2 DEBUG oslo_concurrency.lockutils [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.287 2 DEBUG oslo_concurrency.lockutils [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.287 2 DEBUG nova.compute.manager [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.288 2 WARNING nova.compute.manager [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state stopped and task_state powering-on.
Oct 02 08:37:49 compute-0 ovn_controller[152344]: 2025-10-02T08:37:49Z|00917|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.510 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ba2cf934-ce76-4de7-a495-285f144bdab7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.510 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394269.5089588, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.510 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.512 2 DEBUG nova.compute.manager [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.515 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance rebooted successfully.
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.516 2 DEBUG nova.compute.manager [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.559 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.563 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.595 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.596 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394269.5118883, ba2cf934-ce76-4de7-a495-285f144bdab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.596 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Started (Lifecycle Event)
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.626 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:49 compute-0 nova_compute[260603]: 2025-10-02 08:37:49.629 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:50 compute-0 ceph-mon[74477]: pgmap v1756: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 112 op/s
Oct 02 08:37:51 compute-0 ovn_controller[152344]: 2025-10-02T08:37:51Z|00918|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:37:51 compute-0 nova_compute[260603]: 2025-10-02 08:37:51.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 4.3 KiB/s wr, 88 op/s
Oct 02 08:37:52 compute-0 nova_compute[260603]: 2025-10-02 08:37:52.064 2 DEBUG nova.compute.manager [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:52 compute-0 nova_compute[260603]: 2025-10-02 08:37:52.065 2 DEBUG oslo_concurrency.lockutils [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:52 compute-0 nova_compute[260603]: 2025-10-02 08:37:52.065 2 DEBUG oslo_concurrency.lockutils [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:52 compute-0 nova_compute[260603]: 2025-10-02 08:37:52.066 2 DEBUG oslo_concurrency.lockutils [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:52 compute-0 nova_compute[260603]: 2025-10-02 08:37:52.066 2 DEBUG nova.compute.manager [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:37:52 compute-0 nova_compute[260603]: 2025-10-02 08:37:52.067 2 WARNING nova.compute.manager [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.
Oct 02 08:37:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 02 08:37:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 02 08:37:52 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 02 08:37:52 compute-0 ceph-mon[74477]: pgmap v1757: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 4.3 KiB/s wr, 88 op/s
Oct 02 08:37:52 compute-0 ceph-mon[74477]: osdmap e256: 3 total, 3 up, 3 in
Oct 02 08:37:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.2 KiB/s wr, 188 op/s
Oct 02 08:37:53 compute-0 nova_compute[260603]: 2025-10-02 08:37:53.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:53 compute-0 nova_compute[260603]: 2025-10-02 08:37:53.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.201 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.201 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.223 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:37:54 compute-0 ceph-mon[74477]: pgmap v1759: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.2 KiB/s wr, 188 op/s
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.303 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.304 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.310 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.310 2 INFO nova.compute.claims [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.429 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:37:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223145461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.866 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.872 2 DEBUG nova.compute.provider_tree [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.894 2 DEBUG nova.scheduler.client.report [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.925 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.926 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.976 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.977 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:37:54 compute-0 podman[351337]: 2025-10-02 08:37:54.985773888 +0000 UTC m=+0.057333814 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 08:37:54 compute-0 nova_compute[260603]: 2025-10-02 08:37:54.997 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.015 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:37:55 compute-0 podman[351336]: 2025-10-02 08:37:55.022171872 +0000 UTC m=+0.094764309 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.152 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.153 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.154 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating image(s)
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.173 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.1 KiB/s wr, 172 op/s
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.193 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.210 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.213 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4223145461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.304 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.305 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.306 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.306 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.324 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.326 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.546 2 DEBUG nova.policy [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15da3bbf2c9f49b68e7a7e0ccd557067', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b85786f28a064d75924559acd4f6137e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.584 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.646 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] resizing rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.754 2 DEBUG nova.objects.instance [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'migration_context' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.790 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.791 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Ensure instance console log exists: /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.792 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.792 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.792 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.799 2 DEBUG nova.objects.instance [None req-8ed03c22-56ca-4879-8127-9e0fcfb5a370 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.827 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394275.8274956, ba2cf934-ce76-4de7-a495-285f144bdab7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.828 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Paused (Lifecycle Event)
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.860 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.863 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:37:55 compute-0 nova_compute[260603]: 2025-10-02 08:37:55.892 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 08:37:56 compute-0 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct 02 08:37:56 compute-0 NetworkManager[45129]: <info>  [1759394276.2619] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:37:56 compute-0 ovn_controller[152344]: 2025-10-02T08:37:56Z|00919|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct 02 08:37:56 compute-0 ovn_controller[152344]: 2025-10-02T08:37:56Z|00920|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct 02 08:37:56 compute-0 ovn_controller[152344]: 2025-10-02T08:37:56Z|00921|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct 02 08:37:56 compute-0 nova_compute[260603]: 2025-10-02 08:37:56.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.276 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '12', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.277 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.278 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.279 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7a22604b-9dbc-4500-b869-a5a3b060cdff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.279 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore
Oct 02 08:37:56 compute-0 ceph-mon[74477]: pgmap v1760: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.1 KiB/s wr, 172 op/s
Oct 02 08:37:56 compute-0 nova_compute[260603]: 2025-10-02 08:37:56.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:56 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 08:37:56 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000055.scope: Consumed 7.892s CPU time.
Oct 02 08:37:56 compute-0 systemd-machined[214636]: Machine qemu-114-instance-00000055 terminated.
Oct 02 08:37:56 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [NOTICE]   (351303) : haproxy version is 2.8.14-c23fe91
Oct 02 08:37:56 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [NOTICE]   (351303) : path to executable is /usr/sbin/haproxy
Oct 02 08:37:56 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [WARNING]  (351303) : Exiting Master process...
Oct 02 08:37:56 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [WARNING]  (351303) : Exiting Master process...
Oct 02 08:37:56 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [ALERT]    (351303) : Current worker (351305) exited with code 143 (Terminated)
Oct 02 08:37:56 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [WARNING]  (351303) : All workers exited. Exiting... (0)
Oct 02 08:37:56 compute-0 systemd[1]: libpod-694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d.scope: Deactivated successfully.
Oct 02 08:37:56 compute-0 podman[351575]: 2025-10-02 08:37:56.406732695 +0000 UTC m=+0.044077456 container died 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d-userdata-shm.mount: Deactivated successfully.
Oct 02 08:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12325eae80981c183471de2cbf2c2a6d257a59fa83ba595ae5891a5266d01cc-merged.mount: Deactivated successfully.
Oct 02 08:37:56 compute-0 podman[351575]: 2025-10-02 08:37:56.443720027 +0000 UTC m=+0.081064828 container cleanup 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:37:56 compute-0 nova_compute[260603]: 2025-10-02 08:37:56.458 2 DEBUG nova.compute.manager [None req-8ed03c22-56ca-4879-8127-9e0fcfb5a370 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:37:56 compute-0 systemd[1]: libpod-conmon-694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d.scope: Deactivated successfully.
Oct 02 08:37:56 compute-0 podman[351613]: 2025-10-02 08:37:56.516288578 +0000 UTC m=+0.047558901 container remove 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4803f5-eff9-4e4a-a333-3cc8b8882f2f]: (4, ('Thu Oct  2 08:37:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d)\n694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d\nThu Oct  2 08:37:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d)\n694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.528 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e62015-192c-4beb-b9b3-0cb4123d2503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.529 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:37:56 compute-0 kernel: tap74f187c2-70: left promiscuous mode
Oct 02 08:37:56 compute-0 nova_compute[260603]: 2025-10-02 08:37:56.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:56 compute-0 nova_compute[260603]: 2025-10-02 08:37:56.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.552 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[661101d1-ca48-4e20-9d41-60158f8b64da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[263c85aa-db56-4ce5-a415-d4fdcb5faf50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[679fd53a-ee06-45b0-89fe-0651c043d70e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.599 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4595b304-09fe-4adf-84cd-88ea667a18c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514697, 'reachable_time': 30369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351635, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.601 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:37:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.601 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0aeba379-e5d4-4e5f-9483-0bb93697920e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:37:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 138 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 566 KiB/s wr, 153 op/s
Oct 02 08:37:57 compute-0 nova_compute[260603]: 2025-10-02 08:37:57.181 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Successfully created port: 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:37:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.179 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Successfully updated port: 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.197 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.197 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.198 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:37:58 compute-0 ceph-mon[74477]: pgmap v1761: 305 pgs: 305 active+clean; 138 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 566 KiB/s wr, 153 op/s
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.522 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.698 2 DEBUG nova.compute.manager [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.699 2 DEBUG nova.compute.manager [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing instance network info cache due to event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.699 2 DEBUG oslo_concurrency.lockutils [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:37:58 compute-0 nova_compute[260603]: 2025-10-02 08:37:58.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:37:59 compute-0 nova_compute[260603]: 2025-10-02 08:37:59.019 2 INFO nova.compute.manager [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Resuming
Oct 02 08:37:59 compute-0 nova_compute[260603]: 2025-10-02 08:37:59.021 2 DEBUG nova.objects.instance [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:37:59 compute-0 nova_compute[260603]: 2025-10-02 08:37:59.072 2 DEBUG oslo_concurrency.lockutils [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:37:59 compute-0 nova_compute[260603]: 2025-10-02 08:37:59.072 2 DEBUG oslo_concurrency.lockutils [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:37:59 compute-0 nova_compute[260603]: 2025-10-02 08:37:59.073 2 DEBUG nova.network.neutron [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:37:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.293 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:00 compute-0 ceph-mon[74477]: pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.321 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.322 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance network_info: |[{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.322 2 DEBUG oslo_concurrency.lockutils [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.323 2 DEBUG nova.network.neutron [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.326 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start _get_guest_xml network_info=[{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.332 2 WARNING nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.339 2 DEBUG nova.virt.libvirt.host [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.339 2 DEBUG nova.virt.libvirt.host [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.344 2 DEBUG nova.virt.libvirt.host [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.345 2 DEBUG nova.virt.libvirt.host [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.346 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.346 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.347 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.347 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.347 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.348 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.348 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.348 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.348 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.349 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.349 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.349 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.352 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.558 2 DEBUG nova.compute.manager [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.559 2 DEBUG oslo_concurrency.lockutils [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.559 2 DEBUG oslo_concurrency.lockutils [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.560 2 DEBUG oslo_concurrency.lockutils [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.560 2 DEBUG nova.compute.manager [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.560 2 WARNING nova.compute.manager [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state suspended and task_state resuming.
Oct 02 08:38:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:38:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141647230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.807 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.841 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:00 compute-0 nova_compute[260603]: 2025-10-02 08:38:00.847 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.267 2 DEBUG nova.network.neutron [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:38:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349762954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.285 2 DEBUG oslo_concurrency.lockutils [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.293 2 DEBUG nova.virt.libvirt.vif [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.293 2 DEBUG nova.network.os_vif_util [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.294 2 DEBUG nova.network.os_vif_util [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.294 2 DEBUG os_vif [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.295 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.298 2 DEBUG nova.virt.libvirt.vif [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTest
JSON-2088330606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:55Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.298 2 DEBUG nova.network.os_vif_util [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.299 2 DEBUG nova.network.os_vif_util [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.300 2 DEBUG nova.objects.instance [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap961da5ba-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap961da5ba-b0, col_values=(('external_ids', {'iface-id': '961da5ba-b0ac-4a87-a74c-26d0d2d2bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:af:04', 'vm-uuid': 'ba2cf934-ce76-4de7-a495-285f144bdab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.305 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.305 2 INFO os_vif [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.315 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <uuid>fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</uuid>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <name>instance-0000005d</name>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersNegativeTestJSON-server-1260783938</nova:name>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:38:00</nova:creationTime>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <nova:user uuid="15da3bbf2c9f49b68e7a7e0ccd557067">tempest-ServersNegativeTestJSON-2088330606-project-member</nova:user>
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <nova:project uuid="b85786f28a064d75924559acd4f6137e">tempest-ServersNegativeTestJSON-2088330606</nova:project>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <nova:port uuid="00fa1373-e4cc-4245-9cfa-5a58c77aa4eb">
Oct 02 08:38:01 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <system>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <entry name="serial">fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</entry>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <entry name="uuid">fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</entry>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </system>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <os>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   </os>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <features>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   </features>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk">
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       </source>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config">
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       </source>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:38:01 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:11:b9:2b"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <target dev="tap00fa1373-e4"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/console.log" append="off"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <video>
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </video>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:38:01 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:38:01 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:38:01 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:38:01 compute-0 nova_compute[260603]: </domain>
Oct 02 08:38:01 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.317 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Preparing to wait for external event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.318 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.318 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.319 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.320 2 DEBUG nova.virt.libvirt.vif [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:55Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.320 2 DEBUG nova.network.os_vif_util [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.321 2 DEBUG nova.network.os_vif_util [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.322 2 DEBUG os_vif [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3141647230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1349762954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.326 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00fa1373-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00fa1373-e4, col_values=(('external_ids', {'iface-id': '00fa1373-e4cc-4245-9cfa-5a58c77aa4eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:b9:2b', 'vm-uuid': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 NetworkManager[45129]: <info>  [1759394281.3699] manager: (tap00fa1373-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.373 2 DEBUG nova.objects.instance [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'numa_topology' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.380 2 INFO os_vif [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.446 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.446 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.447 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No VIF found with MAC fa:16:3e:11:b9:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.447 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Using config drive
Oct 02 08:38:01 compute-0 kernel: tap961da5ba-b0: entered promiscuous mode
Oct 02 08:38:01 compute-0 NetworkManager[45129]: <info>  [1759394281.4600] manager: (tap961da5ba-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Oct 02 08:38:01 compute-0 ovn_controller[152344]: 2025-10-02T08:38:01Z|00922|binding|INFO|Claiming lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for this chassis.
Oct 02 08:38:01 compute-0 ovn_controller[152344]: 2025-10-02T08:38:01Z|00923|binding|INFO|961da5ba-b0ac-4a87-a74c-26d0d2d2bf50: Claiming fa:16:3e:bb:af:04 10.100.0.8
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.472 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '13', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.475 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.477 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.489 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.490 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1561d3e-0d55-455e-b461-36a448f8e2ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.491 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74f187c2-71 in ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:38:01 compute-0 ovn_controller[152344]: 2025-10-02T08:38:01Z|00924|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 ovn-installed in OVS
Oct 02 08:38:01 compute-0 ovn_controller[152344]: 2025-10-02T08:38:01Z|00925|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 up in Southbound
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.493 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74f187c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.494 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f54f2f81-cdc2-45a3-a222-3761b4fa87b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.495 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[865d4c6a-240f-4e50-ac35-32d2c3924620]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.506 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3fc7eb-cdd0-44b7-b68b-37add089ca35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 systemd-machined[214636]: New machine qemu-115-instance-00000055.
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.520 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aac1e3f2-b467-4495-af55-c257692c7e18]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-00000055.
Oct 02 08:38:01 compute-0 systemd-udevd[351741]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.550 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[444e36d3-1589-401d-aa80-945404217969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 NetworkManager[45129]: <info>  [1759394281.5542] device (tap961da5ba-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:38:01 compute-0 NetworkManager[45129]: <info>  [1759394281.5564] manager: (tap74f187c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/368)
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.555 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[252cfba1-76f4-4d6b-b45c-0f196b76ac3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 NetworkManager[45129]: <info>  [1759394281.5571] device (tap961da5ba-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:38:01 compute-0 systemd-udevd[351748]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.588 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0e677e58-d838-48d5-9898-e1266c5ebbd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.592 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[03aa6978-0eba-41c2-b0b1-146e1797d0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 NetworkManager[45129]: <info>  [1759394281.6178] device (tap74f187c2-70): carrier: link connected
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.624 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4bbb03-a4e5-422d-97ac-4944a0d53e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.644 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48b52a0b-ac97-4f53-916d-c24fa3897a08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516022, 'reachable_time': 21099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351772, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.661 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4be05c-ea5d-4bda-bb5a-6bcf4b47647f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7f62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516022, 'tstamp': 516022}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351773, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.679 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6889d2-e175-47c8-a824-4d6658baad36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516022, 'reachable_time': 21099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351774, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd285ab-70d7-41e0-8955-e6b2c6f9f4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.797 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de709c52-2632-40f3-b6a4-b7ae9b2e8c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.799 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.799 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.800 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 kernel: tap74f187c2-70: entered promiscuous mode
Oct 02 08:38:01 compute-0 NetworkManager[45129]: <info>  [1759394281.8030] manager: (tap74f187c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.809 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:01 compute-0 ovn_controller[152344]: 2025-10-02T08:38:01Z|00926|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 nova_compute[260603]: 2025-10-02 08:38:01.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.852 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.854 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5402d4-beea-46f0-ad08-76bb72d98718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.857 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 74f187c2-780c-418d-98eb-b25294872ab0
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:38:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.859 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'env', 'PROCESS_TAG=haproxy-74f187c2-780c-418d-98eb-b25294872ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74f187c2-780c-418d-98eb-b25294872ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.172 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating config drive at /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.178 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa1__r0w3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.212 2 DEBUG nova.network.neutron [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updated VIF entry in instance network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.214 2 DEBUG nova.network.neutron [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.231 2 DEBUG oslo_concurrency.lockutils [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:38:02 compute-0 podman[351852]: 2025-10-02 08:38:02.237443642 +0000 UTC m=+0.059575531 container create 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 08:38:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.252817) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282252846, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1751, "num_deletes": 260, "total_data_size": 2505247, "memory_usage": 2550584, "flush_reason": "Manual Compaction"}
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282260817, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1610002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35331, "largest_seqno": 37081, "table_properties": {"data_size": 1603537, "index_size": 3411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16599, "raw_average_key_size": 21, "raw_value_size": 1589466, "raw_average_value_size": 2050, "num_data_blocks": 152, "num_entries": 775, "num_filter_entries": 775, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394140, "oldest_key_time": 1759394140, "file_creation_time": 1759394282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 8041 microseconds, and 4094 cpu microseconds.
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.260854) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1610002 bytes OK
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.260876) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263124) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263143) EVENT_LOG_v1 {"time_micros": 1759394282263137, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263163) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2497578, prev total WAL file size 2497578, number of live WAL files 2.
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263918) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1572KB)], [77(9524KB)]
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282263961, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11363023, "oldest_snapshot_seqno": -1}
Oct 02 08:38:02 compute-0 systemd[1]: Started libpod-conmon-37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f.scope.
Oct 02 08:38:02 compute-0 podman[351852]: 2025-10-02 08:38:02.20079195 +0000 UTC m=+0.022923889 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6330 keys, 8909469 bytes, temperature: kUnknown
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282299640, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8909469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8867180, "index_size": 25347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 158961, "raw_average_key_size": 25, "raw_value_size": 8753857, "raw_average_value_size": 1382, "num_data_blocks": 1030, "num_entries": 6330, "num_filter_entries": 6330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.299870) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8909469 bytes
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.301149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 318.0 rd, 249.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(12.6) write-amplify(5.5) OK, records in: 6792, records dropped: 462 output_compression: NoCompression
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.301164) EVENT_LOG_v1 {"time_micros": 1759394282301157, "job": 44, "event": "compaction_finished", "compaction_time_micros": 35732, "compaction_time_cpu_micros": 19237, "output_level": 6, "num_output_files": 1, "total_output_size": 8909469, "num_input_records": 6792, "num_output_records": 6330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282301496, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282303030, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:02 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fb83696c42c7f8209aee60e425fd95b280fe9157eca6c3109314b6ebc0a1f8b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.321 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa1__r0w3" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:02 compute-0 ceph-mon[74477]: pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 02 08:38:02 compute-0 podman[351852]: 2025-10-02 08:38:02.338145988 +0000 UTC m=+0.160277967 container init 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 08:38:02 compute-0 podman[351852]: 2025-10-02 08:38:02.347695365 +0000 UTC m=+0.169827264 container start 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.360 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:02 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [NOTICE]   (351898) : New worker (351909) forked
Oct 02 08:38:02 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [NOTICE]   (351898) : Loading success.
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.366 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.404 2 DEBUG nova.compute.manager [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.405 2 DEBUG nova.objects.instance [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.406 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ba2cf934-ce76-4de7-a495-285f144bdab7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.406 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394282.3539577, ba2cf934-ce76-4de7-a495-285f144bdab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.407 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Started (Lifecycle Event)
Oct 02 08:38:02 compute-0 podman[351866]: 2025-10-02 08:38:02.410971207 +0000 UTC m=+0.122979817 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.427 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.431 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance running successfully.
Oct 02 08:38:02 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.434 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.436 2 DEBUG nova.virt.libvirt.guest [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.436 2 DEBUG nova.compute.manager [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.456 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.456 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394282.3588698, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.456 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.473 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.477 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.497 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.527 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.528 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deleting local config drive /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config because it was imported into RBD.
Oct 02 08:38:02 compute-0 kernel: tap00fa1373-e4: entered promiscuous mode
Oct 02 08:38:02 compute-0 systemd-udevd[351755]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:38:02 compute-0 NetworkManager[45129]: <info>  [1759394282.5845] manager: (tap00fa1373-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Oct 02 08:38:02 compute-0 ovn_controller[152344]: 2025-10-02T08:38:02Z|00927|binding|INFO|Claiming lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for this chassis.
Oct 02 08:38:02 compute-0 ovn_controller[152344]: 2025-10-02T08:38:02Z|00928|binding|INFO|00fa1373-e4cc-4245-9cfa-5a58c77aa4eb: Claiming fa:16:3e:11:b9:2b 10.100.0.10
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.591 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.593 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 bound to our chassis
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.594 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:02 compute-0 NetworkManager[45129]: <info>  [1759394282.5969] device (tap00fa1373-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:38:02 compute-0 NetworkManager[45129]: <info>  [1759394282.5999] device (tap00fa1373-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:38:02 compute-0 ovn_controller[152344]: 2025-10-02T08:38:02Z|00929|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb ovn-installed in OVS
Oct 02 08:38:02 compute-0 ovn_controller[152344]: 2025-10-02T08:38:02Z|00930|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb up in Southbound
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.613 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd81306f-0d61-45a9-b139-86f57abc3076]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.614 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19d584c3-e1 in ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.616 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19d584c3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.616 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[16199697-f999-402b-9ee4-afe963654272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.617 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c884a56-c810-49d5-a804-f38d885f7c06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.630 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbfc6b7-cc1c-466e-a068-a9f16e993b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 systemd-machined[214636]: New machine qemu-116-instance-0000005d.
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.646 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6891bf13-da14-46f4-abdb-e17429e4f7c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-0000005d.
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.677 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8840e8d3-87cb-48d4-b514-0281439bb0dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.688 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e287b466-1782-417c-a1ef-1111a9ff47de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 NetworkManager[45129]: <info>  [1759394282.6902] manager: (tap19d584c3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.721 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfd5d24-3bf3-4fdc-8325-1d074b0e4c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.724 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1411e3f5-18a2-48c8-b1a1-0db83899fb3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.742 2 DEBUG nova.compute.manager [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.742 2 DEBUG oslo_concurrency.lockutils [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.742 2 DEBUG oslo_concurrency.lockutils [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.743 2 DEBUG oslo_concurrency.lockutils [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.743 2 DEBUG nova.compute.manager [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.743 2 WARNING nova.compute.manager [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.
Oct 02 08:38:02 compute-0 NetworkManager[45129]: <info>  [1759394282.7468] device (tap19d584c3-e0): carrier: link connected
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.755 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6b86ad76-fe10-4412-bfe9-3d9fac0e97cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.779 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a97054e-6e88-48c0-87bc-4a5e556f266b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351968, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.797 2 DEBUG nova.compute.manager [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.797 2 DEBUG oslo_concurrency.lockutils [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.798 2 DEBUG oslo_concurrency.lockutils [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.798 2 DEBUG oslo_concurrency.lockutils [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.798 2 DEBUG nova.compute.manager [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Processing event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.799 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd04d01b-d4a0-4ff1-a4fe-7530588c115c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:8c4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516134, 'tstamp': 516134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351969, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.814 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[abc12dd1-022d-4299-af52-19177ed8b0dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351970, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.844 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46ef09e6-e45e-4f23-b957-eb2dffdb4e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.898 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55854c80-d8ec-46ea-9600-384113b5131a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.899 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.900 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.900 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:02 compute-0 NetworkManager[45129]: <info>  [1759394282.9027] manager: (tap19d584c3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:02 compute-0 kernel: tap19d584c3-e0: entered promiscuous mode
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.905 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:02 compute-0 ovn_controller[152344]: 2025-10-02T08:38:02Z|00931|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.908 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.909 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0af1a5a8-514a-4f0e-81a9-635f17f4c263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.910 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:38:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.911 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'env', 'PROCESS_TAG=haproxy-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19d584c3-e754-47d1-9cdf-c6badbd670d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:38:02 compute-0 nova_compute[260603]: 2025-10-02 08:38:02.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct 02 08:38:03 compute-0 podman[352044]: 2025-10-02 08:38:03.304779502 +0000 UTC m=+0.076593084 container create db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:38:03 compute-0 podman[352044]: 2025-10-02 08:38:03.254028345 +0000 UTC m=+0.025841977 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:38:03 compute-0 systemd[1]: Started libpod-conmon-db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1.scope.
Oct 02 08:38:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a74f8d6643ae70d1fa74bb44614c3aa06e7ac08d3f73b3369e5ce67a1513ec1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:03 compute-0 podman[352044]: 2025-10-02 08:38:03.418668164 +0000 UTC m=+0.190481716 container init db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:38:03 compute-0 podman[352044]: 2025-10-02 08:38:03.424946103 +0000 UTC m=+0.196759635 container start db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:38:03 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [NOTICE]   (352063) : New worker (352065) forked
Oct 02 08:38:03 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [NOTICE]   (352063) : Loading success.
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.569 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394283.5686479, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.569 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Started (Lifecycle Event)
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.571 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.583 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.586 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.589 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance spawned successfully.
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.589 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.592 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.609 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.610 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394283.5695937, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Paused (Lifecycle Event)
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.614 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.615 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.615 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.616 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.617 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.617 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.638 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.641 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394283.5750816, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.641 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Resumed (Lifecycle Event)
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.665 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.668 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.675 2 INFO nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Took 8.52 seconds to spawn the instance on the hypervisor.
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.675 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.688 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.737 2 INFO nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Took 9.46 seconds to build instance.
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.752 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:03 compute-0 nova_compute[260603]: 2025-10-02 08:38:03.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.310 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.311 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.311 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.312 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.313 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.314 2 INFO nova.compute.manager [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Terminating instance
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.315 2 DEBUG nova.compute.manager [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:38:04 compute-0 ceph-mon[74477]: pgmap v1764: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct 02 08:38:04 compute-0 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct 02 08:38:04 compute-0 NetworkManager[45129]: <info>  [1759394284.3545] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 ovn_controller[152344]: 2025-10-02T08:38:04Z|00932|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct 02 08:38:04 compute-0 ovn_controller[152344]: 2025-10-02T08:38:04Z|00933|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct 02 08:38:04 compute-0 ovn_controller[152344]: 2025-10-02T08:38:04Z|00934|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.371 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '13', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.373 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.374 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.374 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb6b0fb-31e1-4e0c-adbd-490ae2811d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.375 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 02 08:38:04 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000055.scope: Consumed 2.644s CPU time.
Oct 02 08:38:04 compute-0 systemd-machined[214636]: Machine qemu-115-instance-00000055 terminated.
Oct 02 08:38:04 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [NOTICE]   (351898) : haproxy version is 2.8.14-c23fe91
Oct 02 08:38:04 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [NOTICE]   (351898) : path to executable is /usr/sbin/haproxy
Oct 02 08:38:04 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [WARNING]  (351898) : Exiting Master process...
Oct 02 08:38:04 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [WARNING]  (351898) : Exiting Master process...
Oct 02 08:38:04 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [ALERT]    (351898) : Current worker (351909) exited with code 143 (Terminated)
Oct 02 08:38:04 compute-0 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [WARNING]  (351898) : All workers exited. Exiting... (0)
Oct 02 08:38:04 compute-0 systemd[1]: libpod-37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f.scope: Deactivated successfully.
Oct 02 08:38:04 compute-0 podman[352087]: 2025-10-02 08:38:04.505618983 +0000 UTC m=+0.058681274 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:38:04 compute-0 podman[352101]: 2025-10-02 08:38:04.508185981 +0000 UTC m=+0.042918221 container died 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f-userdata-shm.mount: Deactivated successfully.
Oct 02 08:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fb83696c42c7f8209aee60e425fd95b280fe9157eca6c3109314b6ebc0a1f8b-merged.mount: Deactivated successfully.
Oct 02 08:38:04 compute-0 podman[352101]: 2025-10-02 08:38:04.545032518 +0000 UTC m=+0.079764758 container cleanup 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.557 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.558 2 DEBUG nova.objects.instance [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:04 compute-0 systemd[1]: libpod-conmon-37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f.scope: Deactivated successfully.
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.582 2 DEBUG nova.virt.libvirt.vif [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:38:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.584 2 DEBUG nova.network.os_vif_util [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.585 2 DEBUG nova.network.os_vif_util [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.585 2 DEBUG os_vif [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap961da5ba-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.592 2 INFO os_vif [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')
Oct 02 08:38:04 compute-0 podman[352146]: 2025-10-02 08:38:04.617129844 +0000 UTC m=+0.044413205 container remove 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.623 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5f76ce4c-ce30-4b26-b63c-cbd7fa9f0d60]: (4, ('Thu Oct  2 08:38:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f)\n37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f\nThu Oct  2 08:38:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f)\n37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.625 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51688357-adaa-45fd-aee8-2e390df6b45e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.626 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 kernel: tap74f187c2-70: left promiscuous mode
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.645 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3de07e6-aa66-430e-887d-c6f3f3ffa788]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.664 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[480e9aa9-f6ef-4244-bc58-af30fd17fddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cab7a08c-0260-4910-b3f6-1ee882db81ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.683 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6491e1-3dd5-4b2b-a677-628b1554dde3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516014, 'reachable_time': 30252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352189, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.685 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:38:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.685 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e58636da-8863-4d74-8a1e-f1569640cde7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.905 2 INFO nova.virt.libvirt.driver [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Deleting instance files /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7_del
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.906 2 INFO nova.virt.libvirt.driver [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Deletion of /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7_del complete
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.963 2 INFO nova.compute.manager [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Took 0.65 seconds to destroy the instance on the hypervisor.
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.964 2 DEBUG oslo.service.loopingcall [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.964 2 DEBUG nova.compute.manager [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.965 2 DEBUG nova.network.neutron [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.978 2 DEBUG nova.compute.manager [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.978 2 DEBUG oslo_concurrency.lockutils [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.979 2 DEBUG oslo_concurrency.lockutils [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.979 2 DEBUG oslo_concurrency.lockutils [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.979 2 DEBUG nova.compute.manager [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:04 compute-0 nova_compute[260603]: 2025-10-02 08:38:04.979 2 WARNING nova.compute.manager [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state None.
Oct 02 08:38:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 08:38:05 compute-0 nova_compute[260603]: 2025-10-02 08:38:05.794 2 DEBUG nova.network.neutron [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:05 compute-0 nova_compute[260603]: 2025-10-02 08:38:05.817 2 INFO nova.compute.manager [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Took 0.85 seconds to deallocate network for instance.
Oct 02 08:38:05 compute-0 nova_compute[260603]: 2025-10-02 08:38:05.867 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:05 compute-0 nova_compute[260603]: 2025-10-02 08:38:05.868 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:05 compute-0 nova_compute[260603]: 2025-10-02 08:38:05.910 2 DEBUG nova.compute.manager [req-79aca5e3-26b7-47b6-8e97-d177f69daecc req-d4f85ea3-5a0b-42c4-af1d-d89f3c03494e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-deleted-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:05 compute-0 nova_compute[260603]: 2025-10-02 08:38:05.972 2 DEBUG oslo_concurrency.processutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:06 compute-0 ceph-mon[74477]: pgmap v1765: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 08:38:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:38:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84441315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:06 compute-0 nova_compute[260603]: 2025-10-02 08:38:06.498 2 DEBUG oslo_concurrency.processutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:06 compute-0 nova_compute[260603]: 2025-10-02 08:38:06.506 2 DEBUG nova.compute.provider_tree [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:38:06 compute-0 nova_compute[260603]: 2025-10-02 08:38:06.526 2 DEBUG nova.scheduler.client.report [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:38:06 compute-0 nova_compute[260603]: 2025-10-02 08:38:06.560 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:06 compute-0 nova_compute[260603]: 2025-10-02 08:38:06.600 2 INFO nova.scheduler.client.report [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Deleted allocations for instance ba2cf934-ce76-4de7-a495-285f144bdab7
Oct 02 08:38:06 compute-0 nova_compute[260603]: 2025-10-02 08:38:06.674 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 528 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 02 08:38:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/84441315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:07 compute-0 nova_compute[260603]: 2025-10-02 08:38:07.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:07.836 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:07.837 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:38:08 compute-0 ceph-mon[74477]: pgmap v1766: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 528 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 02 08:38:08 compute-0 nova_compute[260603]: 2025-10-02 08:38:08.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 88 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 121 op/s
Oct 02 08:38:09 compute-0 nova_compute[260603]: 2025-10-02 08:38:09.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:10 compute-0 nova_compute[260603]: 2025-10-02 08:38:10.299 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:10 compute-0 nova_compute[260603]: 2025-10-02 08:38:10.300 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:10 compute-0 nova_compute[260603]: 2025-10-02 08:38:10.334 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:38:10 compute-0 ceph-mon[74477]: pgmap v1767: 305 pgs: 305 active+clean; 88 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 121 op/s
Oct 02 08:38:10 compute-0 nova_compute[260603]: 2025-10-02 08:38:10.452 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:10 compute-0 nova_compute[260603]: 2025-10-02 08:38:10.453 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:10 compute-0 nova_compute[260603]: 2025-10-02 08:38:10.465 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:38:10 compute-0 nova_compute[260603]: 2025-10-02 08:38:10.466 2 INFO nova.compute.claims [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:38:10 compute-0 nova_compute[260603]: 2025-10-02 08:38:10.645 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:38:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3406974725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.171 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.179 2 DEBUG nova.compute.provider_tree [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:38:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 88 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 106 op/s
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.206 2 DEBUG nova.scheduler.client.report [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.235 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.239 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.292 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.292 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.309 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.329 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:38:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3406974725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.447 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.449 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.450 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Creating image(s)
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.484 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.523 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.559 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.565 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.660 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.661 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.662 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.662 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.687 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.690 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b6a5d839-2362-461b-a536-078f2c86d9b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.733 2 DEBUG nova.policy [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15da3bbf2c9f49b68e7a7e0ccd557067', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b85786f28a064d75924559acd4f6137e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.943 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b6a5d839-2362-461b-a536-078f2c86d9b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:11 compute-0 nova_compute[260603]: 2025-10-02 08:38:11.996 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] resizing rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:38:12 compute-0 nova_compute[260603]: 2025-10-02 08:38:12.085 2 DEBUG nova.objects.instance [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'migration_context' on Instance uuid b6a5d839-2362-461b-a536-078f2c86d9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:12 compute-0 nova_compute[260603]: 2025-10-02 08:38:12.116 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:38:12 compute-0 nova_compute[260603]: 2025-10-02 08:38:12.117 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Ensure instance console log exists: /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:38:12 compute-0 nova_compute[260603]: 2025-10-02 08:38:12.117 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:12 compute-0 nova_compute[260603]: 2025-10-02 08:38:12.118 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:12 compute-0 nova_compute[260603]: 2025-10-02 08:38:12.118 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:12 compute-0 ceph-mon[74477]: pgmap v1768: 305 pgs: 305 active+clean; 88 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 106 op/s
Oct 02 08:38:12 compute-0 nova_compute[260603]: 2025-10-02 08:38:12.825 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Successfully created port: 31d7db70-01e7-4886-b589-dc252f8fc7b5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:38:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 134 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Oct 02 08:38:13 compute-0 ovn_controller[152344]: 2025-10-02T08:38:13Z|00935|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct 02 08:38:13 compute-0 nova_compute[260603]: 2025-10-02 08:38:13.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:13 compute-0 ovn_controller[152344]: 2025-10-02T08:38:13Z|00936|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct 02 08:38:13 compute-0 nova_compute[260603]: 2025-10-02 08:38:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:13 compute-0 nova_compute[260603]: 2025-10-02 08:38:13.786 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Successfully updated port: 31d7db70-01e7-4886-b589-dc252f8fc7b5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:38:13 compute-0 nova_compute[260603]: 2025-10-02 08:38:13.811 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:38:13 compute-0 nova_compute[260603]: 2025-10-02 08:38:13.812 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:38:13 compute-0 nova_compute[260603]: 2025-10-02 08:38:13.812 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:38:13 compute-0 nova_compute[260603]: 2025-10-02 08:38:13.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:14 compute-0 nova_compute[260603]: 2025-10-02 08:38:14.034 2 DEBUG nova.compute.manager [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-changed-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:14 compute-0 nova_compute[260603]: 2025-10-02 08:38:14.035 2 DEBUG nova.compute.manager [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Refreshing instance network info cache due to event network-changed-31d7db70-01e7-4886-b589-dc252f8fc7b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:38:14 compute-0 nova_compute[260603]: 2025-10-02 08:38:14.035 2 DEBUG oslo_concurrency.lockutils [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:38:14 compute-0 nova_compute[260603]: 2025-10-02 08:38:14.163 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:38:14 compute-0 ceph-mon[74477]: pgmap v1769: 305 pgs: 305 active+clean; 134 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Oct 02 08:38:14 compute-0 nova_compute[260603]: 2025-10-02 08:38:14.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:14 compute-0 ovn_controller[152344]: 2025-10-02T08:38:14Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:b9:2b 10.100.0.10
Oct 02 08:38:14 compute-0 ovn_controller[152344]: 2025-10-02T08:38:14Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:b9:2b 10.100.0.10
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.151 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Updating instance_info_cache with network_info: [{"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.175 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.175 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance network_info: |[{"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.175 2 DEBUG oslo_concurrency.lockutils [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.176 2 DEBUG nova.network.neutron [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Refreshing network info cache for port 31d7db70-01e7-4886-b589-dc252f8fc7b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.178 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start _get_guest_xml network_info=[{"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.183 2 WARNING nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:38:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 134 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.187 2 DEBUG nova.virt.libvirt.host [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.188 2 DEBUG nova.virt.libvirt.host [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.191 2 DEBUG nova.virt.libvirt.host [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.191 2 DEBUG nova.virt.libvirt.host [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.192 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.192 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.194 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.194 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.194 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.194 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.195 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.197 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:38:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183046515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.633 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.677 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:15 compute-0 nova_compute[260603]: 2025-10-02 08:38:15.683 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:38:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880683933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.157 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.160 2 DEBUG nova.virt.libvirt.vif [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-187016526',display_name='tempest-ServersNegativeTestJSON-server-187016526',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-187016526',id=94,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-ex30m3ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSO
N-2088330606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:38:11Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=b6a5d839-2362-461b-a536-078f2c86d9b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.161 2 DEBUG nova.network.os_vif_util [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.163 2 DEBUG nova.network.os_vif_util [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.165 2 DEBUG nova.objects.instance [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid b6a5d839-2362-461b-a536-078f2c86d9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.182 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <uuid>b6a5d839-2362-461b-a536-078f2c86d9b9</uuid>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <name>instance-0000005e</name>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersNegativeTestJSON-server-187016526</nova:name>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:38:15</nova:creationTime>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <nova:user uuid="15da3bbf2c9f49b68e7a7e0ccd557067">tempest-ServersNegativeTestJSON-2088330606-project-member</nova:user>
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <nova:project uuid="b85786f28a064d75924559acd4f6137e">tempest-ServersNegativeTestJSON-2088330606</nova:project>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <nova:port uuid="31d7db70-01e7-4886-b589-dc252f8fc7b5">
Oct 02 08:38:16 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <system>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <entry name="serial">b6a5d839-2362-461b-a536-078f2c86d9b9</entry>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <entry name="uuid">b6a5d839-2362-461b-a536-078f2c86d9b9</entry>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </system>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <os>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   </os>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <features>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   </features>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/b6a5d839-2362-461b-a536-078f2c86d9b9_disk">
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       </source>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config">
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       </source>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:38:16 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:5d:35:da"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <target dev="tap31d7db70-01"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/console.log" append="off"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <video>
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </video>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:38:16 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:38:16 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:38:16 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:38:16 compute-0 nova_compute[260603]: </domain>
Oct 02 08:38:16 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.184 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Preparing to wait for external event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.184 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.185 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.185 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.186 2 DEBUG nova.virt.libvirt.vif [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-187016526',display_name='tempest-ServersNegativeTestJSON-server-187016526',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-187016526',id=94,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-ex30m3ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:38:11Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=b6a5d839-2362-461b-a536-078f2c86d9b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.187 2 DEBUG nova.network.os_vif_util [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.188 2 DEBUG nova.network.os_vif_util [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.188 2 DEBUG os_vif [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31d7db70-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31d7db70-01, col_values=(('external_ids', {'iface-id': '31d7db70-01e7-4886-b589-dc252f8fc7b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:35:da', 'vm-uuid': 'b6a5d839-2362-461b-a536-078f2c86d9b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:16 compute-0 NetworkManager[45129]: <info>  [1759394296.2040] manager: (tap31d7db70-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.210 2 INFO os_vif [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01')
Oct 02 08:38:16 compute-0 ceph-mon[74477]: pgmap v1770: 305 pgs: 305 active+clean; 134 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 02 08:38:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1183046515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3880683933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.437 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.437 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.438 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No VIF found with MAC fa:16:3e:5d:35:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.439 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Using config drive
Oct 02 08:38:16 compute-0 nova_compute[260603]: 2025-10-02 08:38:16.469 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:16.839 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.025 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Creating config drive at /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.036 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzpupaatt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 145 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 138 op/s
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.211 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzpupaatt" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.253 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.258 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.338 2 DEBUG nova.network.neutron [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Updated VIF entry in instance network info cache for port 31d7db70-01e7-4886-b589-dc252f8fc7b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.339 2 DEBUG nova.network.neutron [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Updating instance_info_cache with network_info: [{"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.362 2 DEBUG oslo_concurrency.lockutils [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.455 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.456 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Deleting local config drive /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config because it was imported into RBD.
Oct 02 08:38:17 compute-0 kernel: tap31d7db70-01: entered promiscuous mode
Oct 02 08:38:17 compute-0 NetworkManager[45129]: <info>  [1759394297.5128] manager: (tap31d7db70-01): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:17 compute-0 ovn_controller[152344]: 2025-10-02T08:38:17Z|00937|binding|INFO|Claiming lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 for this chassis.
Oct 02 08:38:17 compute-0 ovn_controller[152344]: 2025-10-02T08:38:17Z|00938|binding|INFO|31d7db70-01e7-4886-b589-dc252f8fc7b5: Claiming fa:16:3e:5d:35:da 10.100.0.9
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.524 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:35:da 10.100.0.9'], port_security=['fa:16:3e:5d:35:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6a5d839-2362-461b-a536-078f2c86d9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=31d7db70-01e7-4886-b589-dc252f8fc7b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.525 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 31d7db70-01e7-4886-b589-dc252f8fc7b5 in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 bound to our chassis
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.527 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:38:17 compute-0 ovn_controller[152344]: 2025-10-02T08:38:17Z|00939|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 ovn-installed in OVS
Oct 02 08:38:17 compute-0 ovn_controller[152344]: 2025-10-02T08:38:17Z|00940|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 up in Southbound
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:17 compute-0 systemd-udevd[352539]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.546 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b5afd0-5ec8-4915-904a-98e57f917c73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:17 compute-0 NetworkManager[45129]: <info>  [1759394297.5565] device (tap31d7db70-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:38:17 compute-0 systemd-machined[214636]: New machine qemu-117-instance-0000005e.
Oct 02 08:38:17 compute-0 NetworkManager[45129]: <info>  [1759394297.5581] device (tap31d7db70-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.580 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd33cde-50b6-4da2-ad6f-565cb2232c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:17 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-0000005e.
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.583 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f79a6f-3e83-47b2-a61d-5a54c6baea62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.614 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c452e51f-bfb1-43e6-a3ba-3f0923a42751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.641 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4c6423-4e2c-45ea-bf6b-b6117cf7428b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352549, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.658 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e591ffd-fe35-46c3-943d-2cbade8354f5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516147, 'tstamp': 516147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352553, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516149, 'tstamp': 516149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352553, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.660 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.704 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.705 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:17 compute-0 nova_compute[260603]: 2025-10-02 08:38:17.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.705 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.706 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.004 2 DEBUG nova.compute.manager [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.005 2 DEBUG oslo_concurrency.lockutils [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.005 2 DEBUG oslo_concurrency.lockutils [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.005 2 DEBUG oslo_concurrency.lockutils [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.005 2 DEBUG nova.compute.manager [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Processing event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.369 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394298.3693032, b6a5d839-2362-461b-a536-078f2c86d9b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.370 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] VM Started (Lifecycle Event)
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.373 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.378 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.386 2 INFO nova.virt.libvirt.driver [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance spawned successfully.
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.386 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.394 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.398 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.405 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.405 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.406 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.406 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.406 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.407 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:18 compute-0 ceph-mon[74477]: pgmap v1771: 305 pgs: 305 active+clean; 145 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 138 op/s
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.433 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.434 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394298.3694646, b6a5d839-2362-461b-a536-078f2c86d9b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.434 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] VM Paused (Lifecycle Event)
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.460 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.463 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394298.3785062, b6a5d839-2362-461b-a536-078f2c86d9b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.464 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] VM Resumed (Lifecycle Event)
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.469 2 INFO nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Took 7.02 seconds to spawn the instance on the hypervisor.
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.470 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.497 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.499 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.533 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.554 2 INFO nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Took 8.14 seconds to build instance.
Oct 02 08:38:18 compute-0 nova_compute[260603]: 2025-10-02 08:38:18.582 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.422 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.425 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.425 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.425 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.426 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.427 2 INFO nova.compute.manager [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Terminating instance
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.428 2 DEBUG nova.compute.manager [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:38:19 compute-0 kernel: tap31d7db70-01 (unregistering): left promiscuous mode
Oct 02 08:38:19 compute-0 NetworkManager[45129]: <info>  [1759394299.4678] device (tap31d7db70-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00941|binding|INFO|Releasing lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 from this chassis (sb_readonly=0)
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00942|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 down in Southbound
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00943|binding|INFO|Removing iface tap31d7db70-01 ovn-installed in OVS
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.487 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:35:da 10.100.0.9'], port_security=['fa:16:3e:5d:35:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6a5d839-2362-461b-a536-078f2c86d9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=31d7db70-01e7-4886-b589-dc252f8fc7b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.488 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 31d7db70-01e7-4886-b589-dc252f8fc7b5 in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.490 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.516 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d00a924-05a3-40d8-ac94-f6ca5fe65d0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct 02 08:38:19 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Consumed 1.817s CPU time.
Oct 02 08:38:19 compute-0 systemd-machined[214636]: Machine qemu-117-instance-0000005e terminated.
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.551 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394284.5497692, ba2cf934-ce76-4de7-a495-285f144bdab7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.551 2 INFO nova.compute.manager [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Stopped (Lifecycle Event)
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.551 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[db3dfd10-69f6-496e-9e0c-cda08787b14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.555 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[182a25e1-eaa6-4949-b8ca-a89580ec1428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.579 2 DEBUG nova.compute.manager [None req-611e1687-5dd5-4b58-9911-25fbd6564efa - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.592 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[180bc79f-3a4a-4559-b038-739aa516c638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.620 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6b6e92-5cce-44f0-acec-69ca9c63d536]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352606, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.647 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[658a0856-0b9a-4fb4-a682-c827ddfea792]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516147, 'tstamp': 516147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352607, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516149, 'tstamp': 516149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352607, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.649 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:19 compute-0 kernel: tap31d7db70-01: entered promiscuous mode
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 NetworkManager[45129]: <info>  [1759394299.6549] manager: (tap31d7db70-01): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Oct 02 08:38:19 compute-0 kernel: tap31d7db70-01 (unregistering): left promiscuous mode
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00944|binding|INFO|Claiming lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 for this chassis.
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00945|binding|INFO|31d7db70-01e7-4886-b589-dc252f8fc7b5: Claiming fa:16:3e:5d:35:da 10.100.0.9
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.677 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:35:da 10.100.0.9'], port_security=['fa:16:3e:5d:35:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6a5d839-2362-461b-a536-078f2c86d9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=31d7db70-01e7-4886-b589-dc252f8fc7b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.687 2 INFO nova.virt.libvirt.driver [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance destroyed successfully.
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.687 2 DEBUG nova.objects.instance [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'resources' on Instance uuid b6a5d839-2362-461b-a536-078f2c86d9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00946|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 ovn-installed in OVS
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00947|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 up in Southbound
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00948|binding|INFO|Releasing lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 from this chassis (sb_readonly=1)
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00949|if_status|INFO|Dropped 3 log messages in last 55 seconds (most recently, 55 seconds ago) due to excessive rate
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00950|if_status|INFO|Not setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 down as sb is readonly
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00951|binding|INFO|Removing iface tap31d7db70-01 ovn-installed in OVS
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00952|binding|INFO|Releasing lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 from this chassis (sb_readonly=0)
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.702 2 DEBUG nova.virt.libvirt.vif [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-187016526',display_name='tempest-ServersNegativeTestJSON-server-187016526',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-187016526',id=94,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-ex30m3ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:38:18Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=b6a5d839-2362-461b-a536-078f2c86d9b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.703 2 DEBUG nova.network.os_vif_util [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.704 2 DEBUG nova.network.os_vif_util [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.705 2 DEBUG os_vif [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.707 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.708 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:19 compute-0 ovn_controller[152344]: 2025-10-02T08:38:19Z|00953|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 down in Southbound
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.711 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31d7db70-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.715 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 31d7db70-01e7-4886-b589-dc252f8fc7b5 in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.719 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:35:da 10.100.0.9'], port_security=['fa:16:3e:5d:35:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6a5d839-2362-461b-a536-078f2c86d9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=31d7db70-01e7-4886-b589-dc252f8fc7b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.722 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.740 2 INFO os_vif [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01')
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.749 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5178e37d-ebca-4b95-81e9-6a20f2515657]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.801 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[333c5c38-f87e-4ad5-9894-793af3d710a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.806 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[384f2951-34cd-4fc8-8b82-ea1a4cf83c3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.849 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c8ecbe-d096-427a-84a0-d2425868f243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.877 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1970f3a1-2121-41d2-ba24-25960abe6472]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352636, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.903 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5482bfe2-e672-4547-9d88-4a720e893235]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516147, 'tstamp': 516147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352637, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516149, 'tstamp': 516149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352637, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.906 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:19 compute-0 nova_compute[260603]: 2025-10-02 08:38:19.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.911 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.912 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.913 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.914 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.916 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 31d7db70-01e7-4886-b589-dc252f8fc7b5 in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.921 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.950 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4c52a9-e478-41e7-ad27-8961da51db95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.996 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2e69a8f1-8e5c-4ef9-9bee-103972fcdefc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.001 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3d299eff-fc6e-4c8c-9970-84768c8bc8be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.045 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6d80d7b7-bd53-4198-97e8-328b0172d321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.077 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa0b073-bcb4-4778-90e5-9d0dff941e0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352644, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.102 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d43c0e-5af4-4e9a-979d-6466d03e7067]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516147, 'tstamp': 516147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352645, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516149, 'tstamp': 516149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352645, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.104 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.130 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.131 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.132 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.132 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.147 2 INFO nova.virt.libvirt.driver [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Deleting instance files /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9_del
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.148 2 INFO nova.virt.libvirt.driver [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Deletion of /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9_del complete
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.158 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.158 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.159 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.159 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.160 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.160 2 WARNING nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state active and task_state deleting.
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.161 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.161 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.161 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.162 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.162 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.163 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.163 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.163 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.164 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.164 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.165 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.165 2 WARNING nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state active and task_state deleting.
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.212 2 INFO nova.compute.manager [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.213 2 DEBUG oslo.service.loopingcall [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.214 2 DEBUG nova.compute.manager [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:38:20 compute-0 nova_compute[260603]: 2025-10-02 08:38:20.214 2 DEBUG nova.network.neutron [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:38:20 compute-0 ceph-mon[74477]: pgmap v1772: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 02 08:38:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 02 08:38:21 compute-0 nova_compute[260603]: 2025-10-02 08:38:21.712 2 DEBUG nova.network.neutron [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:21 compute-0 nova_compute[260603]: 2025-10-02 08:38:21.734 2 INFO nova.compute.manager [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Took 1.52 seconds to deallocate network for instance.
Oct 02 08:38:21 compute-0 nova_compute[260603]: 2025-10-02 08:38:21.779 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:21 compute-0 nova_compute[260603]: 2025-10-02 08:38:21.779 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:38:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987305785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:38:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:38:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987305785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:38:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:38:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 117K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.80 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 47.48 MB, 0.08 MB/s
                                           Interval WAL: 12K writes, 4782 syncs, 2.51 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:38:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.262 2 DEBUG oslo_concurrency.processutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.347 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.348 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.348 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.349 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.349 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.350 2 WARNING nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state deleted and task_state None.
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.350 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.351 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.351 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.352 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.352 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.352 2 WARNING nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state deleted and task_state None.
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.353 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.353 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.354 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.354 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.355 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.355 2 WARNING nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state deleted and task_state None.
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.356 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-deleted-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.356 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.356 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.357 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.357 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.358 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.358 2 WARNING nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state deleted and task_state None.
Oct 02 08:38:22 compute-0 ceph-mon[74477]: pgmap v1773: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 02 08:38:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1987305785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:38:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1987305785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:38:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:38:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036439364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.692 2 DEBUG oslo_concurrency.processutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.700 2 DEBUG nova.compute.provider_tree [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.725 2 DEBUG nova.scheduler.client.report [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.749 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.780 2 INFO nova.scheduler.client.report [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Deleted allocations for instance b6a5d839-2362-461b-a536-078f2c86d9b9
Oct 02 08:38:22 compute-0 nova_compute[260603]: 2025-10-02 08:38:22.853 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Oct 02 08:38:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1036439364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:23 compute-0 nova_compute[260603]: 2025-10-02 08:38:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:23 compute-0 nova_compute[260603]: 2025-10-02 08:38:23.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:24 compute-0 nova_compute[260603]: 2025-10-02 08:38:24.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:24 compute-0 ceph-mon[74477]: pgmap v1774: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Oct 02 08:38:24 compute-0 nova_compute[260603]: 2025-10-02 08:38:24.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:24 compute-0 nova_compute[260603]: 2025-10-02 08:38:24.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:38:24 compute-0 nova_compute[260603]: 2025-10-02 08:38:24.574 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:38:24 compute-0 nova_compute[260603]: 2025-10-02 08:38:24.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Oct 02 08:38:25 compute-0 nova_compute[260603]: 2025-10-02 08:38:25.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:26 compute-0 podman[352669]: 2025-10-02 08:38:26.032443218 +0000 UTC m=+0.088154170 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 08:38:26 compute-0 podman[352668]: 2025-10-02 08:38:26.091626087 +0000 UTC m=+0.148510885 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:38:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:38:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 120K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.81 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 46.85 MB, 0.08 MB/s
                                           Interval WAL: 12K writes, 5142 syncs, 2.43 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:38:26 compute-0 ceph-mon[74477]: pgmap v1775: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Oct 02 08:38:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:38:27
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['volumes', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta']
Oct 02 08:38:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:38:28 compute-0 ceph-mon[74477]: pgmap v1776: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Oct 02 08:38:28 compute-0 nova_compute[260603]: 2025-10-02 08:38:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:28 compute-0 nova_compute[260603]: 2025-10-02 08:38:28.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:28 compute-0 nova_compute[260603]: 2025-10-02 08:38:28.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:28 compute-0 nova_compute[260603]: 2025-10-02 08:38:28.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:28 compute-0 nova_compute[260603]: 2025-10-02 08:38:28.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:38:28 compute-0 nova_compute[260603]: 2025-10-02 08:38:28.545 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:38:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/270479168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.011 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.113 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.114 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:38:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 137 op/s
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.322 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.323 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3704MB free_disk=59.94279479980469GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.323 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.324 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.403 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.404 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.405 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.441 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/270479168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:29 compute-0 nova_compute[260603]: 2025-10-02 08:38:29.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:38:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143985717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:30 compute-0 nova_compute[260603]: 2025-10-02 08:38:30.015 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:30 compute-0 nova_compute[260603]: 2025-10-02 08:38:30.024 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:38:30 compute-0 nova_compute[260603]: 2025-10-02 08:38:30.047 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:38:30 compute-0 nova_compute[260603]: 2025-10-02 08:38:30.068 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:38:30 compute-0 nova_compute[260603]: 2025-10-02 08:38:30.069 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:30 compute-0 nova_compute[260603]: 2025-10-02 08:38:30.069 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:30 compute-0 nova_compute[260603]: 2025-10-02 08:38:30.069 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 08:38:30 compute-0 nova_compute[260603]: 2025-10-02 08:38:30.090 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 08:38:30 compute-0 ceph-mon[74477]: pgmap v1777: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 137 op/s
Oct 02 08:38:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4143985717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 25 KiB/s wr, 79 op/s
Oct 02 08:38:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:38:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 25K writes, 96K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 9018 syncs, 2.81 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9319 writes, 34K keys, 9319 commit groups, 1.0 writes per commit group, ingest: 33.26 MB, 0.06 MB/s
                                           Interval WAL: 9319 writes, 3866 syncs, 2.41 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:38:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 08:38:32 compute-0 ceph-mon[74477]: pgmap v1778: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 25 KiB/s wr, 79 op/s
Oct 02 08:38:32 compute-0 nova_compute[260603]: 2025-10-02 08:38:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:32 compute-0 nova_compute[260603]: 2025-10-02 08:38:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:32 compute-0 nova_compute[260603]: 2025-10-02 08:38:32.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:32 compute-0 nova_compute[260603]: 2025-10-02 08:38:32.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 08:38:33 compute-0 podman[352757]: 2025-10-02 08:38:33.030313584 +0000 UTC m=+0.091830121 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 08:38:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 25 KiB/s wr, 79 op/s
Oct 02 08:38:33 compute-0 nova_compute[260603]: 2025-10-02 08:38:33.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.106 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.106 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.133 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.218 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.219 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.229 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.229 2 INFO nova.compute.claims [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.384 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:34 compute-0 ceph-mon[74477]: pgmap v1779: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 25 KiB/s wr, 79 op/s
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.684 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394299.6828716, b6a5d839-2362-461b-a536-078f2c86d9b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.685 2 INFO nova.compute.manager [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] VM Stopped (Lifecycle Event)
Oct 02 08:38:34 compute-0 sudo[352797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.712 2 DEBUG nova.compute.manager [None req-468de743-2d12-4b20-8c1d-f81f3d057055 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:34 compute-0 sudo[352797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:34 compute-0 sudo[352797]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:34 compute-0 sudo[352828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:38:34 compute-0 podman[352821]: 2025-10-02 08:38:34.800635604 +0000 UTC m=+0.079544703 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:38:34 compute-0 sudo[352828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:34 compute-0 sudo[352828]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:34.823 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:38:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676199611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.869 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.877 2 DEBUG nova.compute.provider_tree [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:38:34 compute-0 sudo[352867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:34 compute-0 sudo[352867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:34 compute-0 sudo[352867]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.902 2 DEBUG nova.scheduler.client.report [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.927 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.928 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:38:34 compute-0 sudo[352894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:38:34 compute-0 sudo[352894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.982 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:38:34 compute-0 nova_compute[260603]: 2025-10-02 08:38:34.982 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.017 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.034 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.124 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.127 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.128 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Creating image(s)
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.165 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.218 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.249 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.257 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.354 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.356 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.356 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.357 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.381 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.385 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/676199611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:38:35 compute-0 sudo[352894]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:38:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:38:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:38:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:38:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:38:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:38:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 325fd12a-6b8a-422a-b19c-48e9c2c44400 does not exist
Oct 02 08:38:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9a63e180-113a-425f-9c8c-618b951ccb53 does not exist
Oct 02 08:38:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9e3c2a59-079a-4305-af44-0aecab3ea8bb does not exist
Oct 02 08:38:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:38:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:38:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:38:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:38:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:38:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.636 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:35 compute-0 sudo[353044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:35 compute-0 sudo[353044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:35 compute-0 sudo[353044]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.700 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] resizing rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:38:35 compute-0 sudo[353087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:38:35 compute-0 sudo[353087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.727 2 DEBUG nova.policy [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3802fedfb914c27b9b09ad6ea6f4c27', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c75535fe577642038c638a0b01f74d09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:38:35 compute-0 sudo[353087]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:35 compute-0 sudo[353148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.782 2 DEBUG nova.objects.instance [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'migration_context' on Instance uuid 4145f1a3-c327-49ee-9af1-1ace3afb70a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:35 compute-0 sudo[353148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:35 compute-0 sudo[353148]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.795 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.796 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Ensure instance console log exists: /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.796 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.796 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:35 compute-0 nova_compute[260603]: 2025-10-02 08:38:35.796 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:35 compute-0 sudo[353191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:38:35 compute-0 sudo[353191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:36 compute-0 podman[353258]: 2025-10-02 08:38:36.120677918 +0000 UTC m=+0.038587210 container create a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 08:38:36 compute-0 systemd[1]: Started libpod-conmon-a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b.scope.
Oct 02 08:38:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:36 compute-0 podman[353258]: 2025-10-02 08:38:36.195717563 +0000 UTC m=+0.113626875 container init a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:38:36 compute-0 podman[353258]: 2025-10-02 08:38:36.101499992 +0000 UTC m=+0.019409304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:38:36 compute-0 podman[353258]: 2025-10-02 08:38:36.202705383 +0000 UTC m=+0.120614665 container start a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 08:38:36 compute-0 podman[353258]: 2025-10-02 08:38:36.205937261 +0000 UTC m=+0.123846583 container attach a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:38:36 compute-0 adoring_gauss[353276]: 167 167
Oct 02 08:38:36 compute-0 systemd[1]: libpod-a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b.scope: Deactivated successfully.
Oct 02 08:38:36 compute-0 conmon[353276]: conmon a3cf4f13d7b48bf25958 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b.scope/container/memory.events
Oct 02 08:38:36 compute-0 podman[353258]: 2025-10-02 08:38:36.210258751 +0000 UTC m=+0.128168043 container died a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6e48f8e18e6392cb921e5441d2e676a5cd5c737165dc33fc2eedece7d15d3e8-merged.mount: Deactivated successfully.
Oct 02 08:38:36 compute-0 podman[353258]: 2025-10-02 08:38:36.249729757 +0000 UTC m=+0.167639069 container remove a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:38:36 compute-0 systemd[1]: libpod-conmon-a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b.scope: Deactivated successfully.
Oct 02 08:38:36 compute-0 podman[353300]: 2025-10-02 08:38:36.432047127 +0000 UTC m=+0.048944742 container create 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 08:38:36 compute-0 systemd[1]: Started libpod-conmon-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope.
Oct 02 08:38:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:36 compute-0 podman[353300]: 2025-10-02 08:38:36.416386726 +0000 UTC m=+0.033284331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:38:36 compute-0 podman[353300]: 2025-10-02 08:38:36.515382151 +0000 UTC m=+0.132279776 container init 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 08:38:36 compute-0 nova_compute[260603]: 2025-10-02 08:38:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:36 compute-0 podman[353300]: 2025-10-02 08:38:36.527564668 +0000 UTC m=+0.144462263 container start 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:38:36 compute-0 podman[353300]: 2025-10-02 08:38:36.53198088 +0000 UTC m=+0.148878515 container attach 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:38:36 compute-0 ceph-mon[74477]: pgmap v1780: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:38:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:38:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:38:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:38:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:38:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:38:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:38:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 139 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 390 KiB/s wr, 12 op/s
Oct 02 08:38:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:37 compute-0 nova_compute[260603]: 2025-10-02 08:38:37.271 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Successfully created port: 4340f7c5-2f3a-4608-b77c-40798457ce79 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:38:37 compute-0 nova_compute[260603]: 2025-10-02 08:38:37.526 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:37 compute-0 eager_heyrovsky[353316]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:38:37 compute-0 eager_heyrovsky[353316]: --> relative data size: 1.0
Oct 02 08:38:37 compute-0 eager_heyrovsky[353316]: --> All data devices are unavailable
Oct 02 08:38:37 compute-0 systemd[1]: libpod-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope: Deactivated successfully.
Oct 02 08:38:37 compute-0 systemd[1]: libpod-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope: Consumed 1.160s CPU time.
Oct 02 08:38:37 compute-0 conmon[353316]: conmon 922472ebf58c49a079b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope/container/memory.events
Oct 02 08:38:37 compute-0 podman[353300]: 2025-10-02 08:38:37.751738261 +0000 UTC m=+1.368635896 container died 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:38:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce-merged.mount: Deactivated successfully.
Oct 02 08:38:37 compute-0 podman[353300]: 2025-10-02 08:38:37.822427836 +0000 UTC m=+1.439325481 container remove 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:38:37 compute-0 systemd[1]: libpod-conmon-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope: Deactivated successfully.
Oct 02 08:38:37 compute-0 sudo[353191]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:37 compute-0 sudo[353358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:37 compute-0 sudo[353358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:37 compute-0 sudo[353358]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:37 compute-0 sudo[353383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:38:38 compute-0 sudo[353383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:38 compute-0 sudo[353383]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:38 compute-0 sudo[353408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:38 compute-0 sudo[353408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:38 compute-0 sudo[353408]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:38 compute-0 sudo[353433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:38:38 compute-0 sudo[353433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:38 compute-0 nova_compute[260603]: 2025-10-02 08:38:38.219 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Successfully updated port: 4340f7c5-2f3a-4608-b77c-40798457ce79 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:38:38 compute-0 nova_compute[260603]: 2025-10-02 08:38:38.234 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:38:38 compute-0 nova_compute[260603]: 2025-10-02 08:38:38.235 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquired lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:38:38 compute-0 nova_compute[260603]: 2025-10-02 08:38:38.235 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:38:38 compute-0 nova_compute[260603]: 2025-10-02 08:38:38.441 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:38:38 compute-0 ceph-mon[74477]: pgmap v1781: 305 pgs: 305 active+clean; 139 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 390 KiB/s wr, 12 op/s
Oct 02 08:38:38 compute-0 podman[353500]: 2025-10-02 08:38:38.627545064 +0000 UTC m=+0.063182870 container create 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008327203189099434 of space, bias 1.0, pg target 0.24981609567298302 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:38:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:38:38 compute-0 nova_compute[260603]: 2025-10-02 08:38:38.665 2 DEBUG nova.compute.manager [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-changed-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:38 compute-0 nova_compute[260603]: 2025-10-02 08:38:38.665 2 DEBUG nova.compute.manager [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Refreshing instance network info cache due to event network-changed-4340f7c5-2f3a-4608-b77c-40798457ce79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:38:38 compute-0 nova_compute[260603]: 2025-10-02 08:38:38.666 2 DEBUG oslo_concurrency.lockutils [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:38:38 compute-0 systemd[1]: Started libpod-conmon-46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0.scope.
Oct 02 08:38:38 compute-0 podman[353500]: 2025-10-02 08:38:38.59283597 +0000 UTC m=+0.028473816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:38:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:38 compute-0 podman[353500]: 2025-10-02 08:38:38.74687997 +0000 UTC m=+0.182517786 container init 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:38:38 compute-0 podman[353500]: 2025-10-02 08:38:38.759306584 +0000 UTC m=+0.194944350 container start 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 08:38:38 compute-0 podman[353500]: 2025-10-02 08:38:38.763332125 +0000 UTC m=+0.198969921 container attach 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:38:38 compute-0 serene_swanson[353516]: 167 167
Oct 02 08:38:38 compute-0 systemd[1]: libpod-46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0.scope: Deactivated successfully.
Oct 02 08:38:38 compute-0 podman[353500]: 2025-10-02 08:38:38.769400307 +0000 UTC m=+0.205038073 container died 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:38:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c787e74c16c7a58d22c73e0cbf88ee697df9318df4ecbc24aebd29e632f2d34-merged.mount: Deactivated successfully.
Oct 02 08:38:38 compute-0 podman[353500]: 2025-10-02 08:38:38.813975937 +0000 UTC m=+0.249613713 container remove 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 08:38:38 compute-0 systemd[1]: libpod-conmon-46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0.scope: Deactivated successfully.
Oct 02 08:38:39 compute-0 podman[353539]: 2025-10-02 08:38:39.03030798 +0000 UTC m=+0.058606554 container create e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 08:38:39 compute-0 nova_compute[260603]: 2025-10-02 08:38:39.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:39 compute-0 systemd[1]: Started libpod-conmon-e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3.scope.
Oct 02 08:38:39 compute-0 podman[353539]: 2025-10-02 08:38:39.012453643 +0000 UTC m=+0.040752227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:38:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:39 compute-0 podman[353539]: 2025-10-02 08:38:39.125860701 +0000 UTC m=+0.154159295 container init e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 08:38:39 compute-0 podman[353539]: 2025-10-02 08:38:39.136665866 +0000 UTC m=+0.164964440 container start e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:38:39 compute-0 podman[353539]: 2025-10-02 08:38:39.139911894 +0000 UTC m=+0.168210558 container attach e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:38:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:38:39 compute-0 nova_compute[260603]: 2025-10-02 08:38:39.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]: {
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:     "0": [
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:         {
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "devices": [
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "/dev/loop3"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             ],
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_name": "ceph_lv0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_size": "21470642176",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "name": "ceph_lv0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "tags": {
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cluster_name": "ceph",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.crush_device_class": "",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.encrypted": "0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osd_id": "0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.type": "block",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.vdo": "0"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             },
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "type": "block",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "vg_name": "ceph_vg0"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:         }
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:     ],
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:     "1": [
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:         {
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "devices": [
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "/dev/loop4"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             ],
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_name": "ceph_lv1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_size": "21470642176",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "name": "ceph_lv1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "tags": {
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cluster_name": "ceph",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.crush_device_class": "",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.encrypted": "0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osd_id": "1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.type": "block",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.vdo": "0"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             },
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "type": "block",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "vg_name": "ceph_vg1"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:         }
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:     ],
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:     "2": [
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:         {
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "devices": [
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "/dev/loop5"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             ],
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_name": "ceph_lv2",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_size": "21470642176",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "name": "ceph_lv2",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "tags": {
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.cluster_name": "ceph",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.crush_device_class": "",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.encrypted": "0",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osd_id": "2",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.type": "block",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:                 "ceph.vdo": "0"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             },
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "type": "block",
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:             "vg_name": "ceph_vg2"
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:         }
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]:     ]
Oct 02 08:38:39 compute-0 nostalgic_jepsen[353556]: }
Oct 02 08:38:39 compute-0 systemd[1]: libpod-e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3.scope: Deactivated successfully.
Oct 02 08:38:39 compute-0 podman[353539]: 2025-10-02 08:38:39.896295878 +0000 UTC m=+0.924594472 container died e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f-merged.mount: Deactivated successfully.
Oct 02 08:38:39 compute-0 podman[353539]: 2025-10-02 08:38:39.968993022 +0000 UTC m=+0.997291626 container remove e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:38:39 compute-0 nova_compute[260603]: 2025-10-02 08:38:39.977 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updating instance_info_cache with network_info: [{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:39 compute-0 systemd[1]: libpod-conmon-e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3.scope: Deactivated successfully.
Oct 02 08:38:40 compute-0 sudo[353433]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.015 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Releasing lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.016 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance network_info: |[{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.017 2 DEBUG oslo_concurrency.lockutils [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.018 2 DEBUG nova.network.neutron [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Refreshing network info cache for port 4340f7c5-2f3a-4608-b77c-40798457ce79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.022 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start _get_guest_xml network_info=[{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.028 2 WARNING nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.035 2 DEBUG nova.virt.libvirt.host [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.036 2 DEBUG nova.virt.libvirt.host [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.044 2 DEBUG nova.virt.libvirt.host [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.045 2 DEBUG nova.virt.libvirt.host [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.045 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.046 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.047 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.047 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.047 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.048 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.048 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.048 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.049 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.049 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.050 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.050 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.053 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:40 compute-0 sudo[353579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:40 compute-0 sudo[353579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:40 compute-0 sudo[353579]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:40 compute-0 sudo[353605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:38:40 compute-0 sudo[353605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:40 compute-0 sudo[353605]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:40 compute-0 sudo[353631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:40 compute-0 sudo[353631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:40 compute-0 sudo[353631]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:40 compute-0 sudo[353674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:38:40 compute-0 sudo[353674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:38:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2762823883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.547 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:40 compute-0 ceph-mon[74477]: pgmap v1782: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:38:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2762823883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.586 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:40 compute-0 nova_compute[260603]: 2025-10-02 08:38:40.592 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:40 compute-0 podman[353760]: 2025-10-02 08:38:40.745417118 +0000 UTC m=+0.068426828 container create 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:38:40 compute-0 systemd[1]: Started libpod-conmon-94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5.scope.
Oct 02 08:38:40 compute-0 podman[353760]: 2025-10-02 08:38:40.715999344 +0000 UTC m=+0.039009144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:38:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:40 compute-0 podman[353760]: 2025-10-02 08:38:40.846168547 +0000 UTC m=+0.169178287 container init 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 08:38:40 compute-0 podman[353760]: 2025-10-02 08:38:40.857801966 +0000 UTC m=+0.180811686 container start 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:38:40 compute-0 podman[353760]: 2025-10-02 08:38:40.861235289 +0000 UTC m=+0.184245019 container attach 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:38:40 compute-0 naughty_hofstadter[353795]: 167 167
Oct 02 08:38:40 compute-0 systemd[1]: libpod-94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5.scope: Deactivated successfully.
Oct 02 08:38:40 compute-0 conmon[353795]: conmon 94303197026bb03ffc76 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5.scope/container/memory.events
Oct 02 08:38:40 compute-0 podman[353760]: 2025-10-02 08:38:40.866134957 +0000 UTC m=+0.189144677 container died 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 08:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b52fd148f2844ec0fe8cc68b818875d60738f5b6cb0a3e1019b92a590ed273d-merged.mount: Deactivated successfully.
Oct 02 08:38:40 compute-0 podman[353760]: 2025-10-02 08:38:40.905455649 +0000 UTC m=+0.228465369 container remove 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 08:38:40 compute-0 systemd[1]: libpod-conmon-94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5.scope: Deactivated successfully.
Oct 02 08:38:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:38:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470657921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.043 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.047 2 DEBUG nova.virt.libvirt.vif [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:38:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2131100839',display_name='tempest-ServerActionsTestOtherA-server-2131100839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2131100839',id=95,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0HjsYe2bQh07GPDTkE/Hkn9NHAzOfs+WPsOxgVRJ14fyGEBr+vw6dokOlyhdtA2fxAJhqFEPbCShkjGLLEAdJQr2B1DlLsi6qyPK3AKei/w52/HIPGV/pd20ma4wEBfQ==',key_name='tempest-keypair-312406508',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-ccglcikc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:38:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=4145f1a3-c327-49ee-9af1-1ace3afb70a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.048 2 DEBUG nova.network.os_vif_util [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.049 2 DEBUG nova.network.os_vif_util [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.052 2 DEBUG nova.objects.instance [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4145f1a3-c327-49ee-9af1-1ace3afb70a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:41 compute-0 podman[353821]: 2025-10-02 08:38:41.088416997 +0000 UTC m=+0.040411546 container create cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.124 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <uuid>4145f1a3-c327-49ee-9af1-1ace3afb70a5</uuid>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <name>instance-0000005f</name>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestOtherA-server-2131100839</nova:name>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:38:40</nova:creationTime>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <nova:user uuid="d3802fedfb914c27b9b09ad6ea6f4c27">tempest-ServerActionsTestOtherA-249618595-project-member</nova:user>
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <nova:project uuid="c75535fe577642038c638a0b01f74d09">tempest-ServerActionsTestOtherA-249618595</nova:project>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <nova:port uuid="4340f7c5-2f3a-4608-b77c-40798457ce79">
Oct 02 08:38:41 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <system>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <entry name="serial">4145f1a3-c327-49ee-9af1-1ace3afb70a5</entry>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <entry name="uuid">4145f1a3-c327-49ee-9af1-1ace3afb70a5</entry>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </system>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <os>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   </os>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <features>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   </features>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk">
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config">
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:38:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:37:12:17"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <target dev="tap4340f7c5-2f"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/console.log" append="off"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <video>
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </video>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:38:41 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:38:41 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:38:41 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:38:41 compute-0 nova_compute[260603]: </domain>
Oct 02 08:38:41 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.126 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Preparing to wait for external event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.126 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.127 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.127 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.128 2 DEBUG nova.virt.libvirt.vif [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:38:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2131100839',display_name='tempest-ServerActionsTestOtherA-server-2131100839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2131100839',id=95,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0HjsYe2bQh07GPDTkE/Hkn9NHAzOfs+WPsOxgVRJ14fyGEBr+vw6dokOlyhdtA2fxAJhqFEPbCShkjGLLEAdJQr2B1DlLsi6qyPK3AKei/w52/HIPGV/pd20ma4wEBfQ==',key_name='tempest-keypair-312406508',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-ccglcikc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:38:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=4145f1a3-c327-49ee-9af1-1ace3afb70a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.128 2 DEBUG nova.network.os_vif_util [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.129 2 DEBUG nova.network.os_vif_util [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.129 2 DEBUG os_vif [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:41 compute-0 systemd[1]: Started libpod-conmon-cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930.scope.
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.137 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4340f7c5-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4340f7c5-2f, col_values=(('external_ids', {'iface-id': '4340f7c5-2f3a-4608-b77c-40798457ce79', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:12:17', 'vm-uuid': '4145f1a3-c327-49ee-9af1-1ace3afb70a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:41 compute-0 NetworkManager[45129]: <info>  [1759394321.1408] manager: (tap4340f7c5-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.148 2 INFO os_vif [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f')
Oct 02 08:38:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:41 compute-0 podman[353821]: 2025-10-02 08:38:41.0731997 +0000 UTC m=+0.025194259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:38:41 compute-0 podman[353821]: 2025-10-02 08:38:41.179578897 +0000 UTC m=+0.131573486 container init cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 08:38:41 compute-0 podman[353821]: 2025-10-02 08:38:41.189931708 +0000 UTC m=+0.141926277 container start cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:38:41 compute-0 podman[353821]: 2025-10-02 08:38:41.19498967 +0000 UTC m=+0.146984219 container attach cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:38:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.216 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.217 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.217 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No VIF found with MAC fa:16:3e:37:12:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.218 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Using config drive
Oct 02 08:38:41 compute-0 nova_compute[260603]: 2025-10-02 08:38:41.238 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3470657921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.148 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Creating config drive at /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.157 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqzwllo4y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]: {
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "osd_id": 2,
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "type": "bluestore"
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:     },
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "osd_id": 1,
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "type": "bluestore"
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:     },
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "osd_id": 0,
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:         "type": "bluestore"
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]:     }
Oct 02 08:38:42 compute-0 eloquent_sammet[353838]: }
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.303 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqzwllo4y" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:42 compute-0 systemd[1]: libpod-cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930.scope: Deactivated successfully.
Oct 02 08:38:42 compute-0 systemd[1]: libpod-cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930.scope: Consumed 1.122s CPU time.
Oct 02 08:38:42 compute-0 podman[353821]: 2025-10-02 08:38:42.312422956 +0000 UTC m=+1.264417535 container died cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.337 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.343 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed-merged.mount: Deactivated successfully.
Oct 02 08:38:42 compute-0 podman[353821]: 2025-10-02 08:38:42.381457351 +0000 UTC m=+1.333451900 container remove cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 08:38:42 compute-0 systemd[1]: libpod-conmon-cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930.scope: Deactivated successfully.
Oct 02 08:38:42 compute-0 sudo[353674]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:38:42 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:38:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:38:42 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:38:42 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0ade339c-beae-4a00-94d3-896940daaea5 does not exist
Oct 02 08:38:42 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a66f40be-364a-4ac9-8657-9d528b4f7cf3 does not exist
Oct 02 08:38:42 compute-0 sudo[353930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:38:42 compute-0 sudo[353930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:42 compute-0 sudo[353930]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.544 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.546 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Deleting local config drive /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config because it was imported into RBD.
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.549 2 DEBUG nova.network.neutron [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updated VIF entry in instance network info cache for port 4340f7c5-2f3a-4608-b77c-40798457ce79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.549 2 DEBUG nova.network.neutron [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updating instance_info_cache with network_info: [{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:42 compute-0 sudo[353966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.575 2 DEBUG oslo_concurrency.lockutils [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:38:42 compute-0 sudo[353966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:38:42 compute-0 ceph-mon[74477]: pgmap v1783: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:38:42 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:38:42 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:38:42 compute-0 sudo[353966]: pam_unix(sudo:session): session closed for user root
Oct 02 08:38:42 compute-0 kernel: tap4340f7c5-2f: entered promiscuous mode
Oct 02 08:38:42 compute-0 NetworkManager[45129]: <info>  [1759394322.6243] manager: (tap4340f7c5-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Oct 02 08:38:42 compute-0 systemd-udevd[354001]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:38:42 compute-0 ovn_controller[152344]: 2025-10-02T08:38:42Z|00954|binding|INFO|Claiming lport 4340f7c5-2f3a-4608-b77c-40798457ce79 for this chassis.
Oct 02 08:38:42 compute-0 ovn_controller[152344]: 2025-10-02T08:38:42Z|00955|binding|INFO|4340f7c5-2f3a-4608-b77c-40798457ce79: Claiming fa:16:3e:37:12:17 10.100.0.14
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:42 compute-0 NetworkManager[45129]: <info>  [1759394322.6796] device (tap4340f7c5-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:38:42 compute-0 NetworkManager[45129]: <info>  [1759394322.6821] device (tap4340f7c5-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.681 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:12:17 10.100.0.14'], port_security=['fa:16:3e:37:12:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4145f1a3-c327-49ee-9af1-1ace3afb70a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fef51c8-51f5-4bd3-92a8-fdacaab334b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4340f7c5-2f3a-4608-b77c-40798457ce79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.682 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4340f7c5-2f3a-4608-b77c-40798457ce79 in datapath 28f843b2-396a-4167-9840-21c273bdc044 bound to our chassis
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.683 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.697 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb8b2a0-f60a-4fb6-ab4d-0e26d97a4281]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.699 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28f843b2-31 in ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.701 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28f843b2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.701 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09b2b099-8a64-49fd-8ccd-3ec409050a06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.702 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1f15f370-9439-4a37-96f5-5f142cba0e75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.717 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d77daa-ec2f-46c6-99a6-235fa0b05d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 systemd-machined[214636]: New machine qemu-118-instance-0000005f.
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.745 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8045ef4-bf21-4fa7-95ee-cf3410a1b856]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-0000005f.
Oct 02 08:38:42 compute-0 ovn_controller[152344]: 2025-10-02T08:38:42Z|00956|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 ovn-installed in OVS
Oct 02 08:38:42 compute-0 ovn_controller[152344]: 2025-10-02T08:38:42Z|00957|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 up in Southbound
Oct 02 08:38:42 compute-0 nova_compute[260603]: 2025-10-02 08:38:42.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.784 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab86afe8-20c8-4b95-8a18-3169140f79bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.790 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b643427b-ac30-41f2-b8cc-b72afe5d667f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 systemd-udevd[354003]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:38:42 compute-0 NetworkManager[45129]: <info>  [1759394322.7925] manager: (tap28f843b2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.833 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d318107a-6201-498e-b48b-cb3d818e0c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.838 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e286db65-9e7e-4fe0-89d7-c98d551a12bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 NetworkManager[45129]: <info>  [1759394322.8677] device (tap28f843b2-30): carrier: link connected
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.875 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[967dc41b-8b13-4873-99cf-fe08b42d33bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.895 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a609f0-78d8-42f8-bc74-129539b5525e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354037, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.919 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[493521ae-f53e-4f16-800f-1196e2234ba1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:e0ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520146, 'tstamp': 520146}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354038, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[db8cd795-e25d-4b61-a94c-b3fcd9d190af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354039, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.980 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc11c85-f131-4dd6-8105-dc622ff6f870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.055 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63917c07-2cff-427c-a680-4a17f1c8347b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.057 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.057 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:43 compute-0 kernel: tap28f843b2-30: entered promiscuous mode
Oct 02 08:38:43 compute-0 NetworkManager[45129]: <info>  [1759394323.0615] manager: (tap28f843b2-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.066 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:43 compute-0 ovn_controller[152344]: 2025-10-02T08:38:43Z|00958|binding|INFO|Releasing lport 73da1479-3a84-4798-9da3-841fe88c5e3a from this chassis (sb_readonly=0)
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.102 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28f843b2-396a-4167-9840-21c273bdc044.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28f843b2-396a-4167-9840-21c273bdc044.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.104 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29b1521f-9fcf-47e5-9407-76896b954ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.105 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/28f843b2-396a-4167-9840-21c273bdc044.pid.haproxy
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:38:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.107 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'env', 'PROCESS_TAG=haproxy-28f843b2-396a-4167-9840-21c273bdc044', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28f843b2-396a-4167-9840-21c273bdc044.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:38:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.258 2 DEBUG nova.compute.manager [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.259 2 DEBUG oslo_concurrency.lockutils [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.260 2 DEBUG oslo_concurrency.lockutils [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.260 2 DEBUG oslo_concurrency.lockutils [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.261 2 DEBUG nova.compute.manager [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Processing event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:38:43 compute-0 podman[354113]: 2025-10-02 08:38:43.554740734 +0000 UTC m=+0.077143749 container create 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:38:43 compute-0 systemd[1]: Started libpod-conmon-3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052.scope.
Oct 02 08:38:43 compute-0 podman[354113]: 2025-10-02 08:38:43.527474265 +0000 UTC m=+0.049877290 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:38:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb0969f191443a386916053342273928abb5dcb10048e58d342dcf2424fd2bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:38:43 compute-0 podman[354113]: 2025-10-02 08:38:43.651667828 +0000 UTC m=+0.174070853 container init 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:38:43 compute-0 podman[354113]: 2025-10-02 08:38:43.661657598 +0000 UTC m=+0.184060603 container start 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 08:38:43 compute-0 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [NOTICE]   (354132) : New worker (354134) forked
Oct 02 08:38:43 compute-0 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [NOTICE]   (354132) : Loading success.
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.863 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.864 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394323.8622792, 4145f1a3-c327-49ee-9af1-1ace3afb70a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.864 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] VM Started (Lifecycle Event)
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.871 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.874 2 INFO nova.virt.libvirt.driver [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance spawned successfully.
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.875 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.887 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.894 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.900 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.900 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.901 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.901 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.902 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.902 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.911 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.912 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394323.8638384, 4145f1a3-c327-49ee-9af1-1ace3afb70a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.912 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] VM Paused (Lifecycle Event)
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.931 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.936 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394323.8699152, 4145f1a3-c327-49ee-9af1-1ace3afb70a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.937 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] VM Resumed (Lifecycle Event)
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.964 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.969 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.977 2 INFO nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Took 8.85 seconds to spawn the instance on the hypervisor.
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.978 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:43 compute-0 nova_compute[260603]: 2025-10-02 08:38:43.993 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:38:44 compute-0 nova_compute[260603]: 2025-10-02 08:38:44.059 2 INFO nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Took 9.87 seconds to build instance.
Oct 02 08:38:44 compute-0 nova_compute[260603]: 2025-10-02 08:38:44.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:44 compute-0 nova_compute[260603]: 2025-10-02 08:38:44.087 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:44 compute-0 ceph-mon[74477]: pgmap v1784: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 08:38:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 08:38:45 compute-0 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG nova.compute.manager [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:45 compute-0 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG oslo_concurrency.lockutils [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:38:45 compute-0 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG oslo_concurrency.lockutils [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:38:45 compute-0 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG oslo_concurrency.lockutils [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:38:45 compute-0 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG nova.compute.manager [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:38:45 compute-0 nova_compute[260603]: 2025-10-02 08:38:45.389 2 WARNING nova.compute.manager [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state active and task_state None.
Oct 02 08:38:46 compute-0 nova_compute[260603]: 2025-10-02 08:38:46.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:46 compute-0 ceph-mon[74477]: pgmap v1785: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 08:38:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 02 08:38:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:48 compute-0 ceph-mon[74477]: pgmap v1786: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 02 08:38:48 compute-0 ovn_controller[152344]: 2025-10-02T08:38:48Z|00959|binding|INFO|Releasing lport 73da1479-3a84-4798-9da3-841fe88c5e3a from this chassis (sb_readonly=0)
Oct 02 08:38:48 compute-0 nova_compute[260603]: 2025-10-02 08:38:48.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:48 compute-0 NetworkManager[45129]: <info>  [1759394328.6411] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Oct 02 08:38:48 compute-0 ovn_controller[152344]: 2025-10-02T08:38:48Z|00960|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct 02 08:38:48 compute-0 NetworkManager[45129]: <info>  [1759394328.6429] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Oct 02 08:38:48 compute-0 ovn_controller[152344]: 2025-10-02T08:38:48Z|00961|binding|INFO|Releasing lport 73da1479-3a84-4798-9da3-841fe88c5e3a from this chassis (sb_readonly=0)
Oct 02 08:38:48 compute-0 ovn_controller[152344]: 2025-10-02T08:38:48Z|00962|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct 02 08:38:48 compute-0 nova_compute[260603]: 2025-10-02 08:38:48.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:48 compute-0 nova_compute[260603]: 2025-10-02 08:38:48.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:49.110 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:38:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:49.112 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:38:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:38:49.115 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:38:49 compute-0 nova_compute[260603]: 2025-10-02 08:38:49.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 02 08:38:49 compute-0 nova_compute[260603]: 2025-10-02 08:38:49.309 2 DEBUG nova.compute.manager [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-changed-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:38:49 compute-0 nova_compute[260603]: 2025-10-02 08:38:49.309 2 DEBUG nova.compute.manager [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Refreshing instance network info cache due to event network-changed-4340f7c5-2f3a-4608-b77c-40798457ce79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:38:49 compute-0 nova_compute[260603]: 2025-10-02 08:38:49.310 2 DEBUG oslo_concurrency.lockutils [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:38:49 compute-0 nova_compute[260603]: 2025-10-02 08:38:49.310 2 DEBUG oslo_concurrency.lockutils [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:38:49 compute-0 nova_compute[260603]: 2025-10-02 08:38:49.310 2 DEBUG nova.network.neutron [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Refreshing network info cache for port 4340f7c5-2f3a-4608-b77c-40798457ce79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:38:50 compute-0 ceph-mon[74477]: pgmap v1787: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 02 08:38:51 compute-0 nova_compute[260603]: 2025-10-02 08:38:51.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Oct 02 08:38:51 compute-0 nova_compute[260603]: 2025-10-02 08:38:51.402 2 DEBUG nova.network.neutron [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updated VIF entry in instance network info cache for port 4340f7c5-2f3a-4608-b77c-40798457ce79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:38:51 compute-0 nova_compute[260603]: 2025-10-02 08:38:51.403 2 DEBUG nova.network.neutron [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updating instance_info_cache with network_info: [{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:38:51 compute-0 nova_compute[260603]: 2025-10-02 08:38:51.421 2 DEBUG oslo_concurrency.lockutils [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.623770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331623801, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 683, "num_deletes": 251, "total_data_size": 778606, "memory_usage": 791352, "flush_reason": "Manual Compaction"}
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331629674, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 770876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37082, "largest_seqno": 37764, "table_properties": {"data_size": 767315, "index_size": 1405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8299, "raw_average_key_size": 19, "raw_value_size": 760138, "raw_average_value_size": 1784, "num_data_blocks": 62, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394282, "oldest_key_time": 1759394282, "file_creation_time": 1759394331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 5940 microseconds, and 2752 cpu microseconds.
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.629710) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 770876 bytes OK
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.629725) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.631243) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.631273) EVENT_LOG_v1 {"time_micros": 1759394331631263, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.631298) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 775014, prev total WAL file size 775014, number of live WAL files 2.
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.632007) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(752KB)], [80(8700KB)]
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331632035, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9680345, "oldest_snapshot_seqno": -1}
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6243 keys, 8045459 bytes, temperature: kUnknown
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331667424, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8045459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8004616, "index_size": 24157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157857, "raw_average_key_size": 25, "raw_value_size": 7893594, "raw_average_value_size": 1264, "num_data_blocks": 972, "num_entries": 6243, "num_filter_entries": 6243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.667589) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8045459 bytes
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.668690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 273.1 rd, 227.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.5 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(23.0) write-amplify(10.4) OK, records in: 6756, records dropped: 513 output_compression: NoCompression
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.668706) EVENT_LOG_v1 {"time_micros": 1759394331668698, "job": 46, "event": "compaction_finished", "compaction_time_micros": 35442, "compaction_time_cpu_micros": 18014, "output_level": 6, "num_output_files": 1, "total_output_size": 8045459, "num_input_records": 6756, "num_output_records": 6243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331668903, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331670396, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.631956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:51 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:38:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:52 compute-0 ceph-mon[74477]: pgmap v1788: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Oct 02 08:38:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Oct 02 08:38:54 compute-0 nova_compute[260603]: 2025-10-02 08:38:54.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:54 compute-0 ceph-mon[74477]: pgmap v1789: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Oct 02 08:38:54 compute-0 nova_compute[260603]: 2025-10-02 08:38:54.971 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:38:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 70 op/s
Oct 02 08:38:56 compute-0 ovn_controller[152344]: 2025-10-02T08:38:56Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:12:17 10.100.0.14
Oct 02 08:38:56 compute-0 ovn_controller[152344]: 2025-10-02T08:38:56Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:12:17 10.100.0.14
Oct 02 08:38:56 compute-0 nova_compute[260603]: 2025-10-02 08:38:56.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:56 compute-0 ceph-mon[74477]: pgmap v1790: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 70 op/s
Oct 02 08:38:57 compute-0 podman[354146]: 2025-10-02 08:38:57.030011324 +0000 UTC m=+0.078364296 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:38:57 compute-0 podman[354145]: 2025-10-02 08:38:57.065606594 +0000 UTC m=+0.124512054 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 08:38:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 178 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 779 KiB/s wr, 95 op/s
Oct 02 08:38:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:38:58 compute-0 ceph-mon[74477]: pgmap v1791: 305 pgs: 305 active+clean; 178 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 779 KiB/s wr, 95 op/s
Oct 02 08:38:59 compute-0 nova_compute[260603]: 2025-10-02 08:38:59.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:38:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 02 08:38:59 compute-0 nova_compute[260603]: 2025-10-02 08:38:59.387 2 INFO nova.compute.manager [None req-6d4e50e0-bb53-4a84-ba7f-4f0f1d2a48f4 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Pausing
Oct 02 08:38:59 compute-0 nova_compute[260603]: 2025-10-02 08:38:59.388 2 DEBUG nova.objects.instance [None req-6d4e50e0-bb53-4a84-ba7f-4f0f1d2a48f4 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'flavor' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:38:59 compute-0 nova_compute[260603]: 2025-10-02 08:38:59.418 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394339.4183438, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:38:59 compute-0 nova_compute[260603]: 2025-10-02 08:38:59.419 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Paused (Lifecycle Event)
Oct 02 08:38:59 compute-0 nova_compute[260603]: 2025-10-02 08:38:59.421 2 DEBUG nova.compute.manager [None req-6d4e50e0-bb53-4a84-ba7f-4f0f1d2a48f4 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:59 compute-0 nova_compute[260603]: 2025-10-02 08:38:59.461 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:38:59 compute-0 nova_compute[260603]: 2025-10-02 08:38:59.465 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:00 compute-0 ceph-mon[74477]: pgmap v1792: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.908 2 INFO nova.compute.manager [None req-3a5d1c68-1330-44c8-a6b5-9a52ec2ff462 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Unpausing
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.909 2 DEBUG nova.objects.instance [None req-3a5d1c68-1330-44c8-a6b5-9a52ec2ff462 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'flavor' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.948 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394341.9481072, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.949 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Resumed (Lifecycle Event)
Oct 02 08:39:01 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.954 2 DEBUG nova.virt.libvirt.guest [None req-3a5d1c68-1330-44c8-a6b5-9a52ec2ff462 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.955 2 DEBUG nova.compute.manager [None req-3a5d1c68-1330-44c8-a6b5-9a52ec2ff462 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.986 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:01 compute-0 nova_compute[260603]: 2025-10-02 08:39:01.990 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:02 compute-0 nova_compute[260603]: 2025-10-02 08:39:02.026 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 02 08:39:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:02 compute-0 ceph-mon[74477]: pgmap v1793: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:39:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:39:04 compute-0 podman[354189]: 2025-10-02 08:39:04.043546101 +0000 UTC m=+0.098500021 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 08:39:04 compute-0 nova_compute[260603]: 2025-10-02 08:39:04.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:04 compute-0 ceph-mon[74477]: pgmap v1794: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:39:05 compute-0 podman[354209]: 2025-10-02 08:39:05.04795735 +0000 UTC m=+0.104576464 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:39:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:39:06 compute-0 nova_compute[260603]: 2025-10-02 08:39:06.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:06 compute-0 ceph-mon[74477]: pgmap v1795: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:39:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:39:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:08 compute-0 ceph-mon[74477]: pgmap v1796: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.496 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.496 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.516 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.594 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.595 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.604 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.604 2 INFO nova.compute.claims [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:39:09 compute-0 nova_compute[260603]: 2025-10-02 08:39:09.733 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:39:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652893759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.153 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.161 2 DEBUG nova.compute.provider_tree [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.182 2 DEBUG nova.scheduler.client.report [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.207 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.213 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.258 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.258 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.275 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.299 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.403 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.404 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.404 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating image(s)
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.433 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.459 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.488 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.492 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.535 2 DEBUG nova.policy [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3802fedfb914c27b9b09ad6ea6f4c27', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c75535fe577642038c638a0b01f74d09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.575 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.576 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.577 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.577 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.608 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.613 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:10 compute-0 ceph-mon[74477]: pgmap v1797: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Oct 02 08:39:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/652893759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:10 compute-0 nova_compute[260603]: 2025-10-02 08:39:10.980 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.065 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] resizing rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.179 2 DEBUG nova.objects.instance [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'migration_context' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.208 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.208 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Ensure instance console log exists: /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.209 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.209 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.210 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Oct 02 08:39:11 compute-0 nova_compute[260603]: 2025-10-02 08:39:11.287 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Successfully created port: 664c458b-abee-4d23-a60f-a0032a8f6058 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:39:12 compute-0 nova_compute[260603]: 2025-10-02 08:39:12.164 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Successfully updated port: 664c458b-abee-4d23-a60f-a0032a8f6058 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:39:12 compute-0 nova_compute[260603]: 2025-10-02 08:39:12.189 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:39:12 compute-0 nova_compute[260603]: 2025-10-02 08:39:12.189 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquired lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:39:12 compute-0 nova_compute[260603]: 2025-10-02 08:39:12.189 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:39:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:12 compute-0 nova_compute[260603]: 2025-10-02 08:39:12.278 2 DEBUG nova.compute.manager [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-changed-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:12 compute-0 nova_compute[260603]: 2025-10-02 08:39:12.279 2 DEBUG nova.compute.manager [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Refreshing instance network info cache due to event network-changed-664c458b-abee-4d23-a60f-a0032a8f6058. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:39:12 compute-0 nova_compute[260603]: 2025-10-02 08:39:12.279 2 DEBUG oslo_concurrency.lockutils [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:39:12 compute-0 nova_compute[260603]: 2025-10-02 08:39:12.349 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:39:12 compute-0 ceph-mon[74477]: pgmap v1798: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Oct 02 08:39:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.273 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Updating instance_info_cache with network_info: [{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.297 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Releasing lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.298 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance network_info: |[{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.298 2 DEBUG oslo_concurrency.lockutils [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.298 2 DEBUG nova.network.neutron [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Refreshing network info cache for port 664c458b-abee-4d23-a60f-a0032a8f6058 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.301 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start _get_guest_xml network_info=[{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.306 2 WARNING nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.312 2 DEBUG nova.virt.libvirt.host [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.312 2 DEBUG nova.virt.libvirt.host [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.316 2 DEBUG nova.virt.libvirt.host [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.317 2 DEBUG nova.virt.libvirt.host [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.317 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.318 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.318 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.318 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.318 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.320 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.320 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.323 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:39:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062935910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.805 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.841 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:13 compute-0 nova_compute[260603]: 2025-10-02 08:39:13.847 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:39:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446225819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.309 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.312 2 DEBUG nova.virt.libvirt.vif [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOth
erA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:10Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.313 2 DEBUG nova.network.os_vif_util [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.315 2 DEBUG nova.network.os_vif_util [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.317 2 DEBUG nova.objects.instance [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.342 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <uuid>49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</uuid>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <name>instance-00000060</name>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <nova:name>tempest-tempest.common.compute-instance-487849428</nova:name>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:39:13</nova:creationTime>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <nova:user uuid="d3802fedfb914c27b9b09ad6ea6f4c27">tempest-ServerActionsTestOtherA-249618595-project-member</nova:user>
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <nova:project uuid="c75535fe577642038c638a0b01f74d09">tempest-ServerActionsTestOtherA-249618595</nova:project>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <nova:port uuid="664c458b-abee-4d23-a60f-a0032a8f6058">
Oct 02 08:39:14 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <system>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <entry name="serial">49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</entry>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <entry name="uuid">49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</entry>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </system>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <os>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   </os>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <features>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   </features>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk">
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       </source>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config">
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       </source>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:39:14 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:f7:c6:d9"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <target dev="tap664c458b-ab"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/console.log" append="off"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <video>
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </video>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:39:14 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:39:14 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:39:14 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:39:14 compute-0 nova_compute[260603]: </domain>
Oct 02 08:39:14 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.345 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Preparing to wait for external event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.346 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.346 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.347 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.349 2 DEBUG nova.virt.libvirt.vif [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActi
onsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:10Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.349 2 DEBUG nova.network.os_vif_util [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.350 2 DEBUG nova.network.os_vif_util [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.351 2 DEBUG os_vif [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap664c458b-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap664c458b-ab, col_values=(('external_ids', {'iface-id': '664c458b-abee-4d23-a60f-a0032a8f6058', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:c6:d9', 'vm-uuid': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:14 compute-0 NetworkManager[45129]: <info>  [1759394354.3679] manager: (tap664c458b-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.376 2 INFO os_vif [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab')
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.452 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.453 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.454 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No VIF found with MAC fa:16:3e:f7:c6:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.454 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Using config drive
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.486 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:14 compute-0 ceph-mon[74477]: pgmap v1799: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:39:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4062935910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/446225819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.950 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating config drive at /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config
Oct 02 08:39:14 compute-0 nova_compute[260603]: 2025-10-02 08:39:14.956 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2gzr6lj1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.119 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2gzr6lj1" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.161 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.165 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.248 2 DEBUG nova.network.neutron [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Updated VIF entry in instance network info cache for port 664c458b-abee-4d23-a60f-a0032a8f6058. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.249 2 DEBUG nova.network.neutron [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Updating instance_info_cache with network_info: [{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.270 2 DEBUG oslo_concurrency.lockutils [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.363 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.364 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deleting local config drive /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config because it was imported into RBD.
Oct 02 08:39:15 compute-0 kernel: tap664c458b-ab: entered promiscuous mode
Oct 02 08:39:15 compute-0 ovn_controller[152344]: 2025-10-02T08:39:15Z|00963|binding|INFO|Claiming lport 664c458b-abee-4d23-a60f-a0032a8f6058 for this chassis.
Oct 02 08:39:15 compute-0 ovn_controller[152344]: 2025-10-02T08:39:15Z|00964|binding|INFO|664c458b-abee-4d23-a60f-a0032a8f6058: Claiming fa:16:3e:f7:c6:d9 10.100.0.6
Oct 02 08:39:15 compute-0 NetworkManager[45129]: <info>  [1759394355.4253] manager: (tap664c458b-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.434 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c6:d9 10.100.0.6'], port_security=['fa:16:3e:f7:c6:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=664c458b-abee-4d23-a60f-a0032a8f6058) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.436 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 664c458b-abee-4d23-a60f-a0032a8f6058 in datapath 28f843b2-396a-4167-9840-21c273bdc044 bound to our chassis
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.438 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:39:15 compute-0 ovn_controller[152344]: 2025-10-02T08:39:15Z|00965|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 ovn-installed in OVS
Oct 02 08:39:15 compute-0 ovn_controller[152344]: 2025-10-02T08:39:15Z|00966|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 up in Southbound
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6aff00-15f8-4672-a47c-1b6aab358091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:15 compute-0 systemd-machined[214636]: New machine qemu-119-instance-00000060.
Oct 02 08:39:15 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000060.
Oct 02 08:39:15 compute-0 systemd-udevd[354556]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.496 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f67d31-a4be-43a4-9553-e80f95bd11d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.501 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[44b7d020-c6e3-4acd-a89d-fd902241c5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:15 compute-0 NetworkManager[45129]: <info>  [1759394355.5067] device (tap664c458b-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:39:15 compute-0 NetworkManager[45129]: <info>  [1759394355.5074] device (tap664c458b-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.546 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[19b87f7b-d698-4e3d-a334-5eb9836d2005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.562 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4a53fd-88eb-499f-937f-15e3ed3e130c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354566, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.579 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f20d59-9e5a-4b24-b691-85dde08ec94b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354568, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354568, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.581 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:15 compute-0 nova_compute[260603]: 2025-10-02 08:39:15.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.584 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.585 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.585 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.586 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:16 compute-0 nova_compute[260603]: 2025-10-02 08:39:16.782 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394356.7820427, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:16 compute-0 nova_compute[260603]: 2025-10-02 08:39:16.782 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Started (Lifecycle Event)
Oct 02 08:39:16 compute-0 nova_compute[260603]: 2025-10-02 08:39:16.806 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:16 compute-0 ceph-mon[74477]: pgmap v1800: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:39:16 compute-0 nova_compute[260603]: 2025-10-02 08:39:16.812 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394356.784772, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:16 compute-0 nova_compute[260603]: 2025-10-02 08:39:16.813 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Paused (Lifecycle Event)
Oct 02 08:39:16 compute-0 nova_compute[260603]: 2025-10-02 08:39:16.835 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:16 compute-0 nova_compute[260603]: 2025-10-02 08:39:16.839 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:16 compute-0 nova_compute[260603]: 2025-10-02 08:39:16.862 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:39:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:39:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:17 compute-0 nova_compute[260603]: 2025-10-02 08:39:17.561 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:17 compute-0 nova_compute[260603]: 2025-10-02 08:39:17.561 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:39:18 compute-0 ceph-mon[74477]: pgmap v1801: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:39:19 compute-0 nova_compute[260603]: 2025-10-02 08:39:19.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 08:39:19 compute-0 nova_compute[260603]: 2025-10-02 08:39:19.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.644 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.645 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.645 2 INFO nova.compute.manager [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Shelving
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.668 2 DEBUG nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.885 2 DEBUG nova.compute.manager [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.886 2 DEBUG oslo_concurrency.lockutils [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.886 2 DEBUG oslo_concurrency.lockutils [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.887 2 DEBUG oslo_concurrency.lockutils [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.887 2 DEBUG nova.compute.manager [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Processing event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.888 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.894 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394360.8936667, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.894 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Resumed (Lifecycle Event)
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.897 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.902 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance spawned successfully.
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.903 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.915 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.926 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.935 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.936 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.936 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.937 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.938 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.939 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:20 compute-0 nova_compute[260603]: 2025-10-02 08:39:20.947 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:39:20 compute-0 ceph-mon[74477]: pgmap v1802: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 08:39:21 compute-0 nova_compute[260603]: 2025-10-02 08:39:21.001 2 INFO nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Took 10.60 seconds to spawn the instance on the hypervisor.
Oct 02 08:39:21 compute-0 nova_compute[260603]: 2025-10-02 08:39:21.001 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:21 compute-0 nova_compute[260603]: 2025-10-02 08:39:21.129 2 INFO nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Took 11.57 seconds to build instance.
Oct 02 08:39:21 compute-0 nova_compute[260603]: 2025-10-02 08:39:21.148 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 08:39:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:39:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/688866439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:39:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:39:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/688866439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:39:22 compute-0 nova_compute[260603]: 2025-10-02 08:39:22.101 2 DEBUG oslo_concurrency.lockutils [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:22 compute-0 nova_compute[260603]: 2025-10-02 08:39:22.102 2 DEBUG oslo_concurrency.lockutils [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:22 compute-0 nova_compute[260603]: 2025-10-02 08:39:22.102 2 DEBUG nova.compute.manager [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:22 compute-0 nova_compute[260603]: 2025-10-02 08:39:22.107 2 DEBUG nova.compute.manager [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:39:22 compute-0 nova_compute[260603]: 2025-10-02 08:39:22.108 2 DEBUG nova.objects.instance [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'flavor' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:22 compute-0 nova_compute[260603]: 2025-10-02 08:39:22.137 2 DEBUG nova.virt.libvirt.driver [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:39:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:23 compute-0 ceph-mon[74477]: pgmap v1803: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 08:39:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/688866439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:39:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/688866439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:39:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.343 2 DEBUG nova.compute.manager [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.344 2 DEBUG oslo_concurrency.lockutils [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.344 2 DEBUG oslo_concurrency.lockutils [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.344 2 DEBUG oslo_concurrency.lockutils [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.344 2 DEBUG nova.compute.manager [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.345 2 WARNING nova.compute.manager [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state active and task_state powering-off.
Oct 02 08:39:23 compute-0 kernel: tap00fa1373-e4 (unregistering): left promiscuous mode
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.694 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance shutdown successfully after 3 seconds.
Oct 02 08:39:23 compute-0 NetworkManager[45129]: <info>  [1759394363.7002] device (tap00fa1373-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:39:23 compute-0 ovn_controller[152344]: 2025-10-02T08:39:23Z|00967|binding|INFO|Releasing lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb from this chassis (sb_readonly=0)
Oct 02 08:39:23 compute-0 ovn_controller[152344]: 2025-10-02T08:39:23Z|00968|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb down in Southbound
Oct 02 08:39:23 compute-0 ovn_controller[152344]: 2025-10-02T08:39:23Z|00969|binding|INFO|Removing iface tap00fa1373-e4 ovn-installed in OVS
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.723 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.724 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis
Oct 02 08:39:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.726 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19d584c3-e754-47d1-9cdf-c6badbd670d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.727 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3e0017-388a-485a-9aef-0f80202a7dac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.727 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace which is not needed anymore
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:23 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 02 08:39:23 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005d.scope: Consumed 15.766s CPU time.
Oct 02 08:39:23 compute-0 systemd-machined[214636]: Machine qemu-116-instance-0000005d terminated.
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.949 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance destroyed successfully.
Oct 02 08:39:23 compute-0 nova_compute[260603]: 2025-10-02 08:39:23.950 2 DEBUG nova.objects.instance [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'numa_topology' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:23 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [NOTICE]   (352063) : haproxy version is 2.8.14-c23fe91
Oct 02 08:39:23 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [NOTICE]   (352063) : path to executable is /usr/sbin/haproxy
Oct 02 08:39:23 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [ALERT]    (352063) : Current worker (352065) exited with code 143 (Terminated)
Oct 02 08:39:23 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [WARNING]  (352063) : All workers exited. Exiting... (0)
Oct 02 08:39:23 compute-0 systemd[1]: libpod-db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1.scope: Deactivated successfully.
Oct 02 08:39:23 compute-0 podman[354633]: 2025-10-02 08:39:23.965163729 +0000 UTC m=+0.094625445 container died db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 08:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1-userdata-shm.mount: Deactivated successfully.
Oct 02 08:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a74f8d6643ae70d1fa74bb44614c3aa06e7ac08d3f73b3369e5ce67a1513ec1-merged.mount: Deactivated successfully.
Oct 02 08:39:24 compute-0 podman[354633]: 2025-10-02 08:39:24.068404022 +0000 UTC m=+0.197865738 container cleanup db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 08:39:24 compute-0 systemd[1]: libpod-conmon-db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1.scope: Deactivated successfully.
Oct 02 08:39:24 compute-0 ceph-mon[74477]: pgmap v1804: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 02 08:39:24 compute-0 podman[354673]: 2025-10-02 08:39:24.1558356 +0000 UTC m=+0.050933842 container remove db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b630e23-b37c-407e-9622-08374c92bbcf]: (4, ('Thu Oct  2 08:39:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1)\ndb9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1\nThu Oct  2 08:39:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1)\ndb9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7d27c7-7297-4c29-bf05-0b5391c696bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.166 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:24 compute-0 kernel: tap19d584c3-e0: left promiscuous mode
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1dbcf4f-59be-4ffa-8e2e-43cee198fc90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[685da682-26f9-4b5b-8544-cc5832eb6093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.232 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b57b3e2f-d766-42a7-86ec-bccbacf50efe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.254 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[635786a6-59da-4ee4-92b6-1643bd67f16c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516127, 'reachable_time': 17502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354691, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d19d584c3\x2de754\x2d47d1\x2d9cdf\x2dc6badbd670d7.mount: Deactivated successfully.
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.259 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:39:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.259 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9b0437-164a-42c0-9cfb-3afd6729fc5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.269 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Beginning cold snapshot process
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.442 2 DEBUG nova.virt.libvirt.imagebackend [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.546 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.547 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.547 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:24 compute-0 nova_compute[260603]: 2025-10-02 08:39:24.681 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] creating snapshot(72de10cf6cef4bf1bbe60a5c520f8744) on rbd image(fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:39:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 02 08:39:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 02 08:39:25 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.195 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] cloning vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk@72de10cf6cef4bf1bbe60a5c520f8744 to images/1ca55329-185d-4f33-a25c-d8eaed50208d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:39:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 92 op/s
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.298 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] flattening images/1ca55329-185d-4f33-a25c-d8eaed50208d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.478 2 DEBUG nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.478 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.478 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.479 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.479 2 DEBUG nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.479 2 WARNING nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.479 2 DEBUG nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.480 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.480 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.481 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.483 2 DEBUG nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.483 2 WARNING nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:39:25 compute-0 nova_compute[260603]: 2025-10-02 08:39:25.713 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] removing snapshot(72de10cf6cef4bf1bbe60a5c520f8744) on rbd image(fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:39:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 02 08:39:26 compute-0 nova_compute[260603]: 2025-10-02 08:39:26.122 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 02 08:39:26 compute-0 ceph-mon[74477]: osdmap e257: 3 total, 3 up, 3 in
Oct 02 08:39:26 compute-0 ceph-mon[74477]: pgmap v1806: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 92 op/s
Oct 02 08:39:26 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 02 08:39:26 compute-0 nova_compute[260603]: 2025-10-02 08:39:26.143 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:39:26 compute-0 nova_compute[260603]: 2025-10-02 08:39:26.144 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:39:26 compute-0 nova_compute[260603]: 2025-10-02 08:39:26.144 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:26 compute-0 nova_compute[260603]: 2025-10-02 08:39:26.145 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:26 compute-0 nova_compute[260603]: 2025-10-02 08:39:26.165 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] creating snapshot(snap) on rbd image(1ca55329-185d-4f33-a25c-d8eaed50208d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:39:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 02 08:39:27 compute-0 ceph-mon[74477]: osdmap e258: 3 total, 3 up, 3 in
Oct 02 08:39:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 02 08:39:27 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 279 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Oct 02 08:39:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.275521) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367275559, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 548, "num_deletes": 256, "total_data_size": 543014, "memory_usage": 553480, "flush_reason": "Manual Compaction"}
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367279926, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 538054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37765, "largest_seqno": 38312, "table_properties": {"data_size": 534994, "index_size": 1032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6963, "raw_average_key_size": 18, "raw_value_size": 528848, "raw_average_value_size": 1406, "num_data_blocks": 46, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394332, "oldest_key_time": 1759394332, "file_creation_time": 1759394367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 4432 microseconds, and 2288 cpu microseconds.
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.279956) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 538054 bytes OK
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.279969) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283308) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283318) EVENT_LOG_v1 {"time_micros": 1759394367283315, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283331) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 539894, prev total WAL file size 539894, number of live WAL files 2.
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323532' seq:72057594037927935, type:22 .. '6C6F676D0031353034' seq:0, type:0; will stop at (end)
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(525KB)], [83(7856KB)]
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367283719, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8583513, "oldest_snapshot_seqno": -1}
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6095 keys, 8465720 bytes, temperature: kUnknown
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367322270, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8465720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8424740, "index_size": 24614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 155752, "raw_average_key_size": 25, "raw_value_size": 8315204, "raw_average_value_size": 1364, "num_data_blocks": 989, "num_entries": 6095, "num_filter_entries": 6095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.322441) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8465720 bytes
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.323546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.4 rd, 219.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(31.7) write-amplify(15.7) OK, records in: 6619, records dropped: 524 output_compression: NoCompression
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.323565) EVENT_LOG_v1 {"time_micros": 1759394367323556, "job": 48, "event": "compaction_finished", "compaction_time_micros": 38597, "compaction_time_cpu_micros": 19437, "output_level": 6, "num_output_files": 1, "total_output_size": 8465720, "num_input_records": 6619, "num_output_records": 6095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367323741, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367324946, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:39:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:39:27 compute-0 nova_compute[260603]: 2025-10-02 08:39:27.552 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:39:27
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Oct 02 08:39:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:39:28 compute-0 podman[354834]: 2025-10-02 08:39:28.026458914 +0000 UTC m=+0.079988555 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:39:28 compute-0 podman[354833]: 2025-10-02 08:39:28.071721325 +0000 UTC m=+0.124780882 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Oct 02 08:39:28 compute-0 ceph-mon[74477]: osdmap e259: 3 total, 3 up, 3 in
Oct 02 08:39:28 compute-0 ceph-mon[74477]: pgmap v1809: 305 pgs: 305 active+clean; 279 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.347 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Snapshot image upload complete
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.348 2 DEBUG nova.compute.manager [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.403 2 INFO nova.compute.manager [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Shelve offloading
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.410 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance destroyed successfully.
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.411 2 DEBUG nova.compute.manager [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.413 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.414 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.414 2 DEBUG nova.network.neutron [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.543 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:39:28 compute-0 nova_compute[260603]: 2025-10-02 08:39:28.544 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:28.806 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:28.808 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated
Oct 02 08:39:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:28.810 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:28.811 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14e2795c-8edb-4fe1-b267-5257ab0f6e7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:39:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753247979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.002 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.094 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.094 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.099 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.100 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.104 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.104 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:39:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2753247979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 326 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 283 op/s
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.392 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.393 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3460MB free_disk=59.87623596191406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.394 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.394 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.481 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.481 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 4145f1a3-c327-49ee-9af1-1ace3afb70a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.482 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.482 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.482 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.484 2 DEBUG nova.network.neutron [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.499 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.508 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.528 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.528 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.598 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.621 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 08:39:29 compute-0 nova_compute[260603]: 2025-10-02 08:39:29.693 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:30 compute-0 ceph-mon[74477]: pgmap v1810: 305 pgs: 305 active+clean; 326 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 283 op/s
Oct 02 08:39:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:39:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554622571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.180 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.187 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.207 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.235 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.236 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.336 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance destroyed successfully.
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.337 2 DEBUG nova.objects.instance [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'resources' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.353 2 DEBUG nova.virt.libvirt.vif [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member',shelved_at='2025-10-02T08:39:28.348390',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='1ca55329-185d-4f33-a25c-d8eaed50208d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:39:24Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.354 2 DEBUG nova.network.os_vif_util [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.356 2 DEBUG nova.network.os_vif_util [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.357 2 DEBUG os_vif [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00fa1373-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.392 2 INFO os_vif [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.438 2 DEBUG nova.compute.manager [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.439 2 DEBUG nova.compute.manager [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing instance network info cache due to event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.439 2 DEBUG oslo_concurrency.lockutils [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.440 2 DEBUG oslo_concurrency.lockutils [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.440 2 DEBUG nova.network.neutron [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.856 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deleting instance files /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_del
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.857 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deletion of /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_del complete
Oct 02 08:39:30 compute-0 nova_compute[260603]: 2025-10-02 08:39:30.957 2 INFO nova.scheduler.client.report [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Deleted allocations for instance fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b
Oct 02 08:39:31 compute-0 nova_compute[260603]: 2025-10-02 08:39:31.009 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:31 compute-0 nova_compute[260603]: 2025-10-02 08:39:31.010 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:31 compute-0 nova_compute[260603]: 2025-10-02 08:39:31.091 2 DEBUG oslo_concurrency.processutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3554622571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 326 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.6 MiB/s wr, 278 op/s
Oct 02 08:39:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:39:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039240848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:31 compute-0 nova_compute[260603]: 2025-10-02 08:39:31.580 2 DEBUG oslo_concurrency.processutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:31 compute-0 nova_compute[260603]: 2025-10-02 08:39:31.591 2 DEBUG nova.compute.provider_tree [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:39:31 compute-0 nova_compute[260603]: 2025-10-02 08:39:31.615 2 DEBUG nova.scheduler.client.report [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:39:31 compute-0 nova_compute[260603]: 2025-10-02 08:39:31.640 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:31 compute-0 nova_compute[260603]: 2025-10-02 08:39:31.697 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:32 compute-0 ceph-mon[74477]: pgmap v1811: 305 pgs: 305 active+clean; 326 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.6 MiB/s wr, 278 op/s
Oct 02 08:39:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3039240848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:32 compute-0 nova_compute[260603]: 2025-10-02 08:39:32.228 2 DEBUG nova.virt.libvirt.driver [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:39:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 02 08:39:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 02 08:39:32 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 02 08:39:32 compute-0 nova_compute[260603]: 2025-10-02 08:39:32.317 2 DEBUG nova.network.neutron [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updated VIF entry in instance network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:39:32 compute-0 nova_compute[260603]: 2025-10-02 08:39:32.318 2 DEBUG nova.network.neutron [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": null, "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap00fa1373-e4", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:32 compute-0 nova_compute[260603]: 2025-10-02 08:39:32.335 2 DEBUG oslo_concurrency.lockutils [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:39:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:32.832 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:32.836 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated
Oct 02 08:39:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:32.841 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:32.842 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae8ab4e-347a-49a4-ab2a-79223d453bcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:33 compute-0 ovn_controller[152344]: 2025-10-02T08:39:33Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:c6:d9 10.100.0.6
Oct 02 08:39:33 compute-0 ovn_controller[152344]: 2025-10-02T08:39:33Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:c6:d9 10.100.0.6
Oct 02 08:39:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 270 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 10 MiB/s wr, 379 op/s
Oct 02 08:39:33 compute-0 nova_compute[260603]: 2025-10-02 08:39:33.237 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:33 compute-0 ceph-mon[74477]: osdmap e260: 3 total, 3 up, 3 in
Oct 02 08:39:33 compute-0 nova_compute[260603]: 2025-10-02 08:39:33.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:34 compute-0 nova_compute[260603]: 2025-10-02 08:39:34.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:34 compute-0 ceph-mon[74477]: pgmap v1813: 305 pgs: 305 active+clean; 270 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 10 MiB/s wr, 379 op/s
Oct 02 08:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.454 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.456 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated
Oct 02 08:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.458 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.459 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2f1e06-3502-436e-a06d-a860f90b2bd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:34 compute-0 nova_compute[260603]: 2025-10-02 08:39:34.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.823 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.823 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:35 compute-0 podman[354962]: 2025-10-02 08:39:35.004695329 +0000 UTC m=+0.066996305 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 02 08:39:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 270 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 8.8 MiB/s wr, 332 op/s
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.382 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.382 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.382 2 INFO nova.compute.manager [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Unshelving
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.457 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.457 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.465 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_requests' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.478 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'numa_topology' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:35 compute-0 kernel: tap664c458b-ab (unregistering): left promiscuous mode
Oct 02 08:39:35 compute-0 NetworkManager[45129]: <info>  [1759394375.4867] device (tap664c458b-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.488 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.488 2 INFO nova.compute.claims [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:39:35 compute-0 ovn_controller[152344]: 2025-10-02T08:39:35Z|00970|binding|INFO|Releasing lport 664c458b-abee-4d23-a60f-a0032a8f6058 from this chassis (sb_readonly=0)
Oct 02 08:39:35 compute-0 ovn_controller[152344]: 2025-10-02T08:39:35Z|00971|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 down in Southbound
Oct 02 08:39:35 compute-0 ovn_controller[152344]: 2025-10-02T08:39:35Z|00972|binding|INFO|Removing iface tap664c458b-ab ovn-installed in OVS
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.506 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c6:d9 10.100.0.6'], port_security=['fa:16:3e:f7:c6:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=664c458b-abee-4d23-a60f-a0032a8f6058) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.508 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 664c458b-abee-4d23-a60f-a0032a8f6058 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.510 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.535 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[462116f5-5e3d-4d04-b728-5ac068f6dae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:35 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 02 08:39:35 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Consumed 13.047s CPU time.
Oct 02 08:39:35 compute-0 systemd-machined[214636]: Machine qemu-119-instance-00000060 terminated.
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.568 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[711343ba-c1c5-4ced-900a-e0479f7f50c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.571 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3166e6-f5d1-45b0-879d-0405572bfbd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:35 compute-0 podman[354982]: 2025-10-02 08:39:35.60430865 +0000 UTC m=+0.081909732 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.607 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6c535a3a-2cfe-4f87-999a-37d025315f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.628 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62ecf13e-ecbf-4469-a425-05af9d411333]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355013, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.653 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47d8ca9d-0721-4ee1-a47e-ccd2d2035f1e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355014, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355014, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.662 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.663 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.663 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.664 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.943 2 DEBUG nova.compute.manager [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.943 2 DEBUG oslo_concurrency.lockutils [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.944 2 DEBUG oslo_concurrency.lockutils [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.944 2 DEBUG oslo_concurrency.lockutils [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.945 2 DEBUG nova.compute.manager [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:35 compute-0 nova_compute[260603]: 2025-10-02 08:39:35.945 2 WARNING nova.compute.manager [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state active and task_state powering-off.
Oct 02 08:39:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:39:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141039080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.053 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.061 2 DEBUG nova.compute.provider_tree [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.082 2 DEBUG nova.scheduler.client.report [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.110 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.251 2 INFO nova.virt.libvirt.driver [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance shutdown successfully after 14 seconds.
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.259 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.260 2 DEBUG nova.objects.instance [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'numa_topology' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.280 2 DEBUG nova.compute.manager [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:36 compute-0 ceph-mon[74477]: pgmap v1814: 305 pgs: 305 active+clean; 270 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 8.8 MiB/s wr, 332 op/s
Oct 02 08:39:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2141039080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.307 2 INFO nova.network.neutron [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.335 2 DEBUG oslo_concurrency.lockutils [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.982 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.983 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:39:36 compute-0 nova_compute[260603]: 2025-10-02 08:39:36.983 2 DEBUG nova.network.neutron [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:39:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 272 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.4 MiB/s wr, 221 op/s
Oct 02 08:39:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:38.007 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:38.008 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated
Oct 02 08:39:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:38.009 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:38.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[33722e1a-7b7e-41d5-a386-f66523559bac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.068 2 DEBUG nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.068 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.068 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.069 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.069 2 DEBUG nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.069 2 WARNING nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state stopped and task_state rebuilding.
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.069 2 DEBUG nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.070 2 DEBUG nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing instance network info cache due to event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.070 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:39:38 compute-0 ceph-mon[74477]: pgmap v1815: 305 pgs: 305 active+clean; 272 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.4 MiB/s wr, 221 op/s
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.426 2 INFO nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Rebuilding instance
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.601 2 DEBUG nova.network.neutron [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.637 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.639 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.639 2 INFO nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating image(s)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001509897766490851 of space, bias 1.0, pg target 0.4529693299472553 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001423096450315817 of space, bias 1.0, pg target 0.4269289350947451 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.672 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:39:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.676 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'trusted_certs' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.681 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.682 2 DEBUG nova.network.neutron [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.739 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.770 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.775 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "c90862ab429e888086fa2d502838e30b06b6fcfb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.776 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "c90862ab429e888086fa2d502838e30b06b6fcfb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.804 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.834 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.887 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_requests' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.906 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.930 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'resources' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.945 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394363.9444623, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.946 2 INFO nova.compute.manager [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Stopped (Lifecycle Event)
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.948 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'migration_context' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.964 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.967 2 DEBUG nova.compute.manager [None req-9971f47f-dff3-455d-959d-75fd20201b1d - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.969 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance already shutdown.
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.974 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.978 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.979 2 DEBUG nova.virt.libvirt.vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:37Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.979 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.980 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.980 2 DEBUG os_vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:38 compute-0 nova_compute[260603]: 2025-10-02 08:39:38.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap664c458b-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.018 2 INFO os_vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab')
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.141 2 DEBUG nova.virt.libvirt.imagebackend [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/1ca55329-185d-4f33-a25c-d8eaed50208d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/1ca55329-185d-4f33-a25c-d8eaed50208d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 08:39:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:39.189 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:39.192 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated
Oct 02 08:39:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:39.194 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:39.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b686c730-605c-4cb4-9026-f4bc97b45b87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.206 2 DEBUG nova.virt.libvirt.imagebackend [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/1ca55329-185d-4f33-a25c-d8eaed50208d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.206 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] cloning images/1ca55329-185d-4f33-a25c-d8eaed50208d@snap to None/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:39:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 279 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.333 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "c90862ab429e888086fa2d502838e30b06b6fcfb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.421 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deleting instance files /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_del
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.422 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deletion of /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_del complete
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.470 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'migration_context' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.540 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] flattening vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.597 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.597 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating image(s)
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.646 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.700 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.733 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.742 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.875 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.876 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.877 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.877 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.898 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.901 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.991 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Image rbd:vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.992 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.993 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Ensure instance console log exists: /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.994 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.995 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.996 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:39 compute-0 nova_compute[260603]: 2025-10-02 08:39:39.999 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start _get_guest_xml network_info=[{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:39:20Z,direct_url=<?>,disk_format='raw',id=1ca55329-185d-4f33-a25c-d8eaed50208d,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1260783938-shelved',owner='b85786f28a064d75924559acd4f6137e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:39:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.004 2 WARNING nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.012 2 DEBUG nova.virt.libvirt.host [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.013 2 DEBUG nova.virt.libvirt.host [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.016 2 DEBUG nova.virt.libvirt.host [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.017 2 DEBUG nova.virt.libvirt.host [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.018 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.018 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:39:20Z,direct_url=<?>,disk_format='raw',id=1ca55329-185d-4f33-a25c-d8eaed50208d,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1260783938-shelved',owner='b85786f28a064d75924559acd4f6137e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:39:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.019 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.019 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.020 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.020 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.021 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.021 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.021 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.022 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.022 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.023 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.023 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'vcpu_model' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.051 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.220 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.302 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] resizing rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:39:40 compute-0 ceph-mon[74477]: pgmap v1816: 305 pgs: 305 active+clean; 279 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.414 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.414 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Ensure instance console log exists: /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.415 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.415 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.416 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.418 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start _get_guest_xml network_info=[{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.421 2 WARNING nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.426 2 DEBUG nova.virt.libvirt.host [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.426 2 DEBUG nova.virt.libvirt.host [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.429 2 DEBUG nova.virt.libvirt.host [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.429 2 DEBUG nova.virt.libvirt.host [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.430 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.430 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.431 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.431 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.431 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.432 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.432 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.432 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.433 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.433 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.433 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.433 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.434 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.449 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:39:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367426997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.499 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.520 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.524 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:39:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553504633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.888 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.904 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.908 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:39:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620838583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.946 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.948 2 DEBUG nova.virt.libvirt.vif [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='1ca55329-185d-4f33-a25c-d8eaed50208d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member',shelved_at='2025-10-02T08:39:28.348390',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='1ca55329-185d-4f33-a25c-d8eaed50208d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:35Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.949 2 DEBUG nova.network.os_vif_util [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.950 2 DEBUG nova.network.os_vif_util [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.951 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.972 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <uuid>fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</uuid>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <name>instance-0000005d</name>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersNegativeTestJSON-server-1260783938</nova:name>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:39:40</nova:creationTime>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <nova:user uuid="15da3bbf2c9f49b68e7a7e0ccd557067">tempest-ServersNegativeTestJSON-2088330606-project-member</nova:user>
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <nova:project uuid="b85786f28a064d75924559acd4f6137e">tempest-ServersNegativeTestJSON-2088330606</nova:project>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="1ca55329-185d-4f33-a25c-d8eaed50208d"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <nova:port uuid="00fa1373-e4cc-4245-9cfa-5a58c77aa4eb">
Oct 02 08:39:40 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <system>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <entry name="serial">fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</entry>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <entry name="uuid">fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</entry>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </system>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <os>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   </os>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <features>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   </features>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk">
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       </source>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config">
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       </source>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:39:40 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:11:b9:2b"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <target dev="tap00fa1373-e4"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/console.log" append="off"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <video>
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </video>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:39:40 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:39:40 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:39:40 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:39:40 compute-0 nova_compute[260603]: </domain>
Oct 02 08:39:40 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.973 2 DEBUG nova.compute.manager [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Preparing to wait for external event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.974 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.974 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.974 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.975 2 DEBUG nova.virt.libvirt.vif [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='1ca55329-185d-4f33-a25c-d8eaed50208d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member',shelved_at='2025-10-02T08:39:28.348390',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='1ca55329-185d-4f33-a25c-d8eaed50208d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:35Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.975 2 DEBUG nova.network.os_vif_util [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.976 2 DEBUG nova.network.os_vif_util [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.977 2 DEBUG os_vif [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.981 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00fa1373-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00fa1373-e4, col_values=(('external_ids', {'iface-id': '00fa1373-e4cc-4245-9cfa-5a58c77aa4eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:b9:2b', 'vm-uuid': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:40 compute-0 NetworkManager[45129]: <info>  [1759394380.9843] manager: (tap00fa1373-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:40 compute-0 nova_compute[260603]: 2025-10-02 08:39:40.988 2 INFO os_vif [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.058 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.060 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.060 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No VIF found with MAC fa:16:3e:11:b9:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.061 2 INFO nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Using config drive
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.080 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.101 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'ec2_ids' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.159 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'keypairs' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 279 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Oct 02 08:39:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:39:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/263512643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.326 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.328 2 DEBUG nova.virt.libvirt.vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:39Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:39:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1367426997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2553504633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/620838583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/263512643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.329 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.331 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.334 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <uuid>49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</uuid>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <name>instance-00000060</name>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <nova:name>tempest-tempest.common.compute-instance-487849428</nova:name>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:39:40</nova:creationTime>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <nova:user uuid="d3802fedfb914c27b9b09ad6ea6f4c27">tempest-ServerActionsTestOtherA-249618595-project-member</nova:user>
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <nova:project uuid="c75535fe577642038c638a0b01f74d09">tempest-ServerActionsTestOtherA-249618595</nova:project>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <nova:port uuid="664c458b-abee-4d23-a60f-a0032a8f6058">
Oct 02 08:39:41 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <system>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <entry name="serial">49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</entry>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <entry name="uuid">49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</entry>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </system>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <os>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   </os>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <features>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   </features>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk">
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config">
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:39:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:f7:c6:d9"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <target dev="tap664c458b-ab"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/console.log" append="off"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <video>
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </video>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:39:41 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:39:41 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:39:41 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:39:41 compute-0 nova_compute[260603]: </domain>
Oct 02 08:39:41 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.336 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Preparing to wait for external event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.336 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.337 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.337 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.338 2 DEBUG nova.virt.libvirt.vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:39Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.338 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.339 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.339 2 DEBUG os_vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap664c458b-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap664c458b-ab, col_values=(('external_ids', {'iface-id': '664c458b-abee-4d23-a60f-a0032a8f6058', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:c6:d9', 'vm-uuid': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:41 compute-0 NetworkManager[45129]: <info>  [1759394381.3458] manager: (tap664c458b-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.352 2 INFO os_vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab')
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.413 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.414 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.414 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No VIF found with MAC fa:16:3e:f7:c6:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.415 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Using config drive
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.454 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.482 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:41 compute-0 nova_compute[260603]: 2025-10-02 08:39:41.525 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'keypairs' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.149 2 DEBUG nova.network.neutron [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updated VIF entry in instance network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.150 2 DEBUG nova.network.neutron [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.156 2 INFO nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating config drive at /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.165 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkwiojiv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.230 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:39:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:42 compute-0 ceph-mon[74477]: pgmap v1817: 305 pgs: 305 active+clean; 279 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.339 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkwiojiv" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.369 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.373 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.425 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating config drive at /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.430 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0aeiqcu0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.563 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.564 2 INFO nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deleting local config drive /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config because it was imported into RBD.
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.581 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0aeiqcu0" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.622 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.632 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:42 compute-0 NetworkManager[45129]: <info>  [1759394382.6405] manager: (tap00fa1373-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Oct 02 08:39:42 compute-0 kernel: tap00fa1373-e4: entered promiscuous mode
Oct 02 08:39:42 compute-0 ovn_controller[152344]: 2025-10-02T08:39:42Z|00973|binding|INFO|Claiming lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for this chassis.
Oct 02 08:39:42 compute-0 ovn_controller[152344]: 2025-10-02T08:39:42Z|00974|binding|INFO|00fa1373-e4cc-4245-9cfa-5a58c77aa4eb: Claiming fa:16:3e:11:b9:2b 10.100.0.10
Oct 02 08:39:42 compute-0 systemd-udevd[355710]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.711 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.712 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 bound to our chassis
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.713 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:39:42 compute-0 NetworkManager[45129]: <info>  [1759394382.7202] device (tap00fa1373-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:39:42 compute-0 NetworkManager[45129]: <info>  [1759394382.7216] device (tap00fa1373-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.726 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6ab89b-1e11-4e6c-9ccc-248efc7b4f75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.727 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19d584c3-e1 in ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:42 compute-0 sudo[355659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:42 compute-0 ovn_controller[152344]: 2025-10-02T08:39:42Z|00975|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb ovn-installed in OVS
Oct 02 08:39:42 compute-0 ovn_controller[152344]: 2025-10-02T08:39:42Z|00976|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb up in Southbound
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.734 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19d584c3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.734 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25f70c18-cc3b-442f-b5fc-b8ab4940fe82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.735 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1374e98b-c989-4c2d-b5f6-8671f817028b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 systemd-machined[214636]: New machine qemu-120-instance-0000005d.
Oct 02 08:39:42 compute-0 sudo[355659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:42 compute-0 sudo[355659]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.746 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f30f55b6-bdda-4298-b1cc-b1be896a42c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-0000005d.
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.770 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[169d468e-b7d3-451c-98ae-c6a9c1560275]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 sudo[355721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:39:42 compute-0 sudo[355721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:42 compute-0 sudo[355721]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.819 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[07615201-5402-4218-b064-4dba8f41c64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.827 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91575618-a9ed-4492-a06f-65c68c94c7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 NetworkManager[45129]: <info>  [1759394382.8284] manager: (tap19d584c3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.869 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b88c52bb-1982-4b1b-8fa0-d68eb9099216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.872 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9488a6a8-cd0b-4cb2-b9a0-55bf8fa3926c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 sudo[355769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:42 compute-0 sudo[355769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:42 compute-0 sudo[355769]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:42 compute-0 NetworkManager[45129]: <info>  [1759394382.9066] device (tap19d584c3-e0): carrier: link connected
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.913 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4070ef0-d8e7-46e9-91c9-7c7b4fb58eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.916 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.917 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deleting local config drive /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config because it was imported into RBD.
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.943 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb7c227-576e-4078-9317-9ab5876921e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526150, 'reachable_time': 30671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355833, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 sudo[355819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 08:39:42 compute-0 sudo[355819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55443a99-6cc5-417e-99ee-c85cca41b917]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:8c4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526150, 'tstamp': 526150}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355848, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:42 compute-0 kernel: tap664c458b-ab: entered promiscuous mode
Oct 02 08:39:42 compute-0 NetworkManager[45129]: <info>  [1759394382.9832] manager: (tap664c458b-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Oct 02 08:39:42 compute-0 ovn_controller[152344]: 2025-10-02T08:39:42Z|00977|binding|INFO|Claiming lport 664c458b-abee-4d23-a60f-a0032a8f6058 for this chassis.
Oct 02 08:39:42 compute-0 nova_compute[260603]: 2025-10-02 08:39:42.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:42 compute-0 ovn_controller[152344]: 2025-10-02T08:39:42Z|00978|binding|INFO|664c458b-abee-4d23-a60f-a0032a8f6058: Claiming fa:16:3e:f7:c6:d9 10.100.0.6
Oct 02 08:39:42 compute-0 systemd-udevd[355806]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:39:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.996 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c6:d9 10.100.0.6'], port_security=['fa:16:3e:f7:c6:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=664c458b-abee-4d23-a60f-a0032a8f6058) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:43 compute-0 NetworkManager[45129]: <info>  [1759394383.0070] device (tap664c458b-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:39:43 compute-0 NetworkManager[45129]: <info>  [1759394383.0082] device (tap664c458b-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:43 compute-0 ovn_controller[152344]: 2025-10-02T08:39:43Z|00979|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 ovn-installed in OVS
Oct 02 08:39:43 compute-0 ovn_controller[152344]: 2025-10-02T08:39:43Z|00980|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 up in Southbound
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.003 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab297b3-f997-4809-8c80-f6adec002c5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526150, 'reachable_time': 30671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355855, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:43 compute-0 systemd-machined[214636]: New machine qemu-121-instance-00000060.
Oct 02 08:39:43 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000060.
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.053 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b43acb59-4969-4761-aac2-8afe3a9a48e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.134 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9617da-f678-4088-8298-b65677f41784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.136 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.138 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.138 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:43 compute-0 NetworkManager[45129]: <info>  [1759394383.1410] manager: (tap19d584c3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct 02 08:39:43 compute-0 kernel: tap19d584c3-e0: entered promiscuous mode
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.146 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:43 compute-0 ovn_controller[152344]: 2025-10-02T08:39:43Z|00981|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.163 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[856b57be-396d-4aa7-a9a9-f96bb1ae69d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.168 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.168 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'env', 'PROCESS_TAG=haproxy-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19d584c3-e754-47d1-9cdf-c6badbd670d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:39:43 compute-0 sudo[355819]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 325 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.4 MiB/s wr, 211 op/s
Oct 02 08:39:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:39:43 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:39:43 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:43 compute-0 sudo[355933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:43 compute-0 sudo[355933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:43 compute-0 sudo[355933]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:43 compute-0 sudo[356000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:39:43 compute-0 sudo[356000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:43 compute-0 sudo[356000]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:43 compute-0 sudo[356030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:43 compute-0 sudo[356030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:43 compute-0 sudo[356030]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:43 compute-0 sudo[356072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:39:43 compute-0 sudo[356072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:43 compute-0 podman[356083]: 2025-10-02 08:39:43.545704983 +0000 UTC m=+0.069342335 container create f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 08:39:43 compute-0 podman[356083]: 2025-10-02 08:39:43.504094273 +0000 UTC m=+0.027731674 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:39:43 compute-0 systemd[1]: Started libpod-conmon-f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7.scope.
Oct 02 08:39:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:39:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8cbea9285c5da7b9818a5a69331acc3f529d43f16cdc486f4c0e52f4c5285b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:43 compute-0 podman[356083]: 2025-10-02 08:39:43.663434482 +0000 UTC m=+0.187071833 container init f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:39:43 compute-0 podman[356083]: 2025-10-02 08:39:43.669543535 +0000 UTC m=+0.193180866 container start f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:39:43 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [NOTICE]   (356133) : New worker (356137) forked
Oct 02 08:39:43 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [NOTICE]   (356133) : Loading success.
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.757 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 664c458b-abee-4d23-a60f-a0032a8f6058 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.759 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.766 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394383.7656057, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.767 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Started (Lifecycle Event)
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.778 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aaeb3c74-02a3-42bf-b55e-3cc21448a491]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.795 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.800 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394383.7658455, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.800 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Paused (Lifecycle Event)
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.820 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.822 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc65976-cc69-4fae-8625-ae32097cdd0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.826 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e8164567-f417-43d1-822a-a78a22346605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.829 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.851 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.868 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1bafdfb3-d172-41be-a5fa-f2defa940e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.891 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aef7d45d-027a-4e99-baf3-1a4ef36bf606]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356156, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.910 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.911 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394383.910339, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.911 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Started (Lifecycle Event)
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.915 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[03be8aa1-39ae-4c8a-9649-fa7a86805e7c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356157, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356157, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.917 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.969 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.973 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394383.9105396, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.973 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Paused (Lifecycle Event)
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.975 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.975 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.975 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.976 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:43 compute-0 nova_compute[260603]: 2025-10-02 08:39:43.999 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.002 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.025 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:39:44 compute-0 sudo[356072]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:39:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:39:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:39:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:39:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b6305d50-ca82-4a77-b39a-d97a6bf285df does not exist
Oct 02 08:39:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 39ca432a-3363-4378-b52d-9fcf682cf633 does not exist
Oct 02 08:39:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 7dd3bedc-c61e-48c9-a797-74a2746ca479 does not exist
Oct 02 08:39:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:39:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.227 2 DEBUG nova.compute.manager [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.227 2 DEBUG oslo_concurrency.lockutils [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.228 2 DEBUG oslo_concurrency.lockutils [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.228 2 DEBUG oslo_concurrency.lockutils [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.229 2 DEBUG nova.compute.manager [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Processing event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:39:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:39:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.230 2 DEBUG nova.compute.manager [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:39:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:39:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.235 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394384.2349272, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.235 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Resumed (Lifecycle Event)
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.238 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.242 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance spawned successfully.
Oct 02 08:39:44 compute-0 ceph-mon[74477]: pgmap v1818: 305 pgs: 305 active+clean; 325 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.4 MiB/s wr, 211 op/s
Oct 02 08:39:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:39:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:39:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:39:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:39:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.275 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.281 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:44 compute-0 nova_compute[260603]: 2025-10-02 08:39:44.318 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:39:44 compute-0 sudo[356172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:44 compute-0 sudo[356172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:44 compute-0 sudo[356172]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:44 compute-0 sudo[356197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:39:44 compute-0 sudo[356197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:44 compute-0 sudo[356197]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:44 compute-0 sudo[356222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:44 compute-0 sudo[356222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:44 compute-0 sudo[356222]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:44 compute-0 sudo[356247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:39:44 compute-0 sudo[356247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:44 compute-0 podman[356312]: 2025-10-02 08:39:44.976979062 +0000 UTC m=+0.070076407 container create c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:39:45 compute-0 podman[356312]: 2025-10-02 08:39:44.937122024 +0000 UTC m=+0.030219409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:39:45 compute-0 systemd[1]: Started libpod-conmon-c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756.scope.
Oct 02 08:39:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:39:45 compute-0 podman[356312]: 2025-10-02 08:39:45.139308981 +0000 UTC m=+0.232406336 container init c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:39:45 compute-0 podman[356312]: 2025-10-02 08:39:45.152453625 +0000 UTC m=+0.245551001 container start c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:39:45 compute-0 podman[356312]: 2025-10-02 08:39:45.160272791 +0000 UTC m=+0.253370176 container attach c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 08:39:45 compute-0 sleepy_swartz[356329]: 167 167
Oct 02 08:39:45 compute-0 systemd[1]: libpod-c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756.scope: Deactivated successfully.
Oct 02 08:39:45 compute-0 podman[356312]: 2025-10-02 08:39:45.163807207 +0000 UTC m=+0.256904602 container died c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 08:39:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 325 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 151 op/s
Oct 02 08:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec14c7f6bf0dfd806de8556f14500632eeb640bf40487535b5342c449addc428-merged.mount: Deactivated successfully.
Oct 02 08:39:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 02 08:39:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 02 08:39:45 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 02 08:39:45 compute-0 podman[356312]: 2025-10-02 08:39:45.345798927 +0000 UTC m=+0.438896262 container remove c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:39:45 compute-0 systemd[1]: libpod-conmon-c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756.scope: Deactivated successfully.
Oct 02 08:39:45 compute-0 podman[356355]: 2025-10-02 08:39:45.592927675 +0000 UTC m=+0.071198822 container create be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 08:39:45 compute-0 podman[356355]: 2025-10-02 08:39:45.558857021 +0000 UTC m=+0.037128168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:39:45 compute-0 systemd[1]: Started libpod-conmon-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope.
Oct 02 08:39:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:39:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:45 compute-0 podman[356355]: 2025-10-02 08:39:45.733045976 +0000 UTC m=+0.211317103 container init be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 08:39:45 compute-0 podman[356355]: 2025-10-02 08:39:45.743611163 +0000 UTC m=+0.221882300 container start be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:39:45 compute-0 podman[356355]: 2025-10-02 08:39:45.746977265 +0000 UTC m=+0.225248402 container attach be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 08:39:45 compute-0 nova_compute[260603]: 2025-10-02 08:39:45.883 2 DEBUG nova.compute.manager [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:45.893 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:45.894 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated
Oct 02 08:39:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:45.895 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:45.897 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[645b0e3a-49ed-4691-9bb9-c00617c62c18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:45 compute-0 nova_compute[260603]: 2025-10-02 08:39:45.972 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:46 compute-0 ceph-mon[74477]: pgmap v1819: 305 pgs: 305 active+clean; 325 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 151 op/s
Oct 02 08:39:46 compute-0 ceph-mon[74477]: osdmap e261: 3 total, 3 up, 3 in
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.384 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.384 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.384 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.385 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.385 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.385 2 WARNING nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state None.
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.385 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.385 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Processing event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.387 2 WARNING nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state stopped and task_state rebuild_spawning.
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.391 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.397 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394386.3969915, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.397 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Resumed (Lifecycle Event)
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.401 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.405 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance spawned successfully.
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.406 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.419 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.425 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.428 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.429 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.430 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.431 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.432 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.433 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.457 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.488 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.538 2 INFO nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] bringing vm to original state: 'stopped'
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.598 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.599 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.599 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.604 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:39:46 compute-0 kernel: tap664c458b-ab (unregistering): left promiscuous mode
Oct 02 08:39:46 compute-0 NetworkManager[45129]: <info>  [1759394386.6530] device (tap664c458b-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:39:46 compute-0 ovn_controller[152344]: 2025-10-02T08:39:46Z|00982|binding|INFO|Releasing lport 664c458b-abee-4d23-a60f-a0032a8f6058 from this chassis (sb_readonly=0)
Oct 02 08:39:46 compute-0 ovn_controller[152344]: 2025-10-02T08:39:46Z|00983|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 down in Southbound
Oct 02 08:39:46 compute-0 ovn_controller[152344]: 2025-10-02T08:39:46Z|00984|binding|INFO|Removing iface tap664c458b-ab ovn-installed in OVS
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.676 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c6:d9 10.100.0.6'], port_security=['fa:16:3e:f7:c6:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=664c458b-abee-4d23-a60f-a0032a8f6058) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.677 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 664c458b-abee-4d23-a60f-a0032a8f6058 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.679 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.713 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4af63540-1fdc-4122-a17f-0f77df5bcc68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:46 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 02 08:39:46 compute-0 systemd-machined[214636]: Machine qemu-121-instance-00000060 terminated.
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.759 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b792c1a-2cd4-4bff-a15c-a9601d423d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.763 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1416449b-b373-4ac2-a4fd-9825f44a0250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.795 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdc3327-bb48-4430-b069-395ccc5607b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.821 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a720d757-bbef-4402-b5ed-3ecb3f711484]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356407, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.842 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b14a84f-4525-4f5d-a180-7a85320a8d99]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356414, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356414, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.844 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.849 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.850 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.
Oct 02 08:39:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.850 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:46 compute-0 pensive_margulis[356372]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:39:46 compute-0 pensive_margulis[356372]: --> relative data size: 1.0
Oct 02 08:39:46 compute-0 pensive_margulis[356372]: --> All data devices are unavailable
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.915 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:46 compute-0 systemd[1]: libpod-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope: Deactivated successfully.
Oct 02 08:39:46 compute-0 systemd[1]: libpod-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope: Consumed 1.065s CPU time.
Oct 02 08:39:46 compute-0 conmon[356372]: conmon be8922e4fb75512a8756 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope/container/memory.events
Oct 02 08:39:46 compute-0 podman[356355]: 2025-10-02 08:39:46.938594559 +0000 UTC m=+1.416865666 container died be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.953 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.955 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:46 compute-0 nova_compute[260603]: 2025-10-02 08:39:46.955 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:39:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8-merged.mount: Deactivated successfully.
Oct 02 08:39:46 compute-0 podman[356355]: 2025-10-02 08:39:46.994986814 +0000 UTC m=+1.473257911 container remove be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:39:47 compute-0 systemd[1]: libpod-conmon-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope: Deactivated successfully.
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.026 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.027 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 4145f1a3-c327-49ee-9af1-1ace3afb70a5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.027 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.027 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.028 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.028 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.028 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.028 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.029 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:47 compute-0 sudo[356247]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.075 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.078 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.079 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:47 compute-0 nova_compute[260603]: 2025-10-02 08:39:47.100 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:47 compute-0 sudo[356437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:47 compute-0 sudo[356437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:47 compute-0 sudo[356437]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:47 compute-0 sudo[356462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:39:47 compute-0 sudo[356462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:47 compute-0 sudo[356462]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 293 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 6.9 MiB/s wr, 212 op/s
Oct 02 08:39:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:47 compute-0 sudo[356487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:47 compute-0 sudo[356487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:47 compute-0 sudo[356487]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:47.315 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:47.318 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated
Oct 02 08:39:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:47.320 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:47.321 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffce6e0a-be40-44de-b4bb-34a26fa8089a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:47 compute-0 sudo[356512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:39:47 compute-0 sudo[356512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:47 compute-0 podman[356577]: 2025-10-02 08:39:47.824394893 +0000 UTC m=+0.064298933 container create 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 08:39:47 compute-0 systemd[1]: Started libpod-conmon-0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c.scope.
Oct 02 08:39:47 compute-0 podman[356577]: 2025-10-02 08:39:47.796559616 +0000 UTC m=+0.036463696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:39:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:39:47 compute-0 podman[356577]: 2025-10-02 08:39:47.94009898 +0000 UTC m=+0.180003020 container init 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:39:47 compute-0 podman[356577]: 2025-10-02 08:39:47.949572516 +0000 UTC m=+0.189476546 container start 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:39:47 compute-0 podman[356577]: 2025-10-02 08:39:47.953135262 +0000 UTC m=+0.193039302 container attach 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 08:39:47 compute-0 musing_hugle[356593]: 167 167
Oct 02 08:39:47 compute-0 systemd[1]: libpod-0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c.scope: Deactivated successfully.
Oct 02 08:39:47 compute-0 conmon[356593]: conmon 0dd6dd60b910598e32f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c.scope/container/memory.events
Oct 02 08:39:48 compute-0 podman[356598]: 2025-10-02 08:39:48.005538847 +0000 UTC m=+0.030466486 container died 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:39:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b44447731674113029ce105f2505cd0c38ca7d43b728d1ba43d3f98d39e8048e-merged.mount: Deactivated successfully.
Oct 02 08:39:48 compute-0 podman[356598]: 2025-10-02 08:39:48.050018484 +0000 UTC m=+0.074946143 container remove 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 08:39:48 compute-0 systemd[1]: libpod-conmon-0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c.scope: Deactivated successfully.
Oct 02 08:39:48 compute-0 podman[356620]: 2025-10-02 08:39:48.288291105 +0000 UTC m=+0.052803978 container create 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:39:48 compute-0 systemd[1]: Started libpod-conmon-04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec.scope.
Oct 02 08:39:48 compute-0 ceph-mon[74477]: pgmap v1821: 305 pgs: 305 active+clean; 293 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 6.9 MiB/s wr, 212 op/s
Oct 02 08:39:48 compute-0 podman[356620]: 2025-10-02 08:39:48.26848173 +0000 UTC m=+0.032994643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:39:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:48 compute-0 podman[356620]: 2025-10-02 08:39:48.4055711 +0000 UTC m=+0.170084013 container init 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:39:48 compute-0 podman[356620]: 2025-10-02 08:39:48.413348554 +0000 UTC m=+0.177861467 container start 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:39:48 compute-0 podman[356620]: 2025-10-02 08:39:48.416801198 +0000 UTC m=+0.181314081 container attach 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.653 2 DEBUG nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.654 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.655 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.655 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.656 2 DEBUG nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.656 2 WARNING nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state stopped and task_state None.
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.657 2 DEBUG nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.657 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.658 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.658 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.659 2 DEBUG nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:48 compute-0 nova_compute[260603]: 2025-10-02 08:39:48.659 2 WARNING nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state stopped and task_state None.
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]: {
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:     "0": [
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:         {
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "devices": [
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "/dev/loop3"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             ],
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_name": "ceph_lv0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_size": "21470642176",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "name": "ceph_lv0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "tags": {
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cluster_name": "ceph",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.crush_device_class": "",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.encrypted": "0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osd_id": "0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.type": "block",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.vdo": "0"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             },
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "type": "block",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "vg_name": "ceph_vg0"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:         }
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:     ],
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:     "1": [
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:         {
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "devices": [
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "/dev/loop4"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             ],
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_name": "ceph_lv1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_size": "21470642176",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "name": "ceph_lv1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "tags": {
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cluster_name": "ceph",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.crush_device_class": "",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.encrypted": "0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osd_id": "1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.type": "block",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.vdo": "0"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             },
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "type": "block",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "vg_name": "ceph_vg1"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:         }
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:     ],
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:     "2": [
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:         {
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "devices": [
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "/dev/loop5"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             ],
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_name": "ceph_lv2",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_size": "21470642176",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "name": "ceph_lv2",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "tags": {
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.cluster_name": "ceph",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.crush_device_class": "",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.encrypted": "0",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osd_id": "2",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.type": "block",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:                 "ceph.vdo": "0"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             },
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "type": "block",
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:             "vg_name": "ceph_vg2"
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:         }
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]:     ]
Oct 02 08:39:49 compute-0 reverent_keldysh[356637]: }
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:49 compute-0 systemd[1]: libpod-04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec.scope: Deactivated successfully.
Oct 02 08:39:49 compute-0 podman[356620]: 2025-10-02 08:39:49.226271898 +0000 UTC m=+0.990784791 container died 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:39:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.324 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.325 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.325 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.325 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.325 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.327 2 INFO nova.compute.manager [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Terminating instance
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.328 2 DEBUG nova.compute.manager [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.333 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.333 2 DEBUG nova.objects.instance [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'resources' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f-merged.mount: Deactivated successfully.
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.346 2 DEBUG nova.virt.libvirt.vif [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:39:47Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.346 2 DEBUG nova.network.os_vif_util [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.347 2 DEBUG nova.network.os_vif_util [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.347 2 DEBUG os_vif [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap664c458b-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.353 2 INFO os_vif [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab')
Oct 02 08:39:49 compute-0 podman[356620]: 2025-10-02 08:39:49.415534966 +0000 UTC m=+1.180047849 container remove 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:39:49 compute-0 systemd[1]: libpod-conmon-04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec.scope: Deactivated successfully.
Oct 02 08:39:49 compute-0 sudo[356512]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:49.475 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:49.477 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:49 compute-0 sudo[356679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:49 compute-0 sudo[356679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:49 compute-0 sudo[356679]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:49 compute-0 sudo[356704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:39:49 compute-0 sudo[356704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:49 compute-0 sudo[356704]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:49 compute-0 sudo[356730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:49 compute-0 sudo[356730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:49 compute-0 sudo[356730]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:49 compute-0 sudo[356755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:39:49 compute-0 sudo[356755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.746 2 INFO nova.virt.libvirt.driver [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deleting instance files /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_del
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.746 2 INFO nova.virt.libvirt.driver [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deletion of /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_del complete
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.804 2 INFO nova.compute.manager [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Took 0.48 seconds to destroy the instance on the hypervisor.
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.804 2 DEBUG oslo.service.loopingcall [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.804 2 DEBUG nova.compute.manager [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:39:49 compute-0 nova_compute[260603]: 2025-10-02 08:39:49.805 2 DEBUG nova.network.neutron [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:39:50 compute-0 podman[356820]: 2025-10-02 08:39:50.086447391 +0000 UTC m=+0.052399317 container create 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 08:39:50 compute-0 systemd[1]: Started libpod-conmon-1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742.scope.
Oct 02 08:39:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:39:50 compute-0 podman[356820]: 2025-10-02 08:39:50.068337796 +0000 UTC m=+0.034289752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:39:50 compute-0 podman[356820]: 2025-10-02 08:39:50.183387264 +0000 UTC m=+0.149339220 container init 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 08:39:50 compute-0 podman[356820]: 2025-10-02 08:39:50.18958532 +0000 UTC m=+0.155537246 container start 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:39:50 compute-0 podman[356820]: 2025-10-02 08:39:50.192338404 +0000 UTC m=+0.158290320 container attach 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:39:50 compute-0 magical_shirley[356837]: 167 167
Oct 02 08:39:50 compute-0 podman[356820]: 2025-10-02 08:39:50.196059895 +0000 UTC m=+0.162011841 container died 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:39:50 compute-0 systemd[1]: libpod-1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742.scope: Deactivated successfully.
Oct 02 08:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c073dc2159335a6b2caa368ddf3b72c66347fd274e90c52c2308928bd5c75064-merged.mount: Deactivated successfully.
Oct 02 08:39:50 compute-0 podman[356820]: 2025-10-02 08:39:50.238735818 +0000 UTC m=+0.204687764 container remove 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 08:39:50 compute-0 systemd[1]: libpod-conmon-1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742.scope: Deactivated successfully.
Oct 02 08:39:50 compute-0 ceph-mon[74477]: pgmap v1822: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct 02 08:39:50 compute-0 podman[356860]: 2025-10-02 08:39:50.476803943 +0000 UTC m=+0.063666625 container create 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 08:39:50 compute-0 nova_compute[260603]: 2025-10-02 08:39:50.487 2 DEBUG nova.network.neutron [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:50 compute-0 nova_compute[260603]: 2025-10-02 08:39:50.509 2 INFO nova.compute.manager [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Took 0.70 seconds to deallocate network for instance.
Oct 02 08:39:50 compute-0 systemd[1]: Started libpod-conmon-0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918.scope.
Oct 02 08:39:50 compute-0 podman[356860]: 2025-10-02 08:39:50.456537344 +0000 UTC m=+0.043400046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:39:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:50 compute-0 nova_compute[260603]: 2025-10-02 08:39:50.580 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:50 compute-0 nova_compute[260603]: 2025-10-02 08:39:50.580 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:50 compute-0 podman[356860]: 2025-10-02 08:39:50.599190182 +0000 UTC m=+0.186052884 container init 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 02 08:39:50 compute-0 podman[356860]: 2025-10-02 08:39:50.609309285 +0000 UTC m=+0.196171977 container start 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:39:50 compute-0 podman[356860]: 2025-10-02 08:39:50.619647557 +0000 UTC m=+0.206510269 container attach 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:39:50 compute-0 nova_compute[260603]: 2025-10-02 08:39:50.683 2 DEBUG oslo_concurrency.processutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:39:50 compute-0 nova_compute[260603]: 2025-10-02 08:39:50.888 2 DEBUG nova.compute.manager [req-584dca01-89c8-4f1b-a8d6-bb293a2eaefb req-f4de8eeb-c8e4-4e92-b9aa-ae2d1fb754bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-deleted-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:39:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421145403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:51 compute-0 nova_compute[260603]: 2025-10-02 08:39:51.174 2 DEBUG oslo_concurrency.processutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:39:51 compute-0 nova_compute[260603]: 2025-10-02 08:39:51.180 2 DEBUG nova.compute.provider_tree [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:39:51 compute-0 nova_compute[260603]: 2025-10-02 08:39:51.204 2 DEBUG nova.scheduler.client.report [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:39:51 compute-0 nova_compute[260603]: 2025-10-02 08:39:51.238 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct 02 08:39:51 compute-0 nova_compute[260603]: 2025-10-02 08:39:51.268 2 INFO nova.scheduler.client.report [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Deleted allocations for instance 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a
Oct 02 08:39:51 compute-0 nova_compute[260603]: 2025-10-02 08:39:51.335 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1421145403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]: {
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "osd_id": 2,
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "type": "bluestore"
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:     },
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "osd_id": 1,
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "type": "bluestore"
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:     },
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "osd_id": 0,
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:         "type": "bluestore"
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]:     }
Oct 02 08:39:51 compute-0 jolly_engelbart[356877]: }
Oct 02 08:39:51 compute-0 systemd[1]: libpod-0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918.scope: Deactivated successfully.
Oct 02 08:39:51 compute-0 podman[356860]: 2025-10-02 08:39:51.581658101 +0000 UTC m=+1.168520823 container died 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 08:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b-merged.mount: Deactivated successfully.
Oct 02 08:39:51 compute-0 podman[356860]: 2025-10-02 08:39:51.717157943 +0000 UTC m=+1.304020625 container remove 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:39:51 compute-0 systemd[1]: libpod-conmon-0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918.scope: Deactivated successfully.
Oct 02 08:39:51 compute-0 sudo[356755]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:39:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:39:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4c3c63e7-493c-4207-9b34-370349a968fc does not exist
Oct 02 08:39:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9d500eb5-5f46-41cb-b243-e6b0812e75cc does not exist
Oct 02 08:39:51 compute-0 sudo[356945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:39:51 compute-0 sudo[356945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:51 compute-0 sudo[356945]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:51 compute-0 sudo[356970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:39:51 compute-0 sudo[356970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:39:51 compute-0 sudo[356970]: pam_unix(sudo:session): session closed for user root
Oct 02 08:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 02 08:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 02 08:39:52 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 02 08:39:52 compute-0 nova_compute[260603]: 2025-10-02 08:39:52.451 2 DEBUG nova.objects.instance [None req-e25dbad2-19c6-42f3-be50-942cb2bdfa20 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:52 compute-0 nova_compute[260603]: 2025-10-02 08:39:52.473 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394392.4730625, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:52 compute-0 nova_compute[260603]: 2025-10-02 08:39:52.473 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Paused (Lifecycle Event)
Oct 02 08:39:52 compute-0 nova_compute[260603]: 2025-10-02 08:39:52.494 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:52 compute-0 nova_compute[260603]: 2025-10-02 08:39:52.498 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:52 compute-0 nova_compute[260603]: 2025-10-02 08:39:52.518 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 08:39:53 compute-0 ceph-mon[74477]: pgmap v1823: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct 02 08:39:53 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:53 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:39:53 compute-0 ceph-mon[74477]: osdmap e262: 3 total, 3 up, 3 in
Oct 02 08:39:53 compute-0 kernel: tap00fa1373-e4 (unregistering): left promiscuous mode
Oct 02 08:39:53 compute-0 NetworkManager[45129]: <info>  [1759394393.0508] device (tap00fa1373-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:53 compute-0 ovn_controller[152344]: 2025-10-02T08:39:53Z|00985|binding|INFO|Releasing lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb from this chassis (sb_readonly=0)
Oct 02 08:39:53 compute-0 ovn_controller[152344]: 2025-10-02T08:39:53Z|00986|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb down in Southbound
Oct 02 08:39:53 compute-0 ovn_controller[152344]: 2025-10-02T08:39:53Z|00987|binding|INFO|Removing iface tap00fa1373-e4 ovn-installed in OVS
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.074 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.075 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.076 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19d584c3-e754-47d1-9cdf-c6badbd670d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.076 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b0fea3-d9ad-451f-adf0-3c5c5a308523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.077 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace which is not needed anymore
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:53 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 02 08:39:53 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d0000005d.scope: Consumed 9.330s CPU time.
Oct 02 08:39:53 compute-0 systemd-machined[214636]: Machine qemu-120-instance-0000005d terminated.
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.233 2 DEBUG nova.compute.manager [None req-e25dbad2-19c6-42f3-be50-942cb2bdfa20 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 193 op/s
Oct 02 08:39:53 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [NOTICE]   (356133) : haproxy version is 2.8.14-c23fe91
Oct 02 08:39:53 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [NOTICE]   (356133) : path to executable is /usr/sbin/haproxy
Oct 02 08:39:53 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [WARNING]  (356133) : Exiting Master process...
Oct 02 08:39:53 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [ALERT]    (356133) : Current worker (356137) exited with code 143 (Terminated)
Oct 02 08:39:53 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [WARNING]  (356133) : All workers exited. Exiting... (0)
Oct 02 08:39:53 compute-0 systemd[1]: libpod-f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7.scope: Deactivated successfully.
Oct 02 08:39:53 compute-0 podman[357021]: 2025-10-02 08:39:53.263609073 +0000 UTC m=+0.085137010 container died f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7-userdata-shm.mount: Deactivated successfully.
Oct 02 08:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8cbea9285c5da7b9818a5a69331acc3f529d43f16cdc486f4c0e52f4c5285b9-merged.mount: Deactivated successfully.
Oct 02 08:39:53 compute-0 podman[357021]: 2025-10-02 08:39:53.427274172 +0000 UTC m=+0.248802109 container cleanup f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:39:53 compute-0 systemd[1]: libpod-conmon-f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7.scope: Deactivated successfully.
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.446 2 DEBUG nova.compute.manager [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.446 2 DEBUG oslo_concurrency.lockutils [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.447 2 DEBUG oslo_concurrency.lockutils [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.447 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.447 2 DEBUG oslo_concurrency.lockutils [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.447 2 DEBUG nova.compute.manager [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.448 2 WARNING nova.compute.manager [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state suspended and task_state None.
Oct 02 08:39:53 compute-0 podman[357061]: 2025-10-02 08:39:53.549486515 +0000 UTC m=+0.082817960 container remove f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.561 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75b1fa30-a3e1-4ec1-98b1-5e17cdfa0661]: (4, ('Thu Oct  2 08:39:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7)\nf5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7\nThu Oct  2 08:39:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7)\nf5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.563 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9ae7dd-c36b-4e93-9e13-d66d38b16a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.564 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:53 compute-0 kernel: tap19d584c3-e0: left promiscuous mode
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:53 compute-0 nova_compute[260603]: 2025-10-02 08:39:53.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.608 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ee9e07-f375-4d74-bbc2-bfcff0bb2af6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.643 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[415f887a-2efe-44e6-bf25-161442280bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.645 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f34fb2a8-6d53-4330-9c50-68e3aa2b0893]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.672 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b81a661-49ef-468f-a793-06d1b91db365]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526141, 'reachable_time': 32525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357080, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.676 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.676 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8c63e4-bcbb-469f-9560-fd39de1a119e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d19d584c3\x2de754\x2d47d1\x2d9cdf\x2dc6badbd670d7.mount: Deactivated successfully.
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.677 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 unbound from our chassis
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.680 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:39:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.681 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14668628-7e7d-4b52-aa02-0290c16c8bdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:54 compute-0 nova_compute[260603]: 2025-10-02 08:39:54.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:54 compute-0 nova_compute[260603]: 2025-10-02 08:39:54.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:54 compute-0 nova_compute[260603]: 2025-10-02 08:39:54.890 2 INFO nova.compute.manager [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Resuming
Oct 02 08:39:54 compute-0 nova_compute[260603]: 2025-10-02 08:39:54.893 2 DEBUG nova.objects.instance [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'flavor' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:54 compute-0 nova_compute[260603]: 2025-10-02 08:39:54.927 2 DEBUG oslo_concurrency.lockutils [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:39:54 compute-0 nova_compute[260603]: 2025-10-02 08:39:54.928 2 DEBUG oslo_concurrency.lockutils [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:39:54 compute-0 nova_compute[260603]: 2025-10-02 08:39:54.928 2 DEBUG nova.network.neutron [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:39:55 compute-0 ceph-mon[74477]: pgmap v1825: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 193 op/s
Oct 02 08:39:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 156 op/s
Oct 02 08:39:55 compute-0 nova_compute[260603]: 2025-10-02 08:39:55.809 2 DEBUG nova.compute.manager [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:55 compute-0 nova_compute[260603]: 2025-10-02 08:39:55.809 2 DEBUG oslo_concurrency.lockutils [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:55 compute-0 nova_compute[260603]: 2025-10-02 08:39:55.810 2 DEBUG oslo_concurrency.lockutils [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:55 compute-0 nova_compute[260603]: 2025-10-02 08:39:55.810 2 DEBUG oslo_concurrency.lockutils [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:55 compute-0 nova_compute[260603]: 2025-10-02 08:39:55.810 2 DEBUG nova.compute.manager [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:55 compute-0 nova_compute[260603]: 2025-10-02 08:39:55.811 2 WARNING nova.compute.manager [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state suspended and task_state resuming.
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.091 2 DEBUG nova.network.neutron [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.110 2 DEBUG oslo_concurrency.lockutils [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.117 2 DEBUG nova.virt.libvirt.vif [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:39:53Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.118 2 DEBUG nova.network.os_vif_util [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.119 2 DEBUG nova.network.os_vif_util [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.120 2 DEBUG os_vif [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.127 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00fa1373-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00fa1373-e4, col_values=(('external_ids', {'iface-id': '00fa1373-e4cc-4245-9cfa-5a58c77aa4eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:b9:2b', 'vm-uuid': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.129 2 INFO os_vif [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.160 2 DEBUG nova.objects.instance [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'numa_topology' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:56 compute-0 kernel: tap00fa1373-e4: entered promiscuous mode
Oct 02 08:39:56 compute-0 ovn_controller[152344]: 2025-10-02T08:39:56Z|00988|binding|INFO|Claiming lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for this chassis.
Oct 02 08:39:56 compute-0 ovn_controller[152344]: 2025-10-02T08:39:56Z|00989|binding|INFO|00fa1373-e4cc-4245-9cfa-5a58c77aa4eb: Claiming fa:16:3e:11:b9:2b 10.100.0.10
Oct 02 08:39:56 compute-0 NetworkManager[45129]: <info>  [1759394396.2580] manager: (tap00fa1373-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.265 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '10', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.266 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 bound to our chassis
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.267 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:39:56 compute-0 ovn_controller[152344]: 2025-10-02T08:39:56Z|00990|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb ovn-installed in OVS
Oct 02 08:39:56 compute-0 ovn_controller[152344]: 2025-10-02T08:39:56Z|00991|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb up in Southbound
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.284 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fddb123d-d2b4-48fd-af8b-92d49a138ba1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.285 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19d584c3-e1 in ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.289 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19d584c3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65406aba-4354-42aa-9e0f-a9c32470eb4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[689c8214-fef1-4ff4-9541-5b8a3d249bca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 systemd-udevd[357094]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.300 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9f801e-43b8-42e8-b31b-d5417f945a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 NetworkManager[45129]: <info>  [1759394396.3121] device (tap00fa1373-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:39:56 compute-0 NetworkManager[45129]: <info>  [1759394396.3156] device (tap00fa1373-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.329 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d34053-0483-4c14-8847-b189db450fb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 systemd-machined[214636]: New machine qemu-122-instance-0000005d.
Oct 02 08:39:56 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-0000005d.
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.359 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8ce5ca-69ac-4159-8709-5f14c2a26f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.364 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e673d1f8-4a46-4768-8700-4c9b36d2024e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 NetworkManager[45129]: <info>  [1759394396.3654] manager: (tap19d584c3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/391)
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.392 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[39e352f9-4f62-4398-a608-6ff7326c376a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.395 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eb5e36-215d-45e8-9c52-045b07d5cf8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 NetworkManager[45129]: <info>  [1759394396.4132] device (tap19d584c3-e0): carrier: link connected
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.418 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a5f2af-c0ff-48a6-bdb3-34d8615540e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86879179-3468-44b8-8ff3-45734eb1c583]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527501, 'reachable_time': 28627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357128, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.448 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9863db77-febd-48a7-bd59-00cc66f108a2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:8c4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527501, 'tstamp': 527501}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357129, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.462 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b21c6a97-8732-48bc-87bb-9755ec39099e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527501, 'reachable_time': 28627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357130, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.478 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.489 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[db82c5bd-8fa5-467f-b22a-2a01e7b318d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.570 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7ca13b-76dc-4b12-9f04-206fabf34a39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.571 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.572 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.572 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:56 compute-0 kernel: tap19d584c3-e0: entered promiscuous mode
Oct 02 08:39:56 compute-0 NetworkManager[45129]: <info>  [1759394396.5745] manager: (tap19d584c3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.584 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:39:56 compute-0 ovn_controller[152344]: 2025-10-02T08:39:56Z|00992|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.604 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:39:56 compute-0 nova_compute[260603]: 2025-10-02 08:39:56.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.605 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[777fa21e-5178-474f-8f47-1c3624a6462d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.606 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:39:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.607 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'env', 'PROCESS_TAG=haproxy-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19d584c3-e754-47d1-9cdf-c6badbd670d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:39:57 compute-0 podman[357201]: 2025-10-02 08:39:57.016558329 +0000 UTC m=+0.068670514 container create d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 08:39:57 compute-0 ceph-mon[74477]: pgmap v1826: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 156 op/s
Oct 02 08:39:57 compute-0 podman[357201]: 2025-10-02 08:39:56.979163306 +0000 UTC m=+0.031275451 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:39:57 compute-0 systemd[1]: Started libpod-conmon-d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee.scope.
Oct 02 08:39:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fbca77696254007de7e96072e33264e451582408270a23fe6bb5e724afe7cd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:39:57 compute-0 podman[357201]: 2025-10-02 08:39:57.141868367 +0000 UTC m=+0.193980532 container init d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 08:39:57 compute-0 podman[357201]: 2025-10-02 08:39:57.151669681 +0000 UTC m=+0.203781816 container start d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 08:39:57 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [NOTICE]   (357223) : New worker (357225) forked
Oct 02 08:39:57 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [NOTICE]   (357223) : Loading success.
Oct 02 08:39:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.2 KiB/s wr, 117 op/s
Oct 02 08:39:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.492 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.493 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394397.492383, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.494 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Started (Lifecycle Event)
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.513 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.529 2 DEBUG nova.compute.manager [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.530 2 DEBUG nova.objects.instance [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.533 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.556 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance running successfully.
Oct 02 08:39:57 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.560 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.561 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394397.4992716, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.561 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Resumed (Lifecycle Event)
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.564 2 DEBUG nova.virt.libvirt.guest [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.564 2 DEBUG nova.compute.manager [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.591 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.595 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.631 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.977 2 DEBUG nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.978 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.978 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.978 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.978 2 DEBUG nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.979 2 WARNING nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state None.
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.979 2 DEBUG nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.979 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.980 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.980 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.980 2 DEBUG nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:39:57 compute-0 nova_compute[260603]: 2025-10-02 08:39:57.980 2 WARNING nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state None.
Oct 02 08:39:58 compute-0 ceph-mon[74477]: pgmap v1827: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.2 KiB/s wr, 117 op/s
Oct 02 08:39:59 compute-0 podman[357235]: 2025-10-02 08:39:59.031391858 +0000 UTC m=+0.094943565 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:39:59 compute-0 podman[357234]: 2025-10-02 08:39:59.125073123 +0000 UTC m=+0.180754583 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.128 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.130 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.159 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 38 op/s
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.302 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.303 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.314 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.315 2 INFO nova.compute.claims [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:39:59 compute-0 nova_compute[260603]: 2025-10-02 08:39:59.516 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:40:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465837021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.032 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.041 2 DEBUG nova.compute.provider_tree [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.073 2 DEBUG nova.scheduler.client.report [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.104 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.104 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.172 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.172 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.189 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.206 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:40:00 compute-0 ceph-mon[74477]: pgmap v1828: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 38 op/s
Oct 02 08:40:00 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/465837021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.330 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.331 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.332 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Creating image(s)
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.356 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.382 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.411 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.417 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.528 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.529 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.531 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.531 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.563 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.570 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.925 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:00 compute-0 nova_compute[260603]: 2025-10-02 08:40:00.986 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] resizing rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.098 2 DEBUG nova.objects.instance [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'migration_context' on Instance uuid c33f349a-5d06-4a81-90fd-fe32eebc4cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.126 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.126 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Ensure instance console log exists: /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.127 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.127 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.127 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.143 2 DEBUG nova.policy [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3802fedfb914c27b9b09ad6ea6f4c27', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c75535fe577642038c638a0b01f74d09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:40:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 38 op/s
Oct 02 08:40:01 compute-0 ovn_controller[152344]: 2025-10-02T08:40:01Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:b9:2b 10.100.0.10
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.843 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394386.841953, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.844 2 INFO nova.compute.manager [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Stopped (Lifecycle Event)
Oct 02 08:40:01 compute-0 nova_compute[260603]: 2025-10-02 08:40:01.874 2 DEBUG nova.compute.manager [None req-3e374481-55fc-4a7f-9af0-508b3a597f3d - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:02 compute-0 ceph-mon[74477]: pgmap v1829: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 38 op/s
Oct 02 08:40:02 compute-0 nova_compute[260603]: 2025-10-02 08:40:02.906 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Successfully created port: de7564c4-d262-44cd-9232-19988049a763 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:40:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct 02 08:40:04 compute-0 nova_compute[260603]: 2025-10-02 08:40:04.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:04 compute-0 ceph-mon[74477]: pgmap v1830: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct 02 08:40:04 compute-0 nova_compute[260603]: 2025-10-02 08:40:04.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:05 compute-0 nova_compute[260603]: 2025-10-02 08:40:05.191 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Successfully updated port: de7564c4-d262-44cd-9232-19988049a763 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:40:05 compute-0 nova_compute[260603]: 2025-10-02 08:40:05.214 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:40:05 compute-0 nova_compute[260603]: 2025-10-02 08:40:05.215 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquired lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:40:05 compute-0 nova_compute[260603]: 2025-10-02 08:40:05.216 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:40:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 08:40:05 compute-0 nova_compute[260603]: 2025-10-02 08:40:05.302 2 DEBUG nova.compute.manager [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-changed-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:05 compute-0 nova_compute[260603]: 2025-10-02 08:40:05.303 2 DEBUG nova.compute.manager [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Refreshing instance network info cache due to event network-changed-de7564c4-d262-44cd-9232-19988049a763. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:40:05 compute-0 nova_compute[260603]: 2025-10-02 08:40:05.304 2 DEBUG oslo_concurrency.lockutils [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:40:05 compute-0 nova_compute[260603]: 2025-10-02 08:40:05.472 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:40:06 compute-0 podman[357466]: 2025-10-02 08:40:06.02453336 +0000 UTC m=+0.086016717 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 08:40:06 compute-0 podman[357467]: 2025-10-02 08:40:06.048969674 +0000 UTC m=+0.107855133 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:40:06 compute-0 ceph-mon[74477]: pgmap v1831: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 08:40:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 08:40:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.126 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updating instance_info_cache with network_info: [{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.156 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Releasing lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.156 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance network_info: |[{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.157 2 DEBUG oslo_concurrency.lockutils [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.157 2 DEBUG nova.network.neutron [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Refreshing network info cache for port de7564c4-d262-44cd-9232-19988049a763 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.161 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start _get_guest_xml network_info=[{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.167 2 WARNING nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.174 2 DEBUG nova.virt.libvirt.host [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.175 2 DEBUG nova.virt.libvirt.host [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.179 2 DEBUG nova.virt.libvirt.host [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.180 2 DEBUG nova.virt.libvirt.host [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.180 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.180 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.181 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.181 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.182 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.182 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.182 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.182 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.183 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.183 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.184 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.184 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.187 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:08 compute-0 ceph-mon[74477]: pgmap v1832: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 08:40:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:40:08 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181872288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.645 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.682 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:08 compute-0 nova_compute[260603]: 2025-10-02 08:40:08.687 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:40:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113051477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.190 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.191 2 DEBUG nova.virt.libvirt.vif [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:39:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2038300024',display_name='tempest-ServerActionsTestOtherA-server-2038300024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2038300024',id=97,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-9f4e7pit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:40:00Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=c33f349a-5d06-4a81-90fd-fe32eebc4cb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.192 2 DEBUG nova.network.os_vif_util [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.192 2 DEBUG nova.network.os_vif_util [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.193 2 DEBUG nova.objects.instance [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_devices' on Instance uuid c33f349a-5d06-4a81-90fd-fe32eebc4cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.213 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <uuid>c33f349a-5d06-4a81-90fd-fe32eebc4cb9</uuid>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <name>instance-00000061</name>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerActionsTestOtherA-server-2038300024</nova:name>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:40:08</nova:creationTime>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <nova:user uuid="d3802fedfb914c27b9b09ad6ea6f4c27">tempest-ServerActionsTestOtherA-249618595-project-member</nova:user>
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <nova:project uuid="c75535fe577642038c638a0b01f74d09">tempest-ServerActionsTestOtherA-249618595</nova:project>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <nova:port uuid="de7564c4-d262-44cd-9232-19988049a763">
Oct 02 08:40:09 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <system>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <entry name="serial">c33f349a-5d06-4a81-90fd-fe32eebc4cb9</entry>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <entry name="uuid">c33f349a-5d06-4a81-90fd-fe32eebc4cb9</entry>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </system>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <os>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   </os>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <features>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   </features>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk">
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       </source>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config">
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       </source>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:40:09 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:32:fb:55"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <target dev="tapde7564c4-d2"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/console.log" append="off"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <video>
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </video>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:40:09 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:40:09 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:40:09 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:40:09 compute-0 nova_compute[260603]: </domain>
Oct 02 08:40:09 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.214 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Preparing to wait for external event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.214 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.214 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.215 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.215 2 DEBUG nova.virt.libvirt.vif [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:39:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2038300024',display_name='tempest-ServerActionsTestOtherA-server-2038300024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2038300024',id=97,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-9f4e7pit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActi
onsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:40:00Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=c33f349a-5d06-4a81-90fd-fe32eebc4cb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.215 2 DEBUG nova.network.os_vif_util [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.216 2 DEBUG nova.network.os_vif_util [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.216 2 DEBUG os_vif [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde7564c4-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapde7564c4-d2, col_values=(('external_ids', {'iface-id': 'de7564c4-d262-44cd-9232-19988049a763', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:fb:55', 'vm-uuid': 'c33f349a-5d06-4a81-90fd-fe32eebc4cb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:09 compute-0 NetworkManager[45129]: <info>  [1759394409.2254] manager: (tapde7564c4-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.237 2 INFO os_vif [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2')
Oct 02 08:40:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.313 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.313 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.313 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No VIF found with MAC fa:16:3e:32:fb:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.314 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Using config drive
Oct 02 08:40:09 compute-0 nova_compute[260603]: 2025-10-02 08:40:09.335 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2181872288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:40:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/113051477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.414 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Creating config drive at /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.424 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1h0na6bl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:10 compute-0 ceph-mon[74477]: pgmap v1833: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.603 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1h0na6bl" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.625 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.628 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.821 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.823 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Deleting local config drive /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config because it was imported into RBD.
Oct 02 08:40:10 compute-0 kernel: tapde7564c4-d2: entered promiscuous mode
Oct 02 08:40:10 compute-0 NetworkManager[45129]: <info>  [1759394410.8958] manager: (tapde7564c4-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Oct 02 08:40:10 compute-0 ovn_controller[152344]: 2025-10-02T08:40:10Z|00993|binding|INFO|Claiming lport de7564c4-d262-44cd-9232-19988049a763 for this chassis.
Oct 02 08:40:10 compute-0 ovn_controller[152344]: 2025-10-02T08:40:10Z|00994|binding|INFO|de7564c4-d262-44cd-9232-19988049a763: Claiming fa:16:3e:32:fb:55 10.100.0.10
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.907 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:fb:55 10.100.0.10'], port_security=['fa:16:3e:32:fb:55 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c33f349a-5d06-4a81-90fd-fe32eebc4cb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=de7564c4-d262-44cd-9232-19988049a763) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:40:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.910 162357 INFO neutron.agent.ovn.metadata.agent [-] Port de7564c4-d262-44cd-9232-19988049a763 in datapath 28f843b2-396a-4167-9840-21c273bdc044 bound to our chassis
Oct 02 08:40:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.913 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:40:10 compute-0 ovn_controller[152344]: 2025-10-02T08:40:10Z|00995|binding|INFO|Setting lport de7564c4-d262-44cd-9232-19988049a763 up in Southbound
Oct 02 08:40:10 compute-0 ovn_controller[152344]: 2025-10-02T08:40:10Z|00996|binding|INFO|Setting lport de7564c4-d262-44cd-9232-19988049a763 ovn-installed in OVS
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:10 compute-0 systemd-machined[214636]: New machine qemu-123-instance-00000061.
Oct 02 08:40:10 compute-0 nova_compute[260603]: 2025-10-02 08:40:10.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2212b274-ac53-4b78-8f99-393c5c1c2626]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:10 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000061.
Oct 02 08:40:10 compute-0 systemd-udevd[357645]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:40:10 compute-0 NetworkManager[45129]: <info>  [1759394410.9761] device (tapde7564c4-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:40:10 compute-0 NetworkManager[45129]: <info>  [1759394410.9772] device (tapde7564c4-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:40:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.979 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[30ff254d-f18c-4f16-a226-1f559f19b32e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.983 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[787af4f5-c061-49ed-958e-87f62c2414f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.007 2 DEBUG nova.network.neutron [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updated VIF entry in instance network info cache for port de7564c4-d262-44cd-9232-19988049a763. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.007 2 DEBUG nova.network.neutron [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updating instance_info_cache with network_info: [{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:40:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.016 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a21e51-8bd1-4fc8-b739-2d6d92d35599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.041 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9552e2f-7b8c-4698-983c-073a53760a88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 916, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 916, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357656, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.058 2 DEBUG oslo_concurrency.lockutils [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:40:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.063 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9f78a1-9869-4c13-92b0-c59ba1167859]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357657, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357657, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.065 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.070 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.070 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:40:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.071 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.072 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:40:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.289 2 DEBUG nova.compute.manager [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.289 2 DEBUG oslo_concurrency.lockutils [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.290 2 DEBUG oslo_concurrency.lockutils [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.290 2 DEBUG oslo_concurrency.lockutils [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.290 2 DEBUG nova.compute.manager [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Processing event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.885 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394411.8853679, c33f349a-5d06-4a81-90fd-fe32eebc4cb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.886 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] VM Started (Lifecycle Event)
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.888 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.892 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.898 2 INFO nova.virt.libvirt.driver [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance spawned successfully.
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.898 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.919 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.924 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.928 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.928 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.929 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.929 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.929 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.930 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.967 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.967 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394411.8865168, c33f349a-5d06-4a81-90fd-fe32eebc4cb9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:11 compute-0 nova_compute[260603]: 2025-10-02 08:40:11.967 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] VM Paused (Lifecycle Event)
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.017 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.020 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394411.891461, c33f349a-5d06-4a81-90fd-fe32eebc4cb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.020 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] VM Resumed (Lifecycle Event)
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.054 2 INFO nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Took 11.72 seconds to spawn the instance on the hypervisor.
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.054 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.055 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.061 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.108 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.195 2 INFO nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Took 12.94 seconds to build instance.
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.218 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:12 compute-0 ceph-mon[74477]: pgmap v1834: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.669 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.670 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.670 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.671 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.671 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.672 2 INFO nova.compute.manager [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Terminating instance
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.673 2 DEBUG nova.compute.manager [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:40:12 compute-0 kernel: tap00fa1373-e4 (unregistering): left promiscuous mode
Oct 02 08:40:12 compute-0 NetworkManager[45129]: <info>  [1759394412.7267] device (tap00fa1373-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:40:12 compute-0 ovn_controller[152344]: 2025-10-02T08:40:12Z|00997|binding|INFO|Releasing lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb from this chassis (sb_readonly=0)
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:12 compute-0 ovn_controller[152344]: 2025-10-02T08:40:12Z|00998|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb down in Southbound
Oct 02 08:40:12 compute-0 ovn_controller[152344]: 2025-10-02T08:40:12Z|00999|binding|INFO|Removing iface tap00fa1373-e4 ovn-installed in OVS
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.751 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '11', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:40:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.753 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis
Oct 02 08:40:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.755 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19d584c3-e754-47d1-9cdf-c6badbd670d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:40:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.756 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bac09676-2e19-48ad-9b0f-58d5849f7af0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.761 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace which is not needed anymore
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:12 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 02 08:40:12 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005d.scope: Consumed 5.333s CPU time.
Oct 02 08:40:12 compute-0 systemd-machined[214636]: Machine qemu-122-instance-0000005d terminated.
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.903 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance destroyed successfully.
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.904 2 DEBUG nova.objects.instance [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'resources' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.926 2 DEBUG nova.virt.libvirt.vif [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:39:57Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.928 2 DEBUG nova.network.os_vif_util [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.929 2 DEBUG nova.network.os_vif_util [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.929 2 DEBUG os_vif [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.932 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00fa1373-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.936 2 DEBUG nova.compute.manager [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.937 2 DEBUG oslo_concurrency.lockutils [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.937 2 DEBUG oslo_concurrency.lockutils [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.938 2 DEBUG oslo_concurrency.lockutils [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.938 2 DEBUG nova.compute.manager [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.938 2 DEBUG nova.compute.manager [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:12 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [NOTICE]   (357223) : haproxy version is 2.8.14-c23fe91
Oct 02 08:40:12 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [NOTICE]   (357223) : path to executable is /usr/sbin/haproxy
Oct 02 08:40:12 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [WARNING]  (357223) : Exiting Master process...
Oct 02 08:40:12 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [WARNING]  (357223) : Exiting Master process...
Oct 02 08:40:12 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [ALERT]    (357223) : Current worker (357225) exited with code 143 (Terminated)
Oct 02 08:40:12 compute-0 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [WARNING]  (357223) : All workers exited. Exiting... (0)
Oct 02 08:40:12 compute-0 nova_compute[260603]: 2025-10-02 08:40:12.942 2 INFO os_vif [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')
Oct 02 08:40:12 compute-0 systemd[1]: libpod-d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee.scope: Deactivated successfully.
Oct 02 08:40:12 compute-0 podman[357723]: 2025-10-02 08:40:12.952090122 +0000 UTC m=+0.062696285 container died d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee-userdata-shm.mount: Deactivated successfully.
Oct 02 08:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fbca77696254007de7e96072e33264e451582408270a23fe6bb5e724afe7cd3-merged.mount: Deactivated successfully.
Oct 02 08:40:13 compute-0 podman[357723]: 2025-10-02 08:40:13.000852077 +0000 UTC m=+0.111458240 container cleanup d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:40:13 compute-0 systemd[1]: libpod-conmon-d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee.scope: Deactivated successfully.
Oct 02 08:40:13 compute-0 podman[357781]: 2025-10-02 08:40:13.090434359 +0000 UTC m=+0.057916321 container remove d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.102 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52947846-eb18-4c6b-85c1-db9d30947abc]: (4, ('Thu Oct  2 08:40:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee)\nd40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee\nThu Oct  2 08:40:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee)\nd40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.104 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95bd5460-98a1-4f01-b17f-202dcaf05d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.105 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:13 compute-0 kernel: tap19d584c3-e0: left promiscuous mode
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.124 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f0cf9c-b553-473c-8877-fcd142421d82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.159 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c19b3ce8-28e9-4e31-a954-3003442a74c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.160 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f29e3fbd-f90a-45d5-9851-664c63bf37e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.180 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[474e064e-ec24-4cb9-9a0d-890f56a1059a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527495, 'reachable_time': 40838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357794, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d19d584c3\x2de754\x2d47d1\x2d9cdf\x2dc6badbd670d7.mount: Deactivated successfully.
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.182 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:40:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.182 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d819033f-8fbe-4d96-ab57-67f37ab8efa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.308 2 INFO nova.virt.libvirt.driver [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deleting instance files /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_del
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.309 2 INFO nova.virt.libvirt.driver [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deletion of /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_del complete
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.354 2 INFO nova.compute.manager [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Took 0.68 seconds to destroy the instance on the hypervisor.
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.354 2 DEBUG oslo.service.loopingcall [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.355 2 DEBUG nova.compute.manager [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.355 2 DEBUG nova.network.neutron [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.503 2 DEBUG nova.compute.manager [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.504 2 DEBUG oslo_concurrency.lockutils [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.505 2 DEBUG oslo_concurrency.lockutils [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.506 2 DEBUG oslo_concurrency.lockutils [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.506 2 DEBUG nova.compute.manager [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] No waiting events found dispatching network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:13 compute-0 nova_compute[260603]: 2025-10-02 08:40:13.507 2 WARNING nova.compute.manager [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received unexpected event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 for instance with vm_state active and task_state None.
Oct 02 08:40:14 compute-0 nova_compute[260603]: 2025-10-02 08:40:14.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:14 compute-0 nova_compute[260603]: 2025-10-02 08:40:14.318 2 DEBUG nova.network.neutron [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:40:14 compute-0 nova_compute[260603]: 2025-10-02 08:40:14.338 2 INFO nova.compute.manager [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Took 0.98 seconds to deallocate network for instance.
Oct 02 08:40:14 compute-0 nova_compute[260603]: 2025-10-02 08:40:14.399 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:14 compute-0 nova_compute[260603]: 2025-10-02 08:40:14.400 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:14 compute-0 nova_compute[260603]: 2025-10-02 08:40:14.507 2 DEBUG oslo_concurrency.processutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:14 compute-0 ceph-mon[74477]: pgmap v1835: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 08:40:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:40:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945092216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:14 compute-0 nova_compute[260603]: 2025-10-02 08:40:14.983 2 DEBUG oslo_concurrency.processutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:14 compute-0 nova_compute[260603]: 2025-10-02 08:40:14.990 2 DEBUG nova.compute.provider_tree [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.020 2 DEBUG nova.scheduler.client.report [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.054 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.064 2 DEBUG nova.compute.manager [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.065 2 DEBUG oslo_concurrency.lockutils [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.066 2 DEBUG oslo_concurrency.lockutils [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.067 2 DEBUG oslo_concurrency.lockutils [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.067 2 DEBUG nova.compute.manager [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.068 2 WARNING nova.compute.manager [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state deleted and task_state None.
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.069 2 DEBUG nova.compute.manager [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-deleted-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.105 2 INFO nova.scheduler.client.report [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Deleted allocations for instance fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.214 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 719 KiB/s rd, 25 KiB/s wr, 35 op/s
Oct 02 08:40:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3945092216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.647 2 DEBUG nova.compute.manager [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-changed-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.648 2 DEBUG nova.compute.manager [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Refreshing instance network info cache due to event network-changed-de7564c4-d262-44cd-9232-19988049a763. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.649 2 DEBUG oslo_concurrency.lockutils [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.650 2 DEBUG oslo_concurrency.lockutils [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:40:15 compute-0 nova_compute[260603]: 2025-10-02 08:40:15.651 2 DEBUG nova.network.neutron [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Refreshing network info cache for port de7564c4-d262-44cd-9232-19988049a763 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.414 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.415 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.416 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.416 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.417 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.418 2 INFO nova.compute.manager [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Terminating instance
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.420 2 DEBUG nova.compute.manager [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:40:16 compute-0 kernel: tapde7564c4-d2 (unregistering): left promiscuous mode
Oct 02 08:40:16 compute-0 NetworkManager[45129]: <info>  [1759394416.4807] device (tapde7564c4-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:40:16 compute-0 ovn_controller[152344]: 2025-10-02T08:40:16Z|01000|binding|INFO|Releasing lport de7564c4-d262-44cd-9232-19988049a763 from this chassis (sb_readonly=0)
Oct 02 08:40:16 compute-0 ovn_controller[152344]: 2025-10-02T08:40:16Z|01001|binding|INFO|Setting lport de7564c4-d262-44cd-9232-19988049a763 down in Southbound
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:16 compute-0 ovn_controller[152344]: 2025-10-02T08:40:16Z|01002|binding|INFO|Removing iface tapde7564c4-d2 ovn-installed in OVS
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.494 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:fb:55 10.100.0.10'], port_security=['fa:16:3e:32:fb:55 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c33f349a-5d06-4a81-90fd-fe32eebc4cb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=de7564c4-d262-44cd-9232-19988049a763) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.495 162357 INFO neutron.agent.ovn.metadata.agent [-] Port de7564c4-d262-44cd-9232-19988049a763 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.496 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c77fd2-1f24-4d46-b249-68b9238ea032]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.552 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b96120f3-7543-4922-a5fa-18501bf004db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.554 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fbf23d-ece6-4b9d-93c3-a3b42039ab0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:16 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 02 08:40:16 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000061.scope: Consumed 5.394s CPU time.
Oct 02 08:40:16 compute-0 systemd-machined[214636]: Machine qemu-123-instance-00000061 terminated.
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.595 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c255f5cf-52a7-4083-9af7-d9eace366bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:16 compute-0 ceph-mon[74477]: pgmap v1836: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 719 KiB/s rd, 25 KiB/s wr, 35 op/s
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.620 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a16497e-4ca4-4a63-82e5-fafed6f3f4e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 916, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 916, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357828, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d599811-d578-4c24-b8ca-6d17f04f9990]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357829, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357829, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.642 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.656 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.670 2 INFO nova.virt.libvirt.driver [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance destroyed successfully.
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.671 2 DEBUG nova.objects.instance [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'resources' on Instance uuid c33f349a-5d06-4a81-90fd-fe32eebc4cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.703 2 DEBUG nova.virt.libvirt.vif [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:39:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2038300024',display_name='tempest-ServerActionsTestOtherA-server-2038300024',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2038300024',id=97,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:40:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-9f4e7pit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:40:12Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=c33f349a-5d06-4a81-90fd-fe32eebc4cb9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.704 2 DEBUG nova.network.os_vif_util [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.705 2 DEBUG nova.network.os_vif_util [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.706 2 DEBUG os_vif [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde7564c4-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:16 compute-0 nova_compute[260603]: 2025-10-02 08:40:16.714 2 INFO os_vif [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2')
Oct 02 08:40:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 221 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 26 KiB/s wr, 57 op/s
Oct 02 08:40:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.400 2 DEBUG nova.network.neutron [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updated VIF entry in instance network info cache for port de7564c4-d262-44cd-9232-19988049a763. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.401 2 DEBUG nova.network.neutron [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updating instance_info_cache with network_info: [{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.437 2 DEBUG oslo_concurrency.lockutils [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.739 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-unplugged-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.739 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.739 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.740 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.740 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] No waiting events found dispatching network-vif-unplugged-de7564c4-d262-44cd-9232-19988049a763 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.740 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-unplugged-de7564c4-d262-44cd-9232-19988049a763 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.740 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.741 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.741 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.742 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.742 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] No waiting events found dispatching network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:17 compute-0 nova_compute[260603]: 2025-10-02 08:40:17.742 2 WARNING nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received unexpected event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 for instance with vm_state active and task_state deleting.
Oct 02 08:40:18 compute-0 ceph-mon[74477]: pgmap v1837: 305 pgs: 305 active+clean; 221 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 26 KiB/s wr, 57 op/s
Oct 02 08:40:18 compute-0 nova_compute[260603]: 2025-10-02 08:40:18.792 2 INFO nova.virt.libvirt.driver [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Deleting instance files /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9_del
Oct 02 08:40:18 compute-0 nova_compute[260603]: 2025-10-02 08:40:18.793 2 INFO nova.virt.libvirt.driver [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Deletion of /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9_del complete
Oct 02 08:40:18 compute-0 ovn_controller[152344]: 2025-10-02T08:40:18Z|01003|binding|INFO|Releasing lport 73da1479-3a84-4798-9da3-841fe88c5e3a from this chassis (sb_readonly=0)
Oct 02 08:40:18 compute-0 nova_compute[260603]: 2025-10-02 08:40:18.870 2 INFO nova.compute.manager [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Took 2.45 seconds to destroy the instance on the hypervisor.
Oct 02 08:40:18 compute-0 nova_compute[260603]: 2025-10-02 08:40:18.871 2 DEBUG oslo.service.loopingcall [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:40:18 compute-0 nova_compute[260603]: 2025-10-02 08:40:18.871 2 DEBUG nova.compute.manager [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:40:18 compute-0 nova_compute[260603]: 2025-10-02 08:40:18.872 2 DEBUG nova.network.neutron [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:40:18 compute-0 nova_compute[260603]: 2025-10-02 08:40:18.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:19 compute-0 nova_compute[260603]: 2025-10-02 08:40:19.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 147 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 115 op/s
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.144 2 DEBUG nova.network.neutron [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.178 2 INFO nova.compute.manager [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Took 1.31 seconds to deallocate network for instance.
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.235 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.235 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.316 2 DEBUG nova.compute.manager [req-1d8d08cb-c00a-4325-92ae-68b6dcace0b8 req-ec300ade-9ce7-46e7-8a7f-9159b9577106 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-deleted-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.347 2 DEBUG oslo_concurrency.processutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:20 compute-0 ceph-mon[74477]: pgmap v1838: 305 pgs: 305 active+clean; 147 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 115 op/s
Oct 02 08:40:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:40:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183223316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.840 2 DEBUG oslo_concurrency.processutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.848 2 DEBUG nova.compute.provider_tree [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.867 2 DEBUG nova.scheduler.client.report [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.894 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:20 compute-0 nova_compute[260603]: 2025-10-02 08:40:20.936 2 INFO nova.scheduler.client.report [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Deleted allocations for instance c33f349a-5d06-4a81-90fd-fe32eebc4cb9
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.046 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 147 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 114 op/s
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.467 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.468 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.468 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.469 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.469 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.471 2 INFO nova.compute.manager [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Terminating instance
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.474 2 DEBUG nova.compute.manager [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:40:21 compute-0 kernel: tap4340f7c5-2f (unregistering): left promiscuous mode
Oct 02 08:40:21 compute-0 NetworkManager[45129]: <info>  [1759394421.5382] device (tap4340f7c5-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01004|binding|INFO|Releasing lport 4340f7c5-2f3a-4608-b77c-40798457ce79 from this chassis (sb_readonly=0)
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01005|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 down in Southbound
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01006|binding|INFO|Removing iface tap4340f7c5-2f ovn-installed in OVS
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.560 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:12:17 10.100.0.14'], port_security=['fa:16:3e:37:12:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4145f1a3-c327-49ee-9af1-1ace3afb70a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fef51c8-51f5-4bd3-92a8-fdacaab334b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4340f7c5-2f3a-4608-b77c-40798457ce79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.561 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4340f7c5-2f3a-4608-b77c-40798457ce79 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.562 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28f843b2-396a-4167-9840-21c273bdc044, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.563 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb605fbe-db33-4f0f-b0cd-6e5c4c7d0147]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.564 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 namespace which is not needed anymore
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct 02 08:40:21 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Consumed 16.692s CPU time.
Oct 02 08:40:21 compute-0 systemd-machined[214636]: Machine qemu-118-instance-0000005f terminated.
Oct 02 08:40:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2183223316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:21 compute-0 kernel: tap4340f7c5-2f: entered promiscuous mode
Oct 02 08:40:21 compute-0 systemd-udevd[357888]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:40:21 compute-0 kernel: tap4340f7c5-2f (unregistering): left promiscuous mode
Oct 02 08:40:21 compute-0 NetworkManager[45129]: <info>  [1759394421.6995] manager: (tap4340f7c5-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01007|binding|INFO|Claiming lport 4340f7c5-2f3a-4608-b77c-40798457ce79 for this chassis.
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01008|binding|INFO|4340f7c5-2f3a-4608-b77c-40798457ce79: Claiming fa:16:3e:37:12:17 10.100.0.14
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.712 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:12:17 10.100.0.14'], port_security=['fa:16:3e:37:12:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4145f1a3-c327-49ee-9af1-1ace3afb70a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fef51c8-51f5-4bd3-92a8-fdacaab334b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4340f7c5-2f3a-4608-b77c-40798457ce79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.726 2 INFO nova.virt.libvirt.driver [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance destroyed successfully.
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.726 2 DEBUG nova.objects.instance [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'resources' on Instance uuid 4145f1a3-c327-49ee-9af1-1ace3afb70a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01009|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 ovn-installed in OVS
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01010|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 up in Southbound
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01011|binding|INFO|Releasing lport 4340f7c5-2f3a-4608-b77c-40798457ce79 from this chassis (sb_readonly=1)
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01012|if_status|INFO|Dropped 2 log messages in last 122 seconds (most recently, 122 seconds ago) due to excessive rate
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01013|if_status|INFO|Not setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 down as sb is readonly
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01014|binding|INFO|Removing iface tap4340f7c5-2f ovn-installed in OVS
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01015|binding|INFO|Releasing lport 4340f7c5-2f3a-4608-b77c-40798457ce79 from this chassis (sb_readonly=0)
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.746 2 DEBUG nova.virt.libvirt.vif [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:38:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2131100839',display_name='tempest-ServerActionsTestOtherA-server-2131100839',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2131100839',id=95,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0HjsYe2bQh07GPDTkE/Hkn9NHAzOfs+WPsOxgVRJ14fyGEBr+vw6dokOlyhdtA2fxAJhqFEPbCShkjGLLEAdJQr2B1DlLsi6qyPK3AKei/w52/HIPGV/pd20ma4wEBfQ==',key_name='tempest-keypair-312406508',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-ccglcikc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:38:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=4145f1a3-c327-49ee-9af1-1ace3afb70a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.747 2 DEBUG nova.network.os_vif_util [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:40:21 compute-0 ovn_controller[152344]: 2025-10-02T08:40:21Z|01016|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 down in Southbound
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.748 2 DEBUG nova.network.os_vif_util [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.748 2 DEBUG os_vif [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:40:21 compute-0 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [NOTICE]   (354132) : haproxy version is 2.8.14-c23fe91
Oct 02 08:40:21 compute-0 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [NOTICE]   (354132) : path to executable is /usr/sbin/haproxy
Oct 02 08:40:21 compute-0 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [WARNING]  (354132) : Exiting Master process...
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4340f7c5-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [ALERT]    (354132) : Current worker (354134) exited with code 143 (Terminated)
Oct 02 08:40:21 compute-0 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [WARNING]  (354132) : All workers exited. Exiting... (0)
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.753 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:12:17 10.100.0.14'], port_security=['fa:16:3e:37:12:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4145f1a3-c327-49ee-9af1-1ace3afb70a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fef51c8-51f5-4bd3-92a8-fdacaab334b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4340f7c5-2f3a-4608-b77c-40798457ce79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:40:21 compute-0 systemd[1]: libpod-3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052.scope: Deactivated successfully.
Oct 02 08:40:21 compute-0 podman[357909]: 2025-10-02 08:40:21.762209589 +0000 UTC m=+0.056603643 container died 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.767 2 INFO os_vif [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f')
Oct 02 08:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052-userdata-shm.mount: Deactivated successfully.
Oct 02 08:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb0969f191443a386916053342273928abb5dcb10048e58d342dcf2424fd2bb-merged.mount: Deactivated successfully.
Oct 02 08:40:21 compute-0 podman[357909]: 2025-10-02 08:40:21.811341704 +0000 UTC m=+0.105735748 container cleanup 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:40:21 compute-0 systemd[1]: libpod-conmon-3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052.scope: Deactivated successfully.
Oct 02 08:40:21 compute-0 podman[357958]: 2025-10-02 08:40:21.885115202 +0000 UTC m=+0.048845779 container remove 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.898 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5654ab-402e-40c5-88c3-c6711752157e]: (4, ('Thu Oct  2 08:40:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 (3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052)\n3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052\nThu Oct  2 08:40:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 (3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052)\n3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.900 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbceb0f-5622-4aac-982c-ec7ed3a84805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.903 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 kernel: tap28f843b2-30: left promiscuous mode
Oct 02 08:40:21 compute-0 nova_compute[260603]: 2025-10-02 08:40:21.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.923 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a004c1-b0eb-42ea-a6b7-b60209e377da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b11579d8-f021-41d7-9fc2-84b1db4b169c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.949 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9453214f-173a-4b24-9749-476833d1ba2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[02f35529-f150-4392-81a9-4eac5296975f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520137, 'reachable_time': 25083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357973, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d28f843b2\x2d396a\x2d4167\x2d9840\x2d21c273bdc044.mount: Deactivated successfully.
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.976 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.976 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[db6a495c-2d52-48e0-9485-5b7c50c07a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.977 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4340f7c5-2f3a-4608-b77c-40798457ce79 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.978 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28f843b2-396a-4167-9840-21c273bdc044, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.979 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a0eb47f1-ba7d-4c34-94b3-01043f8d3231]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.979 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4340f7c5-2f3a-4608-b77c-40798457ce79 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.980 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28f843b2-396a-4167-9840-21c273bdc044, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:40:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.981 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[abbea1fe-9d6d-41bd-b5fb-59186a35f92e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:40:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671994283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:40:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:40:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671994283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.179 2 INFO nova.virt.libvirt.driver [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Deleting instance files /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5_del
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.180 2 INFO nova.virt.libvirt.driver [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Deletion of /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5_del complete
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.237 2 INFO nova.compute.manager [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.238 2 DEBUG oslo.service.loopingcall [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.238 2 DEBUG nova.compute.manager [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.238 2 DEBUG nova.network.neutron [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.280 2 DEBUG nova.compute.manager [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-unplugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.280 2 DEBUG oslo_concurrency.lockutils [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.280 2 DEBUG oslo_concurrency.lockutils [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.280 2 DEBUG oslo_concurrency.lockutils [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.281 2 DEBUG nova.compute.manager [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-unplugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:22 compute-0 nova_compute[260603]: 2025-10-02 08:40:22.281 2 DEBUG nova.compute.manager [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-unplugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:40:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:22 compute-0 ceph-mon[74477]: pgmap v1839: 305 pgs: 305 active+clean; 147 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 114 op/s
Oct 02 08:40:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/671994283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:40:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/671994283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:40:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 155 op/s
Oct 02 08:40:23 compute-0 nova_compute[260603]: 2025-10-02 08:40:23.780 2 DEBUG nova.network.neutron [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:40:23 compute-0 nova_compute[260603]: 2025-10-02 08:40:23.804 2 INFO nova.compute.manager [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Took 1.57 seconds to deallocate network for instance.
Oct 02 08:40:23 compute-0 nova_compute[260603]: 2025-10-02 08:40:23.865 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:23 compute-0 nova_compute[260603]: 2025-10-02 08:40:23.866 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:23 compute-0 nova_compute[260603]: 2025-10-02 08:40:23.932 2 DEBUG oslo_concurrency.processutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:40:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552991823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.345 2 DEBUG oslo_concurrency.processutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.352 2 DEBUG nova.compute.provider_tree [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.376 2 DEBUG nova.scheduler.client.report [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.403 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.432 2 INFO nova.scheduler.client.report [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Deleted allocations for instance 4145f1a3-c327-49ee-9af1-1ace3afb70a5
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.484 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.508 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.508 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.508 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.509 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.509 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.509 2 WARNING nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state deleted and task_state None.
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.509 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.509 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.510 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.510 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.510 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.510 2 WARNING nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state deleted and task_state None.
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.510 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.511 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.511 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.511 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.511 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.511 2 WARNING nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state deleted and task_state None.
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.513 2 WARNING nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state deleted and task_state None.
Oct 02 08:40:24 compute-0 nova_compute[260603]: 2025-10-02 08:40:24.513 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-deleted-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:40:24 compute-0 ceph-mon[74477]: pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 155 op/s
Oct 02 08:40:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/552991823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.5 KiB/s wr, 120 op/s
Oct 02 08:40:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:25.720 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8:0:1:f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:40:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:25.722 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated
Oct 02 08:40:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:25.724 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:40:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:25.725 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22566f21-147e-4c81-bb1d-9964db4f632f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:40:26 compute-0 nova_compute[260603]: 2025-10-02 08:40:26.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:26 compute-0 nova_compute[260603]: 2025-10-02 08:40:26.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:40:26 compute-0 nova_compute[260603]: 2025-10-02 08:40:26.576 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:40:26 compute-0 nova_compute[260603]: 2025-10-02 08:40:26.577 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:26 compute-0 nova_compute[260603]: 2025-10-02 08:40:26.578 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:26 compute-0 nova_compute[260603]: 2025-10-02 08:40:26.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:26 compute-0 ceph-mon[74477]: pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.5 KiB/s wr, 120 op/s
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.5 KiB/s wr, 120 op/s
Oct 02 08:40:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:27 compute-0 nova_compute[260603]: 2025-10-02 08:40:27.901 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394412.90098, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:27 compute-0 nova_compute[260603]: 2025-10-02 08:40:27.902 2 INFO nova.compute.manager [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Stopped (Lifecycle Event)
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:40:27 compute-0 nova_compute[260603]: 2025-10-02 08:40:27.931 2 DEBUG nova.compute.manager [None req-c28a6ac7-78b0-4b7f-a835-7491f5c63962 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:40:27
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log']
Oct 02 08:40:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:40:28 compute-0 nova_compute[260603]: 2025-10-02 08:40:28.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:28 compute-0 nova_compute[260603]: 2025-10-02 08:40:28.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:28 compute-0 nova_compute[260603]: 2025-10-02 08:40:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:28 compute-0 nova_compute[260603]: 2025-10-02 08:40:28.538 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:28 compute-0 nova_compute[260603]: 2025-10-02 08:40:28.539 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:28 compute-0 nova_compute[260603]: 2025-10-02 08:40:28.539 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:28 compute-0 nova_compute[260603]: 2025-10-02 08:40:28.539 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:40:28 compute-0 nova_compute[260603]: 2025-10-02 08:40:28.540 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:28 compute-0 ceph-mon[74477]: pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.5 KiB/s wr, 120 op/s
Oct 02 08:40:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:40:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947673933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.073 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:29 compute-0 podman[358021]: 2025-10-02 08:40:29.215672127 +0000 UTC m=+0.095297175 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 KiB/s wr, 99 op/s
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.258 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.258 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3835MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.259 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.259 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:29 compute-0 podman[358040]: 2025-10-02 08:40:29.307705983 +0000 UTC m=+0.084570693 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.340 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.340 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.366 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:40:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313699046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.785 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.794 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.816 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.849 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:40:29 compute-0 nova_compute[260603]: 2025-10-02 08:40:29.849 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1947673933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3313699046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:30 compute-0 nova_compute[260603]: 2025-10-02 08:40:30.851 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:30 compute-0 ceph-mon[74477]: pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 KiB/s wr, 99 op/s
Oct 02 08:40:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct 02 08:40:31 compute-0 nova_compute[260603]: 2025-10-02 08:40:31.668 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394416.6672437, c33f349a-5d06-4a81-90fd-fe32eebc4cb9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:31 compute-0 nova_compute[260603]: 2025-10-02 08:40:31.669 2 INFO nova.compute.manager [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] VM Stopped (Lifecycle Event)
Oct 02 08:40:31 compute-0 nova_compute[260603]: 2025-10-02 08:40:31.700 2 DEBUG nova.compute.manager [None req-df935a1f-77a6-4be8-8cd3-3fde0fb1bdc8 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:31 compute-0 nova_compute[260603]: 2025-10-02 08:40:31.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:32 compute-0 nova_compute[260603]: 2025-10-02 08:40:32.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:33 compute-0 ceph-mon[74477]: pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct 02 08:40:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct 02 08:40:34 compute-0 ceph-mon[74477]: pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.756 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.757 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.779 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:40:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:34.825 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.912 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.913 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.921 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:40:34 compute-0 nova_compute[260603]: 2025-10-02 08:40:34.922 2 INFO nova.compute.claims [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.134 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 02 08:40:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:40:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793537995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.621 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.626 2 DEBUG nova.compute.provider_tree [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.656 2 DEBUG nova.scheduler.client.report [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.684 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.684 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.748 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.767 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.782 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.914 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.916 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.917 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Creating image(s)
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.948 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:35 compute-0 nova_compute[260603]: 2025-10-02 08:40:35.980 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.011 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.015 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.085 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.085 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.086 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.086 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.108 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.111 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:36 compute-0 ceph-mon[74477]: pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 02 08:40:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3793537995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.720 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394421.7187707, 4145f1a3-c327-49ee-9af1-1ace3afb70a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.721 2 INFO nova.compute.manager [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] VM Stopped (Lifecycle Event)
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.748 2 DEBUG nova.compute.manager [None req-49adead2-f86f-4611-9766-688f7f07cbc1 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.839 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.728s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:36 compute-0 nova_compute[260603]: 2025-10-02 08:40:36.951 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] resizing rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:40:37 compute-0 podman[358238]: 2025-10-02 08:40:37.038765264 +0000 UTC m=+0.097071869 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:40:37 compute-0 podman[358243]: 2025-10-02 08:40:37.047775354 +0000 UTC m=+0.094280284 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.152 2 DEBUG nova.objects.instance [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 15ffbe97-d811-45ba-af92-c1b624b8c8e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.173 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.174 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Ensure instance console log exists: /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.175 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.175 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.176 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.179 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.185 2 WARNING nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.191 2 DEBUG nova.virt.libvirt.host [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.192 2 DEBUG nova.virt.libvirt.host [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.197 2 DEBUG nova.virt.libvirt.host [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.198 2 DEBUG nova.virt.libvirt.host [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.198 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.199 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.200 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.200 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.201 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.201 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.202 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.202 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.203 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.203 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.204 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.205 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.210 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 57 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 393 KiB/s wr, 12 op/s
Oct 02 08:40:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:40:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355665263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.742 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.779 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:37 compute-0 nova_compute[260603]: 2025-10-02 08:40:37.786 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:40:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021264258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:40:38 compute-0 nova_compute[260603]: 2025-10-02 08:40:38.289 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:38 compute-0 nova_compute[260603]: 2025-10-02 08:40:38.293 2 DEBUG nova.objects.instance [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 15ffbe97-d811-45ba-af92-c1b624b8c8e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:38 compute-0 nova_compute[260603]: 2025-10-02 08:40:38.319 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <uuid>15ffbe97-d811-45ba-af92-c1b624b8c8e6</uuid>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <name>instance-00000062</name>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersAaction247Test-server-56882368</nova:name>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:40:37</nova:creationTime>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <nova:user uuid="c96d3d2fc2b7400ebc99a6eea45149bb">tempest-ServersAaction247Test-1292878896-project-member</nova:user>
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <nova:project uuid="2b4277ce019f42ae884ccde2057ab4a0">tempest-ServersAaction247Test-1292878896</nova:project>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <system>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <entry name="serial">15ffbe97-d811-45ba-af92-c1b624b8c8e6</entry>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <entry name="uuid">15ffbe97-d811-45ba-af92-c1b624b8c8e6</entry>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     </system>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <os>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   </os>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <features>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   </features>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk">
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config">
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       </source>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:40:38 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/console.log" append="off"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <video>
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     </video>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:40:38 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:40:38 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:40:38 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:40:38 compute-0 nova_compute[260603]: </domain>
Oct 02 08:40:38 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:40:38 compute-0 ceph-mon[74477]: pgmap v1847: 305 pgs: 305 active+clean; 57 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 393 KiB/s wr, 12 op/s
Oct 02 08:40:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2355665263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:40:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1021264258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:40:38 compute-0 nova_compute[260603]: 2025-10-02 08:40:38.400 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:40:38 compute-0 nova_compute[260603]: 2025-10-02 08:40:38.401 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:40:38 compute-0 nova_compute[260603]: 2025-10-02 08:40:38.401 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Using config drive
Oct 02 08:40:38 compute-0 nova_compute[260603]: 2025-10-02 08:40:38.423 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.497344452041415e-05 of space, bias 1.0, pg target 0.022492033356124243 quantized to 32 (current 32)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:40:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:40:39 compute-0 nova_compute[260603]: 2025-10-02 08:40:39.073 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Creating config drive at /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config
Oct 02 08:40:39 compute-0 nova_compute[260603]: 2025-10-02 08:40:39.078 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4vovsovh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:39 compute-0 nova_compute[260603]: 2025-10-02 08:40:39.226 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4vovsovh" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:39 compute-0 nova_compute[260603]: 2025-10-02 08:40:39.250 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:40:39 compute-0 nova_compute[260603]: 2025-10-02 08:40:39.253 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:40:39 compute-0 nova_compute[260603]: 2025-10-02 08:40:39.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:39 compute-0 nova_compute[260603]: 2025-10-02 08:40:39.484 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:39 compute-0 nova_compute[260603]: 2025-10-02 08:40:39.485 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Deleting local config drive /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config because it was imported into RBD.
Oct 02 08:40:39 compute-0 systemd-machined[214636]: New machine qemu-124-instance-00000062.
Oct 02 08:40:39 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000062.
Oct 02 08:40:40 compute-0 ceph-mon[74477]: pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.540 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394440.5400782, 15ffbe97-d811-45ba-af92-c1b624b8c8e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.541 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] VM Resumed (Lifecycle Event)
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.544 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.544 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.548 2 INFO nova.virt.libvirt.driver [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance spawned successfully.
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.548 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.571 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.578 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.581 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.582 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.582 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.582 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.583 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.583 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.603 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.603 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394440.5413153, 15ffbe97-d811-45ba-af92-c1b624b8c8e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.604 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] VM Started (Lifecycle Event)
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.629 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.632 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.637 2 INFO nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Took 4.72 seconds to spawn the instance on the hypervisor.
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.637 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.657 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.709 2 INFO nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Took 5.83 seconds to build instance.
Oct 02 08:40:40 compute-0 nova_compute[260603]: 2025-10-02 08:40:40.729 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:40:41 compute-0 nova_compute[260603]: 2025-10-02 08:40:41.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:42 compute-0 ceph-mon[74477]: pgmap v1849: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:40:42 compute-0 nova_compute[260603]: 2025-10-02 08:40:42.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:40:42 compute-0 nova_compute[260603]: 2025-10-02 08:40:42.874 2 DEBUG nova.compute.manager [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:42 compute-0 nova_compute[260603]: 2025-10-02 08:40:42.909 2 INFO nova.compute.manager [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] instance snapshotting
Oct 02 08:40:42 compute-0 nova_compute[260603]: 2025-10-02 08:40:42.910 2 DEBUG nova.objects.instance [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lazy-loading 'flavor' on Instance uuid 15ffbe97-d811-45ba-af92-c1b624b8c8e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.081 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.081 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.082 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.082 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.082 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.083 2 INFO nova.compute.manager [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Terminating instance
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.085 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "refresh_cache-15ffbe97-d811-45ba-af92-c1b624b8c8e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.085 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquired lock "refresh_cache-15ffbe97-d811-45ba-af92-c1b624b8c8e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.085 2 DEBUG nova.network.neutron [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:40:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.349 2 INFO nova.virt.libvirt.driver [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Beginning live snapshot process
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.354 2 DEBUG nova.network.neutron [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:40:43 compute-0 nova_compute[260603]: 2025-10-02 08:40:43.408 2 DEBUG nova.compute.manager [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.097 2 DEBUG nova.network.neutron [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.123 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Releasing lock "refresh_cache-15ffbe97-d811-45ba-af92-c1b624b8c8e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.124 2 DEBUG nova.compute.manager [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:40:44 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 02 08:40:44 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000062.scope: Consumed 4.682s CPU time.
Oct 02 08:40:44 compute-0 systemd-machined[214636]: Machine qemu-124-instance-00000062 terminated.
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.351 2 INFO nova.virt.libvirt.driver [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance destroyed successfully.
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.351 2 DEBUG nova.objects.instance [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lazy-loading 'resources' on Instance uuid 15ffbe97-d811-45ba-af92-c1b624b8c8e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:40:44 compute-0 ceph-mon[74477]: pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.841 2 DEBUG nova.compute.manager [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.919 2 INFO nova.virt.libvirt.driver [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Deleting instance files /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6_del
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.920 2 INFO nova.virt.libvirt.driver [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Deletion of /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6_del complete
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.967 2 INFO nova.compute.manager [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.967 2 DEBUG oslo.service.loopingcall [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.968 2 DEBUG nova.compute.manager [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:40:44 compute-0 nova_compute[260603]: 2025-10-02 08:40:44.968 2 DEBUG nova.network.neutron [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:40:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.296 2 DEBUG nova.network.neutron [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.313 2 DEBUG nova.network.neutron [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.329 2 INFO nova.compute.manager [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Took 0.36 seconds to deallocate network for instance.
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.405 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.405 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.460 2 DEBUG oslo_concurrency.processutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:40:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:40:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3884168485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.910 2 DEBUG oslo_concurrency.processutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.918 2 DEBUG nova.compute.provider_tree [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.943 2 DEBUG nova.scheduler.client.report [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:40:45 compute-0 nova_compute[260603]: 2025-10-02 08:40:45.977 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:46 compute-0 nova_compute[260603]: 2025-10-02 08:40:46.013 2 INFO nova.scheduler.client.report [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Deleted allocations for instance 15ffbe97-d811-45ba-af92-c1b624b8c8e6
Oct 02 08:40:46 compute-0 nova_compute[260603]: 2025-10-02 08:40:46.120 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:40:46 compute-0 ceph-mon[74477]: pgmap v1851: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:40:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3884168485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:40:46 compute-0 nova_compute[260603]: 2025-10-02 08:40:46.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 72 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Oct 02 08:40:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:48 compute-0 ceph-mon[74477]: pgmap v1852: 305 pgs: 305 active+clean; 72 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Oct 02 08:40:49 compute-0 nova_compute[260603]: 2025-10-02 08:40:49.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 114 op/s
Oct 02 08:40:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:50.142 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:40:50 compute-0 nova_compute[260603]: 2025-10-02 08:40:50.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:50.145 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:40:50 compute-0 ceph-mon[74477]: pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 114 op/s
Oct 02 08:40:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 02 08:40:51 compute-0 nova_compute[260603]: 2025-10-02 08:40:51.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:52 compute-0 sudo[358539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:52 compute-0 sudo[358539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:52 compute-0 sudo[358539]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:52 compute-0 sudo[358564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:40:52 compute-0 sudo[358564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:52 compute-0 sudo[358564]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:52 compute-0 sudo[358589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:52 compute-0 sudo[358589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:52 compute-0 sudo[358589]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:52 compute-0 sudo[358614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 08:40:52 compute-0 sudo[358614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:52 compute-0 ceph-mon[74477]: pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 02 08:40:52 compute-0 podman[358713]: 2025-10-02 08:40:52.648383519 +0000 UTC m=+0.064320793 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 08:40:52 compute-0 podman[358713]: 2025-10-02 08:40:52.759601353 +0000 UTC m=+0.175538637 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 08:40:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 02 08:40:53 compute-0 sudo[358614]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:40:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:40:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:40:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:40:53 compute-0 sudo[358870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:53 compute-0 sudo[358870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:53 compute-0 sudo[358870]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:53 compute-0 sudo[358895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:40:53 compute-0 sudo[358895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:53 compute-0 sudo[358895]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:53 compute-0 sudo[358920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:53 compute-0 sudo[358920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:53 compute-0 sudo[358920]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:53 compute-0 sudo[358945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:40:53 compute-0 sudo[358945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:54 compute-0 nova_compute[260603]: 2025-10-02 08:40:54.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:54 compute-0 sudo[358945]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 08:40:54 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:40:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:40:54 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:40:54 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:40:54 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ee90fa6a-0be5-4389-a477-789a3fcb8660 does not exist
Oct 02 08:40:54 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 906d3843-7a0c-438f-a24d-557767790f35 does not exist
Oct 02 08:40:54 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 938da2f1-00d2-4e39-944b-0e82f773ac4e does not exist
Oct 02 08:40:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:40:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:40:54 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:40:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:40:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:40:54 compute-0 sudo[359001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:54 compute-0 sudo[359001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:54 compute-0 sudo[359001]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:54 compute-0 sudo[359026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:40:54 compute-0 sudo[359026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:54 compute-0 sudo[359026]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:54 compute-0 sudo[359051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:54 compute-0 sudo[359051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:54 compute-0 sudo[359051]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:54 compute-0 sudo[359076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:40:54 compute-0 sudo[359076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:55 compute-0 podman[359141]: 2025-10-02 08:40:55.144131551 +0000 UTC m=+0.060908478 container create 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 08:40:55 compute-0 systemd[1]: Started libpod-conmon-58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6.scope.
Oct 02 08:40:55 compute-0 podman[359141]: 2025-10-02 08:40:55.113034069 +0000 UTC m=+0.029811066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:40:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:40:55 compute-0 podman[359141]: 2025-10-02 08:40:55.266265343 +0000 UTC m=+0.183042340 container init 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:40:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 08:40:55 compute-0 podman[359141]: 2025-10-02 08:40:55.277979305 +0000 UTC m=+0.194756232 container start 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 08:40:55 compute-0 podman[359141]: 2025-10-02 08:40:55.282462454 +0000 UTC m=+0.199239381 container attach 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 02 08:40:55 compute-0 peaceful_kirch[359157]: 167 167
Oct 02 08:40:55 compute-0 systemd[1]: libpod-58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6.scope: Deactivated successfully.
Oct 02 08:40:55 compute-0 podman[359141]: 2025-10-02 08:40:55.286231951 +0000 UTC m=+0.203008878 container died 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:40:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1f4d226d0de6b6debc025e8f3a0193381b68a268e108cb5f1b64e8e8751c76-merged.mount: Deactivated successfully.
Oct 02 08:40:55 compute-0 podman[359141]: 2025-10-02 08:40:55.338888461 +0000 UTC m=+0.255665388 container remove 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:40:55 compute-0 systemd[1]: libpod-conmon-58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6.scope: Deactivated successfully.
Oct 02 08:40:55 compute-0 podman[359180]: 2025-10-02 08:40:55.597218251 +0000 UTC m=+0.069584185 container create fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 08:40:55 compute-0 systemd[1]: Started libpod-conmon-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope.
Oct 02 08:40:55 compute-0 podman[359180]: 2025-10-02 08:40:55.569148212 +0000 UTC m=+0.041514176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:40:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:55 compute-0 podman[359180]: 2025-10-02 08:40:55.707344691 +0000 UTC m=+0.179710625 container init fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 08:40:55 compute-0 podman[359180]: 2025-10-02 08:40:55.724206373 +0000 UTC m=+0.196572277 container start fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:40:55 compute-0 podman[359180]: 2025-10-02 08:40:55.727637379 +0000 UTC m=+0.200003313 container attach fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 08:40:56 compute-0 ceph-mon[74477]: pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 08:40:56 compute-0 nova_compute[260603]: 2025-10-02 08:40:56.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:56 compute-0 gracious_cannon[359196]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:40:56 compute-0 gracious_cannon[359196]: --> relative data size: 1.0
Oct 02 08:40:56 compute-0 gracious_cannon[359196]: --> All data devices are unavailable
Oct 02 08:40:56 compute-0 systemd[1]: libpod-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope: Deactivated successfully.
Oct 02 08:40:56 compute-0 systemd[1]: libpod-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope: Consumed 1.042s CPU time.
Oct 02 08:40:56 compute-0 conmon[359196]: conmon fe00326d0360809addeb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope/container/memory.events
Oct 02 08:40:56 compute-0 podman[359180]: 2025-10-02 08:40:56.812253465 +0000 UTC m=+1.284619359 container died fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 08:40:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad-merged.mount: Deactivated successfully.
Oct 02 08:40:56 compute-0 podman[359180]: 2025-10-02 08:40:56.89412277 +0000 UTC m=+1.366488694 container remove fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 08:40:56 compute-0 systemd[1]: libpod-conmon-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope: Deactivated successfully.
Oct 02 08:40:56 compute-0 sudo[359076]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:57 compute-0 sudo[359239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:57 compute-0 sudo[359239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:57 compute-0 sudo[359239]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:57 compute-0 sudo[359264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:40:57 compute-0 sudo[359264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:57 compute-0 sudo[359264]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:40:57.147 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:40:57 compute-0 sudo[359289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:57 compute-0 sudo[359289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:57 compute-0 sudo[359289]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:57 compute-0 sudo[359314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:40:57 compute-0 sudo[359314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 08:40:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:40:57 compute-0 podman[359380]: 2025-10-02 08:40:57.777558036 +0000 UTC m=+0.085100576 container create e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:40:57 compute-0 systemd[1]: Started libpod-conmon-e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9.scope.
Oct 02 08:40:57 compute-0 podman[359380]: 2025-10-02 08:40:57.743978406 +0000 UTC m=+0.051520996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:40:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:40:57 compute-0 podman[359380]: 2025-10-02 08:40:57.889315967 +0000 UTC m=+0.196858547 container init e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:40:57 compute-0 podman[359380]: 2025-10-02 08:40:57.902545767 +0000 UTC m=+0.210088307 container start e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:40:57 compute-0 podman[359380]: 2025-10-02 08:40:57.906853279 +0000 UTC m=+0.214395819 container attach e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 08:40:57 compute-0 pedantic_colden[359396]: 167 167
Oct 02 08:40:57 compute-0 systemd[1]: libpod-e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9.scope: Deactivated successfully.
Oct 02 08:40:57 compute-0 podman[359380]: 2025-10-02 08:40:57.913088963 +0000 UTC m=+0.220631503 container died e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 08:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a50bf239757309a85d5a686e490d417cda79c4b018e27d5d379482852cbb70fc-merged.mount: Deactivated successfully.
Oct 02 08:40:57 compute-0 podman[359380]: 2025-10-02 08:40:57.970615294 +0000 UTC m=+0.278157834 container remove e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 08:40:57 compute-0 systemd[1]: libpod-conmon-e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9.scope: Deactivated successfully.
Oct 02 08:40:58 compute-0 podman[359420]: 2025-10-02 08:40:58.228423977 +0000 UTC m=+0.075268662 container create 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:40:58 compute-0 systemd[1]: Started libpod-conmon-8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067.scope.
Oct 02 08:40:58 compute-0 podman[359420]: 2025-10-02 08:40:58.195098965 +0000 UTC m=+0.041943690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:40:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:40:58 compute-0 podman[359420]: 2025-10-02 08:40:58.355225533 +0000 UTC m=+0.202070228 container init 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:40:58 compute-0 podman[359420]: 2025-10-02 08:40:58.366996428 +0000 UTC m=+0.213841083 container start 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:40:58 compute-0 podman[359420]: 2025-10-02 08:40:58.370817557 +0000 UTC m=+0.217662302 container attach 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:40:58 compute-0 ceph-mon[74477]: pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]: {
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:     "0": [
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:         {
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "devices": [
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "/dev/loop3"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             ],
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_name": "ceph_lv0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_size": "21470642176",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "name": "ceph_lv0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "tags": {
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cluster_name": "ceph",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.crush_device_class": "",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.encrypted": "0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osd_id": "0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.type": "block",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.vdo": "0"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             },
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "type": "block",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "vg_name": "ceph_vg0"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:         }
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:     ],
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:     "1": [
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:         {
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "devices": [
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "/dev/loop4"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             ],
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_name": "ceph_lv1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_size": "21470642176",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "name": "ceph_lv1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "tags": {
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cluster_name": "ceph",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.crush_device_class": "",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.encrypted": "0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osd_id": "1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.type": "block",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.vdo": "0"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             },
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "type": "block",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "vg_name": "ceph_vg1"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:         }
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:     ],
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:     "2": [
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:         {
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "devices": [
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "/dev/loop5"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             ],
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_name": "ceph_lv2",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_size": "21470642176",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "name": "ceph_lv2",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "tags": {
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.cluster_name": "ceph",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.crush_device_class": "",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.encrypted": "0",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osd_id": "2",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.type": "block",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:                 "ceph.vdo": "0"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             },
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "type": "block",
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:             "vg_name": "ceph_vg2"
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:         }
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]:     ]
Oct 02 08:40:59 compute-0 flamboyant_faraday[359436]: }
Oct 02 08:40:59 compute-0 systemd[1]: libpod-8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067.scope: Deactivated successfully.
Oct 02 08:40:59 compute-0 podman[359420]: 2025-10-02 08:40:59.166815535 +0000 UTC m=+1.013660210 container died 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:40:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75-merged.mount: Deactivated successfully.
Oct 02 08:40:59 compute-0 podman[359420]: 2025-10-02 08:40:59.251953751 +0000 UTC m=+1.098798436 container remove 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 08:40:59 compute-0 nova_compute[260603]: 2025-10-02 08:40:59.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:40:59 compute-0 systemd[1]: libpod-conmon-8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067.scope: Deactivated successfully.
Oct 02 08:40:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 13 op/s
Oct 02 08:40:59 compute-0 sudo[359314]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:59 compute-0 nova_compute[260603]: 2025-10-02 08:40:59.348 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394444.34739, 15ffbe97-d811-45ba-af92-c1b624b8c8e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:40:59 compute-0 nova_compute[260603]: 2025-10-02 08:40:59.349 2 INFO nova.compute.manager [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] VM Stopped (Lifecycle Event)
Oct 02 08:40:59 compute-0 nova_compute[260603]: 2025-10-02 08:40:59.374 2 DEBUG nova.compute.manager [None req-023ebbee-0528-448c-a9ab-c68cae939e08 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:40:59 compute-0 podman[359455]: 2025-10-02 08:40:59.415793834 +0000 UTC m=+0.111220665 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:40:59 compute-0 sudo[359464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:59 compute-0 sudo[359464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:59 compute-0 sudo[359464]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:59 compute-0 sudo[359506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:40:59 compute-0 sudo[359506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:59 compute-0 sudo[359506]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:59 compute-0 podman[359494]: 2025-10-02 08:40:59.603670612 +0000 UTC m=+0.169009925 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 08:40:59 compute-0 sudo[359545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:40:59 compute-0 sudo[359545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:40:59 compute-0 sudo[359545]: pam_unix(sudo:session): session closed for user root
Oct 02 08:40:59 compute-0 sudo[359573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:40:59 compute-0 sudo[359573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:41:00 compute-0 podman[359637]: 2025-10-02 08:41:00.182788215 +0000 UTC m=+0.059539945 container create 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 08:41:00 compute-0 systemd[1]: Started libpod-conmon-29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e.scope.
Oct 02 08:41:00 compute-0 podman[359637]: 2025-10-02 08:41:00.161903648 +0000 UTC m=+0.038655388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:41:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:41:00 compute-0 podman[359637]: 2025-10-02 08:41:00.28789797 +0000 UTC m=+0.164649660 container init 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 08:41:00 compute-0 podman[359637]: 2025-10-02 08:41:00.298765766 +0000 UTC m=+0.175517456 container start 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 08:41:00 compute-0 podman[359637]: 2025-10-02 08:41:00.302173732 +0000 UTC m=+0.178925412 container attach 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 08:41:00 compute-0 reverent_heisenberg[359653]: 167 167
Oct 02 08:41:00 compute-0 systemd[1]: libpod-29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e.scope: Deactivated successfully.
Oct 02 08:41:00 compute-0 conmon[359653]: conmon 29708dd7f4f6bed4d0be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e.scope/container/memory.events
Oct 02 08:41:00 compute-0 podman[359637]: 2025-10-02 08:41:00.309575821 +0000 UTC m=+0.186327521 container died 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 08:41:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9e5222714f508afb08053d8257e06e8031a3d8ddab447be872b544d6adf5065-merged.mount: Deactivated successfully.
Oct 02 08:41:00 compute-0 podman[359637]: 2025-10-02 08:41:00.360664622 +0000 UTC m=+0.237416352 container remove 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 08:41:00 compute-0 nova_compute[260603]: 2025-10-02 08:41:00.369 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:00 compute-0 nova_compute[260603]: 2025-10-02 08:41:00.372 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:00 compute-0 systemd[1]: libpod-conmon-29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e.scope: Deactivated successfully.
Oct 02 08:41:00 compute-0 nova_compute[260603]: 2025-10-02 08:41:00.408 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:41:00 compute-0 ceph-mon[74477]: pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 13 op/s
Oct 02 08:41:00 compute-0 nova_compute[260603]: 2025-10-02 08:41:00.488 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:00 compute-0 nova_compute[260603]: 2025-10-02 08:41:00.488 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:00 compute-0 nova_compute[260603]: 2025-10-02 08:41:00.497 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:41:00 compute-0 nova_compute[260603]: 2025-10-02 08:41:00.498 2 INFO nova.compute.claims [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:41:00 compute-0 nova_compute[260603]: 2025-10-02 08:41:00.603 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:00 compute-0 podman[359676]: 2025-10-02 08:41:00.633910834 +0000 UTC m=+0.081226566 container create e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 08:41:00 compute-0 systemd[1]: Started libpod-conmon-e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b.scope.
Oct 02 08:41:00 compute-0 podman[359676]: 2025-10-02 08:41:00.605962048 +0000 UTC m=+0.053277870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:41:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:41:00 compute-0 podman[359676]: 2025-10-02 08:41:00.750314689 +0000 UTC m=+0.197630441 container init e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 08:41:00 compute-0 podman[359676]: 2025-10-02 08:41:00.757147739 +0000 UTC m=+0.204463461 container start e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:41:00 compute-0 podman[359676]: 2025-10-02 08:41:00.76167835 +0000 UTC m=+0.208994112 container attach e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 08:41:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:41:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111868253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.128 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.134 2 DEBUG nova.compute.provider_tree [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.155 2 DEBUG nova.scheduler.client.report [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.177 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.177 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.238 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.239 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.260 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:41:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.277 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.387 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.389 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.389 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Creating image(s)
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.411 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.434 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.455 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.458 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3111868253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.534 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.535 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.535 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.535 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.555 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.558 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.677 2 DEBUG nova.policy [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:41:01 compute-0 stoic_wing[359692]: {
Oct 02 08:41:01 compute-0 stoic_wing[359692]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "osd_id": 2,
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "type": "bluestore"
Oct 02 08:41:01 compute-0 stoic_wing[359692]:     },
Oct 02 08:41:01 compute-0 stoic_wing[359692]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "osd_id": 1,
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "type": "bluestore"
Oct 02 08:41:01 compute-0 stoic_wing[359692]:     },
Oct 02 08:41:01 compute-0 stoic_wing[359692]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "osd_id": 0,
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:41:01 compute-0 stoic_wing[359692]:         "type": "bluestore"
Oct 02 08:41:01 compute-0 stoic_wing[359692]:     }
Oct 02 08:41:01 compute-0 stoic_wing[359692]: }
Oct 02 08:41:01 compute-0 systemd[1]: libpod-e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b.scope: Deactivated successfully.
Oct 02 08:41:01 compute-0 systemd[1]: libpod-e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b.scope: Consumed 1.024s CPU time.
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:01 compute-0 podman[359840]: 2025-10-02 08:41:01.837118221 +0000 UTC m=+0.029064311 container died e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 08:41:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b-merged.mount: Deactivated successfully.
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.905 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:01 compute-0 podman[359840]: 2025-10-02 08:41:01.923633471 +0000 UTC m=+0.115579491 container remove e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 08:41:01 compute-0 systemd[1]: libpod-conmon-e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b.scope: Deactivated successfully.
Oct 02 08:41:01 compute-0 sudo[359573]: pam_unix(sudo:session): session closed for user root
Oct 02 08:41:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:41:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:41:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:41:01 compute-0 nova_compute[260603]: 2025-10-02 08:41:01.992 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:41:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:41:01 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5ecbf165-9a7b-4810-990f-0ddd13bcc634 does not exist
Oct 02 08:41:01 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 7c751c96-4cb4-470a-9f9a-cd8b9f56ecfb does not exist
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.032 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.032 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.058 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:41:02 compute-0 sudo[359905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.090 2 DEBUG nova.objects.instance [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:02 compute-0 sudo[359905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:41:02 compute-0 sudo[359905]: pam_unix(sudo:session): session closed for user root
Oct 02 08:41:02 compute-0 sudo[359952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:41:02 compute-0 sudo[359952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:41:02 compute-0 sudo[359952]: pam_unix(sudo:session): session closed for user root
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.165 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.166 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Ensure instance console log exists: /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.166 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.166 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.167 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.179 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.180 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.185 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.185 2 INFO nova.compute.claims [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.297 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:41:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19851933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.782 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.793 2 DEBUG nova.compute.provider_tree [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.804 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Successfully created port: f333ef16-60e3-449a-a6e7-4e7435c4ee30 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.816 2 DEBUG nova.scheduler.client.report [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.848 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.849 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.928 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.929 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.950 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:41:02 compute-0 nova_compute[260603]: 2025-10-02 08:41:02.978 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:41:02 compute-0 ceph-mon[74477]: pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:41:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:41:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:41:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/19851933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.095 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.098 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.098 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Creating image(s)
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.134 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.163 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.191 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.195 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.270 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 88 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.271 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.272 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.272 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.298 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.301 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.541 2 DEBUG nova.policy [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923a2daca06b4bf98c21b2604971789f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a0738f9f0c2244bb8bc7a350d5ee5932', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.602 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.654 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] resizing rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.738 2 DEBUG nova.objects.instance [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lazy-loading 'migration_context' on Instance uuid fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.754 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.755 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Ensure instance console log exists: /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.755 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.755 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:03 compute-0 nova_compute[260603]: 2025-10-02 08:41:03.756 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.011 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Successfully updated port: f333ef16-60e3-449a-a6e7-4e7435c4ee30 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.040 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.040 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.040 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.175 2 DEBUG nova.compute.manager [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.176 2 DEBUG nova.compute.manager [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing instance network info cache due to event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.177 2 DEBUG oslo_concurrency.lockutils [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.243 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:04 compute-0 nova_compute[260603]: 2025-10-02 08:41:04.451 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Successfully created port: c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:41:04 compute-0 ceph-mon[74477]: pgmap v1860: 305 pgs: 305 active+clean; 88 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:41:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 88 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.572 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.596 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.596 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance network_info: |[{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.597 2 DEBUG oslo_concurrency.lockutils [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.597 2 DEBUG nova.network.neutron [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.601 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start _get_guest_xml network_info=[{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.605 2 WARNING nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.611 2 DEBUG nova.virt.libvirt.host [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.612 2 DEBUG nova.virt.libvirt.host [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.616 2 DEBUG nova.virt.libvirt.host [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.617 2 DEBUG nova.virt.libvirt.host [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.617 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.618 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.618 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.619 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.619 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.619 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.619 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.620 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.620 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.620 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.621 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.621 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.624 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.753 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Successfully updated port: c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.772 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.772 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquired lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.772 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.937 2 DEBUG nova.compute.manager [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-changed-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.938 2 DEBUG nova.compute.manager [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Refreshing instance network info cache due to event network-changed-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:41:05 compute-0 nova_compute[260603]: 2025-10-02 08:41:05.938 2 DEBUG oslo_concurrency.lockutils [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:41:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006417553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.070 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.091 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.094 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.134 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:41:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:41:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258135181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.538 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.539 2 DEBUG nova.virt.libvirt.vif [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=99,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEVRDNLCh7cGxmY8USPZ+Uw2QoectKaA792o7mYLTgKSdgVOdamESEYTMfoSKYLlGXQvF8R+2N9sWb5Rc56fBJj8i+BKngnG3c4da6Xe6b+B/+hyHhFsw+SA5clIFi8Ulg==',key_name='tempest-TestSecurityGroupsBasicOps-2045998300',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-r7zcjfhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:01Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=ddf9efd0-0ac6-4857-96ea-3f1d0e18590d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.540 2 DEBUG nova.network.os_vif_util [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.540 2 DEBUG nova.network.os_vif_util [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.541 2 DEBUG nova.objects.instance [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.557 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <uuid>ddf9efd0-0ac6-4857-96ea-3f1d0e18590d</uuid>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <name>instance-00000063</name>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006</nova:name>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:41:05</nova:creationTime>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <nova:port uuid="f333ef16-60e3-449a-a6e7-4e7435c4ee30">
Oct 02 08:41:06 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <system>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <entry name="serial">ddf9efd0-0ac6-4857-96ea-3f1d0e18590d</entry>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <entry name="uuid">ddf9efd0-0ac6-4857-96ea-3f1d0e18590d</entry>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </system>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <os>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   </os>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <features>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   </features>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk">
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       </source>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config">
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       </source>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:41:06 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:2c:fb:ed"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <target dev="tapf333ef16-60"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/console.log" append="off"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <video>
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </video>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:41:06 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:41:06 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:41:06 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:41:06 compute-0 nova_compute[260603]: </domain>
Oct 02 08:41:06 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.558 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Preparing to wait for external event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.558 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.558 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.559 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.559 2 DEBUG nova.virt.libvirt.vif [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=99,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEVRDNLCh7cGxmY8USPZ+Uw2QoectKaA792o7mYLTgKSdgVOdamESEYTMfoSKYLlGXQvF8R+2N9sWb5Rc56fBJj8i+BKngnG3c4da6Xe6b+B/+hyHhFsw+SA5clIFi8Ulg==',key_name='tempest-TestSecurityGroupsBasicOps-2045998300',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-r7zcjfhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:01Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=ddf9efd0-0ac6-4857-96ea-3f1d0e18590d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.560 2 DEBUG nova.network.os_vif_util [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.560 2 DEBUG nova.network.os_vif_util [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.561 2 DEBUG os_vif [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf333ef16-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf333ef16-60, col_values=(('external_ids', {'iface-id': 'f333ef16-60e3-449a-a6e7-4e7435c4ee30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:fb:ed', 'vm-uuid': 'ddf9efd0-0ac6-4857-96ea-3f1d0e18590d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:06 compute-0 NetworkManager[45129]: <info>  [1759394466.5679] manager: (tapf333ef16-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.575 2 INFO os_vif [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60')
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.630 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.631 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.631 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:2c:fb:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.631 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Using config drive
Oct 02 08:41:06 compute-0 nova_compute[260603]: 2025-10-02 08:41:06.652 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:07 compute-0 ceph-mon[74477]: pgmap v1861: 305 pgs: 305 active+clean; 88 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:41:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4006417553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1258135181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 103 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.4 MiB/s wr, 29 op/s
Oct 02 08:41:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:08 compute-0 podman[360249]: 2025-10-02 08:41:08.052141922 +0000 UTC m=+0.097512180 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:41:08 compute-0 podman[360248]: 2025-10-02 08:41:08.060987746 +0000 UTC m=+0.106443567 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.149 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Creating config drive at /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.157 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7isdnid execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:08 compute-0 ceph-mon[74477]: pgmap v1862: 305 pgs: 305 active+clean; 103 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.4 MiB/s wr, 29 op/s
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.312 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7isdnid" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.343 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.348 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.455 2 DEBUG nova.network.neutron [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated VIF entry in instance network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.456 2 DEBUG nova.network.neutron [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.485 2 DEBUG oslo_concurrency.lockutils [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.654 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.656 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Deleting local config drive /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config because it was imported into RBD.
Oct 02 08:41:08 compute-0 kernel: tapf333ef16-60: entered promiscuous mode
Oct 02 08:41:08 compute-0 NetworkManager[45129]: <info>  [1759394468.7281] manager: (tapf333ef16-60): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Oct 02 08:41:08 compute-0 ovn_controller[152344]: 2025-10-02T08:41:08Z|01017|binding|INFO|Claiming lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 for this chassis.
Oct 02 08:41:08 compute-0 ovn_controller[152344]: 2025-10-02T08:41:08Z|01018|binding|INFO|f333ef16-60e3-449a-a6e7-4e7435c4ee30: Claiming fa:16:3e:2c:fb:ed 10.100.0.5
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.748 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:fb:ed 10.100.0.5'], port_security=['fa:16:3e:2c:fb:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ddf9efd0-0ac6-4857-96ea-3f1d0e18590d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e1b840f-6fce-4ed8-898b-67d0829d6af4 8ef1bc1b-1bb2-413b-9f26-26e2f642b996', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2a6eb5-cf00-4e48-b8dd-b9d998ac802a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f333ef16-60e3-449a-a6e7-4e7435c4ee30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.750 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f333ef16-60e3-449a-a6e7-4e7435c4ee30 in datapath 64b91b42-84b6-4429-b137-6399bf4f6ccd bound to our chassis
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.751 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64b91b42-84b6-4429-b137-6399bf4f6ccd
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3706b93a-dbdd-483a-8c15-75a270d06149]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.764 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64b91b42-81 in ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:41:08 compute-0 systemd-udevd[360339]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.766 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64b91b42-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.766 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e632d1d9-f903-4268-96e0-f1d501b93daf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.767 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6167f98d-85f8-4feb-b0bb-277259cf3272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 NetworkManager[45129]: <info>  [1759394468.7834] device (tapf333ef16-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:41:08 compute-0 NetworkManager[45129]: <info>  [1759394468.7844] device (tapf333ef16-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:41:08 compute-0 systemd-machined[214636]: New machine qemu-125-instance-00000063.
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.786 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5e628c30-a39d-4fe3-a703-fd272980a566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.811 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18d2ebdb-846f-43be-a154-4de3556e3612]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000063.
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:08 compute-0 ovn_controller[152344]: 2025-10-02T08:41:08Z|01019|binding|INFO|Setting lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 ovn-installed in OVS
Oct 02 08:41:08 compute-0 ovn_controller[152344]: 2025-10-02T08:41:08Z|01020|binding|INFO|Setting lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 up in Southbound
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.844 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a95923c-b245-41d0-a9f6-08453ba9019a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 NetworkManager[45129]: <info>  [1759394468.8515] manager: (tap64b91b42-80): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.850 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85855abf-bd5f-496b-8c2f-0463e4f2293d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.884 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8d235a-3db2-44a2-8093-f2d0fd7fad88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.887 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea50344-a627-4d1a-9c0d-9aca8a17d5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 NetworkManager[45129]: <info>  [1759394468.9114] device (tap64b91b42-80): carrier: link connected
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.911 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Updating instance_info_cache with network_info: [{"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.916 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[15765eaf-4965-4a2e-800f-af74451bc245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.930 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7252700b-9ffc-40db-a47f-819f0910d0af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64b91b42-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:92:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534751, 'reachable_time': 24391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360372, 'error': None, 'target': 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.940 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Releasing lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.941 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance network_info: |[{"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.941 2 DEBUG oslo_concurrency.lockutils [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.941 2 DEBUG nova.network.neutron [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Refreshing network info cache for port c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.945 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start _get_guest_xml network_info=[{"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.947 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[562064a6-9d41-4823-a849-d3e8e8734cd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:92ef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534751, 'tstamp': 534751}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360373, 'error': None, 'target': 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.949 2 WARNING nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.963 2 DEBUG nova.virt.libvirt.host [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.964 2 DEBUG nova.virt.libvirt.host [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.965 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e24c45fe-99d8-4548-a45e-91b5aa6d3aef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64b91b42-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:92:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534751, 'reachable_time': 24391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360374, 'error': None, 'target': 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.969 2 DEBUG nova.virt.libvirt.host [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.969 2 DEBUG nova.virt.libvirt.host [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.970 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.970 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.970 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.972 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.972 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.972 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.972 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:41:08 compute-0 nova_compute[260603]: 2025-10-02 08:41:08.975 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.994 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc6efb3-8796-49ac-ac33-3a03a1a13f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.046 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44d7a13b-3f50-4f04-8154-a588e8668ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.047 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b91b42-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.048 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.048 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64b91b42-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:09 compute-0 kernel: tap64b91b42-80: entered promiscuous mode
Oct 02 08:41:09 compute-0 NetworkManager[45129]: <info>  [1759394469.0506] manager: (tap64b91b42-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.052 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64b91b42-80, col_values=(('external_ids', {'iface-id': '6a4c46b3-20fe-4d13-90df-77828898d571'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:09 compute-0 ovn_controller[152344]: 2025-10-02T08:41:09Z|01021|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.070 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64b91b42-84b6-4429-b137-6399bf4f6ccd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64b91b42-84b6-4429-b137-6399bf4f6ccd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.071 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e718e6b6-8a70-472a-85f7-413e52559f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.072 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-64b91b42-84b6-4429-b137-6399bf4f6ccd
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/64b91b42-84b6-4429-b137-6399bf4f6ccd.pid.haproxy
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 64b91b42-84b6-4429-b137-6399bf4f6ccd
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:41:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.076 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'env', 'PROCESS_TAG=haproxy-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64b91b42-84b6-4429-b137-6399bf4f6ccd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.105 2 DEBUG nova.compute.manager [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.106 2 DEBUG oslo_concurrency.lockutils [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.106 2 DEBUG oslo_concurrency.lockutils [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.106 2 DEBUG oslo_concurrency.lockutils [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.107 2 DEBUG nova.compute.manager [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Processing event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:41:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:41:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097377714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.449 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.480 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:09 compute-0 podman[360426]: 2025-10-02 08:41:09.48206726 +0000 UTC m=+0.055075477 container create 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 08:41:09 compute-0 nova_compute[260603]: 2025-10-02 08:41:09.496 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:09 compute-0 systemd[1]: Started libpod-conmon-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a.scope.
Oct 02 08:41:09 compute-0 podman[360426]: 2025-10-02 08:41:09.452941989 +0000 UTC m=+0.025950206 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:41:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528c61d1dc02de92e13fd9335f527a4374ee7687c385393aa257801e40808ba5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:41:09 compute-0 podman[360426]: 2025-10-02 08:41:09.616884465 +0000 UTC m=+0.189892722 container init 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 08:41:09 compute-0 podman[360426]: 2025-10-02 08:41:09.627417671 +0000 UTC m=+0.200425888 container start 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:41:09 compute-0 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [NOTICE]   (360467) : New worker (360470) forked
Oct 02 08:41:09 compute-0 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [NOTICE]   (360467) : Loading success.
Oct 02 08:41:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:41:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712152692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.035 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.037 2 DEBUG nova.virt.libvirt.vif [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-285132923',display_name='tempest-ServerAddressesNegativeTestJSON-server-285132923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-285132923',id=100,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0738f9f0c2244bb8bc7a350d5ee5932',ramdisk_id='',reservation_id='r-pzphyj7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1449456229',owner_user_nam
e='tempest-ServerAddressesNegativeTestJSON-1449456229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:03Z,user_data=None,user_id='923a2daca06b4bf98c21b2604971789f',uuid=fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.037 2 DEBUG nova.network.os_vif_util [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converting VIF {"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.038 2 DEBUG nova.network.os_vif_util [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.039 2 DEBUG nova.objects.instance [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lazy-loading 'pci_devices' on Instance uuid fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.056 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <uuid>fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf</uuid>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <name>instance-00000064</name>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-285132923</nova:name>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:41:08</nova:creationTime>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <nova:user uuid="923a2daca06b4bf98c21b2604971789f">tempest-ServerAddressesNegativeTestJSON-1449456229-project-member</nova:user>
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <nova:project uuid="a0738f9f0c2244bb8bc7a350d5ee5932">tempest-ServerAddressesNegativeTestJSON-1449456229</nova:project>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <nova:port uuid="c1ea98b4-5e4c-459b-b4ec-2da19438e4e9">
Oct 02 08:41:10 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <system>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <entry name="serial">fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf</entry>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <entry name="uuid">fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf</entry>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </system>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <os>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   </os>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <features>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   </features>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk">
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       </source>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config">
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       </source>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:41:10 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:c5:45:0b"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <target dev="tapc1ea98b4-5e"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/console.log" append="off"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <video>
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </video>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:41:10 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:41:10 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:41:10 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:41:10 compute-0 nova_compute[260603]: </domain>
Oct 02 08:41:10 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.058 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Preparing to wait for external event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.059 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.059 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.060 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.061 2 DEBUG nova.virt.libvirt.vif [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-285132923',display_name='tempest-ServerAddressesNegativeTestJSON-server-285132923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-285132923',id=100,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0738f9f0c2244bb8bc7a350d5ee5932',ramdisk_id='',reservation_id='r-pzphyj7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1449456229',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1449456229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:03Z,user_data=None,user_id='923a2daca06b4bf98c21b2604971789f',uuid=fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.062 2 DEBUG nova.network.os_vif_util [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converting VIF {"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.065 2 DEBUG nova.network.os_vif_util [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.066 2 DEBUG os_vif [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.069 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ea98b4-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.075 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1ea98b4-5e, col_values=(('external_ids', {'iface-id': 'c1ea98b4-5e4c-459b-b4ec-2da19438e4e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:45:0b', 'vm-uuid': 'fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:10 compute-0 NetworkManager[45129]: <info>  [1759394470.0782] manager: (tapc1ea98b4-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.085 2 INFO os_vif [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e')
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.149 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.150 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.150 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] No VIF found with MAC fa:16:3e:c5:45:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.151 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Using config drive
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.173 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:10 compute-0 unix_chkpwd[360562]: password check failed for user (root)
Oct 02 08:41:10 compute-0 sshd-session[360287]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=106.36.198.78  user=root
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.310 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394470.3103132, ddf9efd0-0ac6-4857-96ea-3f1d0e18590d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.311 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] VM Started (Lifecycle Event)
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.313 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.317 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.321 2 INFO nova.virt.libvirt.driver [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance spawned successfully.
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.322 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:41:10 compute-0 ceph-mon[74477]: pgmap v1863: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:41:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4097377714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2712152692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.336 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.341 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.344 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.344 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.345 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.345 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.346 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.346 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.370 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.370 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394470.3105607, ddf9efd0-0ac6-4857-96ea-3f1d0e18590d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.370 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] VM Paused (Lifecycle Event)
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.395 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.398 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394470.3165314, ddf9efd0-0ac6-4857-96ea-3f1d0e18590d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.398 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] VM Resumed (Lifecycle Event)
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.412 2 INFO nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Took 9.02 seconds to spawn the instance on the hypervisor.
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.412 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.420 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.422 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.447 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.473 2 INFO nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Took 10.02 seconds to build instance.
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.491 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.795 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Creating config drive at /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.803 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1rnpli6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:10 compute-0 nova_compute[260603]: 2025-10-02 08:41:10.966 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1rnpli6" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.003 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.008 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.194 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.195 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Deleting local config drive /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config because it was imported into RBD.
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.240 2 DEBUG nova.network.neutron [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Updated VIF entry in instance network info cache for port c1ea98b4-5e4c-459b-b4ec-2da19438e4e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.240 2 DEBUG nova.network.neutron [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Updating instance_info_cache with network_info: [{"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:11 compute-0 kernel: tapc1ea98b4-5e: entered promiscuous mode
Oct 02 08:41:11 compute-0 NetworkManager[45129]: <info>  [1759394471.2528] manager: (tapc1ea98b4-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Oct 02 08:41:11 compute-0 ovn_controller[152344]: 2025-10-02T08:41:11Z|01022|binding|INFO|Claiming lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 for this chassis.
Oct 02 08:41:11 compute-0 ovn_controller[152344]: 2025-10-02T08:41:11Z|01023|binding|INFO|c1ea98b4-5e4c-459b-b4ec-2da19438e4e9: Claiming fa:16:3e:c5:45:0b 10.100.0.8
Oct 02 08:41:11 compute-0 systemd-udevd[360355]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.264 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:45:0b 10.100.0.8'], port_security=['fa:16:3e:c5:45:0b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0738f9f0c2244bb8bc7a350d5ee5932', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64c07fa8-bcda-4bfb-b1e2-06de2925c622', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84a99230-30ce-44c4-9392-f1b665c403c0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.267 2 DEBUG oslo_concurrency.lockutils [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.266 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 in datapath 28754b87-0f3d-4084-a6ca-7b48ab8fade9 bound to our chassis
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.267 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28754b87-0f3d-4084-a6ca-7b48ab8fade9
Oct 02 08:41:11 compute-0 NetworkManager[45129]: <info>  [1759394471.2713] device (tapc1ea98b4-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:41:11 compute-0 NetworkManager[45129]: <info>  [1759394471.2726] device (tapc1ea98b4-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:41:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.290 2 DEBUG nova.compute.manager [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.291 2 DEBUG oslo_concurrency.lockutils [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.291 2 DEBUG oslo_concurrency.lockutils [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.291 2 DEBUG oslo_concurrency.lockutils [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.291 2 DEBUG nova.compute.manager [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] No waiting events found dispatching network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.291 2 WARNING nova.compute.manager [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received unexpected event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 for instance with vm_state active and task_state None.
Oct 02 08:41:11 compute-0 systemd-machined[214636]: New machine qemu-126-instance-00000064.
Oct 02 08:41:11 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000064.
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:11 compute-0 ovn_controller[152344]: 2025-10-02T08:41:11Z|01024|binding|INFO|Setting lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 ovn-installed in OVS
Oct 02 08:41:11 compute-0 ovn_controller[152344]: 2025-10-02T08:41:11Z|01025|binding|INFO|Setting lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 up in Southbound
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.364 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[80d88df4-5574-4508-83f2-7b2c3385089d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.365 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28754b87-01 in ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.367 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28754b87-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.367 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[382f1623-7545-4a1e-a8e3-2409953e8fad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.368 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[24cced58-7dc4-440e-b66c-2e844b584891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.386 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6c47b3bd-2e36-47f3-956f-b110f4703311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a003fdb4-6522-4d1e-9e6e-325824020c2e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.436 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[86d12928-f1d4-4766-8f19-bfd6c8b94577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 NetworkManager[45129]: <info>  [1759394471.4443] manager: (tap28754b87-00): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.444 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9c5882-cfe8-4a74-9cc6-3d09295d4397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.495 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[61d6287d-c132-4ddf-b5ad-c34ba36bb40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.499 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6f55c39b-7013-4180-8e70-fbe1d9640550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 NetworkManager[45129]: <info>  [1759394471.5421] device (tap28754b87-00): carrier: link connected
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.553 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c033d1-c329-4711-b1df-104f875e3f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.577 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c35f78d6-3979-4eaf-842e-4492793d0624]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28754b87-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:7d:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535014, 'reachable_time': 20278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360666, 'error': None, 'target': 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.605 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[877d73e5-bc2f-44b0-89bb-3a3faccdfdd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:7d63'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535014, 'tstamp': 535014}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360671, 'error': None, 'target': 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.627 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[188894bd-4cf4-4de4-8960-aec1f83b237b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28754b87-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:7d:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535014, 'reachable_time': 20278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360676, 'error': None, 'target': 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[732f1cd9-0a44-4bd8-a31e-b7c8893f26e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.736 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[632f5316-d126-4280-8a3d-33c1a958b9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.737 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28754b87-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.738 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.738 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28754b87-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:11 compute-0 NetworkManager[45129]: <info>  [1759394471.7413] manager: (tap28754b87-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct 02 08:41:11 compute-0 kernel: tap28754b87-00: entered promiscuous mode
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.744 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28754b87-00, col_values=(('external_ids', {'iface-id': '7619cc0b-6bdd-4351-b97b-66bbd3d3dcca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:11 compute-0 ovn_controller[152344]: 2025-10-02T08:41:11Z|01026|binding|INFO|Releasing lport 7619cc0b-6bdd-4351-b97b-66bbd3d3dcca from this chassis (sb_readonly=0)
Oct 02 08:41:11 compute-0 nova_compute[260603]: 2025-10-02 08:41:11.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.782 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28754b87-0f3d-4084-a6ca-7b48ab8fade9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28754b87-0f3d-4084-a6ca-7b48ab8fade9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.783 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[98f86140-ad38-4b97-a840-e56220ec26c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.783 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-28754b87-0f3d-4084-a6ca-7b48ab8fade9
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/28754b87-0f3d-4084-a6ca-7b48ab8fade9.pid.haproxy
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 28754b87-0f3d-4084-a6ca-7b48ab8fade9
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:41:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.784 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'env', 'PROCESS_TAG=haproxy-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28754b87-0f3d-4084-a6ca-7b48ab8fade9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:41:12 compute-0 nova_compute[260603]: 2025-10-02 08:41:12.101 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394472.1009295, fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:12 compute-0 nova_compute[260603]: 2025-10-02 08:41:12.101 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] VM Started (Lifecycle Event)
Oct 02 08:41:12 compute-0 nova_compute[260603]: 2025-10-02 08:41:12.123 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:12 compute-0 nova_compute[260603]: 2025-10-02 08:41:12.126 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394472.1011906, fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:12 compute-0 nova_compute[260603]: 2025-10-02 08:41:12.127 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] VM Paused (Lifecycle Event)
Oct 02 08:41:12 compute-0 nova_compute[260603]: 2025-10-02 08:41:12.144 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:12 compute-0 nova_compute[260603]: 2025-10-02 08:41:12.148 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:41:12 compute-0 nova_compute[260603]: 2025-10-02 08:41:12.167 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:41:12 compute-0 podman[360709]: 2025-10-02 08:41:12.189852049 +0000 UTC m=+0.072650531 container create b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 08:41:12 compute-0 systemd[1]: Started libpod-conmon-b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271.scope.
Oct 02 08:41:12 compute-0 podman[360709]: 2025-10-02 08:41:12.151309005 +0000 UTC m=+0.034107537 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:41:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1079717454c50e69509e5f2e549434fd41ecc501c00f63e7e50c58f205d34cf1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:41:12 compute-0 podman[360709]: 2025-10-02 08:41:12.288783902 +0000 UTC m=+0.171582394 container init b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:41:12 compute-0 podman[360709]: 2025-10-02 08:41:12.295028775 +0000 UTC m=+0.177827247 container start b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 08:41:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:12 compute-0 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [NOTICE]   (360728) : New worker (360730) forked
Oct 02 08:41:12 compute-0 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [NOTICE]   (360728) : Loading success.
Oct 02 08:41:12 compute-0 ceph-mon[74477]: pgmap v1864: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:41:12 compute-0 sshd-session[360287]: Failed password for root from 106.36.198.78 port 39084 ssh2
Oct 02 08:41:12 compute-0 sshd-session[360287]: Received disconnect from 106.36.198.78 port 39084:11:  [preauth]
Oct 02 08:41:12 compute-0 sshd-session[360287]: Disconnected from authenticating user root 106.36.198.78 port 39084 [preauth]
Oct 02 08:41:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.792 2 DEBUG nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.793 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.794 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.794 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.795 2 DEBUG nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Processing event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.795 2 DEBUG nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.796 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.796 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.796 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.797 2 DEBUG nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] No waiting events found dispatching network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.797 2 WARNING nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received unexpected event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 for instance with vm_state building and task_state spawning.
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.798 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.815 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394473.8031614, fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.815 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] VM Resumed (Lifecycle Event)
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.817 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.823 2 INFO nova.virt.libvirt.driver [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance spawned successfully.
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.824 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.837 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.857 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.870 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.871 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.873 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.873 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.874 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.874 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.881 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.937 2 INFO nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Took 10.84 seconds to spawn the instance on the hypervisor.
Oct 02 08:41:13 compute-0 nova_compute[260603]: 2025-10-02 08:41:13.937 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:14 compute-0 nova_compute[260603]: 2025-10-02 08:41:14.003 2 INFO nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Took 11.84 seconds to build instance.
Oct 02 08:41:14 compute-0 nova_compute[260603]: 2025-10-02 08:41:14.023 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:14 compute-0 nova_compute[260603]: 2025-10-02 08:41:14.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:14 compute-0 ceph-mon[74477]: pgmap v1865: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Oct 02 08:41:15 compute-0 nova_compute[260603]: 2025-10-02 08:41:15.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 02 08:41:15 compute-0 ovn_controller[152344]: 2025-10-02T08:41:15Z|01027|binding|INFO|Releasing lport 7619cc0b-6bdd-4351-b97b-66bbd3d3dcca from this chassis (sb_readonly=0)
Oct 02 08:41:15 compute-0 ovn_controller[152344]: 2025-10-02T08:41:15Z|01028|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct 02 08:41:15 compute-0 NetworkManager[45129]: <info>  [1759394475.8934] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Oct 02 08:41:15 compute-0 NetworkManager[45129]: <info>  [1759394475.8946] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Oct 02 08:41:15 compute-0 nova_compute[260603]: 2025-10-02 08:41:15.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:15 compute-0 ovn_controller[152344]: 2025-10-02T08:41:15Z|01029|binding|INFO|Releasing lport 7619cc0b-6bdd-4351-b97b-66bbd3d3dcca from this chassis (sb_readonly=0)
Oct 02 08:41:15 compute-0 nova_compute[260603]: 2025-10-02 08:41:15.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:15 compute-0 ovn_controller[152344]: 2025-10-02T08:41:15Z|01030|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct 02 08:41:15 compute-0 nova_compute[260603]: 2025-10-02 08:41:15.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.134 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.134 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.134 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.135 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.135 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.136 2 INFO nova.compute.manager [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Terminating instance
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.140 2 DEBUG nova.compute.manager [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:41:16 compute-0 kernel: tapc1ea98b4-5e (unregistering): left promiscuous mode
Oct 02 08:41:16 compute-0 NetworkManager[45129]: <info>  [1759394476.2042] device (tapc1ea98b4-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 ovn_controller[152344]: 2025-10-02T08:41:16Z|01031|binding|INFO|Releasing lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 from this chassis (sb_readonly=0)
Oct 02 08:41:16 compute-0 ovn_controller[152344]: 2025-10-02T08:41:16Z|01032|binding|INFO|Setting lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 down in Southbound
Oct 02 08:41:16 compute-0 ovn_controller[152344]: 2025-10-02T08:41:16Z|01033|binding|INFO|Removing iface tapc1ea98b4-5e ovn-installed in OVS
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.224 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:45:0b 10.100.0.8'], port_security=['fa:16:3e:c5:45:0b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0738f9f0c2244bb8bc7a350d5ee5932', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64c07fa8-bcda-4bfb-b1e2-06de2925c622', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84a99230-30ce-44c4-9392-f1b665c403c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.225 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 in datapath 28754b87-0f3d-4084-a6ca-7b48ab8fade9 unbound from our chassis
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.226 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28754b87-0f3d-4084-a6ca-7b48ab8fade9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.227 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d779b30b-07dc-4373-b07e-bd22712ad518]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.228 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 namespace which is not needed anymore
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct 02 08:41:16 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000064.scope: Consumed 3.065s CPU time.
Oct 02 08:41:16 compute-0 systemd-machined[214636]: Machine qemu-126-instance-00000064 terminated.
Oct 02 08:41:16 compute-0 ceph-mon[74477]: pgmap v1866: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 02 08:41:16 compute-0 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [NOTICE]   (360728) : haproxy version is 2.8.14-c23fe91
Oct 02 08:41:16 compute-0 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [NOTICE]   (360728) : path to executable is /usr/sbin/haproxy
Oct 02 08:41:16 compute-0 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [WARNING]  (360728) : Exiting Master process...
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.384 2 INFO nova.virt.libvirt.driver [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance destroyed successfully.
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.385 2 DEBUG nova.objects.instance [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lazy-loading 'resources' on Instance uuid fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:16 compute-0 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [ALERT]    (360728) : Current worker (360730) exited with code 143 (Terminated)
Oct 02 08:41:16 compute-0 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [WARNING]  (360728) : All workers exited. Exiting... (0)
Oct 02 08:41:16 compute-0 systemd[1]: libpod-b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271.scope: Deactivated successfully.
Oct 02 08:41:16 compute-0 podman[360760]: 2025-10-02 08:41:16.393963582 +0000 UTC m=+0.065047435 container died b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.405 2 DEBUG nova.virt.libvirt.vif [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-285132923',display_name='tempest-ServerAddressesNegativeTestJSON-server-285132923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-285132923',id=100,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:41:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0738f9f0c2244bb8bc7a350d5ee5932',ramdisk_id='',reservation_id='r-pzphyj7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1449456229',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1449456229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:41:13Z,user_data=None,user_id='923a2daca06b4bf98c21b2604971789f',uuid=fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.406 2 DEBUG nova.network.os_vif_util [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converting VIF {"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.406 2 DEBUG nova.network.os_vif_util [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.407 2 DEBUG os_vif [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ea98b4-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.414 2 INFO os_vif [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e')
Oct 02 08:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271-userdata-shm.mount: Deactivated successfully.
Oct 02 08:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1079717454c50e69509e5f2e549434fd41ecc501c00f63e7e50c58f205d34cf1-merged.mount: Deactivated successfully.
Oct 02 08:41:16 compute-0 podman[360760]: 2025-10-02 08:41:16.44947533 +0000 UTC m=+0.120559183 container cleanup b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:41:16 compute-0 systemd[1]: libpod-conmon-b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271.scope: Deactivated successfully.
Oct 02 08:41:16 compute-0 podman[360817]: 2025-10-02 08:41:16.517435585 +0000 UTC m=+0.043087695 container remove b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.524 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8687ae-6cba-474d-99f3-ce194931b6e1]: (4, ('Thu Oct  2 08:41:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 (b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271)\nb96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271\nThu Oct  2 08:41:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 (b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271)\nb96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[956b8187-4559-4bda-a278-13979590f45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.527 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28754b87-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.527 2 DEBUG nova.compute.manager [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-unplugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.527 2 DEBUG oslo_concurrency.lockutils [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.527 2 DEBUG oslo_concurrency.lockutils [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.528 2 DEBUG oslo_concurrency.lockutils [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.528 2 DEBUG nova.compute.manager [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] No waiting events found dispatching network-vif-unplugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.528 2 DEBUG nova.compute.manager [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-unplugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 kernel: tap28754b87-00: left promiscuous mode
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.548 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcb0eb1-baa8-4f3b-80c4-3c491bd41a1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.574 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b34fe9c-5469-4e56-b7b1-32de97701f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.575 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c1e070-1629-42d1-b866-c6f513e55039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91da8211-95fb-405b-9c3f-799af9f9e29a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535003, 'reachable_time': 19959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360832, 'error': None, 'target': 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d28754b87\x2d0f3d\x2d4084\x2da6ca\x2d7b48ab8fade9.mount: Deactivated successfully.
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.592 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:41:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.592 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a834383f-3d1b-4109-9f91-72d177073693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.749 2 INFO nova.virt.libvirt.driver [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Deleting instance files /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_del
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.750 2 INFO nova.virt.libvirt.driver [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Deletion of /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_del complete
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.810 2 INFO nova.compute.manager [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Took 0.67 seconds to destroy the instance on the hypervisor.
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.811 2 DEBUG oslo.service.loopingcall [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.811 2 DEBUG nova.compute.manager [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.811 2 DEBUG nova.network.neutron [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.820 2 DEBUG nova.compute.manager [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.820 2 DEBUG nova.compute.manager [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing instance network info cache due to event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.821 2 DEBUG oslo_concurrency.lockutils [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.821 2 DEBUG oslo_concurrency.lockutils [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:16 compute-0 nova_compute[260603]: 2025-10-02 08:41:16.821 2 DEBUG nova.network.neutron [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:41:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 119 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct 02 08:41:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:17 compute-0 nova_compute[260603]: 2025-10-02 08:41:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:17 compute-0 nova_compute[260603]: 2025-10-02 08:41:17.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:41:17 compute-0 nova_compute[260603]: 2025-10-02 08:41:17.690 2 DEBUG nova.network.neutron [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:17 compute-0 nova_compute[260603]: 2025-10-02 08:41:17.709 2 INFO nova.compute.manager [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Took 0.90 seconds to deallocate network for instance.
Oct 02 08:41:17 compute-0 nova_compute[260603]: 2025-10-02 08:41:17.758 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:17 compute-0 nova_compute[260603]: 2025-10-02 08:41:17.758 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:17 compute-0 nova_compute[260603]: 2025-10-02 08:41:17.837 2 DEBUG oslo_concurrency.processutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:41:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904911522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.257 2 DEBUG oslo_concurrency.processutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.268 2 DEBUG nova.compute.provider_tree [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.299 2 DEBUG nova.scheduler.client.report [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.343 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.369 2 INFO nova.scheduler.client.report [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Deleted allocations for instance fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf
Oct 02 08:41:18 compute-0 ceph-mon[74477]: pgmap v1867: 305 pgs: 305 active+clean; 119 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct 02 08:41:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/904911522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.449 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.646 2 DEBUG nova.compute.manager [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.647 2 DEBUG oslo_concurrency.lockutils [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.647 2 DEBUG oslo_concurrency.lockutils [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.647 2 DEBUG oslo_concurrency.lockutils [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.647 2 DEBUG nova.compute.manager [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] No waiting events found dispatching network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.648 2 WARNING nova.compute.manager [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received unexpected event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 for instance with vm_state deleted and task_state None.
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.648 2 DEBUG nova.compute.manager [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-deleted-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.965 2 DEBUG nova.network.neutron [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated VIF entry in instance network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.966 2 DEBUG nova.network.neutron [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:18 compute-0 nova_compute[260603]: 2025-10-02 08:41:18.986 2 DEBUG oslo_concurrency.lockutils [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:19 compute-0 nova_compute[260603]: 2025-10-02 08:41:19.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 198 op/s
Oct 02 08:41:20 compute-0 ceph-mon[74477]: pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 198 op/s
Oct 02 08:41:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 173 op/s
Oct 02 08:41:21 compute-0 nova_compute[260603]: 2025-10-02 08:41:21.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:41:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1714812566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:41:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:41:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1714812566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:41:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:22 compute-0 ovn_controller[152344]: 2025-10-02T08:41:22Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:fb:ed 10.100.0.5
Oct 02 08:41:22 compute-0 ovn_controller[152344]: 2025-10-02T08:41:22Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:fb:ed 10.100.0.5
Oct 02 08:41:22 compute-0 ceph-mon[74477]: pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 173 op/s
Oct 02 08:41:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1714812566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:41:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1714812566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:41:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 118 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Oct 02 08:41:23 compute-0 ovn_controller[152344]: 2025-10-02T08:41:23Z|01034|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct 02 08:41:23 compute-0 nova_compute[260603]: 2025-10-02 08:41:23.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:24 compute-0 nova_compute[260603]: 2025-10-02 08:41:24.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:24 compute-0 ceph-mon[74477]: pgmap v1870: 305 pgs: 305 active+clean; 118 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Oct 02 08:41:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 118 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Oct 02 08:41:26 compute-0 nova_compute[260603]: 2025-10-02 08:41:26.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:26 compute-0 ceph-mon[74477]: pgmap v1871: 305 pgs: 305 active+clean; 118 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Oct 02 08:41:26 compute-0 nova_compute[260603]: 2025-10-02 08:41:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 121 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 08:41:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:41:27
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', 'vms']
Oct 02 08:41:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:41:28 compute-0 ceph-mon[74477]: pgmap v1872: 305 pgs: 305 active+clean; 121 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 08:41:28 compute-0 nova_compute[260603]: 2025-10-02 08:41:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:28 compute-0 nova_compute[260603]: 2025-10-02 08:41:28.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:41:28 compute-0 nova_compute[260603]: 2025-10-02 08:41:28.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:41:29 compute-0 nova_compute[260603]: 2025-10-02 08:41:29.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:29 compute-0 nova_compute[260603]: 2025-10-02 08:41:29.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:29 compute-0 nova_compute[260603]: 2025-10-02 08:41:29.119 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:41:29 compute-0 nova_compute[260603]: 2025-10-02 08:41:29.119 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:29 compute-0 nova_compute[260603]: 2025-10-02 08:41:29.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 02 08:41:29 compute-0 podman[360857]: 2025-10-02 08:41:29.994070813 +0000 UTC m=+0.060823884 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 08:41:30 compute-0 podman[360856]: 2025-10-02 08:41:30.026691053 +0000 UTC m=+0.094699132 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 08:41:30 compute-0 ceph-mon[74477]: pgmap v1873: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.148 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.165 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.165 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.165 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.165 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.166 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.184 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.185 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.185 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.185 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.185 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.383 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394476.3808556, fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.384 2 INFO nova.compute.manager [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] VM Stopped (Lifecycle Event)
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.421 2 DEBUG nova.compute.manager [None req-4f73ff55-b76c-408f-adc0-512d8dc34bc7 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:41:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974712362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.601 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.688 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.688 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.862 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.864 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3632MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.864 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.864 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.960 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ddf9efd0-0ac6-4857-96ea-3f1d0e18590d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.960 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:41:31 compute-0 nova_compute[260603]: 2025-10-02 08:41:31.960 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:41:32 compute-0 nova_compute[260603]: 2025-10-02 08:41:32.007 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:32 compute-0 ceph-mon[74477]: pgmap v1874: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:41:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/974712362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:41:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1512994064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:32 compute-0 nova_compute[260603]: 2025-10-02 08:41:32.534 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:32 compute-0 nova_compute[260603]: 2025-10-02 08:41:32.540 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:41:32 compute-0 nova_compute[260603]: 2025-10-02 08:41:32.565 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:41:32 compute-0 nova_compute[260603]: 2025-10-02 08:41:32.587 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:41:32 compute-0 nova_compute[260603]: 2025-10-02 08:41:32.587 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:41:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1512994064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:33 compute-0 ovn_controller[152344]: 2025-10-02T08:41:33Z|01035|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct 02 08:41:34 compute-0 nova_compute[260603]: 2025-10-02 08:41:34.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:34 compute-0 ceph-mon[74477]: pgmap v1875: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:41:34 compute-0 nova_compute[260603]: 2025-10-02 08:41:34.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:34.825 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:34.825 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:34.826 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 31 KiB/s wr, 10 op/s
Oct 02 08:41:35 compute-0 nova_compute[260603]: 2025-10-02 08:41:35.941 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:36 compute-0 nova_compute[260603]: 2025-10-02 08:41:36.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:36 compute-0 ceph-mon[74477]: pgmap v1876: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 31 KiB/s wr, 10 op/s
Oct 02 08:41:36 compute-0 nova_compute[260603]: 2025-10-02 08:41:36.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 10 op/s
Oct 02 08:41:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:37 compute-0 nova_compute[260603]: 2025-10-02 08:41:37.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:38 compute-0 ceph-mon[74477]: pgmap v1877: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 10 op/s
Oct 02 08:41:38 compute-0 nova_compute[260603]: 2025-10-02 08:41:38.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:41:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:41:38 compute-0 podman[360946]: 2025-10-02 08:41:38.994848476 +0000 UTC m=+0.057972957 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:41:39 compute-0 podman[360945]: 2025-10-02 08:41:39.000680017 +0000 UTC m=+0.066010855 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 08:41:39 compute-0 nova_compute[260603]: 2025-10-02 08:41:39.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 02 08:41:40 compute-0 nova_compute[260603]: 2025-10-02 08:41:40.439 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:40 compute-0 nova_compute[260603]: 2025-10-02 08:41:40.440 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:40 compute-0 nova_compute[260603]: 2025-10-02 08:41:40.462 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:41:40 compute-0 ceph-mon[74477]: pgmap v1878: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 02 08:41:40 compute-0 nova_compute[260603]: 2025-10-02 08:41:40.541 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:40 compute-0 nova_compute[260603]: 2025-10-02 08:41:40.542 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:40 compute-0 nova_compute[260603]: 2025-10-02 08:41:40.547 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:41:40 compute-0 nova_compute[260603]: 2025-10-02 08:41:40.547 2 INFO nova.compute.claims [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:41:40 compute-0 nova_compute[260603]: 2025-10-02 08:41:40.678 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:41:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070704888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.120 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.128 2 DEBUG nova.compute.provider_tree [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.149 2 DEBUG nova.scheduler.client.report [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.188 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.252 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "35470f0c-88c0-45dc-81af-53af166bcc4e" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.252 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "35470f0c-88c0-45dc-81af-53af166bcc4e" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.264 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "35470f0c-88c0-45dc-81af-53af166bcc4e" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.266 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:41:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.323 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.324 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.345 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.363 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.493 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.495 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.496 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Creating image(s)
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.538 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3070704888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.590 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.627 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.632 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.682 2 DEBUG nova.policy [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4487d5c9ca094c4183c8500d1f6df983', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b230fcb7df44b5f9a434a12364fcaf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.725 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.726 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.727 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.728 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.753 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:41 compute-0 nova_compute[260603]: 2025-10-02 08:41:41.756 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 47497cd9-93be-482f-b4a8-4529910a9055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.515 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 47497cd9-93be-482f-b4a8-4529910a9055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.759s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.603 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] resizing rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:41:42 compute-0 ceph-mon[74477]: pgmap v1879: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.745 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Successfully created port: 08ea1ec3-2021-4942-8b2a-c699ee4dc052 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.802 2 DEBUG nova.objects.instance [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 47497cd9-93be-482f-b4a8-4529910a9055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.872 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.873 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Ensure instance console log exists: /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.873 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.874 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:42 compute-0 nova_compute[260603]: 2025-10-02 08:41:42.874 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 165 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Oct 02 08:41:43 compute-0 nova_compute[260603]: 2025-10-02 08:41:43.694 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Successfully updated port: 08ea1ec3-2021-4942-8b2a-c699ee4dc052 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:41:43 compute-0 nova_compute[260603]: 2025-10-02 08:41:43.712 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:43 compute-0 nova_compute[260603]: 2025-10-02 08:41:43.713 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquired lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:43 compute-0 nova_compute[260603]: 2025-10-02 08:41:43.713 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:41:43 compute-0 nova_compute[260603]: 2025-10-02 08:41:43.870 2 DEBUG nova.compute.manager [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-changed-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:43 compute-0 nova_compute[260603]: 2025-10-02 08:41:43.871 2 DEBUG nova.compute.manager [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Refreshing instance network info cache due to event network-changed-08ea1ec3-2021-4942-8b2a-c699ee4dc052. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:41:43 compute-0 nova_compute[260603]: 2025-10-02 08:41:43.871 2 DEBUG oslo_concurrency.lockutils [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:44 compute-0 nova_compute[260603]: 2025-10-02 08:41:44.053 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:41:44 compute-0 nova_compute[260603]: 2025-10-02 08:41:44.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:44 compute-0 ceph-mon[74477]: pgmap v1880: 305 pgs: 305 active+clean; 165 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Oct 02 08:41:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 165 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.335 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Updating instance_info_cache with network_info: [{"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.362 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Releasing lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.362 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance network_info: |[{"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.362 2 DEBUG oslo_concurrency.lockutils [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.363 2 DEBUG nova.network.neutron [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Refreshing network info cache for port 08ea1ec3-2021-4942-8b2a-c699ee4dc052 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.365 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start _get_guest_xml network_info=[{"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.370 2 WARNING nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.374 2 DEBUG nova.virt.libvirt.host [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.374 2 DEBUG nova.virt.libvirt.host [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.377 2 DEBUG nova.virt.libvirt.host [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.378 2 DEBUG nova.virt.libvirt.host [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.378 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.378 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.380 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.380 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.380 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.380 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.381 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.383 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:41:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690358141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.866 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.867 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.868 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.898 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.902 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:45 compute-0 nova_compute[260603]: 2025-10-02 08:41:45.986 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.255 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.255 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.265 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.266 2 INFO nova.compute.claims [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:41:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:41:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586273505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.396 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.398 2 DEBUG nova.virt.libvirt.vif [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-793142749',display_name='tempest-ServerGroupTestJSON-server-793142749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-793142749',id=101,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b230fcb7df44b5f9a434a12364fcaf2',ramdisk_id='',reservation_id='r-wpjgykv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1408982320',owner_user_name='tempest-ServerGroupTestJSON-1408982320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:41Z,user_data=None,user_id='4487d5c9ca094c4183c8500d1f6df983',uuid=47497cd9-93be-482f-b4a8-4529910a9055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.399 2 DEBUG nova.network.os_vif_util [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converting VIF {"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.400 2 DEBUG nova.network.os_vif_util [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.402 2 DEBUG nova.objects.instance [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47497cd9-93be-482f-b4a8-4529910a9055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.425 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <uuid>47497cd9-93be-482f-b4a8-4529910a9055</uuid>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <name>instance-00000065</name>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerGroupTestJSON-server-793142749</nova:name>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:41:45</nova:creationTime>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <nova:user uuid="4487d5c9ca094c4183c8500d1f6df983">tempest-ServerGroupTestJSON-1408982320-project-member</nova:user>
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <nova:project uuid="2b230fcb7df44b5f9a434a12364fcaf2">tempest-ServerGroupTestJSON-1408982320</nova:project>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <nova:port uuid="08ea1ec3-2021-4942-8b2a-c699ee4dc052">
Oct 02 08:41:46 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <system>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <entry name="serial">47497cd9-93be-482f-b4a8-4529910a9055</entry>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <entry name="uuid">47497cd9-93be-482f-b4a8-4529910a9055</entry>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </system>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <os>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   </os>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <features>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   </features>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/47497cd9-93be-482f-b4a8-4529910a9055_disk">
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       </source>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/47497cd9-93be-482f-b4a8-4529910a9055_disk.config">
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       </source>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:41:46 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:38:4a:a1"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <target dev="tap08ea1ec3-20"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/console.log" append="off"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <video>
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </video>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:41:46 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:41:46 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:41:46 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:41:46 compute-0 nova_compute[260603]: </domain>
Oct 02 08:41:46 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.427 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Preparing to wait for external event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.428 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.429 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.429 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.430 2 DEBUG nova.virt.libvirt.vif [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-793142749',display_name='tempest-ServerGroupTestJSON-server-793142749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-793142749',id=101,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b230fcb7df44b5f9a434a12364fcaf2',ramdisk_id='',reservation_id='r-wpjgykv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1408982320',owner_user_name='tempest-ServerGroupTestJSON-1408982320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:41Z,user_data=None,user_id='4487d5c9ca094c4183c8500d1f6df983',uuid=47497cd9-93be-482f-b4a8-4529910a9055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.431 2 DEBUG nova.network.os_vif_util [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converting VIF {"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.432 2 DEBUG nova.network.os_vif_util [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.433 2 DEBUG os_vif [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.438 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08ea1ec3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08ea1ec3-20, col_values=(('external_ids', {'iface-id': '08ea1ec3-2021-4942-8b2a-c699ee4dc052', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:4a:a1', 'vm-uuid': '47497cd9-93be-482f-b4a8-4529910a9055'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:46 compute-0 NetworkManager[45129]: <info>  [1759394506.5160] manager: (tap08ea1ec3-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.525 2 INFO os_vif [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20')
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.588 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.589 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.590 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] No VIF found with MAC fa:16:3e:38:4a:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.591 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Using config drive
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.625 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:46 compute-0 ceph-mon[74477]: pgmap v1881: 305 pgs: 305 active+clean; 165 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Oct 02 08:41:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1690358141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/586273505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:41:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274527031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.956 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.964 2 DEBUG nova.compute.provider_tree [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:41:46 compute-0 nova_compute[260603]: 2025-10-02 08:41:46.981 2 DEBUG nova.scheduler.client.report [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.016 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.017 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.073 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.074 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.102 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.119 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.194 2 DEBUG nova.network.neutron [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Updated VIF entry in instance network info cache for port 08ea1ec3-2021-4942-8b2a-c699ee4dc052. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.196 2 DEBUG nova.network.neutron [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Updating instance_info_cache with network_info: [{"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.208 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.210 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.211 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Creating image(s)
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.243 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.277 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 167 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 08:41:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.309 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.314 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.361 2 DEBUG nova.policy [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffb3b2bafd1c40058c2669d61c40f3f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28277d49c8814c0691f5e1dce22bf215', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.364 2 DEBUG oslo_concurrency.lockutils [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.388 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.389 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.389 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.389 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.414 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.418 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5748b14f-5bbb-46f3-b563-062d530e5abd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.537 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Creating config drive at /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.544 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp526qgzwt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3274527031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.698 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp526qgzwt" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.720 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.726 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config 47497cd9-93be-482f-b4a8-4529910a9055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.763 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5748b14f-5bbb-46f3-b563-062d530e5abd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.814 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] resizing rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.897 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config 47497cd9-93be-482f-b4a8-4529910a9055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.897 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Deleting local config drive /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config because it was imported into RBD.
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.904 2 DEBUG nova.objects.instance [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lazy-loading 'migration_context' on Instance uuid 5748b14f-5bbb-46f3-b563-062d530e5abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.917 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.917 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Ensure instance console log exists: /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.918 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.918 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.918 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:47 compute-0 kernel: tap08ea1ec3-20: entered promiscuous mode
Oct 02 08:41:47 compute-0 NetworkManager[45129]: <info>  [1759394507.9419] manager: (tap08ea1ec3-20): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:47 compute-0 ovn_controller[152344]: 2025-10-02T08:41:47Z|01036|binding|INFO|Claiming lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 for this chassis.
Oct 02 08:41:47 compute-0 ovn_controller[152344]: 2025-10-02T08:41:47Z|01037|binding|INFO|08ea1ec3-2021-4942-8b2a-c699ee4dc052: Claiming fa:16:3e:38:4a:a1 10.100.0.9
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.951 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:4a:a1 10.100.0.9'], port_security=['fa:16:3e:38:4a:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '47497cd9-93be-482f-b4a8-4529910a9055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b230fcb7df44b5f9a434a12364fcaf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba541694-b209-40e0-95e1-14515fff1dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=027ad20f-cef6-496f-a5dc-0e38827317a8, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=08ea1ec3-2021-4942-8b2a-c699ee4dc052) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.952 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 08ea1ec3-2021-4942-8b2a-c699ee4dc052 in datapath fbf4a9fc-7958-460e-b38c-ae01b006309e bound to our chassis
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.953 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbf4a9fc-7958-460e-b38c-ae01b006309e
Oct 02 08:41:47 compute-0 ovn_controller[152344]: 2025-10-02T08:41:47Z|01038|binding|INFO|Setting lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 ovn-installed in OVS
Oct 02 08:41:47 compute-0 ovn_controller[152344]: 2025-10-02T08:41:47Z|01039|binding|INFO|Setting lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 up in Southbound
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:47 compute-0 nova_compute[260603]: 2025-10-02 08:41:47.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.968 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ef272bf1-ab6d-42c3-803c-f1cd02543cd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.969 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbf4a9fc-71 in ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.971 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbf4a9fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d55f24a-fc20-442e-9bfa-5c5a1575ac5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.972 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0f99f0-a973-45e0-9c29-7e717ab60e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:47 compute-0 systemd-machined[214636]: New machine qemu-127-instance-00000065.
Oct 02 08:41:47 compute-0 systemd-udevd[361498]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:41:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.982 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[df975f91-b364-4a50-9dcc-8140e5f8f856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:47 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000065.
Oct 02 08:41:47 compute-0 NetworkManager[45129]: <info>  [1759394507.9915] device (tap08ea1ec3-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:41:47 compute-0 NetworkManager[45129]: <info>  [1759394507.9926] device (tap08ea1ec3-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.007 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1be2a4bd-4a87-4da6-96cf-4a779acaeb56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.035 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[daa64a67-2484-4f1e-a60c-40a1529b4078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 NetworkManager[45129]: <info>  [1759394508.0409] manager: (tapfbf4a9fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Oct 02 08:41:48 compute-0 systemd-udevd[361502]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.041 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[24adfeba-e7cf-43f9-ab14-e96564136578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.070 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[94e9d5b0-4e21-4615-bd6b-8d4307728e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.074 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4a35c6-7c73-410e-82a6-f0712bf869c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 NetworkManager[45129]: <info>  [1759394508.0933] device (tapfbf4a9fc-70): carrier: link connected
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.098 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5af5cc59-f11a-4c17-bbb5-56896ccb59c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.119 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d4869b24-0080-47ba-b071-6a0744cb3613]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf4a9fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:95:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538669, 'reachable_time': 22084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361530, 'error': None, 'target': 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.140 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d398658-a356-4258-85d0-2704c979f1f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:95f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538669, 'tstamp': 538669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361531, 'error': None, 'target': 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1029fdd6-c2aa-41a8-98b6-6446f925dee5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf4a9fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:95:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538669, 'reachable_time': 22084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361532, 'error': None, 'target': 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.193 2 DEBUG nova.compute.manager [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.194 2 DEBUG oslo_concurrency.lockutils [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.194 2 DEBUG oslo_concurrency.lockutils [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.195 2 DEBUG oslo_concurrency.lockutils [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.195 2 DEBUG nova.compute.manager [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Processing event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3175cf98-8e89-4b09-9539-143d48585547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.297 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e591801b-7ced-4a86-8775-f1d951d102d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.299 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf4a9fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.300 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.302 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf4a9fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:48 compute-0 NetworkManager[45129]: <info>  [1759394508.3061] manager: (tapfbf4a9fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Oct 02 08:41:48 compute-0 kernel: tapfbf4a9fc-70: entered promiscuous mode
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbf4a9fc-70, col_values=(('external_ids', {'iface-id': 'fe6c3964-69a6-4349-bab5-7c6c849c9643'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:48 compute-0 ovn_controller[152344]: 2025-10-02T08:41:48Z|01040|binding|INFO|Releasing lport fe6c3964-69a6-4349-bab5-7c6c849c9643 from this chassis (sb_readonly=0)
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.316 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Successfully created port: 52a482d6-fbe9-4583-af85-5407a6796976 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:41:48 compute-0 nova_compute[260603]: 2025-10-02 08:41:48.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.346 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbf4a9fc-7958-460e-b38c-ae01b006309e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbf4a9fc-7958-460e-b38c-ae01b006309e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.347 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50e03d7e-1daf-434c-82ca-b543d8378cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.349 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-fbf4a9fc-7958-460e-b38c-ae01b006309e
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/fbf4a9fc-7958-460e-b38c-ae01b006309e.pid.haproxy
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID fbf4a9fc-7958-460e-b38c-ae01b006309e
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:41:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.354 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'env', 'PROCESS_TAG=haproxy-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbf4a9fc-7958-460e-b38c-ae01b006309e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:41:48 compute-0 ceph-mon[74477]: pgmap v1882: 305 pgs: 305 active+clean; 167 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 08:41:48 compute-0 podman[361606]: 2025-10-02 08:41:48.819470248 +0000 UTC m=+0.123447574 container create f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 08:41:48 compute-0 podman[361606]: 2025-10-02 08:41:48.735069634 +0000 UTC m=+0.039046990 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:41:48 compute-0 systemd[1]: Started libpod-conmon-f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145.scope.
Oct 02 08:41:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e103983e2dea04abc89b05140816442de9ff485d4b8298c9a16b43060795d0d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:41:48 compute-0 podman[361606]: 2025-10-02 08:41:48.996353165 +0000 UTC m=+0.300330511 container init f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.002 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394509.001552, 47497cd9-93be-482f-b4a8-4529910a9055 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.003 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] VM Started (Lifecycle Event)
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.006 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:41:49 compute-0 podman[361606]: 2025-10-02 08:41:49.009597865 +0000 UTC m=+0.313575201 container start f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.011 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.019 2 INFO nova.virt.libvirt.driver [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance spawned successfully.
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.020 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.024 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.030 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:41:49 compute-0 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [NOTICE]   (361625) : New worker (361627) forked
Oct 02 08:41:49 compute-0 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [NOTICE]   (361625) : Loading success.
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.048 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.049 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.050 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.051 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.052 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.054 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.059 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.060 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394509.0017219, 47497cd9-93be-482f-b4a8-4529910a9055 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.061 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] VM Paused (Lifecycle Event)
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.088 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.092 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394509.0110786, 47497cd9-93be-482f-b4a8-4529910a9055 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.093 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] VM Resumed (Lifecycle Event)
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.112 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.115 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.119 2 INFO nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Took 7.63 seconds to spawn the instance on the hypervisor.
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.119 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.147 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.185 2 INFO nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Took 8.68 seconds to build instance.
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.214 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 185 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.826 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Successfully updated port: 52a482d6-fbe9-4583-af85-5407a6796976 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.845 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.846 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquired lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:49 compute-0 nova_compute[260603]: 2025-10-02 08:41:49.847 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.045 2 DEBUG nova.compute.manager [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-changed-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.049 2 DEBUG nova.compute.manager [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing instance network info cache due to event network-changed-52a482d6-fbe9-4583-af85-5407a6796976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.050 2 DEBUG oslo_concurrency.lockutils [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.055 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.056 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.057 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.059 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.061 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.063 2 INFO nova.compute.manager [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Terminating instance
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.067 2 DEBUG nova.compute.manager [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.084 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:41:50 compute-0 kernel: tap08ea1ec3-20 (unregistering): left promiscuous mode
Oct 02 08:41:50 compute-0 NetworkManager[45129]: <info>  [1759394510.2359] device (tap08ea1ec3-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:50 compute-0 ovn_controller[152344]: 2025-10-02T08:41:50Z|01041|binding|INFO|Releasing lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 from this chassis (sb_readonly=0)
Oct 02 08:41:50 compute-0 ovn_controller[152344]: 2025-10-02T08:41:50Z|01042|binding|INFO|Setting lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 down in Southbound
Oct 02 08:41:50 compute-0 ovn_controller[152344]: 2025-10-02T08:41:50Z|01043|binding|INFO|Removing iface tap08ea1ec3-20 ovn-installed in OVS
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.256 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:4a:a1 10.100.0.9'], port_security=['fa:16:3e:38:4a:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '47497cd9-93be-482f-b4a8-4529910a9055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b230fcb7df44b5f9a434a12364fcaf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ba541694-b209-40e0-95e1-14515fff1dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=027ad20f-cef6-496f-a5dc-0e38827317a8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=08ea1ec3-2021-4942-8b2a-c699ee4dc052) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.259 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 08ea1ec3-2021-4942-8b2a-c699ee4dc052 in datapath fbf4a9fc-7958-460e-b38c-ae01b006309e unbound from our chassis
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.262 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbf4a9fc-7958-460e-b38c-ae01b006309e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.266 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99e03701-eec1-409c-9273-c269d6607776]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.267 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e namespace which is not needed anymore
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:50 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 02 08:41:50 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000065.scope: Consumed 1.972s CPU time.
Oct 02 08:41:50 compute-0 systemd-machined[214636]: Machine qemu-127-instance-00000065 terminated.
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.296 2 DEBUG nova.compute.manager [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.297 2 DEBUG oslo_concurrency.lockutils [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.297 2 DEBUG oslo_concurrency.lockutils [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.298 2 DEBUG oslo_concurrency.lockutils [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.298 2 DEBUG nova.compute.manager [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] No waiting events found dispatching network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.299 2 WARNING nova.compute.manager [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received unexpected event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 for instance with vm_state active and task_state deleting.
Oct 02 08:41:50 compute-0 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [NOTICE]   (361625) : haproxy version is 2.8.14-c23fe91
Oct 02 08:41:50 compute-0 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [NOTICE]   (361625) : path to executable is /usr/sbin/haproxy
Oct 02 08:41:50 compute-0 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [WARNING]  (361625) : Exiting Master process...
Oct 02 08:41:50 compute-0 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [WARNING]  (361625) : Exiting Master process...
Oct 02 08:41:50 compute-0 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [ALERT]    (361625) : Current worker (361627) exited with code 143 (Terminated)
Oct 02 08:41:50 compute-0 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [WARNING]  (361625) : All workers exited. Exiting... (0)
Oct 02 08:41:50 compute-0 systemd[1]: libpod-f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145.scope: Deactivated successfully.
Oct 02 08:41:50 compute-0 podman[361657]: 2025-10-02 08:41:50.503561436 +0000 UTC m=+0.080706800 container died f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.523 2 INFO nova.virt.libvirt.driver [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance destroyed successfully.
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.523 2 DEBUG nova.objects.instance [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lazy-loading 'resources' on Instance uuid 47497cd9-93be-482f-b4a8-4529910a9055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.548 2 DEBUG nova.virt.libvirt.vif [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:41:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-793142749',display_name='tempest-ServerGroupTestJSON-server-793142749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-793142749',id=101,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:41:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b230fcb7df44b5f9a434a12364fcaf2',ramdisk_id='',reservation_id='r-wpjgykv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1408982320',owner_user_name='tempest-ServerGroupTestJSON-1408982320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:41:49Z,user_data=None,user_id='4487d5c9ca094c4183c8500d1f6df983',uuid=47497cd9-93be-482f-b4a8-4529910a9055,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.549 2 DEBUG nova.network.os_vif_util [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converting VIF {"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.551 2 DEBUG nova.network.os_vif_util [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.551 2 DEBUG os_vif [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08ea1ec3-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.599 2 INFO os_vif [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20')
Oct 02 08:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145-userdata-shm.mount: Deactivated successfully.
Oct 02 08:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e103983e2dea04abc89b05140816442de9ff485d4b8298c9a16b43060795d0d-merged.mount: Deactivated successfully.
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:50 compute-0 ceph-mon[74477]: pgmap v1883: 305 pgs: 305 active+clean; 185 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.869 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:41:50 compute-0 podman[361657]: 2025-10-02 08:41:50.880659503 +0000 UTC m=+0.457804897 container cleanup f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 08:41:50 compute-0 systemd[1]: libpod-conmon-f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145.scope: Deactivated successfully.
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.920 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.949 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Releasing lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.950 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance network_info: |[{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.951 2 DEBUG oslo_concurrency.lockutils [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.952 2 DEBUG nova.network.neutron [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.958 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start _get_guest_xml network_info=[{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.963 2 WARNING nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.970 2 DEBUG nova.virt.libvirt.host [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.971 2 DEBUG nova.virt.libvirt.host [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.980 2 DEBUG nova.virt.libvirt.host [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.982 2 DEBUG nova.virt.libvirt.host [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.983 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.983 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.984 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.985 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.985 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.986 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.987 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:41:50 compute-0 podman[361717]: 2025-10-02 08:41:50.987534143 +0000 UTC m=+0.072719894 container remove f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.987 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.988 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.988 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.989 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.989 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:41:50 compute-0 nova_compute[260603]: 2025-10-02 08:41:50.994 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.997 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e3d860-8f18-40da-8043-8fa2aee77fb2]: (4, ('Thu Oct  2 08:41:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e (f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145)\nf30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145\nThu Oct  2 08:41:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e (f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145)\nf30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.998 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25e48feb-3263-44a2-a5d6-b21bae8f8227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.999 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf4a9fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:51 compute-0 kernel: tapfbf4a9fc-70: left promiscuous mode
Oct 02 08:41:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.019 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[552d94d2-d834-4416-bac4-8d0b775dcf4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.051 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51dac703-3c1c-48e6-bf28-94c489ecaa38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.052 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f66a94-c920-4e6b-a1b3-f54dac0cd469]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.072 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0bd7b0-387b-403a-b25d-e315418282bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538663, 'reachable_time': 34742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361731, 'error': None, 'target': 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.074 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:41:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.075 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f20cccb6-f5ee-423d-8889-fb20f25d0588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dfbf4a9fc\x2d7958\x2d460e\x2db38c\x2dae01b006309e.mount: Deactivated successfully.
Oct 02 08:41:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.075 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:41:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 185 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.344 2 INFO nova.virt.libvirt.driver [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Deleting instance files /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055_del
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.345 2 INFO nova.virt.libvirt.driver [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Deletion of /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055_del complete
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.401 2 INFO nova.compute.manager [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Took 1.33 seconds to destroy the instance on the hypervisor.
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.402 2 DEBUG oslo.service.loopingcall [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.402 2 DEBUG nova.compute.manager [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.402 2 DEBUG nova.network.neutron [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:41:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591855804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.447 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.471 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.476 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3591855804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:41:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/356067763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.946 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.948 2 DEBUG nova.virt.libvirt.vif [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-263070957-acc',id=102,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBHUVZCqw3VWkKw3WswXpGtJf1pEavua2ZnmSgdDq3pc4sGnLceByNqUw0H30LytwdI+cAke8ed31ZxradSBqQ5vuq98lJn2fkhaaMuY6USKAqJ3+Ezi42AVRSerZijcMA==',key_name='tempest-TestSecurityGroupsBasicOps-1336991398',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28277d49c8814c0691f5e1dce22bf215',ramdisk_id='',reservation_id='r-740jcdvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-263070957',owner_user_name='tempest-TestSecurityGroupsBasicOps-263070957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:47Z,user_data=None,user_id='ffb3b2bafd1c40058c2669d61c40f3f9',uuid=5748b14f-5bbb-46f3-b563-062d530e5abd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.948 2 DEBUG nova.network.os_vif_util [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converting VIF {"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.949 2 DEBUG nova.network.os_vif_util [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.951 2 DEBUG nova.objects.instance [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5748b14f-5bbb-46f3-b563-062d530e5abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:41:51 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.995 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <uuid>5748b14f-5bbb-46f3-b563-062d530e5abd</uuid>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <name>instance-00000066</name>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405</nova:name>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:41:50</nova:creationTime>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <nova:user uuid="ffb3b2bafd1c40058c2669d61c40f3f9">tempest-TestSecurityGroupsBasicOps-263070957-project-member</nova:user>
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <nova:project uuid="28277d49c8814c0691f5e1dce22bf215">tempest-TestSecurityGroupsBasicOps-263070957</nova:project>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <nova:port uuid="52a482d6-fbe9-4583-af85-5407a6796976">
Oct 02 08:41:51 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <system>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <entry name="serial">5748b14f-5bbb-46f3-b563-062d530e5abd</entry>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <entry name="uuid">5748b14f-5bbb-46f3-b563-062d530e5abd</entry>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     </system>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <os>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   </os>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <features>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   </features>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:41:52 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:41:51 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:41:51 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5748b14f-5bbb-46f3-b563-062d530e5abd_disk">
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       </source>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config">
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       </source>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:41:51 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:2c:a7:87"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <target dev="tap52a482d6-fb"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/console.log" append="off"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <video>
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     </video>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:41:51 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:41:52 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:41:52 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:41:52 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:41:52 compute-0 nova_compute[260603]: </domain>
Oct 02 08:41:52 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.996 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Preparing to wait for external event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.996 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.996 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.996 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.997 2 DEBUG nova.virt.libvirt.vif [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-263070957-acc',id=102,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBHUVZCqw3VWkKw3WswXpGtJf1pEavua2ZnmSgdDq3pc4sGnLceByNqUw0H30LytwdI+cAke8ed31ZxradSBqQ5vuq98lJn2fkhaaMuY6USKAqJ3+Ezi42AVRSerZijcMA==',key_name='tempest-TestSecurityGroupsBasicOps-1336991398',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28277d49c8814c0691f5e1dce22bf215',ramdisk_id='',reservation_id='r-740jcdvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-263070957',owner_user_name='tempest-TestSecurityGroupsBasicOps-263070957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:47Z,user_data=None,user_id='ffb3b2bafd1c40058c2669d61c40f3f9',uuid=5748b14f-5bbb-46f3-b563-062d530e5abd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.997 2 DEBUG nova.network.os_vif_util [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converting VIF {"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.998 2 DEBUG nova.network.os_vif_util [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.998 2 DEBUG os_vif [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:51.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.003 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a482d6-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.004 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52a482d6-fb, col_values=(('external_ids', {'iface-id': '52a482d6-fbe9-4583-af85-5407a6796976', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:a7:87', 'vm-uuid': '5748b14f-5bbb-46f3-b563-062d530e5abd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:52 compute-0 NetworkManager[45129]: <info>  [1759394512.0064] manager: (tap52a482d6-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.017 2 INFO os_vif [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb')
Oct 02 08:41:52 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.079 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.079 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.079 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] No VIF found with MAC fa:16:3e:2c:a7:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.080 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Using config drive
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.105 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.380 2 DEBUG nova.network.neutron [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.401 2 INFO nova.compute.manager [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Took 1.00 seconds to deallocate network for instance.
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.431 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-unplugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.431 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.432 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.432 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.433 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] No waiting events found dispatching network-vif-unplugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.433 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-unplugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.434 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.434 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.434 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.435 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.435 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] No waiting events found dispatching network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.435 2 WARNING nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received unexpected event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 for instance with vm_state active and task_state deleting.
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.475 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.476 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.569 2 DEBUG nova.compute.manager [req-2d6d04ef-74e9-4c21-9aaf-d1434d935f8b req-be932978-dcc3-4551-8666-558cd9129205 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-deleted-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.579 2 DEBUG oslo_concurrency.processutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:52 compute-0 ceph-mon[74477]: pgmap v1884: 305 pgs: 305 active+clean; 185 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 02 08:41:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/356067763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.967 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Creating config drive at /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config
Oct 02 08:41:52 compute-0 nova_compute[260603]: 2025-10-02 08:41:52.973 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvyg6zi53 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:41:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11347784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.072 2 DEBUG oslo_concurrency.processutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.081 2 DEBUG nova.compute.provider_tree [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.085 2 DEBUG nova.network.neutron [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updated VIF entry in instance network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.085 2 DEBUG nova.network.neutron [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.104 2 DEBUG nova.scheduler.client.report [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.107 2 DEBUG oslo_concurrency.lockutils [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.129 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.141 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvyg6zi53" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.162 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.165 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.206 2 INFO nova.scheduler.client.report [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Deleted allocations for instance 47497cd9-93be-482f-b4a8-4529910a9055
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.277 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.331 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.331 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Deleting local config drive /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config because it was imported into RBD.
Oct 02 08:41:53 compute-0 kernel: tap52a482d6-fb: entered promiscuous mode
Oct 02 08:41:53 compute-0 ovn_controller[152344]: 2025-10-02T08:41:53Z|01044|binding|INFO|Claiming lport 52a482d6-fbe9-4583-af85-5407a6796976 for this chassis.
Oct 02 08:41:53 compute-0 ovn_controller[152344]: 2025-10-02T08:41:53Z|01045|binding|INFO|52a482d6-fbe9-4583-af85-5407a6796976: Claiming fa:16:3e:2c:a7:87 10.100.0.3
Oct 02 08:41:53 compute-0 NetworkManager[45129]: <info>  [1759394513.3787] manager: (tap52a482d6-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Oct 02 08:41:53 compute-0 systemd-udevd[361733]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.387 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:a7:87 10.100.0.3'], port_security=['fa:16:3e:2c:a7:87 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5748b14f-5bbb-46f3-b563-062d530e5abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-075eee4e-656c-44de-9ee6-7589c7382251', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28277d49c8814c0691f5e1dce22bf215', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13b54cc2-9090-4249-b147-09fa8e935774 94805a92-fc8b-4089-aba3-84e7867eb02d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e7b4dd6-37a8-4a1b-88c6-3100272d2b7a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=52a482d6-fbe9-4583-af85-5407a6796976) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.389 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 52a482d6-fbe9-4583-af85-5407a6796976 in datapath 075eee4e-656c-44de-9ee6-7589c7382251 bound to our chassis
Oct 02 08:41:53 compute-0 NetworkManager[45129]: <info>  [1759394513.3898] device (tap52a482d6-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:41:53 compute-0 NetworkManager[45129]: <info>  [1759394513.3909] device (tap52a482d6-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.391 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 075eee4e-656c-44de-9ee6-7589c7382251
Oct 02 08:41:53 compute-0 ovn_controller[152344]: 2025-10-02T08:41:53Z|01046|binding|INFO|Setting lport 52a482d6-fbe9-4583-af85-5407a6796976 ovn-installed in OVS
Oct 02 08:41:53 compute-0 ovn_controller[152344]: 2025-10-02T08:41:53Z|01047|binding|INFO|Setting lport 52a482d6-fbe9-4583-af85-5407a6796976 up in Southbound
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.411 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a36d1949-f9d6-4309-95ff-7d2cb75bd23b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.412 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap075eee4e-61 in ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.415 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap075eee4e-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.415 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3edecf61-8e01-4af9-be56-3ace33ed7d1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8673aea8-31e1-43ea-84e2-84b030805c9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 systemd-machined[214636]: New machine qemu-128-instance-00000066.
Oct 02 08:41:53 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000066.
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.435 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[be925358-868e-4515-877c-8cad76e1f3d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.464 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9172b3-7301-457b-a604-ae7799bbddc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.497 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[36a88f5d-4a1f-4ad5-84c6-dd626893d17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.501 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[965a81dc-8a93-4cab-887a-b6625fc02a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 NetworkManager[45129]: <info>  [1759394513.5028] manager: (tap075eee4e-60): new Veth device (/org/freedesktop/NetworkManager/Devices/412)
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.534 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[41a1c5f2-dc7b-4b81-b388-8fd74abb44b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.537 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc865a2-afea-4e21-94d6-5609a6ad68b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 NetworkManager[45129]: <info>  [1759394513.5614] device (tap075eee4e-60): carrier: link connected
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.568 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a5a0b7-791b-4fb3-8782-26e1ce1be234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d72b586-4275-47a1-ba72-4554d396999c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap075eee4e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:66:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539216, 'reachable_time': 43381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361926, 'error': None, 'target': 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.602 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84436875-2038-407c-b80c-bec8b70d0169]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:6651'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539216, 'tstamp': 539216}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361927, 'error': None, 'target': 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.620 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[016c64af-843c-4eee-9ed7-9911d4e2453e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap075eee4e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:66:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539216, 'reachable_time': 43381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361928, 'error': None, 'target': 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.653 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb080e60-b11a-4eaa-b176-b3a0d0da9b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.714 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c193030-73d1-448a-821b-761cf22b42f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.716 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap075eee4e-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.716 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.717 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap075eee4e-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:53 compute-0 NetworkManager[45129]: <info>  [1759394513.7194] manager: (tap075eee4e-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Oct 02 08:41:53 compute-0 kernel: tap075eee4e-60: entered promiscuous mode
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.722 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap075eee4e-60, col_values=(('external_ids', {'iface-id': '52c223e8-68eb-4ed0-921b-f01154a7d913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:53 compute-0 ovn_controller[152344]: 2025-10-02T08:41:53Z|01048|binding|INFO|Releasing lport 52c223e8-68eb-4ed0-921b-f01154a7d913 from this chassis (sb_readonly=0)
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:53 compute-0 nova_compute[260603]: 2025-10-02 08:41:53.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.740 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/075eee4e-656c-44de-9ee6-7589c7382251.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/075eee4e-656c-44de-9ee6-7589c7382251.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.740 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf5c9f8-1b16-4f20-ad24-18b3b7f80e4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.741 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-075eee4e-656c-44de-9ee6-7589c7382251
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/075eee4e-656c-44de-9ee6-7589c7382251.pid.haproxy
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 075eee4e-656c-44de-9ee6-7589c7382251
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:41:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.742 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'env', 'PROCESS_TAG=haproxy-075eee4e-656c-44de-9ee6-7589c7382251', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/075eee4e-656c-44de-9ee6-7589c7382251.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:41:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/11347784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:41:54 compute-0 podman[361961]: 2025-10-02 08:41:54.083353746 +0000 UTC m=+0.051517346 container create c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:41:54 compute-0 systemd[1]: Started libpod-conmon-c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd.scope.
Oct 02 08:41:54 compute-0 podman[361961]: 2025-10-02 08:41:54.056725991 +0000 UTC m=+0.024889641 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:41:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f2cd3807a4bd9a2a9995ba1fc8cf28a130437de2342b17d3b71902d85482d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:41:54 compute-0 podman[361961]: 2025-10-02 08:41:54.179999148 +0000 UTC m=+0.148162758 container init c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:41:54 compute-0 podman[361961]: 2025-10-02 08:41:54.190158403 +0000 UTC m=+0.158321983 container start c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 08:41:54 compute-0 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [NOTICE]   (361980) : New worker (361982) forked
Oct 02 08:41:54 compute-0 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [NOTICE]   (361980) : Loading success.
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.709 2 DEBUG nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.710 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.711 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.711 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.712 2 DEBUG nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Processing event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.712 2 DEBUG nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.713 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.714 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.714 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.715 2 DEBUG nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] No waiting events found dispatching network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:41:54 compute-0 nova_compute[260603]: 2025-10-02 08:41:54.715 2 WARNING nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received unexpected event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 for instance with vm_state building and task_state spawning.
Oct 02 08:41:54 compute-0 ceph-mon[74477]: pgmap v1885: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Oct 02 08:41:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.321 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.322 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394515.32125, 5748b14f-5bbb-46f3-b563-062d530e5abd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.323 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] VM Started (Lifecycle Event)
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.325 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.330 2 INFO nova.virt.libvirt.driver [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance spawned successfully.
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.330 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.346 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.352 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.355 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.355 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.356 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.356 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.357 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.357 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.394 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.394 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394515.3215666, 5748b14f-5bbb-46f3-b563-062d530e5abd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.395 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] VM Paused (Lifecycle Event)
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.427 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.431 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394515.3245556, 5748b14f-5bbb-46f3-b563-062d530e5abd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.431 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] VM Resumed (Lifecycle Event)
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.434 2 INFO nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Took 8.23 seconds to spawn the instance on the hypervisor.
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.435 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.461 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.483 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.495 2 INFO nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Took 9.29 seconds to build instance.
Oct 02 08:41:55 compute-0 nova_compute[260603]: 2025-10-02 08:41:55.509 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:41:56 compute-0 ceph-mon[74477]: pgmap v1886: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Oct 02 08:41:57 compute-0 nova_compute[260603]: 2025-10-02 08:41:57.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:41:57.078 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:41:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 02 08:41:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:41:57 compute-0 ovn_controller[152344]: 2025-10-02T08:41:57Z|01049|binding|INFO|Releasing lport 52c223e8-68eb-4ed0-921b-f01154a7d913 from this chassis (sb_readonly=0)
Oct 02 08:41:57 compute-0 ovn_controller[152344]: 2025-10-02T08:41:57Z|01050|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct 02 08:41:57 compute-0 nova_compute[260603]: 2025-10-02 08:41:57.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:41:58 compute-0 ceph-mon[74477]: pgmap v1887: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 02 08:41:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Oct 02 08:41:59 compute-0 nova_compute[260603]: 2025-10-02 08:41:59.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:41:59 compute-0 nova_compute[260603]: 2025-10-02 08:41:59.950 2 DEBUG nova.compute.manager [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-changed-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:41:59 compute-0 nova_compute[260603]: 2025-10-02 08:41:59.951 2 DEBUG nova.compute.manager [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing instance network info cache due to event network-changed-52a482d6-fbe9-4583-af85-5407a6796976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:41:59 compute-0 nova_compute[260603]: 2025-10-02 08:41:59.951 2 DEBUG oslo_concurrency.lockutils [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:41:59 compute-0 nova_compute[260603]: 2025-10-02 08:41:59.952 2 DEBUG oslo_concurrency.lockutils [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:41:59 compute-0 nova_compute[260603]: 2025-10-02 08:41:59.952 2 DEBUG nova.network.neutron [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:42:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 02 08:42:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 02 08:42:00 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 02 08:42:00 compute-0 ceph-mon[74477]: pgmap v1888: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Oct 02 08:42:01 compute-0 podman[362035]: 2025-10-02 08:42:01.060020741 +0000 UTC m=+0.092945370 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:42:01 compute-0 podman[362034]: 2025-10-02 08:42:01.116963034 +0000 UTC m=+0.164283718 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 08:42:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 200 op/s
Oct 02 08:42:01 compute-0 nova_compute[260603]: 2025-10-02 08:42:01.927 2 DEBUG nova.network.neutron [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updated VIF entry in instance network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:42:01 compute-0 nova_compute[260603]: 2025-10-02 08:42:01.928 2 DEBUG nova.network.neutron [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:01 compute-0 nova_compute[260603]: 2025-10-02 08:42:01.949 2 DEBUG oslo_concurrency.lockutils [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:42:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 02 08:42:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 02 08:42:01 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 02 08:42:01 compute-0 ceph-mon[74477]: osdmap e263: 3 total, 3 up, 3 in
Oct 02 08:42:02 compute-0 nova_compute[260603]: 2025-10-02 08:42:02.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:02 compute-0 sudo[362076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:02 compute-0 sudo[362076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:02 compute-0 sudo[362076]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:02 compute-0 sudo[362101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:42:02 compute-0 sudo[362101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:02 compute-0 sudo[362101]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:02 compute-0 sudo[362126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:02 compute-0 sudo[362126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:02 compute-0 sudo[362126]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:02 compute-0 sudo[362151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:42:02 compute-0 sudo[362151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:02 compute-0 ceph-mon[74477]: pgmap v1890: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 200 op/s
Oct 02 08:42:02 compute-0 ceph-mon[74477]: osdmap e264: 3 total, 3 up, 3 in
Oct 02 08:42:03 compute-0 sudo[362151]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:42:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:42:03 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:42:03 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:42:03 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c373f736-a2aa-45d5-a211-3c162e63eec9 does not exist
Oct 02 08:42:03 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e1f499f3-2625-43c1-95e0-1370887c5f35 does not exist
Oct 02 08:42:03 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f0214a56-9d60-422f-b506-471684895b2b does not exist
Oct 02 08:42:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:42:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:42:03 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:42:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 161 op/s
Oct 02 08:42:03 compute-0 sudo[362210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:03 compute-0 sudo[362210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:03 compute-0 sudo[362210]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:03 compute-0 sudo[362235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:42:03 compute-0 sudo[362235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:03 compute-0 sudo[362235]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:03 compute-0 sudo[362260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:03 compute-0 sudo[362260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:03 compute-0 sudo[362260]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:03 compute-0 sudo[362285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:42:03 compute-0 sudo[362285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:03 compute-0 podman[362353]: 2025-10-02 08:42:03.940194146 +0000 UTC m=+0.037083200 container create 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct 02 08:42:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:42:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:42:03 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:42:03 compute-0 systemd[1]: Started libpod-conmon-4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70.scope.
Oct 02 08:42:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:42:04 compute-0 podman[362353]: 2025-10-02 08:42:03.923444887 +0000 UTC m=+0.020333971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:42:04 compute-0 podman[362353]: 2025-10-02 08:42:04.0297791 +0000 UTC m=+0.126668164 container init 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:04 compute-0 podman[362353]: 2025-10-02 08:42:04.035843168 +0000 UTC m=+0.132732212 container start 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 08:42:04 compute-0 podman[362353]: 2025-10-02 08:42:04.038635814 +0000 UTC m=+0.135524868 container attach 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 08:42:04 compute-0 romantic_rhodes[362370]: 167 167
Oct 02 08:42:04 compute-0 systemd[1]: libpod-4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70.scope: Deactivated successfully.
Oct 02 08:42:04 compute-0 podman[362353]: 2025-10-02 08:42:04.042868655 +0000 UTC m=+0.139757709 container died 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:42:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-86bb1e76d9f1580399b0a3856f14a43015d3838399bd448d70993069a77ef382-merged.mount: Deactivated successfully.
Oct 02 08:42:04 compute-0 podman[362353]: 2025-10-02 08:42:04.074607528 +0000 UTC m=+0.171496582 container remove 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:42:04 compute-0 systemd[1]: libpod-conmon-4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70.scope: Deactivated successfully.
Oct 02 08:42:04 compute-0 podman[362395]: 2025-10-02 08:42:04.282063612 +0000 UTC m=+0.037165872 container create d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 08:42:04 compute-0 nova_compute[260603]: 2025-10-02 08:42:04.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:04 compute-0 systemd[1]: Started libpod-conmon-d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e.scope.
Oct 02 08:42:04 compute-0 podman[362395]: 2025-10-02 08:42:04.26647515 +0000 UTC m=+0.021577430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:42:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:04 compute-0 podman[362395]: 2025-10-02 08:42:04.438315461 +0000 UTC m=+0.193417831 container init d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 02 08:42:04 compute-0 podman[362395]: 2025-10-02 08:42:04.45055269 +0000 UTC m=+0.205654980 container start d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 08:42:04 compute-0 podman[362395]: 2025-10-02 08:42:04.45444358 +0000 UTC m=+0.209545880 container attach d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:42:04 compute-0 ceph-mon[74477]: pgmap v1892: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 161 op/s
Oct 02 08:42:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 24 KiB/s wr, 147 op/s
Oct 02 08:42:05 compute-0 frosty_raman[362413]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:42:05 compute-0 frosty_raman[362413]: --> relative data size: 1.0
Oct 02 08:42:05 compute-0 frosty_raman[362413]: --> All data devices are unavailable
Oct 02 08:42:05 compute-0 nova_compute[260603]: 2025-10-02 08:42:05.522 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394510.520441, 47497cd9-93be-482f-b4a8-4529910a9055 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:42:05 compute-0 nova_compute[260603]: 2025-10-02 08:42:05.524 2 INFO nova.compute.manager [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] VM Stopped (Lifecycle Event)
Oct 02 08:42:05 compute-0 systemd[1]: libpod-d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e.scope: Deactivated successfully.
Oct 02 08:42:05 compute-0 podman[362395]: 2025-10-02 08:42:05.540851561 +0000 UTC m=+1.295953821 container died d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:05 compute-0 systemd[1]: libpod-d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e.scope: Consumed 1.016s CPU time.
Oct 02 08:42:05 compute-0 nova_compute[260603]: 2025-10-02 08:42:05.547 2 DEBUG nova.compute.manager [None req-25780ba9-6419-4e75-87a5-6f0138ded05e - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:42:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7-merged.mount: Deactivated successfully.
Oct 02 08:42:05 compute-0 podman[362395]: 2025-10-02 08:42:05.735311922 +0000 UTC m=+1.490414182 container remove d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:05 compute-0 systemd[1]: libpod-conmon-d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e.scope: Deactivated successfully.
Oct 02 08:42:05 compute-0 sudo[362285]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:05 compute-0 sudo[362455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:05 compute-0 sudo[362455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:05 compute-0 sudo[362455]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:05 compute-0 sudo[362480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:42:05 compute-0 sudo[362480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:05 compute-0 sudo[362480]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:06 compute-0 sudo[362505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:06 compute-0 sudo[362505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:06 compute-0 sudo[362505]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:06 compute-0 sudo[362531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:42:06 compute-0 sudo[362531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:06 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 02 08:42:06 compute-0 nova_compute[260603]: 2025-10-02 08:42:06.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:06 compute-0 podman[362595]: 2025-10-02 08:42:06.482015624 +0000 UTC m=+0.050024340 container create c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:42:06 compute-0 systemd[1]: Started libpod-conmon-c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5.scope.
Oct 02 08:42:06 compute-0 podman[362595]: 2025-10-02 08:42:06.461667835 +0000 UTC m=+0.029676611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:42:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:42:06 compute-0 podman[362595]: 2025-10-02 08:42:06.576131258 +0000 UTC m=+0.144140014 container init c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:42:06 compute-0 podman[362595]: 2025-10-02 08:42:06.588052218 +0000 UTC m=+0.156060984 container start c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:42:06 compute-0 podman[362595]: 2025-10-02 08:42:06.592173065 +0000 UTC m=+0.160181841 container attach c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 08:42:06 compute-0 confident_swartz[362612]: 167 167
Oct 02 08:42:06 compute-0 systemd[1]: libpod-c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5.scope: Deactivated successfully.
Oct 02 08:42:06 compute-0 podman[362595]: 2025-10-02 08:42:06.59748268 +0000 UTC m=+0.165491416 container died c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:42:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a501cd40a1e1122f1bfcf0af03fcc18db94c039ed7af769717b51a31615d828-merged.mount: Deactivated successfully.
Oct 02 08:42:06 compute-0 podman[362595]: 2025-10-02 08:42:06.689305043 +0000 UTC m=+0.257313809 container remove c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 08:42:06 compute-0 systemd[1]: libpod-conmon-c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5.scope: Deactivated successfully.
Oct 02 08:42:06 compute-0 podman[362637]: 2025-10-02 08:42:06.921673988 +0000 UTC m=+0.053331002 container create 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 02 08:42:06 compute-0 systemd[1]: Started libpod-conmon-543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2.scope.
Oct 02 08:42:06 compute-0 podman[362637]: 2025-10-02 08:42:06.899915155 +0000 UTC m=+0.031572169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:42:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:42:07 compute-0 nova_compute[260603]: 2025-10-02 08:42:07.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:07 compute-0 podman[362637]: 2025-10-02 08:42:07.059358592 +0000 UTC m=+0.191015666 container init 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:42:07 compute-0 podman[362637]: 2025-10-02 08:42:07.073305454 +0000 UTC m=+0.204962468 container start 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:07 compute-0 ceph-mon[74477]: pgmap v1893: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 24 KiB/s wr, 147 op/s
Oct 02 08:42:07 compute-0 podman[362637]: 2025-10-02 08:42:07.077120152 +0000 UTC m=+0.208777166 container attach 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:07 compute-0 ovn_controller[152344]: 2025-10-02T08:42:07Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:a7:87 10.100.0.3
Oct 02 08:42:07 compute-0 ovn_controller[152344]: 2025-10-02T08:42:07Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:a7:87 10.100.0.3
Oct 02 08:42:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 173 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 592 KiB/s wr, 71 op/s
Oct 02 08:42:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 02 08:42:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 02 08:42:07 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 02 08:42:07 compute-0 inspiring_newton[362654]: {
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:     "0": [
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:         {
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "devices": [
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "/dev/loop3"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             ],
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_name": "ceph_lv0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_size": "21470642176",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "name": "ceph_lv0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "tags": {
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cluster_name": "ceph",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.crush_device_class": "",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.encrypted": "0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osd_id": "0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.type": "block",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.vdo": "0"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             },
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "type": "block",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "vg_name": "ceph_vg0"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:         }
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:     ],
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:     "1": [
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:         {
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "devices": [
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "/dev/loop4"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             ],
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_name": "ceph_lv1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_size": "21470642176",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "name": "ceph_lv1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "tags": {
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cluster_name": "ceph",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.crush_device_class": "",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.encrypted": "0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osd_id": "1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.type": "block",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.vdo": "0"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             },
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "type": "block",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "vg_name": "ceph_vg1"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:         }
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:     ],
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:     "2": [
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:         {
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "devices": [
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "/dev/loop5"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             ],
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_name": "ceph_lv2",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_size": "21470642176",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "name": "ceph_lv2",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "tags": {
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.cluster_name": "ceph",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.crush_device_class": "",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.encrypted": "0",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osd_id": "2",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.type": "block",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:                 "ceph.vdo": "0"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             },
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "type": "block",
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:             "vg_name": "ceph_vg2"
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:         }
Oct 02 08:42:07 compute-0 inspiring_newton[362654]:     ]
Oct 02 08:42:07 compute-0 inspiring_newton[362654]: }
Oct 02 08:42:07 compute-0 systemd[1]: libpod-543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2.scope: Deactivated successfully.
Oct 02 08:42:07 compute-0 podman[362663]: 2025-10-02 08:42:07.925835412 +0000 UTC m=+0.022929990 container died 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08-merged.mount: Deactivated successfully.
Oct 02 08:42:07 compute-0 podman[362663]: 2025-10-02 08:42:07.982163197 +0000 UTC m=+0.079257725 container remove 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 08:42:07 compute-0 systemd[1]: libpod-conmon-543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2.scope: Deactivated successfully.
Oct 02 08:42:08 compute-0 sudo[362531]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:08 compute-0 sudo[362678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:08 compute-0 sudo[362678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:08 compute-0 sudo[362678]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:08 compute-0 sudo[362703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:42:08 compute-0 sudo[362703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:08 compute-0 sudo[362703]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:08 compute-0 sudo[362728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:08 compute-0 sudo[362728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:08 compute-0 sudo[362728]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:08 compute-0 sudo[362753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:42:08 compute-0 sudo[362753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:08 compute-0 ceph-mon[74477]: pgmap v1894: 305 pgs: 305 active+clean; 173 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 592 KiB/s wr, 71 op/s
Oct 02 08:42:08 compute-0 ceph-mon[74477]: osdmap e265: 3 total, 3 up, 3 in
Oct 02 08:42:08 compute-0 podman[362816]: 2025-10-02 08:42:08.664027511 +0000 UTC m=+0.045284684 container create 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:42:08 compute-0 systemd[1]: Started libpod-conmon-0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92.scope.
Oct 02 08:42:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:42:08 compute-0 podman[362816]: 2025-10-02 08:42:08.644973971 +0000 UTC m=+0.026231124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:42:08 compute-0 podman[362816]: 2025-10-02 08:42:08.756899687 +0000 UTC m=+0.138156870 container init 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:42:08 compute-0 podman[362816]: 2025-10-02 08:42:08.762434348 +0000 UTC m=+0.143691481 container start 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:42:08 compute-0 podman[362816]: 2025-10-02 08:42:08.765802312 +0000 UTC m=+0.147059535 container attach 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:08 compute-0 relaxed_jennings[362832]: 167 167
Oct 02 08:42:08 compute-0 systemd[1]: libpod-0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92.scope: Deactivated successfully.
Oct 02 08:42:08 compute-0 podman[362816]: 2025-10-02 08:42:08.770722325 +0000 UTC m=+0.151979488 container died 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 08:42:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-72d148e6df89e9d06706a33903b7c95f5cf1e7c6f4c8a8f0632e27a4c6c6c4ab-merged.mount: Deactivated successfully.
Oct 02 08:42:08 compute-0 podman[362816]: 2025-10-02 08:42:08.82258081 +0000 UTC m=+0.203837953 container remove 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 08:42:08 compute-0 systemd[1]: libpod-conmon-0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92.scope: Deactivated successfully.
Oct 02 08:42:09 compute-0 podman[362858]: 2025-10-02 08:42:09.016176655 +0000 UTC m=+0.064223539 container create a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 08:42:09 compute-0 systemd[1]: Started libpod-conmon-a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f.scope.
Oct 02 08:42:09 compute-0 podman[362858]: 2025-10-02 08:42:08.982567024 +0000 UTC m=+0.030613958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:42:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:09 compute-0 podman[362858]: 2025-10-02 08:42:09.127933866 +0000 UTC m=+0.175980800 container init a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:09 compute-0 podman[362858]: 2025-10-02 08:42:09.136855302 +0000 UTC m=+0.184902186 container start a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:42:09 compute-0 podman[362858]: 2025-10-02 08:42:09.141272389 +0000 UTC m=+0.189319333 container attach a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 08:42:09 compute-0 podman[362872]: 2025-10-02 08:42:09.164481378 +0000 UTC m=+0.099472202 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:42:09 compute-0 podman[362875]: 2025-10-02 08:42:09.175609302 +0000 UTC m=+0.095592191 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:42:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 192 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 3.1 MiB/s wr, 114 op/s
Oct 02 08:42:09 compute-0 nova_compute[260603]: 2025-10-02 08:42:09.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:10 compute-0 nova_compute[260603]: 2025-10-02 08:42:10.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:10 compute-0 festive_albattani[362881]: {
Oct 02 08:42:10 compute-0 festive_albattani[362881]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "osd_id": 2,
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "type": "bluestore"
Oct 02 08:42:10 compute-0 festive_albattani[362881]:     },
Oct 02 08:42:10 compute-0 festive_albattani[362881]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "osd_id": 1,
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "type": "bluestore"
Oct 02 08:42:10 compute-0 festive_albattani[362881]:     },
Oct 02 08:42:10 compute-0 festive_albattani[362881]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "osd_id": 0,
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:42:10 compute-0 festive_albattani[362881]:         "type": "bluestore"
Oct 02 08:42:10 compute-0 festive_albattani[362881]:     }
Oct 02 08:42:10 compute-0 festive_albattani[362881]: }
Oct 02 08:42:10 compute-0 systemd[1]: libpod-a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f.scope: Deactivated successfully.
Oct 02 08:42:10 compute-0 podman[362858]: 2025-10-02 08:42:10.285635534 +0000 UTC m=+1.333682388 container died a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 08:42:10 compute-0 systemd[1]: libpod-a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f.scope: Consumed 1.145s CPU time.
Oct 02 08:42:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583-merged.mount: Deactivated successfully.
Oct 02 08:42:10 compute-0 ceph-mon[74477]: pgmap v1896: 305 pgs: 305 active+clean; 192 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 3.1 MiB/s wr, 114 op/s
Oct 02 08:42:10 compute-0 podman[362858]: 2025-10-02 08:42:10.355732405 +0000 UTC m=+1.403779279 container remove a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 08:42:10 compute-0 systemd[1]: libpod-conmon-a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f.scope: Deactivated successfully.
Oct 02 08:42:10 compute-0 sudo[362753]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:42:10 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9758bd2a-b082-4996-a014-86e00a20cede does not exist
Oct 02 08:42:10 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev fb65e4a9-0ab5-4a15-a50b-d5e85004f46c does not exist
Oct 02 08:42:10 compute-0 sudo[362961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:42:10 compute-0 sudo[362961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:10 compute-0 sudo[362961]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:10 compute-0 sudo[362986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:42:10 compute-0 sudo[362986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:42:10 compute-0 sudo[362986]: pam_unix(sudo:session): session closed for user root
Oct 02 08:42:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 192 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.6 MiB/s wr, 98 op/s
Oct 02 08:42:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:42:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:42:12 compute-0 nova_compute[260603]: 2025-10-02 08:42:12.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:12 compute-0 ceph-mon[74477]: pgmap v1897: 305 pgs: 305 active+clean; 192 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.6 MiB/s wr, 98 op/s
Oct 02 08:42:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.484 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.484 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.500 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:42:14 compute-0 ceph-mon[74477]: pgmap v1898: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.572 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.572 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.582 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.582 2 INFO nova.compute.claims [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:42:14 compute-0 nova_compute[260603]: 2025-10-02 08:42:14.745 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:42:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995590944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.233 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.244 2 DEBUG nova.compute.provider_tree [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.265 2 DEBUG nova.scheduler.client.report [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.296 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.297 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:42:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.358 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.359 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.387 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.409 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.495 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.497 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.498 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Creating image(s)
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.675 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.701 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.724 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.727 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3995590944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.771 2 DEBUG nova.policy [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7fa065d842a644b0891a59bea27e82dd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4f0bc0e400b74a9788079e4f67262fae', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.809 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.811 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.813 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:15 compute-0 nova_compute[260603]: 2025-10-02 08:42:15.813 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:16 compute-0 nova_compute[260603]: 2025-10-02 08:42:16.078 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:42:16 compute-0 nova_compute[260603]: 2025-10-02 08:42:16.082 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e5370318-fc99-4c4a-9149-54deca5d783e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:16 compute-0 nova_compute[260603]: 2025-10-02 08:42:16.659 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Successfully created port: e1f81d45-cb4b-4514-b606-3b044d09fc3f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:17 compute-0 ceph-mon[74477]: pgmap v1899: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.248 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Successfully updated port: e1f81d45-cb4b-4514-b606-3b044d09fc3f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.266 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.267 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquired lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.267 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:42:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.350 2 DEBUG nova.compute.manager [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-changed-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.351 2 DEBUG nova.compute.manager [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Refreshing instance network info cache due to event network-changed-e1f81d45-cb4b-4514-b606-3b044d09fc3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.351 2 DEBUG oslo_concurrency.lockutils [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:42:17 compute-0 nova_compute[260603]: 2025-10-02 08:42:17.409 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:42:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:18 compute-0 nova_compute[260603]: 2025-10-02 08:42:18.163 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Updating instance_info_cache with network_info: [{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:18 compute-0 nova_compute[260603]: 2025-10-02 08:42:18.182 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Releasing lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:42:18 compute-0 nova_compute[260603]: 2025-10-02 08:42:18.183 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance network_info: |[{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:42:18 compute-0 nova_compute[260603]: 2025-10-02 08:42:18.184 2 DEBUG oslo_concurrency.lockutils [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:42:18 compute-0 nova_compute[260603]: 2025-10-02 08:42:18.184 2 DEBUG nova.network.neutron [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Refreshing network info cache for port e1f81d45-cb4b-4514-b606-3b044d09fc3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:42:18 compute-0 ceph-mon[74477]: pgmap v1900: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct 02 08:42:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Oct 02 08:42:19 compute-0 nova_compute[260603]: 2025-10-02 08:42:19.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:19 compute-0 nova_compute[260603]: 2025-10-02 08:42:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:19 compute-0 nova_compute[260603]: 2025-10-02 08:42:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:42:19 compute-0 nova_compute[260603]: 2025-10-02 08:42:19.635 2 DEBUG nova.network.neutron [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Updated VIF entry in instance network info cache for port e1f81d45-cb4b-4514-b606-3b044d09fc3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:42:19 compute-0 nova_compute[260603]: 2025-10-02 08:42:19.636 2 DEBUG nova.network.neutron [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Updating instance_info_cache with network_info: [{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:19 compute-0 nova_compute[260603]: 2025-10-02 08:42:19.664 2 DEBUG oslo_concurrency.lockutils [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:42:21 compute-0 ceph-mon[74477]: pgmap v1901: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Oct 02 08:42:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 90 KiB/s wr, 18 op/s
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.570005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394541570091, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1828, "num_deletes": 254, "total_data_size": 2845007, "memory_usage": 2879784, "flush_reason": "Manual Compaction"}
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394541707706, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2771815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38313, "largest_seqno": 40140, "table_properties": {"data_size": 2763413, "index_size": 5088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17839, "raw_average_key_size": 20, "raw_value_size": 2746464, "raw_average_value_size": 3149, "num_data_blocks": 226, "num_entries": 872, "num_filter_entries": 872, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394368, "oldest_key_time": 1759394368, "file_creation_time": 1759394541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 137815 microseconds, and 11452 cpu microseconds.
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:42:21 compute-0 nova_compute[260603]: 2025-10-02 08:42:21.793 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e5370318-fc99-4c4a-9149-54deca5d783e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.707829) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2771815 bytes OK
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.707857) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.798941) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.798993) EVENT_LOG_v1 {"time_micros": 1759394541798982, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.799020) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2837117, prev total WAL file size 2837117, number of live WAL files 2.
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.800422) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2706KB)], [86(8267KB)]
Oct 02 08:42:21 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394541800513, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11237535, "oldest_snapshot_seqno": -1}
Oct 02 08:42:21 compute-0 nova_compute[260603]: 2025-10-02 08:42:21.909 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] resizing rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:42:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:42:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3861741082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:42:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:42:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3861741082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6443 keys, 9609636 bytes, temperature: kUnknown
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394542103654, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9609636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9565185, "index_size": 27253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 163774, "raw_average_key_size": 25, "raw_value_size": 9448436, "raw_average_value_size": 1466, "num_data_blocks": 1097, "num_entries": 6443, "num_filter_entries": 6443, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.327 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.328 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.329 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.329 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.330 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.332 2 INFO nova.compute.manager [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Terminating instance
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.335 2 DEBUG nova.compute.manager [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.397 2 DEBUG nova.compute.manager [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-changed-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.398 2 DEBUG nova.compute.manager [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing instance network info cache due to event network-changed-52a482d6-fbe9-4583-af85-5407a6796976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.399 2 DEBUG oslo_concurrency.lockutils [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.399 2 DEBUG oslo_concurrency.lockutils [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:42:22 compute-0 nova_compute[260603]: 2025-10-02 08:42:22.400 2 DEBUG nova.network.neutron [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.104181) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9609636 bytes
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.498812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 37.0 rd, 31.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.1 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 6967, records dropped: 524 output_compression: NoCompression
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.498839) EVENT_LOG_v1 {"time_micros": 1759394542498829, "job": 50, "event": "compaction_finished", "compaction_time_micros": 303442, "compaction_time_cpu_micros": 50314, "output_level": 6, "num_output_files": 1, "total_output_size": 9609636, "num_input_records": 6967, "num_output_records": 6443, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394542499343, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394542500653, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.800254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:42:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:42:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:23 compute-0 ceph-mon[74477]: pgmap v1902: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 90 KiB/s wr, 18 op/s
Oct 02 08:42:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3861741082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:42:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3861741082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:42:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Oct 02 08:42:24 compute-0 kernel: tap52a482d6-fb (unregistering): left promiscuous mode
Oct 02 08:42:24 compute-0 NetworkManager[45129]: <info>  [1759394544.2045] device (tap52a482d6-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:42:24 compute-0 ovn_controller[152344]: 2025-10-02T08:42:24Z|01051|binding|INFO|Releasing lport 52a482d6-fbe9-4583-af85-5407a6796976 from this chassis (sb_readonly=0)
Oct 02 08:42:24 compute-0 ovn_controller[152344]: 2025-10-02T08:42:24Z|01052|binding|INFO|Setting lport 52a482d6-fbe9-4583-af85-5407a6796976 down in Southbound
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:24 compute-0 ovn_controller[152344]: 2025-10-02T08:42:24Z|01053|binding|INFO|Removing iface tap52a482d6-fb ovn-installed in OVS
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.228 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:a7:87 10.100.0.3'], port_security=['fa:16:3e:2c:a7:87 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5748b14f-5bbb-46f3-b563-062d530e5abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-075eee4e-656c-44de-9ee6-7589c7382251', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28277d49c8814c0691f5e1dce22bf215', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13b54cc2-9090-4249-b147-09fa8e935774 94805a92-fc8b-4089-aba3-84e7867eb02d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e7b4dd6-37a8-4a1b-88c6-3100272d2b7a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=52a482d6-fbe9-4583-af85-5407a6796976) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:42:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.230 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 52a482d6-fbe9-4583-af85-5407a6796976 in datapath 075eee4e-656c-44de-9ee6-7589c7382251 unbound from our chassis
Oct 02 08:42:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.233 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 075eee4e-656c-44de-9ee6-7589c7382251, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:42:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.234 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df0aec4a-d0d4-4bb9-8c2d-035b1b1c7103]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.235 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 namespace which is not needed anymore
Oct 02 08:42:24 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 02 08:42:24 compute-0 ceph-mon[74477]: pgmap v1903: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Oct 02 08:42:24 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Consumed 14.648s CPU time.
Oct 02 08:42:24 compute-0 systemd-machined[214636]: Machine qemu-128-instance-00000066 terminated.
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.377 2 INFO nova.virt.libvirt.driver [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance destroyed successfully.
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.378 2 DEBUG nova.objects.instance [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lazy-loading 'resources' on Instance uuid 5748b14f-5bbb-46f3-b563-062d530e5abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.416 2 DEBUG nova.virt.libvirt.vif [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-263070957-acc',id=102,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBHUVZCqw3VWkKw3WswXpGtJf1pEavua2ZnmSgdDq3pc4sGnLceByNqUw0H30LytwdI+cAke8ed31ZxradSBqQ5vuq98lJn2fkhaaMuY6USKAqJ3+Ezi42AVRSerZijcMA==',key_name='tempest-TestSecurityGroupsBasicOps-1336991398',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:41:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28277d49c8814c0691f5e1dce22bf215',ramdisk_id='',reservation_id='r-740jcdvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-263070957',owner_user_name='tempest-TestSecurityGroupsBasicOps-263070957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:41:55Z,user_data=None,user_id='ffb3b2bafd1c40058c2669d61c40f3f9',uuid=5748b14f-5bbb-46f3-b563-062d530e5abd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.416 2 DEBUG nova.network.os_vif_util [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converting VIF {"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.417 2 DEBUG nova.network.os_vif_util [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.417 2 DEBUG os_vif [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a482d6-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.423 2 INFO os_vif [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb')
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.541 2 DEBUG nova.compute.manager [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-unplugged-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.541 2 DEBUG oslo_concurrency.lockutils [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.541 2 DEBUG oslo_concurrency.lockutils [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.542 2 DEBUG oslo_concurrency.lockutils [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.542 2 DEBUG nova.compute.manager [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] No waiting events found dispatching network-vif-unplugged-52a482d6-fbe9-4583-af85-5407a6796976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.542 2 DEBUG nova.compute.manager [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-unplugged-52a482d6-fbe9-4583-af85-5407a6796976 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.612 2 DEBUG nova.objects.instance [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lazy-loading 'migration_context' on Instance uuid e5370318-fc99-4c4a-9149-54deca5d783e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:42:24 compute-0 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [NOTICE]   (361980) : haproxy version is 2.8.14-c23fe91
Oct 02 08:42:24 compute-0 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [NOTICE]   (361980) : path to executable is /usr/sbin/haproxy
Oct 02 08:42:24 compute-0 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [WARNING]  (361980) : Exiting Master process...
Oct 02 08:42:24 compute-0 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [WARNING]  (361980) : Exiting Master process...
Oct 02 08:42:24 compute-0 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [ALERT]    (361980) : Current worker (361982) exited with code 143 (Terminated)
Oct 02 08:42:24 compute-0 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [WARNING]  (361980) : All workers exited. Exiting... (0)
Oct 02 08:42:24 compute-0 systemd[1]: libpod-c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd.scope: Deactivated successfully.
Oct 02 08:42:24 compute-0 podman[363204]: 2025-10-02 08:42:24.626304219 +0000 UTC m=+0.269791955 container died c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.628 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.629 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Ensure instance console log exists: /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.629 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.630 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.630 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.634 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start _get_guest_xml network_info=[{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.647 2 WARNING nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.658 2 DEBUG nova.virt.libvirt.host [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.659 2 DEBUG nova.virt.libvirt.host [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.663 2 DEBUG nova.virt.libvirt.host [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.664 2 DEBUG nova.virt.libvirt.host [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.664 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.665 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.665 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.666 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.666 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.667 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.667 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.667 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.668 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.668 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.669 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.669 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:42:24 compute-0 nova_compute[260603]: 2025-10-02 08:42:24.673 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:42:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440855643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.168 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.185 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.189 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.310 2 DEBUG nova.network.neutron [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updated VIF entry in instance network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.311 2 DEBUG nova.network.neutron [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.338 2 DEBUG oslo_concurrency.lockutils [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:42:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:42:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897564851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.630 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.633 2 DEBUG nova.virt.libvirt.vif [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:42:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1919713365',display_name='tempest-ServerMetadataNegativeTestJSON-server-1919713365',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1919713365',id=103,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f0bc0e400b74a9788079e4f67262fae',ramdisk_id='',reservation_id='r-xqmiwuso',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1644163991',owner_user_name
='tempest-ServerMetadataNegativeTestJSON-1644163991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:42:15Z,user_data=None,user_id='7fa065d842a644b0891a59bea27e82dd',uuid=e5370318-fc99-4c4a-9149-54deca5d783e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.633 2 DEBUG nova.network.os_vif_util [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converting VIF {"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.635 2 DEBUG nova.network.os_vif_util [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.637 2 DEBUG nova.objects.instance [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lazy-loading 'pci_devices' on Instance uuid e5370318-fc99-4c4a-9149-54deca5d783e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.657 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <uuid>e5370318-fc99-4c4a-9149-54deca5d783e</uuid>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <name>instance-00000067</name>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1919713365</nova:name>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:42:24</nova:creationTime>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <nova:user uuid="7fa065d842a644b0891a59bea27e82dd">tempest-ServerMetadataNegativeTestJSON-1644163991-project-member</nova:user>
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <nova:project uuid="4f0bc0e400b74a9788079e4f67262fae">tempest-ServerMetadataNegativeTestJSON-1644163991</nova:project>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <nova:port uuid="e1f81d45-cb4b-4514-b606-3b044d09fc3f">
Oct 02 08:42:25 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <system>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <entry name="serial">e5370318-fc99-4c4a-9149-54deca5d783e</entry>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <entry name="uuid">e5370318-fc99-4c4a-9149-54deca5d783e</entry>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </system>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <os>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   </os>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <features>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   </features>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/e5370318-fc99-4c4a-9149-54deca5d783e_disk">
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       </source>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/e5370318-fc99-4c4a-9149-54deca5d783e_disk.config">
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       </source>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:42:25 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:21:bd:92"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <target dev="tape1f81d45-cb"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/console.log" append="off"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <video>
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </video>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:42:25 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:42:25 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:42:25 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:42:25 compute-0 nova_compute[260603]: </domain>
Oct 02 08:42:25 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.658 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Preparing to wait for external event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.658 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.659 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.659 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.661 2 DEBUG nova.virt.libvirt.vif [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:42:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1919713365',display_name='tempest-ServerMetadataNegativeTestJSON-server-1919713365',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1919713365',id=103,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f0bc0e400b74a9788079e4f67262fae',ramdisk_id='',reservation_id='r-xqmiwuso',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1644163991',owner
_user_name='tempest-ServerMetadataNegativeTestJSON-1644163991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:42:15Z,user_data=None,user_id='7fa065d842a644b0891a59bea27e82dd',uuid=e5370318-fc99-4c4a-9149-54deca5d783e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.661 2 DEBUG nova.network.os_vif_util [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converting VIF {"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.662 2 DEBUG nova.network.os_vif_util [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.663 2 DEBUG os_vif [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1f81d45-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1f81d45-cb, col_values=(('external_ids', {'iface-id': 'e1f81d45-cb4b-4514-b606-3b044d09fc3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:bd:92', 'vm-uuid': 'e5370318-fc99-4c4a-9149-54deca5d783e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:25 compute-0 NetworkManager[45129]: <info>  [1759394545.6725] manager: (tape1f81d45-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:25 compute-0 nova_compute[260603]: 2025-10-02 08:42:25.681 2 INFO os_vif [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb')
Oct 02 08:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd-userdata-shm.mount: Deactivated successfully.
Oct 02 08:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-70f2cd3807a4bd9a2a9995ba1fc8cf28a130437de2342b17d3b71902d85482d4-merged.mount: Deactivated successfully.
Oct 02 08:42:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2440855643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:42:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/897564851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.262 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.263 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.263 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] No VIF found with MAC fa:16:3e:21:bd:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.264 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Using config drive
Oct 02 08:42:26 compute-0 podman[363204]: 2025-10-02 08:42:26.428015489 +0000 UTC m=+2.071503265 container cleanup c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:42:26 compute-0 systemd[1]: libpod-conmon-c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd.scope: Deactivated successfully.
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.978 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.994 2 DEBUG nova.compute.manager [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.995 2 DEBUG oslo_concurrency.lockutils [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.995 2 DEBUG oslo_concurrency.lockutils [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.996 2 DEBUG oslo_concurrency.lockutils [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.997 2 DEBUG nova.compute.manager [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] No waiting events found dispatching network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:42:26 compute-0 nova_compute[260603]: 2025-10-02 08:42:26.997 2 WARNING nova.compute.manager [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received unexpected event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 for instance with vm_state active and task_state deleting.
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:42:27 compute-0 nova_compute[260603]: 2025-10-02 08:42:27.361 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Creating config drive at /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config
Oct 02 08:42:27 compute-0 ceph-mon[74477]: pgmap v1904: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 08:42:27 compute-0 nova_compute[260603]: 2025-10-02 08:42:27.371 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwo3u019o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:27 compute-0 podman[363353]: 2025-10-02 08:42:27.439950915 +0000 UTC m=+0.975019033 container remove c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.449 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[103d6c8c-ce68-4ee6-bc2f-10202e2ff9e3]: (4, ('Thu Oct  2 08:42:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 (c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd)\nc625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd\nThu Oct  2 08:42:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 (c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd)\nc625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.451 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09789626-8a6d-485f-b1b7-53698699721f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.453 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap075eee4e-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:27 compute-0 kernel: tap075eee4e-60: left promiscuous mode
Oct 02 08:42:27 compute-0 nova_compute[260603]: 2025-10-02 08:42:27.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:27 compute-0 nova_compute[260603]: 2025-10-02 08:42:27.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.530 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e58397bd-f353-4da1-a2fe-f397eb7c12f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:27 compute-0 nova_compute[260603]: 2025-10-02 08:42:27.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:27 compute-0 nova_compute[260603]: 2025-10-02 08:42:27.535 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwo3u019o" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.559 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[04ff2f70-4e71-4ef5-83f7-2704d87a13ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.560 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b690ea3-d28b-4bb5-b7e5-d99a0991ee61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:27 compute-0 nova_compute[260603]: 2025-10-02 08:42:27.585 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd4aea2-1a0a-4605-9bf3-96abe1ae238a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539209, 'reachable_time': 17407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363396, 'error': None, 'target': 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d075eee4e\x2d656c\x2d44de\x2d9ee6\x2d7589c7382251.mount: Deactivated successfully.
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.599 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:42:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.599 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[861d7bfd-9821-4694-940a-91b38a972b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:27 compute-0 nova_compute[260603]: 2025-10-02 08:42:27.602 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config e5370318-fc99-4c4a-9149-54deca5d783e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:42:27
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'vms', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 02 08:42:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:42:28 compute-0 ceph-mon[74477]: pgmap v1905: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.547 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.548 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.750 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.750 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.750 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:42:28 compute-0 nova_compute[260603]: 2025-10-02 08:42:28.751 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.243 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config e5370318-fc99-4c4a-9149-54deca5d783e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.244 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Deleting local config drive /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config because it was imported into RBD.
Oct 02 08:42:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 kernel: tape1f81d45-cb: entered promiscuous mode
Oct 02 08:42:29 compute-0 systemd-udevd[363402]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 ovn_controller[152344]: 2025-10-02T08:42:29Z|01054|binding|INFO|Claiming lport e1f81d45-cb4b-4514-b606-3b044d09fc3f for this chassis.
Oct 02 08:42:29 compute-0 ovn_controller[152344]: 2025-10-02T08:42:29Z|01055|binding|INFO|e1f81d45-cb4b-4514-b606-3b044d09fc3f: Claiming fa:16:3e:21:bd:92 10.100.0.11
Oct 02 08:42:29 compute-0 NetworkManager[45129]: <info>  [1759394549.3304] manager: (tape1f81d45-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.340 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:bd:92 10.100.0.11'], port_security=['fa:16:3e:21:bd:92 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e5370318-fc99-4c4a-9149-54deca5d783e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cc74904-ee74-4715-87cc-18060dd682a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f0bc0e400b74a9788079e4f67262fae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03ef8875-7662-4d1c-ab06-cdfdd9edf304', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf9aea74-4830-4e32-899a-3a36573d95df, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e1f81d45-cb4b-4514-b606-3b044d09fc3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.342 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e1f81d45-cb4b-4514-b606-3b044d09fc3f in datapath 0cc74904-ee74-4715-87cc-18060dd682a0 bound to our chassis
Oct 02 08:42:29 compute-0 NetworkManager[45129]: <info>  [1759394549.3491] device (tape1f81d45-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.348 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cc74904-ee74-4715-87cc-18060dd682a0
Oct 02 08:42:29 compute-0 NetworkManager[45129]: <info>  [1759394549.3508] device (tape1f81d45-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:42:29 compute-0 ovn_controller[152344]: 2025-10-02T08:42:29Z|01056|binding|INFO|Setting lport e1f81d45-cb4b-4514-b606-3b044d09fc3f ovn-installed in OVS
Oct 02 08:42:29 compute-0 ovn_controller[152344]: 2025-10-02T08:42:29Z|01057|binding|INFO|Setting lport e1f81d45-cb4b-4514-b606-3b044d09fc3f up in Southbound
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.371 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6f60c5-aadd-40eb-a77c-ad33ef26f0e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.373 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cc74904-e1 in ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.376 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cc74904-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.377 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37221368-8c74-494a-a322-c6f0acd4de6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.378 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6be56dba-c54c-49f0-bdb6-061b13100d5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 systemd-machined[214636]: New machine qemu-129-instance-00000067.
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.400 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[48146ac8-5444-433d-a4b3-426af262e823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000067.
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.445 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[28372283-9fa5-43c1-be55-9c5f6dad0ea4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.500 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e7047e6e-4844-4531-8f72-8fdfaae63255]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 NetworkManager[45129]: <info>  [1759394549.5109] manager: (tap0cc74904-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/416)
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6c90d0-68e0-4201-ab25-63e847aeafc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.567 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c245bde-d15a-44d1-8ba9-7a753077f2c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.573 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[92d9e63b-67c0-4a7a-8156-75a19408a84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 NetworkManager[45129]: <info>  [1759394549.6170] device (tap0cc74904-e0): carrier: link connected
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.628 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9acd323f-a637-466e-81b3-cc7fc15de01d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.656 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d6d26d-10d1-41f6-9b56-dfcc9e56c605]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cc74904-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:48:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542821, 'reachable_time': 18724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363468, 'error': None, 'target': 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.686 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e853d2a-3a5e-4dfc-abe5-aa4c3ad83067]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:483b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542821, 'tstamp': 542821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363469, 'error': None, 'target': 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.715 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c642241c-e806-410d-af91-f87a6e489a3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cc74904-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:48:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542821, 'reachable_time': 18724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363470, 'error': None, 'target': 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.768 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9739a18a-9261-4742-a804-ce6bfaabc987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.837 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d1b904-56cd-486f-bc71-7fb2d97cbe65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.839 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cc74904-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.840 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.840 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cc74904-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 NetworkManager[45129]: <info>  [1759394549.8893] manager: (tap0cc74904-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Oct 02 08:42:29 compute-0 kernel: tap0cc74904-e0: entered promiscuous mode
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.891 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cc74904-e0, col_values=(('external_ids', {'iface-id': 'e2050fa1-dc06-4988-9771-f46682dc29c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 ovn_controller[152344]: 2025-10-02T08:42:29Z|01058|binding|INFO|Releasing lport e2050fa1-dc06-4988-9771-f46682dc29c0 from this chassis (sb_readonly=0)
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 nova_compute[260603]: 2025-10-02 08:42:29.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.918 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cc74904-ee74-4715-87cc-18060dd682a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cc74904-ee74-4715-87cc-18060dd682a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.919 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5193d7-827b-4637-9d11-7942dbcf2bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.920 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-0cc74904-ee74-4715-87cc-18060dd682a0
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/0cc74904-ee74-4715-87cc-18060dd682a0.pid.haproxy
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 0cc74904-ee74-4715-87cc-18060dd682a0
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:42:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.924 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'env', 'PROCESS_TAG=haproxy-0cc74904-ee74-4715-87cc-18060dd682a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cc74904-ee74-4715-87cc-18060dd682a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:42:30 compute-0 podman[363540]: 2025-10-02 08:42:30.286282773 +0000 UTC m=+0.031668132 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.414 2 DEBUG nova.compute.manager [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.415 2 DEBUG oslo_concurrency.lockutils [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.415 2 DEBUG oslo_concurrency.lockutils [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.416 2 DEBUG oslo_concurrency.lockutils [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.416 2 DEBUG nova.compute.manager [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Processing event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:42:30 compute-0 ceph-mon[74477]: pgmap v1906: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 08:42:30 compute-0 podman[363540]: 2025-10-02 08:42:30.607736807 +0000 UTC m=+0.353122176 container create 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.726 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.727 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394550.7270787, e5370318-fc99-4c4a-9149-54deca5d783e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.728 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] VM Started (Lifecycle Event)
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.734 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.738 2 INFO nova.virt.libvirt.driver [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance spawned successfully.
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.738 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:42:30 compute-0 systemd[1]: Started libpod-conmon-257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4.scope.
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.760 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.768 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.768 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.769 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.770 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.770 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.771 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.776 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:42:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.789 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12d23758f8dd1aba6e6960bdd498e43c608dac9f058bfca986ccca7ddd0c5f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.819 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.819 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.820 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.820 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.820 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394550.728044, e5370318-fc99-4c4a-9149-54deca5d783e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.821 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] VM Paused (Lifecycle Event)
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.822 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.846 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.849 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394550.7310724, e5370318-fc99-4c4a-9149-54deca5d783e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.849 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] VM Resumed (Lifecycle Event)
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.854 2 INFO nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Took 15.36 seconds to spawn the instance on the hypervisor.
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.855 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:42:30 compute-0 podman[363540]: 2025-10-02 08:42:30.870813173 +0000 UTC m=+0.616198492 container init 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.880 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:42:30 compute-0 podman[363540]: 2025-10-02 08:42:30.880978618 +0000 UTC m=+0.626363937 container start 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.883 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.911 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:42:30 compute-0 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [NOTICE]   (363564) : New worker (363566) forked
Oct 02 08:42:30 compute-0 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [NOTICE]   (363564) : Loading success.
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.921 2 INFO nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Took 16.37 seconds to build instance.
Oct 02 08:42:30 compute-0 nova_compute[260603]: 2025-10-02 08:42:30.937 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 08:42:31 compute-0 nova_compute[260603]: 2025-10-02 08:42:31.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:31 compute-0 nova_compute[260603]: 2025-10-02 08:42:31.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:31 compute-0 nova_compute[260603]: 2025-10-02 08:42:31.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:31 compute-0 nova_compute[260603]: 2025-10-02 08:42:31.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:31 compute-0 nova_compute[260603]: 2025-10-02 08:42:31.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:31 compute-0 nova_compute[260603]: 2025-10-02 08:42:31.544 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:42:31 compute-0 nova_compute[260603]: 2025-10-02 08:42:31.544 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:31 compute-0 podman[363596]: 2025-10-02 08:42:31.998630406 +0000 UTC m=+0.057079229 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:42:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:42:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296919486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:32 compute-0 podman[363595]: 2025-10-02 08:42:32.124004089 +0000 UTC m=+0.183639638 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.125 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.230 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.231 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.236 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.237 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.241 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.242 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.429 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.430 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3490MB free_disk=59.87638473510742GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.431 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.431 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.510 2 DEBUG nova.compute.manager [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.510 2 DEBUG oslo_concurrency.lockutils [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.511 2 DEBUG oslo_concurrency.lockutils [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.511 2 DEBUG oslo_concurrency.lockutils [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.511 2 DEBUG nova.compute.manager [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] No waiting events found dispatching network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.511 2 WARNING nova.compute.manager [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received unexpected event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f for instance with vm_state active and task_state None.
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.540 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ddf9efd0-0ac6-4857-96ea-3f1d0e18590d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.540 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5748b14f-5bbb-46f3-b563-062d530e5abd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.540 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance e5370318-fc99-4c4a-9149-54deca5d783e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.541 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.541 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.611 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:32 compute-0 ceph-mon[74477]: pgmap v1907: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 08:42:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/296919486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.920 2 INFO nova.virt.libvirt.driver [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Deleting instance files /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd_del
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.922 2 INFO nova.virt.libvirt.driver [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Deletion of /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd_del complete
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.972 2 INFO nova.compute.manager [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Took 10.64 seconds to destroy the instance on the hypervisor.
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.973 2 DEBUG oslo.service.loopingcall [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.974 2 DEBUG nova.compute.manager [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:42:32 compute-0 nova_compute[260603]: 2025-10-02 08:42:32.974 2 DEBUG nova.network.neutron [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:42:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:42:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025465018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.137 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.148 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.164 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.188 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.189 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.581 2 DEBUG nova.network.neutron [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.601 2 INFO nova.compute.manager [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Took 0.63 seconds to deallocate network for instance.
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.645 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.646 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.688 2 DEBUG nova.compute.manager [req-4feea895-6b7a-4043-9b54-2f851938460f req-dfef1879-ab65-4c0e-861b-84bd05672a61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-deleted-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:33 compute-0 nova_compute[260603]: 2025-10-02 08:42:33.729 2 DEBUG oslo_concurrency.processutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4025465018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:42:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440097038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:34 compute-0 nova_compute[260603]: 2025-10-02 08:42:34.169 2 DEBUG oslo_concurrency.processutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:34 compute-0 nova_compute[260603]: 2025-10-02 08:42:34.176 2 DEBUG nova.compute.provider_tree [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:42:34 compute-0 nova_compute[260603]: 2025-10-02 08:42:34.200 2 DEBUG nova.scheduler.client.report [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:42:34 compute-0 nova_compute[260603]: 2025-10-02 08:42:34.222 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:34 compute-0 nova_compute[260603]: 2025-10-02 08:42:34.244 2 INFO nova.scheduler.client.report [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Deleted allocations for instance 5748b14f-5bbb-46f3-b563-062d530e5abd
Oct 02 08:42:34 compute-0 nova_compute[260603]: 2025-10-02 08:42:34.349 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:34 compute-0 nova_compute[260603]: 2025-10-02 08:42:34.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:34.826 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:34.827 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:35 compute-0 ceph-mon[74477]: pgmap v1908: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 08:42:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2440097038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 106 op/s
Oct 02 08:42:35 compute-0 nova_compute[260603]: 2025-10-02 08:42:35.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:35 compute-0 nova_compute[260603]: 2025-10-02 08:42:35.891 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:35 compute-0 nova_compute[260603]: 2025-10-02 08:42:35.892 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:35 compute-0 nova_compute[260603]: 2025-10-02 08:42:35.892 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:35 compute-0 nova_compute[260603]: 2025-10-02 08:42:35.893 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:35 compute-0 nova_compute[260603]: 2025-10-02 08:42:35.893 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:35 compute-0 nova_compute[260603]: 2025-10-02 08:42:35.895 2 INFO nova.compute.manager [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Terminating instance
Oct 02 08:42:35 compute-0 nova_compute[260603]: 2025-10-02 08:42:35.897 2 DEBUG nova.compute.manager [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:42:36 compute-0 kernel: tape1f81d45-cb (unregistering): left promiscuous mode
Oct 02 08:42:36 compute-0 NetworkManager[45129]: <info>  [1759394556.0911] device (tape1f81d45-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:36 compute-0 ovn_controller[152344]: 2025-10-02T08:42:36Z|01059|binding|INFO|Releasing lport e1f81d45-cb4b-4514-b606-3b044d09fc3f from this chassis (sb_readonly=0)
Oct 02 08:42:36 compute-0 ovn_controller[152344]: 2025-10-02T08:42:36Z|01060|binding|INFO|Setting lport e1f81d45-cb4b-4514-b606-3b044d09fc3f down in Southbound
Oct 02 08:42:36 compute-0 ovn_controller[152344]: 2025-10-02T08:42:36Z|01061|binding|INFO|Removing iface tape1f81d45-cb ovn-installed in OVS
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.107 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:bd:92 10.100.0.11'], port_security=['fa:16:3e:21:bd:92 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e5370318-fc99-4c4a-9149-54deca5d783e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cc74904-ee74-4715-87cc-18060dd682a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f0bc0e400b74a9788079e4f67262fae', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03ef8875-7662-4d1c-ab06-cdfdd9edf304', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf9aea74-4830-4e32-899a-3a36573d95df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e1f81d45-cb4b-4514-b606-3b044d09fc3f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:42:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.109 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e1f81d45-cb4b-4514-b606-3b044d09fc3f in datapath 0cc74904-ee74-4715-87cc-18060dd682a0 unbound from our chassis
Oct 02 08:42:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.110 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cc74904-ee74-4715-87cc-18060dd682a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:42:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.112 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be37bef7-fbb7-4d9b-8c91-f5affef24c00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.112 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 namespace which is not needed anymore
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:36 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 02 08:42:36 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Consumed 6.268s CPU time.
Oct 02 08:42:36 compute-0 systemd-machined[214636]: Machine qemu-129-instance-00000067 terminated.
Oct 02 08:42:36 compute-0 ceph-mon[74477]: pgmap v1909: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 106 op/s
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.348 2 INFO nova.virt.libvirt.driver [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance destroyed successfully.
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.349 2 DEBUG nova.objects.instance [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lazy-loading 'resources' on Instance uuid e5370318-fc99-4c4a-9149-54deca5d783e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.372 2 DEBUG nova.virt.libvirt.vif [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:42:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1919713365',display_name='tempest-ServerMetadataNegativeTestJSON-server-1919713365',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1919713365',id=103,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:42:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4f0bc0e400b74a9788079e4f67262fae',ramdisk_id='',reservation_id='r-xqmiwuso',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1644163991',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1644163991-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:42:30Z,user_data=None,user_id='7fa065d842a644b0891a59bea27e82dd',uuid=e5370318-fc99-4c4a-9149-54deca5d783e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.373 2 DEBUG nova.network.os_vif_util [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converting VIF {"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.374 2 DEBUG nova.network.os_vif_util [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.374 2 DEBUG os_vif [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1f81d45-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.386 2 INFO os_vif [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb')
Oct 02 08:42:36 compute-0 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [NOTICE]   (363564) : haproxy version is 2.8.14-c23fe91
Oct 02 08:42:36 compute-0 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [NOTICE]   (363564) : path to executable is /usr/sbin/haproxy
Oct 02 08:42:36 compute-0 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [WARNING]  (363564) : Exiting Master process...
Oct 02 08:42:36 compute-0 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [ALERT]    (363564) : Current worker (363566) exited with code 143 (Terminated)
Oct 02 08:42:36 compute-0 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [WARNING]  (363564) : All workers exited. Exiting... (0)
Oct 02 08:42:36 compute-0 systemd[1]: libpod-257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4.scope: Deactivated successfully.
Oct 02 08:42:36 compute-0 podman[363707]: 2025-10-02 08:42:36.412287937 +0000 UTC m=+0.181391158 container died 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.500 2 DEBUG nova.compute.manager [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-unplugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.501 2 DEBUG oslo_concurrency.lockutils [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.501 2 DEBUG oslo_concurrency.lockutils [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.501 2 DEBUG oslo_concurrency.lockutils [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.501 2 DEBUG nova.compute.manager [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] No waiting events found dispatching network-vif-unplugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:42:36 compute-0 nova_compute[260603]: 2025-10-02 08:42:36.502 2 DEBUG nova.compute.manager [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-unplugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4-userdata-shm.mount: Deactivated successfully.
Oct 02 08:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12d23758f8dd1aba6e6960bdd498e43c608dac9f058bfca986ccca7ddd0c5f3-merged.mount: Deactivated successfully.
Oct 02 08:42:37 compute-0 podman[363707]: 2025-10-02 08:42:37.193114665 +0000 UTC m=+0.962217906 container cleanup 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:42:37 compute-0 systemd[1]: libpod-conmon-257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4.scope: Deactivated successfully.
Oct 02 08:42:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 109 op/s
Oct 02 08:42:37 compute-0 podman[363768]: 2025-10-02 08:42:37.71705164 +0000 UTC m=+0.497235018 container remove 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.729 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d70b628-6641-4d99-b4de-a6afbc691020]: (4, ('Thu Oct  2 08:42:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 (257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4)\n257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4\nThu Oct  2 08:42:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 (257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4)\n257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.732 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91aaee63-ae1d-41be-b02e-e5dd6efe22eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.734 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cc74904-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:37 compute-0 nova_compute[260603]: 2025-10-02 08:42:37.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:37 compute-0 kernel: tap0cc74904-e0: left promiscuous mode
Oct 02 08:42:37 compute-0 nova_compute[260603]: 2025-10-02 08:42:37.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.773 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[553d59ab-4803-4503-bf8e-007a3d677777]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[61bdf374-3dc2-44e9-9f93-c9e7fb3132d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.803 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f782d375-115f-4e8c-8e21-a213f3e6f94b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.830 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[02ad5fa6-e9ce-44cb-99f9-340ca42512ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542809, 'reachable_time': 24326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363784, 'error': None, 'target': 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.834 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:42:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.835 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e82a8ed5-b6fe-485b-bffb-334109f804d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d0cc74904\x2dee74\x2d4715\x2d87cc\x2d18060dd682a0.mount: Deactivated successfully.
Oct 02 08:42:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:38 compute-0 nova_compute[260603]: 2025-10-02 08:42:38.190 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:38 compute-0 nova_compute[260603]: 2025-10-02 08:42:38.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:38 compute-0 nova_compute[260603]: 2025-10-02 08:42:38.587 2 DEBUG nova.compute.manager [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:38 compute-0 nova_compute[260603]: 2025-10-02 08:42:38.588 2 DEBUG oslo_concurrency.lockutils [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:38 compute-0 nova_compute[260603]: 2025-10-02 08:42:38.588 2 DEBUG oslo_concurrency.lockutils [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:38 compute-0 nova_compute[260603]: 2025-10-02 08:42:38.588 2 DEBUG oslo_concurrency.lockutils [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:38 compute-0 nova_compute[260603]: 2025-10-02 08:42:38.589 2 DEBUG nova.compute.manager [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] No waiting events found dispatching network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:42:38 compute-0 nova_compute[260603]: 2025-10-02 08:42:38.589 2 WARNING nova.compute.manager [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received unexpected event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f for instance with vm_state active and task_state deleting.
Oct 02 08:42:38 compute-0 ceph-mon[74477]: pgmap v1910: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 109 op/s
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011082588558963338 of space, bias 1.0, pg target 0.3324776567689002 quantized to 32 (current 32)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:42:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:42:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 107 op/s
Oct 02 08:42:39 compute-0 nova_compute[260603]: 2025-10-02 08:42:39.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:39 compute-0 nova_compute[260603]: 2025-10-02 08:42:39.376 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394544.37433, 5748b14f-5bbb-46f3-b563-062d530e5abd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:42:39 compute-0 nova_compute[260603]: 2025-10-02 08:42:39.376 2 INFO nova.compute.manager [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] VM Stopped (Lifecycle Event)
Oct 02 08:42:39 compute-0 nova_compute[260603]: 2025-10-02 08:42:39.411 2 DEBUG nova.compute.manager [None req-30cac69d-3163-4769-bf4d-53a6db5d04da - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:42:40 compute-0 podman[363786]: 2025-10-02 08:42:40.005190654 +0000 UTC m=+0.069192674 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:42:40 compute-0 podman[363787]: 2025-10-02 08:42:40.034042337 +0000 UTC m=+0.085656033 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 08:42:40 compute-0 nova_compute[260603]: 2025-10-02 08:42:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:40 compute-0 ovn_controller[152344]: 2025-10-02T08:42:40Z|01062|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct 02 08:42:40 compute-0 ceph-mon[74477]: pgmap v1911: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 107 op/s
Oct 02 08:42:40 compute-0 nova_compute[260603]: 2025-10-02 08:42:40.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:41 compute-0 nova_compute[260603]: 2025-10-02 08:42:41.147 2 INFO nova.virt.libvirt.driver [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Deleting instance files /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e_del
Oct 02 08:42:41 compute-0 nova_compute[260603]: 2025-10-02 08:42:41.149 2 INFO nova.virt.libvirt.driver [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Deletion of /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e_del complete
Oct 02 08:42:41 compute-0 nova_compute[260603]: 2025-10-02 08:42:41.253 2 INFO nova.compute.manager [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Took 5.36 seconds to destroy the instance on the hypervisor.
Oct 02 08:42:41 compute-0 nova_compute[260603]: 2025-10-02 08:42:41.254 2 DEBUG oslo.service.loopingcall [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:42:41 compute-0 nova_compute[260603]: 2025-10-02 08:42:41.254 2 DEBUG nova.compute.manager [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:42:41 compute-0 nova_compute[260603]: 2025-10-02 08:42:41.255 2 DEBUG nova.network.neutron [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:42:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Oct 02 08:42:41 compute-0 nova_compute[260603]: 2025-10-02 08:42:41.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:42 compute-0 nova_compute[260603]: 2025-10-02 08:42:42.820 2 DEBUG nova.network.neutron [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:42 compute-0 nova_compute[260603]: 2025-10-02 08:42:42.836 2 INFO nova.compute.manager [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Took 1.58 seconds to deallocate network for instance.
Oct 02 08:42:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:42 compute-0 ceph-mon[74477]: pgmap v1912: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Oct 02 08:42:42 compute-0 nova_compute[260603]: 2025-10-02 08:42:42.890 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:42 compute-0 nova_compute[260603]: 2025-10-02 08:42:42.890 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:42 compute-0 nova_compute[260603]: 2025-10-02 08:42:42.953 2 DEBUG nova.compute.manager [req-f5297559-e26a-44fc-b296-1186f89ef3ce req-1253ee54-fc42-4d25-aef5-dc115935e362 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-deleted-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:42 compute-0 nova_compute[260603]: 2025-10-02 08:42:42.970 2 DEBUG oslo_concurrency.processutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 120 op/s
Oct 02 08:42:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:42:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533730750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:43 compute-0 nova_compute[260603]: 2025-10-02 08:42:43.405 2 DEBUG oslo_concurrency.processutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:43 compute-0 nova_compute[260603]: 2025-10-02 08:42:43.413 2 DEBUG nova.compute.provider_tree [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:42:43 compute-0 nova_compute[260603]: 2025-10-02 08:42:43.440 2 DEBUG nova.scheduler.client.report [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:42:43 compute-0 nova_compute[260603]: 2025-10-02 08:42:43.472 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:43 compute-0 nova_compute[260603]: 2025-10-02 08:42:43.528 2 INFO nova.scheduler.client.report [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Deleted allocations for instance e5370318-fc99-4c4a-9149-54deca5d783e
Oct 02 08:42:43 compute-0 nova_compute[260603]: 2025-10-02 08:42:43.620 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/533730750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.145 2 DEBUG nova.compute.manager [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.146 2 DEBUG nova.compute.manager [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing instance network info cache due to event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.146 2 DEBUG oslo_concurrency.lockutils [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.146 2 DEBUG oslo_concurrency.lockutils [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.146 2 DEBUG nova.network.neutron [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.238 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.238 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.239 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.239 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.239 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.240 2 INFO nova.compute.manager [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Terminating instance
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.241 2 DEBUG nova.compute.manager [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:44 compute-0 kernel: tapf333ef16-60 (unregistering): left promiscuous mode
Oct 02 08:42:44 compute-0 NetworkManager[45129]: <info>  [1759394564.4420] device (tapf333ef16-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:44 compute-0 ovn_controller[152344]: 2025-10-02T08:42:44Z|01063|binding|INFO|Releasing lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 from this chassis (sb_readonly=0)
Oct 02 08:42:44 compute-0 ovn_controller[152344]: 2025-10-02T08:42:44Z|01064|binding|INFO|Setting lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 down in Southbound
Oct 02 08:42:44 compute-0 ovn_controller[152344]: 2025-10-02T08:42:44Z|01065|binding|INFO|Removing iface tapf333ef16-60 ovn-installed in OVS
Oct 02 08:42:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.463 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:fb:ed 10.100.0.5'], port_security=['fa:16:3e:2c:fb:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ddf9efd0-0ac6-4857-96ea-3f1d0e18590d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e1b840f-6fce-4ed8-898b-67d0829d6af4 8ef1bc1b-1bb2-413b-9f26-26e2f642b996', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2a6eb5-cf00-4e48-b8dd-b9d998ac802a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f333ef16-60e3-449a-a6e7-4e7435c4ee30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:42:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.464 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f333ef16-60e3-449a-a6e7-4e7435c4ee30 in datapath 64b91b42-84b6-4429-b137-6399bf4f6ccd unbound from our chassis
Oct 02 08:42:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.465 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64b91b42-84b6-4429-b137-6399bf4f6ccd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:42:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.466 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[66e9adeb-bd0e-4431-beb1-e5d801983874]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.467 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd namespace which is not needed anymore
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:44 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 02 08:42:44 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000063.scope: Consumed 17.087s CPU time.
Oct 02 08:42:44 compute-0 systemd-machined[214636]: Machine qemu-125-instance-00000063 terminated.
Oct 02 08:42:44 compute-0 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [NOTICE]   (360467) : haproxy version is 2.8.14-c23fe91
Oct 02 08:42:44 compute-0 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [NOTICE]   (360467) : path to executable is /usr/sbin/haproxy
Oct 02 08:42:44 compute-0 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [WARNING]  (360467) : Exiting Master process...
Oct 02 08:42:44 compute-0 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [WARNING]  (360467) : Exiting Master process...
Oct 02 08:42:44 compute-0 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [ALERT]    (360467) : Current worker (360470) exited with code 143 (Terminated)
Oct 02 08:42:44 compute-0 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [WARNING]  (360467) : All workers exited. Exiting... (0)
Oct 02 08:42:44 compute-0 systemd[1]: libpod-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a.scope: Deactivated successfully.
Oct 02 08:42:44 compute-0 conmon[360463]: conmon 439c91e33d9296b058b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a.scope/container/memory.events
Oct 02 08:42:44 compute-0 podman[363886]: 2025-10-02 08:42:44.694440488 +0000 UTC m=+0.027637077 container died 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.719 2 INFO nova.virt.libvirt.driver [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance destroyed successfully.
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.719 2 DEBUG nova.objects.instance [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a-userdata-shm.mount: Deactivated successfully.
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.735 2 DEBUG nova.virt.libvirt.vif [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=99,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEVRDNLCh7cGxmY8USPZ+Uw2QoectKaA792o7mYLTgKSdgVOdamESEYTMfoSKYLlGXQvF8R+2N9sWb5Rc56fBJj8i+BKngnG3c4da6Xe6b+B/+hyHhFsw+SA5clIFi8Ulg==',key_name='tempest-TestSecurityGroupsBasicOps-2045998300',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:41:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-r7zcjfhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:41:10Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=ddf9efd0-0ac6-4857-96ea-3f1d0e18590d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.736 2 DEBUG nova.network.os_vif_util [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.737 2 DEBUG nova.network.os_vif_util [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.737 2 DEBUG os_vif [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-528c61d1dc02de92e13fd9335f527a4374ee7687c385393aa257801e40808ba5-merged.mount: Deactivated successfully.
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf333ef16-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:42:44 compute-0 nova_compute[260603]: 2025-10-02 08:42:44.756 2 INFO os_vif [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60')
Oct 02 08:42:44 compute-0 podman[363886]: 2025-10-02 08:42:44.809284405 +0000 UTC m=+0.142480984 container cleanup 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:42:44 compute-0 systemd[1]: libpod-conmon-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a.scope: Deactivated successfully.
Oct 02 08:42:45 compute-0 ceph-mon[74477]: pgmap v1913: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 120 op/s
Oct 02 08:42:45 compute-0 podman[363929]: 2025-10-02 08:42:45.013118666 +0000 UTC m=+0.167741435 container remove 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.021 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb271987-4ff8-4274-917a-44df7a546084]: (4, ('Thu Oct  2 08:42:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd (439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a)\n439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a\nThu Oct  2 08:42:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd (439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a)\n439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.022 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51a0941f-1148-405e-b1f8-c81f3600bfcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.023 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b91b42-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:45 compute-0 nova_compute[260603]: 2025-10-02 08:42:45.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:45 compute-0 kernel: tap64b91b42-80: left promiscuous mode
Oct 02 08:42:45 compute-0 nova_compute[260603]: 2025-10-02 08:42:45.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.045 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25a283cc-0444-4e4c-a1c7-49128d509141]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.072 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebcdd4f-3024-4e5d-926e-f01e370c0aa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.073 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94b9af97-cfb0-4eef-872c-69826a76af81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.087 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94b17f70-2823-45f0-a581-780999371ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534744, 'reachable_time': 36039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363944, 'error': None, 'target': 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.089 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:42:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.089 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[419d09d0-9f3b-4708-bb30-77cddd72807d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:42:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d64b91b42\x2d84b6\x2d4429\x2db137\x2d6399bf4f6ccd.mount: Deactivated successfully.
Oct 02 08:42:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.094 2 INFO nova.virt.libvirt.driver [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Deleting instance files /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_del
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.095 2 INFO nova.virt.libvirt.driver [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Deletion of /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_del complete
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.156 2 INFO nova.compute.manager [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Took 1.91 seconds to destroy the instance on the hypervisor.
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.157 2 DEBUG oslo.service.loopingcall [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.157 2 DEBUG nova.compute.manager [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.157 2 DEBUG nova.network.neutron [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.423 2 DEBUG nova.network.neutron [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated VIF entry in instance network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.424 2 DEBUG nova.network.neutron [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.444 2 DEBUG oslo_concurrency.lockutils [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.803 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-unplugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.804 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.804 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.805 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.805 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] No waiting events found dispatching network-vif-unplugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.806 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-unplugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.806 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.807 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.807 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.808 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.808 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] No waiting events found dispatching network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:42:46 compute-0 nova_compute[260603]: 2025-10-02 08:42:46.808 2 WARNING nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received unexpected event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 for instance with vm_state active and task_state deleting.
Oct 02 08:42:47 compute-0 ceph-mon[74477]: pgmap v1914: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.282 2 DEBUG nova.network.neutron [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.304 2 INFO nova.compute.manager [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Took 1.15 seconds to deallocate network for instance.
Oct 02 08:42:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 93 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 40 op/s
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.330 2 DEBUG nova.compute.manager [req-1ca08182-b79d-4387-a31b-05d8b7fde661 req-885fbaea-244b-4f46-af68-eadaf07363cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-deleted-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.360 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.361 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.417 2 DEBUG oslo_concurrency.processutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:42:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:42:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668837629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.880 2 DEBUG oslo_concurrency.processutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.887 2 DEBUG nova.compute.provider_tree [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.910 2 DEBUG nova.scheduler.client.report [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.929 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:47 compute-0 nova_compute[260603]: 2025-10-02 08:42:47.972 2 INFO nova.scheduler.client.report [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance ddf9efd0-0ac6-4857-96ea-3f1d0e18590d
Oct 02 08:42:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1668837629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:42:48 compute-0 nova_compute[260603]: 2025-10-02 08:42:48.073 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:42:48 compute-0 nova_compute[260603]: 2025-10-02 08:42:48.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:48 compute-0 nova_compute[260603]: 2025-10-02 08:42:48.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:49 compute-0 ceph-mon[74477]: pgmap v1915: 305 pgs: 305 active+clean; 93 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 40 op/s
Oct 02 08:42:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Oct 02 08:42:49 compute-0 nova_compute[260603]: 2025-10-02 08:42:49.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:49 compute-0 nova_compute[260603]: 2025-10-02 08:42:49.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:51 compute-0 ceph-mon[74477]: pgmap v1916: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Oct 02 08:42:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:51.076 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:42:51 compute-0 nova_compute[260603]: 2025-10-02 08:42:51.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:51.077 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:42:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 49 op/s
Oct 02 08:42:51 compute-0 nova_compute[260603]: 2025-10-02 08:42:51.347 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394556.3452084, e5370318-fc99-4c4a-9149-54deca5d783e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:42:51 compute-0 nova_compute[260603]: 2025-10-02 08:42:51.347 2 INFO nova.compute.manager [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] VM Stopped (Lifecycle Event)
Oct 02 08:42:51 compute-0 nova_compute[260603]: 2025-10-02 08:42:51.373 2 DEBUG nova.compute.manager [None req-0a3a5597-c268-4bc5-940e-23193869558b - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:42:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:53 compute-0 ceph-mon[74477]: pgmap v1917: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 49 op/s
Oct 02 08:42:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 49 op/s
Oct 02 08:42:54 compute-0 nova_compute[260603]: 2025-10-02 08:42:54.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:54 compute-0 nova_compute[260603]: 2025-10-02 08:42:54.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:55 compute-0 ceph-mon[74477]: pgmap v1918: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 49 op/s
Oct 02 08:42:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:42:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:42:57.079 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:42:57 compute-0 ceph-mon[74477]: pgmap v1919: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:42:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:42:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:42:59 compute-0 ceph-mon[74477]: pgmap v1920: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:42:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 14 op/s
Oct 02 08:42:59 compute-0 nova_compute[260603]: 2025-10-02 08:42:59.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:42:59 compute-0 nova_compute[260603]: 2025-10-02 08:42:59.716 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394564.7156699, ddf9efd0-0ac6-4857-96ea-3f1d0e18590d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:42:59 compute-0 nova_compute[260603]: 2025-10-02 08:42:59.717 2 INFO nova.compute.manager [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] VM Stopped (Lifecycle Event)
Oct 02 08:42:59 compute-0 nova_compute[260603]: 2025-10-02 08:42:59.741 2 DEBUG nova.compute.manager [None req-ce6aaf8b-5182-4ef0-a195-48af610ab871 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:42:59 compute-0 nova_compute[260603]: 2025-10-02 08:42:59.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:01 compute-0 ceph-mon[74477]: pgmap v1921: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 14 op/s
Oct 02 08:43:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:43:02 compute-0 ceph-mon[74477]: pgmap v1922: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:43:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:02 compute-0 podman[363970]: 2025-10-02 08:43:02.98765263 +0000 UTC m=+0.052221948 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:43:03 compute-0 podman[363969]: 2025-10-02 08:43:03.012263628 +0000 UTC m=+0.083730252 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:43:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:43:04 compute-0 nova_compute[260603]: 2025-10-02 08:43:04.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:04 compute-0 ceph-mon[74477]: pgmap v1923: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:43:04 compute-0 nova_compute[260603]: 2025-10-02 08:43:04.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.140 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.140 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.169 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.282 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.282 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.291 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.292 2 INFO nova.compute.claims [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:43:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.410 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:43:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391638134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.864 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.872 2 DEBUG nova.compute.provider_tree [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.891 2 DEBUG nova.scheduler.client.report [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.932 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.933 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.989 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:43:05 compute-0 nova_compute[260603]: 2025-10-02 08:43:05.990 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.012 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.041 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.178 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.180 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.181 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating image(s)
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.206 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.230 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.253 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.256 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.343 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.344 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.345 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.346 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.375 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.379 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:06 compute-0 ceph-mon[74477]: pgmap v1924: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:43:06 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2391638134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:06 compute-0 nova_compute[260603]: 2025-10-02 08:43:06.517 2 DEBUG nova.policy [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7e27caab7dd34e4a9cac5f4f1880fad8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.027 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.091 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] resizing rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.203 2 DEBUG nova.objects.instance [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'migration_context' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.218 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.219 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Ensure instance console log exists: /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.220 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.220 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.221 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 58 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 755 KiB/s wr, 24 op/s
Oct 02 08:43:07 compute-0 nova_compute[260603]: 2025-10-02 08:43:07.608 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Successfully created port: d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:43:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.455 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Successfully updated port: d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.478 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.479 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.479 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:43:08 compute-0 ceph-mon[74477]: pgmap v1925: 305 pgs: 305 active+clean; 58 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 755 KiB/s wr, 24 op/s
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.538 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.538 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.554 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.585 2 DEBUG nova.compute.manager [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.586 2 DEBUG nova.compute.manager [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.586 2 DEBUG oslo_concurrency.lockutils [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.620 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.621 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.628 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.629 2 INFO nova.compute.claims [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.677 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:43:08 compute-0 nova_compute[260603]: 2025-10-02 08:43:08.769 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:43:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/343716132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.249 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.259 2 DEBUG nova.compute.provider_tree [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.284 2 DEBUG nova.scheduler.client.report [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.319 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.320 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:43:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.394 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.394 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.417 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.436 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.533 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:43:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/343716132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.535 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.536 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Creating image(s)
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.567 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.603 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.629 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.634 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.702 2 DEBUG nova.policy [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.729 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.740 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.741 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.741 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.742 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.773 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.777 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 bccc9587-6f96-4032-ae07-56ab00988869_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.823 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.824 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance network_info: |[{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.825 2 DEBUG oslo_concurrency.lockutils [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.826 2 DEBUG nova.network.neutron [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.832 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start _get_guest_xml network_info=[{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.841 2 WARNING nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.849 2 DEBUG nova.virt.libvirt.host [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.851 2 DEBUG nova.virt.libvirt.host [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.865 2 DEBUG nova.virt.libvirt.host [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.867 2 DEBUG nova.virt.libvirt.host [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.868 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.868 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.869 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.869 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.870 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.870 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.871 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.871 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.872 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.872 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.872 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.873 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:43:09 compute-0 nova_compute[260603]: 2025-10-02 08:43:09.879 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.136 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 bccc9587-6f96-4032-ae07-56ab00988869_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.198 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.293 2 DEBUG nova.objects.instance [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid bccc9587-6f96-4032-ae07-56ab00988869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.311 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.312 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Ensure instance console log exists: /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.312 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.313 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.313 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881609785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.455 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.485 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:10 compute-0 nova_compute[260603]: 2025-10-02 08:43:10.490 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:10 compute-0 ceph-mon[74477]: pgmap v1926: 305 pgs: 305 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:43:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2881609785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:10 compute-0 sudo[364433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:10 compute-0 sudo[364433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:10 compute-0 sudo[364433]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:10 compute-0 sudo[364488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:43:10 compute-0 sudo[364488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:10 compute-0 sudo[364488]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:10 compute-0 podman[364475]: 2025-10-02 08:43:10.856072798 +0000 UTC m=+0.091644157 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 08:43:10 compute-0 podman[364476]: 2025-10-02 08:43:10.875733892 +0000 UTC m=+0.110507976 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:43:10 compute-0 sudo[364539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:10 compute-0 sudo[364539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:10 compute-0 sudo[364539]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100491301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.004 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.008 2 DEBUG nova.virt.libvirt.vif [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-618462512',display_name='tempest-ServerRescueTestJSONUnderV235-server-618462512',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-618462512',id=104,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae7dae3968e448f1b3ace692d9d76cff',ramdisk_id='',reservation_id='r-g69g07mg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-299264470',owner_user_name='tempest-ServerRescueTestJSONUnderV235-299264470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:06Z,user_data=None,user_id='7e27caab7dd34e4a9cac5f4f1880fad8',uuid=1c24cd5c-a165-4fcf-b24d-245a60f7ea11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.009 2 DEBUG nova.network.os_vif_util [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converting VIF {"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:43:11 compute-0 sudo[364564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.010 2 DEBUG nova.network.os_vif_util [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:43:11 compute-0 sudo[364564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.014 2 DEBUG nova.objects.instance [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.034 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <uuid>1c24cd5c-a165-4fcf-b24d-245a60f7ea11</uuid>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <name>instance-00000068</name>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-618462512</nova:name>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:43:09</nova:creationTime>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <nova:user uuid="7e27caab7dd34e4a9cac5f4f1880fad8">tempest-ServerRescueTestJSONUnderV235-299264470-project-member</nova:user>
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <nova:project uuid="ae7dae3968e448f1b3ace692d9d76cff">tempest-ServerRescueTestJSONUnderV235-299264470</nova:project>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <nova:port uuid="d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59">
Oct 02 08:43:11 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <system>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <entry name="serial">1c24cd5c-a165-4fcf-b24d-245a60f7ea11</entry>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <entry name="uuid">1c24cd5c-a165-4fcf-b24d-245a60f7ea11</entry>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </system>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <os>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   </os>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <features>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   </features>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk">
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config">
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:11 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:d2:24:8a"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <target dev="tapd9b60bcb-8d"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/console.log" append="off"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <video>
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </video>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:43:11 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:43:11 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:43:11 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:43:11 compute-0 nova_compute[260603]: </domain>
Oct 02 08:43:11 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.035 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Preparing to wait for external event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.036 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.036 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.037 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.038 2 DEBUG nova.virt.libvirt.vif [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-618462512',display_name='tempest-ServerRescueTestJSONUnderV235-server-618462512',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-618462512',id=104,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae7dae3968e448f1b3ace692d9d76cff',ramdisk_id='',reservation_id='r-g69g07mg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-299264470',owner_user_name='tempest-ServerRescueTestJSONUnderV235-299264470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:06Z,user_data=None,user_id='7e27caab7dd34e4a9cac5f4f1880fad8',uuid=1c24cd5c-a165-4fcf-b24d-245a60f7ea11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.038 2 DEBUG nova.network.os_vif_util [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converting VIF {"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.039 2 DEBUG nova.network.os_vif_util [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.040 2 DEBUG os_vif [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9b60bcb-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9b60bcb-8d, col_values=(('external_ids', {'iface-id': 'd9b60bcb-8d0e-4638-8ce0-3bb5568a6b59', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:24:8a', 'vm-uuid': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:11 compute-0 NetworkManager[45129]: <info>  [1759394591.0508] manager: (tapd9b60bcb-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.057 2 INFO os_vif [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d')
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.113 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.114 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.114 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No VIF found with MAC fa:16:3e:d2:24:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.115 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Using config drive
Oct 02 08:43:11 compute-0 nova_compute[260603]: 2025-10-02 08:43:11.145 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:43:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4100491301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:11 compute-0 sudo[364564]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:43:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:43:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:43:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:43:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:43:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:43:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1428ad62-3f6e-4d01-a7a8-15b1fbbe87df does not exist
Oct 02 08:43:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d184b9a3-bdd6-4e80-abbf-7173b996d2ea does not exist
Oct 02 08:43:11 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b0df20be-356d-406c-8963-c0ce83f9eb68 does not exist
Oct 02 08:43:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:43:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:43:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:43:11 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:43:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:43:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:43:11 compute-0 sudo[364644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:11 compute-0 sudo[364644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:11 compute-0 sudo[364644]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:11 compute-0 sudo[364669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:43:11 compute-0 sudo[364669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:11 compute-0 sudo[364669]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:11 compute-0 sudo[364694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:11 compute-0 sudo[364694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:11 compute-0 sudo[364694]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:12 compute-0 sudo[364719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:43:12 compute-0 sudo[364719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:12 compute-0 podman[364786]: 2025-10-02 08:43:12.551210493 +0000 UTC m=+0.064820231 container create ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:43:12 compute-0 ceph-mon[74477]: pgmap v1927: 305 pgs: 305 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:43:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:43:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:43:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:43:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:43:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:43:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:43:12 compute-0 systemd[1]: Started libpod-conmon-ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381.scope.
Oct 02 08:43:12 compute-0 podman[364786]: 2025-10-02 08:43:12.524187261 +0000 UTC m=+0.037797039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:43:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:43:12 compute-0 podman[364786]: 2025-10-02 08:43:12.663142292 +0000 UTC m=+0.176752070 container init ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 02 08:43:12 compute-0 podman[364786]: 2025-10-02 08:43:12.674443154 +0000 UTC m=+0.188052882 container start ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:43:12 compute-0 podman[364786]: 2025-10-02 08:43:12.678854981 +0000 UTC m=+0.192464769 container attach ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:43:12 compute-0 keen_elgamal[364802]: 167 167
Oct 02 08:43:12 compute-0 systemd[1]: libpod-ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381.scope: Deactivated successfully.
Oct 02 08:43:12 compute-0 podman[364786]: 2025-10-02 08:43:12.682309109 +0000 UTC m=+0.195918847 container died ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bf6feb1b1d310c3790632709fbad980e361b44674c88cb777120f373a46bc0e-merged.mount: Deactivated successfully.
Oct 02 08:43:12 compute-0 podman[364786]: 2025-10-02 08:43:12.738223822 +0000 UTC m=+0.251833550 container remove ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:43:12 compute-0 systemd[1]: libpod-conmon-ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381.scope: Deactivated successfully.
Oct 02 08:43:12 compute-0 nova_compute[260603]: 2025-10-02 08:43:12.767 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating config drive at /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config
Oct 02 08:43:12 compute-0 nova_compute[260603]: 2025-10-02 08:43:12.779 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpersbka20 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:12 compute-0 podman[364826]: 2025-10-02 08:43:12.934648425 +0000 UTC m=+0.039435181 container create 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 08:43:12 compute-0 nova_compute[260603]: 2025-10-02 08:43:12.938 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpersbka20" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:12 compute-0 nova_compute[260603]: 2025-10-02 08:43:12.966 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:12 compute-0 nova_compute[260603]: 2025-10-02 08:43:12.970 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:12 compute-0 systemd[1]: Started libpod-conmon-02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e.scope.
Oct 02 08:43:13 compute-0 podman[364826]: 2025-10-02 08:43:12.917741967 +0000 UTC m=+0.022528803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:43:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:13 compute-0 podman[364826]: 2025-10-02 08:43:13.039600215 +0000 UTC m=+0.144386981 container init 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:43:13 compute-0 podman[364826]: 2025-10-02 08:43:13.057153423 +0000 UTC m=+0.161940189 container start 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 08:43:13 compute-0 podman[364826]: 2025-10-02 08:43:13.062317064 +0000 UTC m=+0.167103830 container attach 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 08:43:13 compute-0 nova_compute[260603]: 2025-10-02 08:43:13.099 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Successfully created port: af8bcfc4-3690-4b5f-9893-5555fa376203 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:43:13 compute-0 nova_compute[260603]: 2025-10-02 08:43:13.184 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:13 compute-0 nova_compute[260603]: 2025-10-02 08:43:13.185 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deleting local config drive /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config because it was imported into RBD.
Oct 02 08:43:13 compute-0 kernel: tapd9b60bcb-8d: entered promiscuous mode
Oct 02 08:43:13 compute-0 NetworkManager[45129]: <info>  [1759394593.2778] manager: (tapd9b60bcb-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Oct 02 08:43:13 compute-0 nova_compute[260603]: 2025-10-02 08:43:13.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:13 compute-0 nova_compute[260603]: 2025-10-02 08:43:13.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:13 compute-0 ovn_controller[152344]: 2025-10-02T08:43:13Z|01066|binding|INFO|Claiming lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for this chassis.
Oct 02 08:43:13 compute-0 ovn_controller[152344]: 2025-10-02T08:43:13Z|01067|binding|INFO|d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59: Claiming fa:16:3e:d2:24:8a 10.100.0.8
Oct 02 08:43:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:13.301 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:24:8a 10.100.0.8'], port_security=['fa:16:3e:d2:24:8a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34c4e106-9919-4d6d-a50a-81b3894f2e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6912f4e-082b-4b15-9608-2b9595c16211', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b81b03c-f256-4f5a-8b01-0b7991a52f3e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:43:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:13.303 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 in datapath 34c4e106-9919-4d6d-a50a-81b3894f2e5e bound to our chassis
Oct 02 08:43:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:13.304 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34c4e106-9919-4d6d-a50a-81b3894f2e5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:43:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:13.307 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1916c2e8-2118-4a35-9a7a-aa1e0c7f1755]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:13 compute-0 systemd-udevd[364896]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:43:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 134 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:43:13 compute-0 systemd-machined[214636]: New machine qemu-130-instance-00000068.
Oct 02 08:43:13 compute-0 NetworkManager[45129]: <info>  [1759394593.3552] device (tapd9b60bcb-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:43:13 compute-0 NetworkManager[45129]: <info>  [1759394593.3561] device (tapd9b60bcb-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:43:13 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000068.
Oct 02 08:43:13 compute-0 nova_compute[260603]: 2025-10-02 08:43:13.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:13 compute-0 ovn_controller[152344]: 2025-10-02T08:43:13Z|01068|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 ovn-installed in OVS
Oct 02 08:43:13 compute-0 ovn_controller[152344]: 2025-10-02T08:43:13Z|01069|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 up in Southbound
Oct 02 08:43:13 compute-0 nova_compute[260603]: 2025-10-02 08:43:13.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.209 2 DEBUG nova.network.neutron [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.209 2 DEBUG nova.network.neutron [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.228 2 DEBUG oslo_concurrency.lockutils [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:14 compute-0 blissful_visvesvaraya[364861]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:43:14 compute-0 blissful_visvesvaraya[364861]: --> relative data size: 1.0
Oct 02 08:43:14 compute-0 blissful_visvesvaraya[364861]: --> All data devices are unavailable
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:14 compute-0 systemd[1]: libpod-02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e.scope: Deactivated successfully.
Oct 02 08:43:14 compute-0 systemd[1]: libpod-02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e.scope: Consumed 1.235s CPU time.
Oct 02 08:43:14 compute-0 podman[364826]: 2025-10-02 08:43:14.381058087 +0000 UTC m=+1.485844843 container died 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956-merged.mount: Deactivated successfully.
Oct 02 08:43:14 compute-0 podman[364826]: 2025-10-02 08:43:14.438231519 +0000 UTC m=+1.543018265 container remove 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 08:43:14 compute-0 systemd[1]: libpod-conmon-02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e.scope: Deactivated successfully.
Oct 02 08:43:14 compute-0 sudo[364719]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.549 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394594.5490034, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.550 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Started (Lifecycle Event)
Oct 02 08:43:14 compute-0 sudo[364986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:14 compute-0 sudo[364986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:14 compute-0 sudo[364986]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:14 compute-0 ceph-mon[74477]: pgmap v1928: 305 pgs: 305 active+clean; 134 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.577 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.584 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394594.5492742, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.585 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Paused (Lifecycle Event)
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.605 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.608 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:43:14 compute-0 sudo[365011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:43:14 compute-0 sudo[365011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:14 compute-0 nova_compute[260603]: 2025-10-02 08:43:14.626 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:43:14 compute-0 sudo[365011]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:14 compute-0 sudo[365036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:14 compute-0 sudo[365036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:14 compute-0 sudo[365036]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:14 compute-0 sudo[365061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:43:14 compute-0 sudo[365061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:15 compute-0 podman[365129]: 2025-10-02 08:43:15.078002939 +0000 UTC m=+0.041725980 container create af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 08:43:15 compute-0 systemd[1]: Started libpod-conmon-af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b.scope.
Oct 02 08:43:15 compute-0 podman[365129]: 2025-10-02 08:43:15.059434611 +0000 UTC m=+0.023157692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:43:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:43:15 compute-0 podman[365129]: 2025-10-02 08:43:15.197217416 +0000 UTC m=+0.160940467 container init af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 08:43:15 compute-0 podman[365129]: 2025-10-02 08:43:15.203659367 +0000 UTC m=+0.167382408 container start af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:43:15 compute-0 reverent_snyder[365145]: 167 167
Oct 02 08:43:15 compute-0 systemd[1]: libpod-af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b.scope: Deactivated successfully.
Oct 02 08:43:15 compute-0 conmon[365145]: conmon af4b363b8bf825c33cbf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b.scope/container/memory.events
Oct 02 08:43:15 compute-0 podman[365129]: 2025-10-02 08:43:15.2214246 +0000 UTC m=+0.185147641 container attach af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:43:15 compute-0 podman[365129]: 2025-10-02 08:43:15.222020549 +0000 UTC m=+0.185743580 container died af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eeaadb0577d972205f43b8797a81184bbcfaff1a11309411ed4ac54e97d453b-merged.mount: Deactivated successfully.
Oct 02 08:43:15 compute-0 podman[365129]: 2025-10-02 08:43:15.26793964 +0000 UTC m=+0.231662691 container remove af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:43:15 compute-0 systemd[1]: libpod-conmon-af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b.scope: Deactivated successfully.
Oct 02 08:43:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 134 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:43:15 compute-0 podman[365169]: 2025-10-02 08:43:15.452519413 +0000 UTC m=+0.051843417 container create 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:43:15 compute-0 systemd[1]: Started libpod-conmon-226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93.scope.
Oct 02 08:43:15 compute-0 podman[365169]: 2025-10-02 08:43:15.424318944 +0000 UTC m=+0.023642988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:43:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:15 compute-0 podman[365169]: 2025-10-02 08:43:15.546798831 +0000 UTC m=+0.146122835 container init 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:43:15 compute-0 podman[365169]: 2025-10-02 08:43:15.556039699 +0000 UTC m=+0.155363693 container start 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 08:43:15 compute-0 podman[365169]: 2025-10-02 08:43:15.560081705 +0000 UTC m=+0.159405739 container attach 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 08:43:15 compute-0 nova_compute[260603]: 2025-10-02 08:43:15.616 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Successfully updated port: af8bcfc4-3690-4b5f-9893-5555fa376203 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:43:15 compute-0 nova_compute[260603]: 2025-10-02 08:43:15.640 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:15 compute-0 nova_compute[260603]: 2025-10-02 08:43:15.640 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:15 compute-0 nova_compute[260603]: 2025-10-02 08:43:15.641 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:43:15 compute-0 nova_compute[260603]: 2025-10-02 08:43:15.754 2 DEBUG nova.compute.manager [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:15 compute-0 nova_compute[260603]: 2025-10-02 08:43:15.754 2 DEBUG nova.compute.manager [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing instance network info cache due to event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:43:15 compute-0 nova_compute[260603]: 2025-10-02 08:43:15.755 2 DEBUG oslo_concurrency.lockutils [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:15 compute-0 nova_compute[260603]: 2025-10-02 08:43:15.834 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:43:16 compute-0 nova_compute[260603]: 2025-10-02 08:43:16.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]: {
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:     "0": [
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:         {
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "devices": [
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "/dev/loop3"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             ],
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_name": "ceph_lv0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_size": "21470642176",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "name": "ceph_lv0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "tags": {
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cluster_name": "ceph",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.crush_device_class": "",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.encrypted": "0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osd_id": "0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.type": "block",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.vdo": "0"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             },
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "type": "block",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "vg_name": "ceph_vg0"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:         }
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:     ],
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:     "1": [
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:         {
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "devices": [
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "/dev/loop4"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             ],
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_name": "ceph_lv1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_size": "21470642176",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "name": "ceph_lv1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "tags": {
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cluster_name": "ceph",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.crush_device_class": "",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.encrypted": "0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osd_id": "1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.type": "block",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.vdo": "0"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             },
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "type": "block",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "vg_name": "ceph_vg1"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:         }
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:     ],
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:     "2": [
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:         {
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "devices": [
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "/dev/loop5"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             ],
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_name": "ceph_lv2",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_size": "21470642176",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "name": "ceph_lv2",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "tags": {
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.cluster_name": "ceph",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.crush_device_class": "",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.encrypted": "0",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osd_id": "2",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.type": "block",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:                 "ceph.vdo": "0"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             },
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "type": "block",
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:             "vg_name": "ceph_vg2"
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:         }
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]:     ]
Oct 02 08:43:16 compute-0 suspicious_bhaskara[365186]: }
Oct 02 08:43:16 compute-0 podman[365169]: 2025-10-02 08:43:16.280665325 +0000 UTC m=+0.879989289 container died 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 08:43:16 compute-0 systemd[1]: libpod-226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93.scope: Deactivated successfully.
Oct 02 08:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e-merged.mount: Deactivated successfully.
Oct 02 08:43:16 compute-0 podman[365169]: 2025-10-02 08:43:16.420630027 +0000 UTC m=+1.019954001 container remove 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 08:43:16 compute-0 systemd[1]: libpod-conmon-226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93.scope: Deactivated successfully.
Oct 02 08:43:16 compute-0 sudo[365061]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:16 compute-0 sudo[365208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:16 compute-0 sudo[365208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:16 compute-0 sudo[365208]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:16 compute-0 ceph-mon[74477]: pgmap v1929: 305 pgs: 305 active+clean; 134 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:43:16 compute-0 sudo[365233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:43:16 compute-0 sudo[365233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:16 compute-0 sudo[365233]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:16 compute-0 sudo[365258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:16 compute-0 sudo[365258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:16 compute-0 sudo[365258]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:16 compute-0 sudo[365283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:43:16 compute-0 sudo[365283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:17 compute-0 podman[365351]: 2025-10-02 08:43:17.204347835 +0000 UTC m=+0.053527519 container create a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:43:17 compute-0 systemd[1]: Started libpod-conmon-a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874.scope.
Oct 02 08:43:17 compute-0 podman[365351]: 2025-10-02 08:43:17.177733025 +0000 UTC m=+0.026912749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:43:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:43:17 compute-0 podman[365351]: 2025-10-02 08:43:17.296092314 +0000 UTC m=+0.145272048 container init a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:43:17 compute-0 podman[365351]: 2025-10-02 08:43:17.3068864 +0000 UTC m=+0.156066074 container start a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 08:43:17 compute-0 podman[365351]: 2025-10-02 08:43:17.310542175 +0000 UTC m=+0.159721859 container attach a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:43:17 compute-0 lucid_khorana[365367]: 167 167
Oct 02 08:43:17 compute-0 systemd[1]: libpod-a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874.scope: Deactivated successfully.
Oct 02 08:43:17 compute-0 conmon[365367]: conmon a86b600ac1baf3efe657 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874.scope/container/memory.events
Oct 02 08:43:17 compute-0 podman[365351]: 2025-10-02 08:43:17.313554458 +0000 UTC m=+0.162734112 container died a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:43:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Oct 02 08:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b295db08522a35e224d23494706fb44006419feef4fbecb1bc2abab6bc863c9-merged.mount: Deactivated successfully.
Oct 02 08:43:17 compute-0 podman[365351]: 2025-10-02 08:43:17.35691058 +0000 UTC m=+0.206090224 container remove a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:43:17 compute-0 systemd[1]: libpod-conmon-a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874.scope: Deactivated successfully.
Oct 02 08:43:17 compute-0 podman[365391]: 2025-10-02 08:43:17.576124473 +0000 UTC m=+0.055685688 container create bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 08:43:17 compute-0 systemd[1]: Started libpod-conmon-bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a.scope.
Oct 02 08:43:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:17 compute-0 podman[365391]: 2025-10-02 08:43:17.559397151 +0000 UTC m=+0.038958396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:43:17 compute-0 podman[365391]: 2025-10-02 08:43:17.661552825 +0000 UTC m=+0.141114060 container init bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 08:43:17 compute-0 podman[365391]: 2025-10-02 08:43:17.673887099 +0000 UTC m=+0.153448344 container start bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 08:43:17 compute-0 podman[365391]: 2025-10-02 08:43:17.677527033 +0000 UTC m=+0.157088308 container attach bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:43:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:18 compute-0 ceph-mon[74477]: pgmap v1930: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]: {
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "osd_id": 2,
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "type": "bluestore"
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:     },
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "osd_id": 1,
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "type": "bluestore"
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:     },
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "osd_id": 0,
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:         "type": "bluestore"
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]:     }
Oct 02 08:43:18 compute-0 determined_brahmagupta[365409]: }
Oct 02 08:43:18 compute-0 systemd[1]: libpod-bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a.scope: Deactivated successfully.
Oct 02 08:43:18 compute-0 podman[365391]: 2025-10-02 08:43:18.717649282 +0000 UTC m=+1.197210537 container died bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 08:43:18 compute-0 systemd[1]: libpod-bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a.scope: Consumed 1.036s CPU time.
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.715 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.741 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.741 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance network_info: |[{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.742 2 DEBUG oslo_concurrency.lockutils [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.742 2 DEBUG nova.network.neutron [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.746 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start _get_guest_xml network_info=[{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1-merged.mount: Deactivated successfully.
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.760 2 WARNING nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.773 2 DEBUG nova.virt.libvirt.host [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.773 2 DEBUG nova.virt.libvirt.host [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.776 2 DEBUG nova.virt.libvirt.host [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.777 2 DEBUG nova.virt.libvirt.host [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.777 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.777 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.778 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.778 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.778 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.778 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.780 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:43:18 compute-0 nova_compute[260603]: 2025-10-02 08:43:18.782 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:18 compute-0 podman[365391]: 2025-10-02 08:43:18.800445223 +0000 UTC m=+1.280006458 container remove bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 08:43:18 compute-0 systemd[1]: libpod-conmon-bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a.scope: Deactivated successfully.
Oct 02 08:43:18 compute-0 sudo[365283]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:43:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:43:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:43:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:43:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1acea6ce-11b4-4b1c-a669-d5663e6b4795 does not exist
Oct 02 08:43:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 64006efe-7a8f-4bc4-8270-d11f511e27d5 does not exist
Oct 02 08:43:18 compute-0 sudo[365456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:43:18 compute-0 sudo[365456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:18 compute-0 sudo[365456]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:19 compute-0 sudo[365481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:43:19 compute-0 sudo[365481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:43:19 compute-0 sudo[365481]: pam_unix(sudo:session): session closed for user root
Oct 02 08:43:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378220706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.302 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.337 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 MiB/s wr, 39 op/s
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.343 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:43:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198398843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.810 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.812 2 DEBUG nova.virt.libvirt.vif [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=105,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-lm9mu0gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:09Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=bccc9587-6f96-4032-ae07-56ab00988869,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.813 2 DEBUG nova.network.os_vif_util [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.814 2 DEBUG nova.network.os_vif_util [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.817 2 DEBUG nova.objects.instance [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid bccc9587-6f96-4032-ae07-56ab00988869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.852 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <uuid>bccc9587-6f96-4032-ae07-56ab00988869</uuid>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <name>instance-00000069</name>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488</nova:name>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:43:18</nova:creationTime>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <nova:port uuid="af8bcfc4-3690-4b5f-9893-5555fa376203">
Oct 02 08:43:19 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <system>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <entry name="serial">bccc9587-6f96-4032-ae07-56ab00988869</entry>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <entry name="uuid">bccc9587-6f96-4032-ae07-56ab00988869</entry>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </system>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <os>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   </os>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <features>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   </features>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/bccc9587-6f96-4032-ae07-56ab00988869_disk">
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/bccc9587-6f96-4032-ae07-56ab00988869_disk.config">
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:c6:6d:19"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <target dev="tapaf8bcfc4-36"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/console.log" append="off"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <video>
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </video>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:43:19 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:43:19 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:43:19 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:43:19 compute-0 nova_compute[260603]: </domain>
Oct 02 08:43:19 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.854 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Preparing to wait for external event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.855 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.855 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.855 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.857 2 DEBUG nova.virt.libvirt.vif [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=105,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-lm9mu0gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:09Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=bccc9587-6f96-4032-ae07-56ab00988869,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.857 2 DEBUG nova.network.os_vif_util [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.858 2 DEBUG nova.network.os_vif_util [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.859 2 DEBUG os_vif [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:43:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:43:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/378220706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2198398843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf8bcfc4-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaf8bcfc4-36, col_values=(('external_ids', {'iface-id': 'af8bcfc4-3690-4b5f-9893-5555fa376203', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:6d:19', 'vm-uuid': 'bccc9587-6f96-4032-ae07-56ab00988869'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:19 compute-0 NetworkManager[45129]: <info>  [1759394599.8680] manager: (tapaf8bcfc4-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.875 2 INFO os_vif [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36')
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.946 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.947 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.947 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:c6:6d:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.948 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Using config drive
Oct 02 08:43:19 compute-0 nova_compute[260603]: 2025-10-02 08:43:19.971 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.670 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Creating config drive at /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.679 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcs34ae8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.729 2 DEBUG nova.network.neutron [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updated VIF entry in instance network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.731 2 DEBUG nova.network.neutron [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.736 2 DEBUG nova.compute.manager [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.738 2 DEBUG oslo_concurrency.lockutils [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.739 2 DEBUG oslo_concurrency.lockutils [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.739 2 DEBUG oslo_concurrency.lockutils [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.740 2 DEBUG nova.compute.manager [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Processing event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.742 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.749 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394600.7483912, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.749 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Resumed (Lifecycle Event)
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.754 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.761 2 INFO nova.virt.libvirt.driver [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance spawned successfully.
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.763 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.766 2 DEBUG oslo_concurrency.lockutils [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.787 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.797 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.805 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.806 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.806 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.807 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.808 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.809 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.819 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.832 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcs34ae8" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.872 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:20 compute-0 ceph-mon[74477]: pgmap v1931: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 MiB/s wr, 39 op/s
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.879 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config bccc9587-6f96-4032-ae07-56ab00988869_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.939 2 INFO nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Took 14.76 seconds to spawn the instance on the hypervisor.
Oct 02 08:43:20 compute-0 nova_compute[260603]: 2025-10-02 08:43:20.940 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.026 2 INFO nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Took 15.78 seconds to build instance.
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.051 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.071 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config bccc9587-6f96-4032-ae07-56ab00988869_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.072 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Deleting local config drive /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config because it was imported into RBD.
Oct 02 08:43:21 compute-0 kernel: tapaf8bcfc4-36: entered promiscuous mode
Oct 02 08:43:21 compute-0 NetworkManager[45129]: <info>  [1759394601.1480] manager: (tapaf8bcfc4-36): new Tun device (/org/freedesktop/NetworkManager/Devices/421)
Oct 02 08:43:21 compute-0 ovn_controller[152344]: 2025-10-02T08:43:21Z|01070|binding|INFO|Claiming lport af8bcfc4-3690-4b5f-9893-5555fa376203 for this chassis.
Oct 02 08:43:21 compute-0 ovn_controller[152344]: 2025-10-02T08:43:21Z|01071|binding|INFO|af8bcfc4-3690-4b5f-9893-5555fa376203: Claiming fa:16:3e:c6:6d:19 10.100.0.5
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.167 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:6d:19 10.100.0.5'], port_security=['fa:16:3e:c6:6d:19 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bccc9587-6f96-4032-ae07-56ab00988869', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87949ce5-c546-4e93-ab3f-46b861ef5238 dd3198b8-8ad4-4f59-8253-4227db95b8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e2b1c02-e727-474f-a9ba-53199387490d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=af8bcfc4-3690-4b5f-9893-5555fa376203) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.169 162357 INFO neutron.agent.ovn.metadata.agent [-] Port af8bcfc4-3690-4b5f-9893-5555fa376203 in datapath c7addd6c-480f-45ed-94c2-18d1d2248acb bound to our chassis
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.171 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c7addd6c-480f-45ed-94c2-18d1d2248acb
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c79239df-56a2-434c-99bd-da6a822e65cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.189 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc7addd6c-41 in ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.192 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc7addd6c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07a5f01f-d560-4a5d-8750-22de15a53850]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.197 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe33414-204d-4186-a4a8-356e90c8bf2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 systemd-udevd[365641]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:43:21 compute-0 systemd-machined[214636]: New machine qemu-131-instance-00000069.
Oct 02 08:43:21 compute-0 NetworkManager[45129]: <info>  [1759394601.2142] device (tapaf8bcfc4-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:43:21 compute-0 NetworkManager[45129]: <info>  [1759394601.2154] device (tapaf8bcfc4-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.217 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a65dc172-09da-4a87-977a-46e63f381b08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-00000069.
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:21 compute-0 ovn_controller[152344]: 2025-10-02T08:43:21Z|01072|binding|INFO|Setting lport af8bcfc4-3690-4b5f-9893-5555fa376203 ovn-installed in OVS
Oct 02 08:43:21 compute-0 ovn_controller[152344]: 2025-10-02T08:43:21Z|01073|binding|INFO|Setting lport af8bcfc4-3690-4b5f-9893-5555fa376203 up in Southbound
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.248 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9121e7c-41a9-45a3-960c-334f3c40d8be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.292 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8885183c-e582-442d-b904-deea3fd29f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 NetworkManager[45129]: <info>  [1759394601.2989] manager: (tapc7addd6c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/422)
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.298 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e67123e9-c497-4487-8b7a-e36e83a1f8bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 systemd-udevd[365645]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:43:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.346 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f800863-f091-4c39-b8aa-dd53672db4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.350 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[44b88693-a466-41f2-b295-94a50ecfa73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 NetworkManager[45129]: <info>  [1759394601.3736] device (tapc7addd6c-40): carrier: link connected
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.387 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f40998a1-d2c5-4991-a854-f646ef95598a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.405 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[61cefd43-b83f-4526-aa11-72aadefec9c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7addd6c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547997, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365674, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3bd022-3809-4419-81c2-4f2db578e2e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:47c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547997, 'tstamp': 547997}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365675, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a43d388-6631-40d6-a241-3865052a47be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7addd6c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547997, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365676, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.481 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7473c21e-636e-45da-96c7-201b6d7e3ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.545 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da2e6af9-6af7-44d2-8f93-c0874adcec70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7addd6c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.547 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.547 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7addd6c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:21 compute-0 NetworkManager[45129]: <info>  [1759394601.5493] manager: (tapc7addd6c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Oct 02 08:43:21 compute-0 kernel: tapc7addd6c-40: entered promiscuous mode
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.560 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc7addd6c-40, col_values=(('external_ids', {'iface-id': '8f66d020-a258-4e14-aa3b-234835306a91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:21 compute-0 ovn_controller[152344]: 2025-10-02T08:43:21Z|01074|binding|INFO|Releasing lport 8f66d020-a258-4e14-aa3b-234835306a91 from this chassis (sb_readonly=0)
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.573 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c7addd6c-480f-45ed-94c2-18d1d2248acb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c7addd6c-480f-45ed-94c2-18d1d2248acb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:43:21 compute-0 nova_compute[260603]: 2025-10-02 08:43:21.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.575 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8927aa0f-0a7c-4931-b2d2-de4f5a97cd4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.577 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-c7addd6c-480f-45ed-94c2-18d1d2248acb
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/c7addd6c-480f-45ed-94c2-18d1d2248acb.pid.haproxy
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID c7addd6c-480f-45ed-94c2-18d1d2248acb
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:43:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.578 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'env', 'PROCESS_TAG=haproxy-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c7addd6c-480f-45ed-94c2-18d1d2248acb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:43:21 compute-0 podman[365708]: 2025-10-02 08:43:21.946276453 +0000 UTC m=+0.069409214 container create ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:43:21 compute-0 systemd[1]: Started libpod-conmon-ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2.scope.
Oct 02 08:43:22 compute-0 podman[365708]: 2025-10-02 08:43:21.90544472 +0000 UTC m=+0.028577491 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:43:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/644ad21fe1b1e836c44e9821eabc2e68b4056b923e74837b9185693307a8e425/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:43:22 compute-0 podman[365708]: 2025-10-02 08:43:22.031290153 +0000 UTC m=+0.154422924 container init ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:43:22 compute-0 podman[365708]: 2025-10-02 08:43:22.03825452 +0000 UTC m=+0.161387291 container start ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:43:22 compute-0 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [NOTICE]   (365728) : New worker (365730) forked
Oct 02 08:43:22 compute-0 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [NOTICE]   (365728) : Loading success.
Oct 02 08:43:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:43:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1904757560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:43:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:43:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1904757560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:43:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.877 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.878 2 WARNING nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state active and task_state None.
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Processing event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.880 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.880 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.880 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] No waiting events found dispatching network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.880 2 WARNING nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received unexpected event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 for instance with vm_state building and task_state spawning.
Oct 02 08:43:22 compute-0 ceph-mon[74477]: pgmap v1932: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 08:43:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1904757560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:43:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1904757560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.982 2 INFO nova.compute.manager [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Rescuing
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.982 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.982 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:22 compute-0 nova_compute[260603]: 2025-10-02 08:43:22.983 2 DEBUG nova.network.neutron [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.185 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.186 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394603.1847186, bccc9587-6f96-4032-ae07-56ab00988869 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.186 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] VM Started (Lifecycle Event)
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.190 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.193 2 INFO nova.virt.libvirt.driver [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance spawned successfully.
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.194 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.207 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.212 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.223 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.223 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.225 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.226 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.227 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.227 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.254 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.254 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394603.1849718, bccc9587-6f96-4032-ae07-56ab00988869 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.254 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] VM Paused (Lifecycle Event)
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.287 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.292 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394603.1892343, bccc9587-6f96-4032-ae07-56ab00988869 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.292 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] VM Resumed (Lifecycle Event)
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.309 2 INFO nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Took 13.78 seconds to spawn the instance on the hypervisor.
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.310 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.319 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.322 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:43:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.355 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.381 2 INFO nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Took 14.78 seconds to build instance.
Oct 02 08:43:23 compute-0 nova_compute[260603]: 2025-10-02 08:43:23.399 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:24 compute-0 nova_compute[260603]: 2025-10-02 08:43:24.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:24 compute-0 nova_compute[260603]: 2025-10-02 08:43:24.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:24 compute-0 ceph-mon[74477]: pgmap v1933: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 08:43:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 78 op/s
Oct 02 08:43:25 compute-0 nova_compute[260603]: 2025-10-02 08:43:25.659 2 DEBUG nova.network.neutron [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:25 compute-0 nova_compute[260603]: 2025-10-02 08:43:25.690 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:26 compute-0 nova_compute[260603]: 2025-10-02 08:43:26.199 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:43:26 compute-0 ceph-mon[74477]: pgmap v1934: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 78 op/s
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 25 KiB/s wr, 107 op/s
Oct 02 08:43:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:43:27
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'backups', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct 02 08:43:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:43:28 compute-0 ovn_controller[152344]: 2025-10-02T08:43:28Z|01075|binding|INFO|Releasing lport 8f66d020-a258-4e14-aa3b-234835306a91 from this chassis (sb_readonly=0)
Oct 02 08:43:28 compute-0 NetworkManager[45129]: <info>  [1759394608.4272] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Oct 02 08:43:28 compute-0 NetworkManager[45129]: <info>  [1759394608.4280] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Oct 02 08:43:28 compute-0 nova_compute[260603]: 2025-10-02 08:43:28.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:28 compute-0 ovn_controller[152344]: 2025-10-02T08:43:28Z|01076|binding|INFO|Releasing lport 8f66d020-a258-4e14-aa3b-234835306a91 from this chassis (sb_readonly=0)
Oct 02 08:43:28 compute-0 nova_compute[260603]: 2025-10-02 08:43:28.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:28 compute-0 nova_compute[260603]: 2025-10-02 08:43:28.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:28 compute-0 ceph-mon[74477]: pgmap v1935: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 25 KiB/s wr, 107 op/s
Oct 02 08:43:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 138 op/s
Oct 02 08:43:29 compute-0 nova_compute[260603]: 2025-10-02 08:43:29.369 2 DEBUG nova.compute.manager [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:29 compute-0 nova_compute[260603]: 2025-10-02 08:43:29.369 2 DEBUG nova.compute.manager [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing instance network info cache due to event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:43:29 compute-0 nova_compute[260603]: 2025-10-02 08:43:29.369 2 DEBUG oslo_concurrency.lockutils [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:29 compute-0 nova_compute[260603]: 2025-10-02 08:43:29.369 2 DEBUG oslo_concurrency.lockutils [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:29 compute-0 nova_compute[260603]: 2025-10-02 08:43:29.370 2 DEBUG nova.network.neutron [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:43:29 compute-0 nova_compute[260603]: 2025-10-02 08:43:29.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:29 compute-0 nova_compute[260603]: 2025-10-02 08:43:29.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:30 compute-0 nova_compute[260603]: 2025-10-02 08:43:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:30 compute-0 nova_compute[260603]: 2025-10-02 08:43:30.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:43:30 compute-0 nova_compute[260603]: 2025-10-02 08:43:30.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:43:30 compute-0 nova_compute[260603]: 2025-10-02 08:43:30.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:30 compute-0 nova_compute[260603]: 2025-10-02 08:43:30.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:30 compute-0 nova_compute[260603]: 2025-10-02 08:43:30.557 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:43:30 compute-0 nova_compute[260603]: 2025-10-02 08:43:30.557 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:30 compute-0 ceph-mon[74477]: pgmap v1936: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 138 op/s
Oct 02 08:43:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Oct 02 08:43:31 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 02 08:43:32 compute-0 nova_compute[260603]: 2025-10-02 08:43:32.756 2 DEBUG nova.network.neutron [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updated VIF entry in instance network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:43:32 compute-0 nova_compute[260603]: 2025-10-02 08:43:32.757 2 DEBUG nova.network.neutron [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:32 compute-0 nova_compute[260603]: 2025-10-02 08:43:32.787 2 DEBUG oslo_concurrency.lockutils [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:32 compute-0 ceph-mon[74477]: pgmap v1937: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.205 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 160 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.457 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.457 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.458 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.561 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:43:33 compute-0 nova_compute[260603]: 2025-10-02 08:43:33.561 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:34 compute-0 podman[365804]: 2025-10-02 08:43:34.0074139 +0000 UTC m=+0.074040858 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 08:43:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:43:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1518484444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.031 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:34 compute-0 podman[365803]: 2025-10-02 08:43:34.06288112 +0000 UTC m=+0.123782260 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.152 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.153 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.157 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.157 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.370 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.371 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3368MB free_disk=59.922119140625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.371 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.372 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance bccc9587-6f96-4032-ae07-56ab00988869 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.546 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.720 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:34.826 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:34.827 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:34.827 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:34 compute-0 nova_compute[260603]: 2025-10-02 08:43:34.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:34 compute-0 ceph-mon[74477]: pgmap v1938: 305 pgs: 305 active+clean; 160 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Oct 02 08:43:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1518484444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:43:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434508286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:35 compute-0 nova_compute[260603]: 2025-10-02 08:43:35.166 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:35 compute-0 nova_compute[260603]: 2025-10-02 08:43:35.174 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:43:35 compute-0 nova_compute[260603]: 2025-10-02 08:43:35.192 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:43:35 compute-0 nova_compute[260603]: 2025-10-02 08:43:35.217 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:43:35 compute-0 nova_compute[260603]: 2025-10-02 08:43:35.218 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 160 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 02 08:43:35 compute-0 nova_compute[260603]: 2025-10-02 08:43:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:35 compute-0 nova_compute[260603]: 2025-10-02 08:43:35.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 08:43:35 compute-0 ovn_controller[152344]: 2025-10-02T08:43:35Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:6d:19 10.100.0.5
Oct 02 08:43:35 compute-0 ovn_controller[152344]: 2025-10-02T08:43:35Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:6d:19 10.100.0.5
Oct 02 08:43:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2434508286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:36 compute-0 nova_compute[260603]: 2025-10-02 08:43:36.250 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 08:43:36 compute-0 nova_compute[260603]: 2025-10-02 08:43:36.531 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:36 compute-0 nova_compute[260603]: 2025-10-02 08:43:36.531 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 08:43:36 compute-0 nova_compute[260603]: 2025-10-02 08:43:36.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 08:43:36 compute-0 ceph-mon[74477]: pgmap v1939: 305 pgs: 305 active+clean; 160 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 02 08:43:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 186 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 150 op/s
Oct 02 08:43:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:38 compute-0 kernel: tapd9b60bcb-8d (unregistering): left promiscuous mode
Oct 02 08:43:38 compute-0 nova_compute[260603]: 2025-10-02 08:43:38.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:38 compute-0 NetworkManager[45129]: <info>  [1759394618.5403] device (tapd9b60bcb-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:43:38 compute-0 ovn_controller[152344]: 2025-10-02T08:43:38Z|01077|binding|INFO|Releasing lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 from this chassis (sb_readonly=0)
Oct 02 08:43:38 compute-0 ovn_controller[152344]: 2025-10-02T08:43:38Z|01078|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 down in Southbound
Oct 02 08:43:38 compute-0 ovn_controller[152344]: 2025-10-02T08:43:38Z|01079|binding|INFO|Removing iface tapd9b60bcb-8d ovn-installed in OVS
Oct 02 08:43:38 compute-0 nova_compute[260603]: 2025-10-02 08:43:38.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:38 compute-0 nova_compute[260603]: 2025-10-02 08:43:38.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:38.594 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:24:8a 10.100.0.8'], port_security=['fa:16:3e:d2:24:8a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34c4e106-9919-4d6d-a50a-81b3894f2e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6912f4e-082b-4b15-9608-2b9595c16211', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b81b03c-f256-4f5a-8b01-0b7991a52f3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:43:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:38.596 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 in datapath 34c4e106-9919-4d6d-a50a-81b3894f2e5e unbound from our chassis
Oct 02 08:43:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:38.597 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34c4e106-9919-4d6d-a50a-81b3894f2e5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:43:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:38.598 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4e5f30-3a77-4f77-b01b-e45baf37cba4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:38 compute-0 nova_compute[260603]: 2025-10-02 08:43:38.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:38 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 02 08:43:38 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Consumed 13.084s CPU time.
Oct 02 08:43:38 compute-0 systemd-machined[214636]: Machine qemu-130-instance-00000068 terminated.
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013177266474571857 of space, bias 1.0, pg target 0.3953179942371557 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:43:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:43:38 compute-0 ceph-mon[74477]: pgmap v1940: 305 pgs: 305 active+clean; 186 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 150 op/s
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.268 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance shutdown successfully after 13 seconds.
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.277 2 INFO nova.virt.libvirt.driver [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance destroyed successfully.
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.277 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'numa_topology' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.299 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Attempting rescue
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.300 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.306 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.307 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating image(s)
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.332 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.336 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 200 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.380 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.405 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.409 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.514 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.515 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.516 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.516 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.537 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.544 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.853 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.855 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'migration_context' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.872 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.873 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start _get_guest_xml network_info=[{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "vif_mac": "fa:16:3e:d2:24:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.874 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'resources' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.899 2 WARNING nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.906 2 DEBUG nova.virt.libvirt.host [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.907 2 DEBUG nova.virt.libvirt.host [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.912 2 DEBUG nova.virt.libvirt.host [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.913 2 DEBUG nova.virt.libvirt.host [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.913 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.914 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.915 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.915 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.915 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.916 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.916 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.916 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.917 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.917 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.917 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.918 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.918 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:39 compute-0 nova_compute[260603]: 2025-10-02 08:43:39.952 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362286071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:40 compute-0 nova_compute[260603]: 2025-10-02 08:43:40.434 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:40 compute-0 nova_compute[260603]: 2025-10-02 08:43:40.436 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168792568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:40 compute-0 nova_compute[260603]: 2025-10-02 08:43:40.922 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:40 compute-0 nova_compute[260603]: 2025-10-02 08:43:40.924 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:40 compute-0 ceph-mon[74477]: pgmap v1941: 305 pgs: 305 active+clean; 200 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Oct 02 08:43:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1362286071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/168792568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:41 compute-0 podman[366031]: 2025-10-02 08:43:41.030375835 +0000 UTC m=+0.086219178 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:43:41 compute-0 podman[366030]: 2025-10-02 08:43:41.052637679 +0000 UTC m=+0.109231005 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 08:43:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 200 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct 02 08:43:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283090342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.425 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.429 2 DEBUG nova.virt.libvirt.vif [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:43:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-618462512',display_name='tempest-ServerRescueTestJSONUnderV235-server-618462512',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-618462512',id=104,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:43:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae7dae3968e448f1b3ace692d9d76cff',ramdisk_id='',reservation_id='r-g69g07mg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-299264470',owner_user_name='tempest-ServerRescueTestJSONUnderV235-299264470-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:21Z,user_data=None,user_id='7e27caab7dd34e4a9cac5f4f1880fad8',uuid=1c24cd5c-a165-4fcf-b24d-245a60f7ea11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "vif_mac": "fa:16:3e:d2:24:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.429 2 DEBUG nova.network.os_vif_util [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converting VIF {"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "vif_mac": "fa:16:3e:d2:24:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.431 2 DEBUG nova.network.os_vif_util [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.434 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.457 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <uuid>1c24cd5c-a165-4fcf-b24d-245a60f7ea11</uuid>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <name>instance-00000068</name>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-618462512</nova:name>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:43:39</nova:creationTime>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <nova:user uuid="7e27caab7dd34e4a9cac5f4f1880fad8">tempest-ServerRescueTestJSONUnderV235-299264470-project-member</nova:user>
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <nova:project uuid="ae7dae3968e448f1b3ace692d9d76cff">tempest-ServerRescueTestJSONUnderV235-299264470</nova:project>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <nova:port uuid="d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59">
Oct 02 08:43:41 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <system>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <entry name="serial">1c24cd5c-a165-4fcf-b24d-245a60f7ea11</entry>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <entry name="uuid">1c24cd5c-a165-4fcf-b24d-245a60f7ea11</entry>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </system>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <os>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   </os>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <features>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   </features>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue">
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk">
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <target dev="vdb" bus="virtio"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue">
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:d2:24:8a"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <target dev="tapd9b60bcb-8d"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/console.log" append="off"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <video>
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </video>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:43:41 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:43:41 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:43:41 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:43:41 compute-0 nova_compute[260603]: </domain>
Oct 02 08:43:41 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.470 2 INFO nova.virt.libvirt.driver [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance destroyed successfully.
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.529 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.530 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.530 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.530 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No VIF found with MAC fa:16:3e:d2:24:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.531 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Using config drive
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.566 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.595 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:41 compute-0 nova_compute[260603]: 2025-10-02 08:43:41.633 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'keypairs' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4283090342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.246 2 DEBUG nova.compute.manager [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-unplugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.246 2 DEBUG oslo_concurrency.lockutils [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.247 2 DEBUG oslo_concurrency.lockutils [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.247 2 DEBUG oslo_concurrency.lockutils [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.248 2 DEBUG nova.compute.manager [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-unplugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.248 2 WARNING nova.compute.manager [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-unplugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state active and task_state rescuing.
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.337 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating config drive at /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.347 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppx7b47nn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.517 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppx7b47nn" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.554 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.560 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.618 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.768 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.769 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deleting local config drive /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue because it was imported into RBD.
Oct 02 08:43:42 compute-0 kernel: tapd9b60bcb-8d: entered promiscuous mode
Oct 02 08:43:42 compute-0 NetworkManager[45129]: <info>  [1759394622.8405] manager: (tapd9b60bcb-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Oct 02 08:43:42 compute-0 ovn_controller[152344]: 2025-10-02T08:43:42Z|01080|binding|INFO|Claiming lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for this chassis.
Oct 02 08:43:42 compute-0 ovn_controller[152344]: 2025-10-02T08:43:42Z|01081|binding|INFO|d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59: Claiming fa:16:3e:d2:24:8a 10.100.0.8
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:42 compute-0 ovn_controller[152344]: 2025-10-02T08:43:42Z|01082|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 ovn-installed in OVS
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:42 compute-0 nova_compute[260603]: 2025-10-02 08:43:42.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:42 compute-0 systemd-udevd[366158]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:43:42 compute-0 NetworkManager[45129]: <info>  [1759394622.8921] device (tapd9b60bcb-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:43:42 compute-0 NetworkManager[45129]: <info>  [1759394622.8931] device (tapd9b60bcb-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:43:42 compute-0 systemd-machined[214636]: New machine qemu-132-instance-00000068.
Oct 02 08:43:42 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-00000068.
Oct 02 08:43:42 compute-0 ovn_controller[152344]: 2025-10-02T08:43:42Z|01083|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 up in Southbound
Oct 02 08:43:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:42.934 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:24:8a 10.100.0.8'], port_security=['fa:16:3e:d2:24:8a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34c4e106-9919-4d6d-a50a-81b3894f2e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e6912f4e-082b-4b15-9608-2b9595c16211', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b81b03c-f256-4f5a-8b01-0b7991a52f3e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:43:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:42.935 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 in datapath 34c4e106-9919-4d6d-a50a-81b3894f2e5e bound to our chassis
Oct 02 08:43:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:42.936 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34c4e106-9919-4d6d-a50a-81b3894f2e5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:43:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:42.937 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b60795d9-0ed3-46df-ac87-c8a166529696]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:42 compute-0 ceph-mon[74477]: pgmap v1942: 305 pgs: 305 active+clean; 200 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct 02 08:43:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 6.1 MiB/s wr, 149 op/s
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.820 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.822 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394623.8203838, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.823 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Resumed (Lifecycle Event)
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.831 2 DEBUG nova.compute.manager [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.886 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.892 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.934 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394623.82243, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.934 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Started (Lifecycle Event)
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.953 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:43 compute-0 nova_compute[260603]: 2025-10-02 08:43:43.959 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.456 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.457 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.457 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.458 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.458 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.459 2 WARNING nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state rescued and task_state None.
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.459 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.460 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.460 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.461 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.461 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.461 2 WARNING nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state rescued and task_state None.
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.462 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.462 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.463 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.463 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.463 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.463 2 WARNING nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state rescued and task_state None.
Oct 02 08:43:44 compute-0 nova_compute[260603]: 2025-10-02 08:43:44.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:45 compute-0 ceph-mon[74477]: pgmap v1943: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 6.1 MiB/s wr, 149 op/s
Oct 02 08:43:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Oct 02 08:43:47 compute-0 ceph-mon[74477]: pgmap v1944: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Oct 02 08:43:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 133 op/s
Oct 02 08:43:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:49 compute-0 ceph-mon[74477]: pgmap v1945: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 133 op/s
Oct 02 08:43:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 140 op/s
Oct 02 08:43:49 compute-0 nova_compute[260603]: 2025-10-02 08:43:49.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:49 compute-0 nova_compute[260603]: 2025-10-02 08:43:49.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:50 compute-0 nova_compute[260603]: 2025-10-02 08:43:50.229 2 DEBUG nova.compute.manager [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:50 compute-0 nova_compute[260603]: 2025-10-02 08:43:50.230 2 DEBUG nova.compute.manager [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:43:50 compute-0 nova_compute[260603]: 2025-10-02 08:43:50.230 2 DEBUG oslo_concurrency.lockutils [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:50 compute-0 nova_compute[260603]: 2025-10-02 08:43:50.231 2 DEBUG oslo_concurrency.lockutils [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:50 compute-0 nova_compute[260603]: 2025-10-02 08:43:50.231 2 DEBUG nova.network.neutron [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:43:51 compute-0 ceph-mon[74477]: pgmap v1946: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 140 op/s
Oct 02 08:43:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.408 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.409 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.425 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.507 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.508 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.515 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.515 2 INFO nova.compute.claims [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:43:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:51.645 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:51.647 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:43:51 compute-0 nova_compute[260603]: 2025-10-02 08:43:51.683 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:43:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567126948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.172 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.178 2 DEBUG nova.compute.provider_tree [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.195 2 DEBUG nova.scheduler.client.report [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.221 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.222 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.283 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.283 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.305 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.328 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.438 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.439 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.440 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Creating image(s)
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.464 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.491 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.517 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.521 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.559 2 DEBUG nova.compute.manager [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.560 2 DEBUG nova.compute.manager [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.560 2 DEBUG oslo_concurrency.lockutils [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.588 2 DEBUG nova.policy [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.592 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.592 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.593 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.593 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.617 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.621 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.660 2 DEBUG nova.network.neutron [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.661 2 DEBUG nova.network.neutron [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.710 2 DEBUG oslo_concurrency.lockutils [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.710 2 DEBUG oslo_concurrency.lockutils [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.711 2 DEBUG nova.network.neutron [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:43:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:52 compute-0 nova_compute[260603]: 2025-10-02 08:43:52.949 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:53 compute-0 nova_compute[260603]: 2025-10-02 08:43:53.027 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:43:53 compute-0 ceph-mon[74477]: pgmap v1947: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 02 08:43:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/567126948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:43:53 compute-0 nova_compute[260603]: 2025-10-02 08:43:53.162 2 DEBUG nova.objects.instance [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:53 compute-0 nova_compute[260603]: 2025-10-02 08:43:53.179 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:43:53 compute-0 nova_compute[260603]: 2025-10-02 08:43:53.180 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Ensure instance console log exists: /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:43:53 compute-0 nova_compute[260603]: 2025-10-02 08:43:53.181 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:53 compute-0 nova_compute[260603]: 2025-10-02 08:43:53.181 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:53 compute-0 nova_compute[260603]: 2025-10-02 08:43:53.182 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 267 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 95 op/s
Oct 02 08:43:53 compute-0 nova_compute[260603]: 2025-10-02 08:43:53.547 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Successfully created port: 41b1bf69-0f3d-4e03-9c65-112b4a4b731d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:43:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:53.648 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.466 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Successfully updated port: 41b1bf69-0f3d-4e03-9c65-112b4a4b731d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.487 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.487 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.488 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.575 2 DEBUG nova.compute.manager [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-changed-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.575 2 DEBUG nova.compute.manager [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Refreshing instance network info cache due to event network-changed-41b1bf69-0f3d-4e03-9c65-112b4a4b731d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.576 2 DEBUG oslo_concurrency.lockutils [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.700 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.962 2 DEBUG nova.network.neutron [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.963 2 DEBUG nova.network.neutron [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.979 2 DEBUG oslo_concurrency.lockutils [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:54 compute-0 nova_compute[260603]: 2025-10-02 08:43:54.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:55 compute-0 ceph-mon[74477]: pgmap v1948: 305 pgs: 305 active+clean; 267 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 95 op/s
Oct 02 08:43:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 267 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 75 op/s
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.512 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Updating instance_info_cache with network_info: [{"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.531 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.531 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance network_info: |[{"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.532 2 DEBUG oslo_concurrency.lockutils [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.532 2 DEBUG nova.network.neutron [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Refreshing network info cache for port 41b1bf69-0f3d-4e03-9c65-112b4a4b731d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.535 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start _get_guest_xml network_info=[{"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.539 2 WARNING nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.543 2 DEBUG nova.virt.libvirt.host [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.544 2 DEBUG nova.virt.libvirt.host [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.549 2 DEBUG nova.virt.libvirt.host [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.549 2 DEBUG nova.virt.libvirt.host [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.550 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.550 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.551 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.551 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.551 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.551 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.552 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.552 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.552 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.552 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.553 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.553 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.555 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.946 2 DEBUG nova.compute.manager [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.947 2 DEBUG nova.compute.manager [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.949 2 DEBUG oslo_concurrency.lockutils [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.949 2 DEBUG oslo_concurrency.lockutils [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:43:55 compute-0 nova_compute[260603]: 2025-10-02 08:43:55.950 2 DEBUG nova.network.neutron [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:43:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008580821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.040 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4008580821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.070 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.074 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:43:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812076317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.526 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.529 2 DEBUG nova.virt.libvirt.vif [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=106,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-rpve6319',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:52Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=7f00dee7-5a33-4ae8-a230-2ed05afd17c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.529 2 DEBUG nova.network.os_vif_util [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.532 2 DEBUG nova.network.os_vif_util [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.533 2 DEBUG nova.objects.instance [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.550 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <uuid>7f00dee7-5a33-4ae8-a230-2ed05afd17c3</uuid>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <name>instance-0000006a</name>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159</nova:name>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:43:55</nova:creationTime>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <nova:port uuid="41b1bf69-0f3d-4e03-9c65-112b4a4b731d">
Oct 02 08:43:56 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <system>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <entry name="serial">7f00dee7-5a33-4ae8-a230-2ed05afd17c3</entry>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <entry name="uuid">7f00dee7-5a33-4ae8-a230-2ed05afd17c3</entry>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </system>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <os>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   </os>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <features>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   </features>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk">
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config">
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       </source>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:43:56 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:80:a1:dd"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <target dev="tap41b1bf69-0f"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/console.log" append="off"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <video>
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </video>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:43:56 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:43:56 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:43:56 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:43:56 compute-0 nova_compute[260603]: </domain>
Oct 02 08:43:56 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.552 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Preparing to wait for external event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.553 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.553 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.554 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.555 2 DEBUG nova.virt.libvirt.vif [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=106,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-rpve6319',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:52Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=7f00dee7-5a33-4ae8-a230-2ed05afd17c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.555 2 DEBUG nova.network.os_vif_util [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.557 2 DEBUG nova.network.os_vif_util [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.557 2 DEBUG os_vif [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.560 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b1bf69-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.567 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41b1bf69-0f, col_values=(('external_ids', {'iface-id': '41b1bf69-0f3d-4e03-9c65-112b4a4b731d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:a1:dd', 'vm-uuid': '7f00dee7-5a33-4ae8-a230-2ed05afd17c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:56 compute-0 NetworkManager[45129]: <info>  [1759394636.5696] manager: (tap41b1bf69-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.579 2 INFO os_vif [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f')
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.633 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.633 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.634 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:80:a1:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.634 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Using config drive
Oct 02 08:43:56 compute-0 nova_compute[260603]: 2025-10-02 08:43:56.664 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:57 compute-0 ceph-mon[74477]: pgmap v1949: 305 pgs: 305 active+clean; 267 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 75 op/s
Oct 02 08:43:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2812076317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:43:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 288 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 132 op/s
Oct 02 08:43:57 compute-0 nova_compute[260603]: 2025-10-02 08:43:57.623 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Creating config drive at /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config
Oct 02 08:43:57 compute-0 nova_compute[260603]: 2025-10-02 08:43:57.633 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpocab6tuf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:57 compute-0 nova_compute[260603]: 2025-10-02 08:43:57.789 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpocab6tuf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:57 compute-0 nova_compute[260603]: 2025-10-02 08:43:57.831 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:43:57 compute-0 nova_compute[260603]: 2025-10-02 08:43:57.836 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:43:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.077 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.078 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Deleting local config drive /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config because it was imported into RBD.
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.120 2 DEBUG nova.network.neutron [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.121 2 DEBUG nova.network.neutron [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:58 compute-0 NetworkManager[45129]: <info>  [1759394638.1455] manager: (tap41b1bf69-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Oct 02 08:43:58 compute-0 kernel: tap41b1bf69-0f: entered promiscuous mode
Oct 02 08:43:58 compute-0 ovn_controller[152344]: 2025-10-02T08:43:58Z|01084|binding|INFO|Claiming lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d for this chassis.
Oct 02 08:43:58 compute-0 ovn_controller[152344]: 2025-10-02T08:43:58Z|01085|binding|INFO|41b1bf69-0f3d-4e03-9c65-112b4a4b731d: Claiming fa:16:3e:80:a1:dd 10.100.0.11
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.148 2 DEBUG oslo_concurrency.lockutils [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.156 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:a1:dd 10.100.0.11'], port_security=['fa:16:3e:80:a1:dd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7f00dee7-5a33-4ae8-a230-2ed05afd17c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd3198b8-8ad4-4f59-8253-4227db95b8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e2b1c02-e727-474f-a9ba-53199387490d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=41b1bf69-0f3d-4e03-9c65-112b4a4b731d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.158 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 41b1bf69-0f3d-4e03-9c65-112b4a4b731d in datapath c7addd6c-480f-45ed-94c2-18d1d2248acb bound to our chassis
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.161 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c7addd6c-480f-45ed-94c2-18d1d2248acb
Oct 02 08:43:58 compute-0 ovn_controller[152344]: 2025-10-02T08:43:58Z|01086|binding|INFO|Setting lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d up in Southbound
Oct 02 08:43:58 compute-0 ovn_controller[152344]: 2025-10-02T08:43:58Z|01087|binding|INFO|Setting lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d ovn-installed in OVS
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.189 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8730b7-8cc8-4ed5-baa7-de06302b50a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:58 compute-0 systemd-udevd[366555]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:43:58 compute-0 systemd-machined[214636]: New machine qemu-133-instance-0000006a.
Oct 02 08:43:58 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006a.
Oct 02 08:43:58 compute-0 NetworkManager[45129]: <info>  [1759394638.2184] device (tap41b1bf69-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:43:58 compute-0 NetworkManager[45129]: <info>  [1759394638.2207] device (tap41b1bf69-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.231 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9c3dc5-09a6-411a-8e9b-b034a55e1ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.236 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e825acd3-ecda-4ec6-a254-7df34312606f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.281 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[db78ca5f-73ad-408a-985f-06515ed01f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.306 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0035a7-a214-4fbe-9470-98eba81c7512]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7addd6c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547997, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366567, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.329 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[942d2813-fee7-43b3-b9ed-e7db82a41231]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc7addd6c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548011, 'tstamp': 548011}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366569, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc7addd6c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548014, 'tstamp': 548014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366569, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.332 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7addd6c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.336 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7addd6c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.336 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.337 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc7addd6c-40, col_values=(('external_ids', {'iface-id': '8f66d020-a258-4e14-aa3b-234835306a91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:43:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.338 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.413 2 DEBUG nova.compute.manager [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.414 2 DEBUG oslo_concurrency.lockutils [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.414 2 DEBUG oslo_concurrency.lockutils [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.415 2 DEBUG oslo_concurrency.lockutils [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.415 2 DEBUG nova.compute.manager [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Processing event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.606 2 DEBUG nova.network.neutron [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Updated VIF entry in instance network info cache for port 41b1bf69-0f3d-4e03-9c65-112b4a4b731d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.607 2 DEBUG nova.network.neutron [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Updating instance_info_cache with network_info: [{"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:43:58 compute-0 nova_compute[260603]: 2025-10-02 08:43:58.625 2 DEBUG oslo_concurrency.lockutils [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:43:59 compute-0 ceph-mon[74477]: pgmap v1950: 305 pgs: 305 active+clean; 288 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 132 op/s
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.104 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394639.1039274, 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.105 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] VM Started (Lifecycle Event)
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.108 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.116 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.122 2 INFO nova.virt.libvirt.driver [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance spawned successfully.
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.123 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.128 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.132 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.149 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.149 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.150 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.150 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.151 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.152 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.156 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.156 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394639.1051219, 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.156 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] VM Paused (Lifecycle Event)
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.186 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.193 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394639.112804, 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.193 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] VM Resumed (Lifecycle Event)
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.226 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.229 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.242 2 INFO nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Took 6.80 seconds to spawn the instance on the hypervisor.
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.243 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.255 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.297 2 INFO nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Took 7.83 seconds to build instance.
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.321 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:43:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 293 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 02 08:43:59 compute-0 nova_compute[260603]: 2025-10-02 08:43:59.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.720 2 DEBUG nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.720 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.720 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.721 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.721 2 DEBUG nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] No waiting events found dispatching network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.721 2 WARNING nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received unexpected event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d for instance with vm_state active and task_state None.
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.721 2 DEBUG nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.721 2 DEBUG nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.722 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.722 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:44:00 compute-0 nova_compute[260603]: 2025-10-02 08:44:00.722 2 DEBUG nova.network.neutron [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:44:01 compute-0 ceph-mon[74477]: pgmap v1951: 305 pgs: 305 active+clean; 293 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 02 08:44:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 293 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 08:44:01 compute-0 nova_compute[260603]: 2025-10-02 08:44:01.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:03 compute-0 ceph-mon[74477]: pgmap v1952: 305 pgs: 305 active+clean; 293 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 08:44:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 295 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Oct 02 08:44:03 compute-0 nova_compute[260603]: 2025-10-02 08:44:03.651 2 DEBUG nova.network.neutron [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:44:03 compute-0 nova_compute[260603]: 2025-10-02 08:44:03.652 2 DEBUG nova.network.neutron [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:03 compute-0 nova_compute[260603]: 2025-10-02 08:44:03.667 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:44:04 compute-0 nova_compute[260603]: 2025-10-02 08:44:04.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:05 compute-0 podman[366613]: 2025-10-02 08:44:05.055952996 +0000 UTC m=+0.104274990 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 08:44:05 compute-0 podman[366612]: 2025-10-02 08:44:05.083705491 +0000 UTC m=+0.136224266 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:44:05 compute-0 ceph-mon[74477]: pgmap v1953: 305 pgs: 305 active+clean; 295 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Oct 02 08:44:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 295 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 160 op/s
Oct 02 08:44:05 compute-0 nova_compute[260603]: 2025-10-02 08:44:05.948 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:05 compute-0 nova_compute[260603]: 2025-10-02 08:44:05.949 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:05 compute-0 nova_compute[260603]: 2025-10-02 08:44:05.949 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:05 compute-0 nova_compute[260603]: 2025-10-02 08:44:05.949 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:05 compute-0 nova_compute[260603]: 2025-10-02 08:44:05.950 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:05 compute-0 nova_compute[260603]: 2025-10-02 08:44:05.951 2 INFO nova.compute.manager [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Terminating instance
Oct 02 08:44:05 compute-0 nova_compute[260603]: 2025-10-02 08:44:05.952 2 DEBUG nova.compute.manager [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:44:06 compute-0 ceph-mon[74477]: pgmap v1954: 305 pgs: 305 active+clean; 295 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 160 op/s
Oct 02 08:44:06 compute-0 kernel: tapd9b60bcb-8d (unregistering): left promiscuous mode
Oct 02 08:44:06 compute-0 NetworkManager[45129]: <info>  [1759394646.5036] device (tapd9b60bcb-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 ovn_controller[152344]: 2025-10-02T08:44:06Z|01088|binding|INFO|Releasing lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 from this chassis (sb_readonly=0)
Oct 02 08:44:06 compute-0 ovn_controller[152344]: 2025-10-02T08:44:06Z|01089|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 down in Southbound
Oct 02 08:44:06 compute-0 ovn_controller[152344]: 2025-10-02T08:44:06Z|01090|binding|INFO|Removing iface tapd9b60bcb-8d ovn-installed in OVS
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:06.529 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:24:8a 10.100.0.8'], port_security=['fa:16:3e:d2:24:8a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34c4e106-9919-4d6d-a50a-81b3894f2e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e6912f4e-082b-4b15-9608-2b9595c16211', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b81b03c-f256-4f5a-8b01-0b7991a52f3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:44:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:06.532 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 in datapath 34c4e106-9919-4d6d-a50a-81b3894f2e5e unbound from our chassis
Oct 02 08:44:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:06.533 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34c4e106-9919-4d6d-a50a-81b3894f2e5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:44:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:06.536 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e099a061-8a73-4be0-a493-b29595cc0280]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 02 08:44:06 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Consumed 13.198s CPU time.
Oct 02 08:44:06 compute-0 systemd-machined[214636]: Machine qemu-132-instance-00000068 terminated.
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.804 2 INFO nova.virt.libvirt.driver [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance destroyed successfully.
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.805 2 DEBUG nova.objects.instance [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'resources' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.819 2 DEBUG nova.virt.libvirt.vif [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:43:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-618462512',display_name='tempest-ServerRescueTestJSONUnderV235-server-618462512',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-618462512',id=104,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:43:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae7dae3968e448f1b3ace692d9d76cff',ramdisk_id='',reservation_id='r-g69g07mg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-299264470',owner_user_name='tempest-ServerRescueTestJSONUnderV235-299264470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:43:43Z,user_data=None,user_id='7e27caab7dd34e4a9cac5f4f1880fad8',uuid=1c24cd5c-a165-4fcf-b24d-245a60f7ea11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.820 2 DEBUG nova.network.os_vif_util [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converting VIF {"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.821 2 DEBUG nova.network.os_vif_util [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.821 2 DEBUG os_vif [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9b60bcb-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:06 compute-0 nova_compute[260603]: 2025-10-02 08:44:06.830 2 INFO os_vif [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d')
Oct 02 08:44:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 265 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 186 op/s
Oct 02 08:44:07 compute-0 nova_compute[260603]: 2025-10-02 08:44:07.503 2 INFO nova.virt.libvirt.driver [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deleting instance files /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_del
Oct 02 08:44:07 compute-0 nova_compute[260603]: 2025-10-02 08:44:07.504 2 INFO nova.virt.libvirt.driver [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deletion of /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_del complete
Oct 02 08:44:07 compute-0 nova_compute[260603]: 2025-10-02 08:44:07.588 2 INFO nova.compute.manager [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Took 1.64 seconds to destroy the instance on the hypervisor.
Oct 02 08:44:07 compute-0 nova_compute[260603]: 2025-10-02 08:44:07.589 2 DEBUG oslo.service.loopingcall [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:44:07 compute-0 nova_compute[260603]: 2025-10-02 08:44:07.589 2 DEBUG nova.compute.manager [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:44:07 compute-0 nova_compute[260603]: 2025-10-02 08:44:07.589 2 DEBUG nova.network.neutron [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:44:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:08 compute-0 ceph-mon[74477]: pgmap v1955: 305 pgs: 305 active+clean; 265 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 186 op/s
Oct 02 08:44:08 compute-0 nova_compute[260603]: 2025-10-02 08:44:08.523 2 DEBUG nova.network.neutron [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:08 compute-0 nova_compute[260603]: 2025-10-02 08:44:08.542 2 INFO nova.compute.manager [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Took 0.95 seconds to deallocate network for instance.
Oct 02 08:44:08 compute-0 nova_compute[260603]: 2025-10-02 08:44:08.586 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:08 compute-0 nova_compute[260603]: 2025-10-02 08:44:08.587 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:08 compute-0 nova_compute[260603]: 2025-10-02 08:44:08.604 2 DEBUG nova.compute.manager [req-ffe8d87a-b661-44f3-ba31-26d69af7fe22 req-0f51a2e2-8415-4cef-87dc-dcb4db95893f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-deleted-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:08 compute-0 nova_compute[260603]: 2025-10-02 08:44:08.679 2 DEBUG oslo_concurrency.processutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:44:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/127347217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:09 compute-0 nova_compute[260603]: 2025-10-02 08:44:09.144 2 DEBUG oslo_concurrency.processutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:09 compute-0 nova_compute[260603]: 2025-10-02 08:44:09.150 2 DEBUG nova.compute.provider_tree [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:44:09 compute-0 nova_compute[260603]: 2025-10-02 08:44:09.166 2 DEBUG nova.scheduler.client.report [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:44:09 compute-0 nova_compute[260603]: 2025-10-02 08:44:09.181 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:09 compute-0 nova_compute[260603]: 2025-10-02 08:44:09.222 2 INFO nova.scheduler.client.report [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Deleted allocations for instance 1c24cd5c-a165-4fcf-b24d-245a60f7ea11
Oct 02 08:44:09 compute-0 nova_compute[260603]: 2025-10-02 08:44:09.281 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 187 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 296 KiB/s wr, 145 op/s
Oct 02 08:44:09 compute-0 nova_compute[260603]: 2025-10-02 08:44:09.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/127347217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:10 compute-0 ovn_controller[152344]: 2025-10-02T08:44:10Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:a1:dd 10.100.0.11
Oct 02 08:44:10 compute-0 ovn_controller[152344]: 2025-10-02T08:44:10Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:a1:dd 10.100.0.11
Oct 02 08:44:10 compute-0 ceph-mon[74477]: pgmap v1956: 305 pgs: 305 active+clean; 187 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 296 KiB/s wr, 145 op/s
Oct 02 08:44:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 187 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 116 op/s
Oct 02 08:44:11 compute-0 nova_compute[260603]: 2025-10-02 08:44:11.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:12 compute-0 podman[366711]: 2025-10-02 08:44:12.016937509 +0000 UTC m=+0.076298889 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:44:12 compute-0 podman[366712]: 2025-10-02 08:44:12.031148013 +0000 UTC m=+0.077115585 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 08:44:12 compute-0 ceph-mon[74477]: pgmap v1957: 305 pgs: 305 active+clean; 187 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 116 op/s
Oct 02 08:44:12 compute-0 ovn_controller[152344]: 2025-10-02T08:44:12Z|01091|binding|INFO|Releasing lport 8f66d020-a258-4e14-aa3b-234835306a91 from this chassis (sb_readonly=0)
Oct 02 08:44:12 compute-0 nova_compute[260603]: 2025-10-02 08:44:12.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Oct 02 08:44:14 compute-0 nova_compute[260603]: 2025-10-02 08:44:14.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:14 compute-0 ceph-mon[74477]: pgmap v1958: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Oct 02 08:44:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 02 08:44:16 compute-0 ceph-mon[74477]: pgmap v1959: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.850 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.851 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.852 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.852 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.853 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.855 2 INFO nova.compute.manager [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Terminating instance
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.857 2 DEBUG nova.compute.manager [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:44:16 compute-0 kernel: tap41b1bf69-0f (unregistering): left promiscuous mode
Oct 02 08:44:16 compute-0 NetworkManager[45129]: <info>  [1759394656.9270] device (tap41b1bf69-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:44:16 compute-0 ovn_controller[152344]: 2025-10-02T08:44:16Z|01092|binding|INFO|Releasing lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d from this chassis (sb_readonly=0)
Oct 02 08:44:16 compute-0 ovn_controller[152344]: 2025-10-02T08:44:16Z|01093|binding|INFO|Setting lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d down in Southbound
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:16 compute-0 ovn_controller[152344]: 2025-10-02T08:44:16Z|01094|binding|INFO|Removing iface tap41b1bf69-0f ovn-installed in OVS
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:16.946 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:a1:dd 10.100.0.11'], port_security=['fa:16:3e:80:a1:dd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7f00dee7-5a33-4ae8-a230-2ed05afd17c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd3198b8-8ad4-4f59-8253-4227db95b8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e2b1c02-e727-474f-a9ba-53199387490d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=41b1bf69-0f3d-4e03-9c65-112b4a4b731d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:44:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:16.947 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 41b1bf69-0f3d-4e03-9c65-112b4a4b731d in datapath c7addd6c-480f-45ed-94c2-18d1d2248acb unbound from our chassis
Oct 02 08:44:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:16.949 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c7addd6c-480f-45ed-94c2-18d1d2248acb
Oct 02 08:44:16 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:16.969 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc12519-05f9-4810-804e-1778efb3965b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:16 compute-0 nova_compute[260603]: 2025-10-02 08:44:16.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:16 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 02 08:44:17 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Consumed 12.243s CPU time.
Oct 02 08:44:17 compute-0 systemd-machined[214636]: Machine qemu-133-instance-0000006a terminated.
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.016 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f18050-a429-4a06-8c9e-203ca581931c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.021 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f4ce71-48df-4285-8bd4-ea7e84582a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.062 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3f016b2c-1a8e-4cc1-af1d-38a123159584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.099 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d69429-bb52-4d0e-967f-17a49585f212]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7addd6c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 8, 'rx_bytes': 958, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 8, 'rx_bytes': 958, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547997, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366760, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.113 2 INFO nova.virt.libvirt.driver [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance destroyed successfully.
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.113 2 DEBUG nova.objects.instance [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.132 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3205ebe2-535c-482a-951a-e424691e6f23]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc7addd6c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548011, 'tstamp': 548011}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366771, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc7addd6c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548014, 'tstamp': 548014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366771, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.134 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7addd6c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.134 2 DEBUG nova.virt.libvirt.vif [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=106,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:43:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-rpve6319',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:43:59Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=7f00dee7-5a33-4ae8-a230-2ed05afd17c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.134 2 DEBUG nova.network.os_vif_util [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.135 2 DEBUG nova.network.os_vif_util [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.135 2 DEBUG os_vif [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b1bf69-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.146 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7addd6c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.147 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.148 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc7addd6c-40, col_values=(('external_ids', {'iface-id': '8f66d020-a258-4e14-aa3b-234835306a91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.148 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.149 2 INFO os_vif [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f')
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.327 2 DEBUG nova.compute.manager [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-unplugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.329 2 DEBUG oslo_concurrency.lockutils [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.329 2 DEBUG oslo_concurrency.lockutils [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.330 2 DEBUG oslo_concurrency.lockutils [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.331 2 DEBUG nova.compute.manager [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] No waiting events found dispatching network-vif-unplugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.332 2 DEBUG nova.compute.manager [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-unplugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:44:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.570 2 INFO nova.virt.libvirt.driver [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Deleting instance files /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3_del
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.571 2 INFO nova.virt.libvirt.driver [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Deletion of /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3_del complete
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.618 2 INFO nova.compute.manager [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.618 2 DEBUG oslo.service.loopingcall [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.618 2 DEBUG nova.compute.manager [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:44:17 compute-0 nova_compute[260603]: 2025-10-02 08:44:17.619 2 DEBUG nova.network.neutron [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:44:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:18 compute-0 ceph-mon[74477]: pgmap v1960: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 02 08:44:18 compute-0 nova_compute[260603]: 2025-10-02 08:44:18.973 2 DEBUG nova.network.neutron [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:18 compute-0 nova_compute[260603]: 2025-10-02 08:44:18.996 2 INFO nova.compute.manager [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Took 1.38 seconds to deallocate network for instance.
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.035 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.036 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:19 compute-0 sudo[366793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.114 2 DEBUG oslo_concurrency.processutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:19 compute-0 sudo[366793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:19 compute-0 sudo[366793]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:19 compute-0 sudo[366819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:44:19 compute-0 sudo[366819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:19 compute-0 sudo[366819]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:19 compute-0 sudo[366844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:19 compute-0 sudo[366844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:19 compute-0 sudo[366844]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:19 compute-0 sudo[366869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:44:19 compute-0 sudo[366869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 159 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.519 2 DEBUG nova.compute.manager [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.520 2 DEBUG oslo_concurrency.lockutils [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.520 2 DEBUG oslo_concurrency.lockutils [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.520 2 DEBUG oslo_concurrency.lockutils [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.520 2 DEBUG nova.compute.manager [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] No waiting events found dispatching network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.521 2 WARNING nova.compute.manager [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received unexpected event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d for instance with vm_state deleted and task_state None.
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.525 2 DEBUG nova.compute.manager [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-deleted-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.526 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.526 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:44:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:44:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040599607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.567 2 DEBUG oslo_concurrency.processutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.573 2 DEBUG nova.compute.provider_tree [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.591 2 DEBUG nova.scheduler.client.report [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.608 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.639 2 INFO nova.scheduler.client.report [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance 7f00dee7-5a33-4ae8-a230-2ed05afd17c3
Oct 02 08:44:19 compute-0 nova_compute[260603]: 2025-10-02 08:44:19.704 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:19 compute-0 sudo[366869]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:44:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:44:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:44:19 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:44:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:44:19 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:44:19 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6c4babb2-c41d-4446-b207-f3e46ec30239 does not exist
Oct 02 08:44:19 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 917733e1-6c46-4651-bd0f-300007534a22 does not exist
Oct 02 08:44:19 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ceea174c-c05b-4e35-a850-167dabbbe83e does not exist
Oct 02 08:44:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:44:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:44:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:44:19 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:44:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:44:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:44:19 compute-0 sudo[366947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:19 compute-0 sudo[366947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:19 compute-0 sudo[366947]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:19 compute-0 sudo[366972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:44:19 compute-0 sudo[366972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:19 compute-0 sudo[366972]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:19 compute-0 sudo[366997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:20 compute-0 sudo[366997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:20 compute-0 sudo[366997]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:20 compute-0 sudo[367022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:44:20 compute-0 sudo[367022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:20 compute-0 ceph-mon[74477]: pgmap v1961: 305 pgs: 305 active+clean; 159 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 02 08:44:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4040599607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:20 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:44:20 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:44:20 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:44:20 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:44:20 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:44:20 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:44:20 compute-0 podman[367088]: 2025-10-02 08:44:20.545137111 +0000 UTC m=+0.069547179 container create 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:44:20 compute-0 systemd[1]: Started libpod-conmon-722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e.scope.
Oct 02 08:44:20 compute-0 podman[367088]: 2025-10-02 08:44:20.515839477 +0000 UTC m=+0.040249606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:44:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:44:20 compute-0 podman[367088]: 2025-10-02 08:44:20.658599287 +0000 UTC m=+0.183009405 container init 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:44:20 compute-0 podman[367088]: 2025-10-02 08:44:20.670624362 +0000 UTC m=+0.195034430 container start 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:44:20 compute-0 podman[367088]: 2025-10-02 08:44:20.674859144 +0000 UTC m=+0.199269222 container attach 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 08:44:20 compute-0 wonderful_almeida[367105]: 167 167
Oct 02 08:44:20 compute-0 systemd[1]: libpod-722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e.scope: Deactivated successfully.
Oct 02 08:44:20 compute-0 podman[367088]: 2025-10-02 08:44:20.680639324 +0000 UTC m=+0.205049402 container died 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce7dddef25a0a59b300450fd697cab93de6c2bb25a2b1003400e071a8ba87787-merged.mount: Deactivated successfully.
Oct 02 08:44:20 compute-0 podman[367088]: 2025-10-02 08:44:20.743001708 +0000 UTC m=+0.267411776 container remove 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:44:20 compute-0 systemd[1]: libpod-conmon-722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e.scope: Deactivated successfully.
Oct 02 08:44:20 compute-0 podman[367128]: 2025-10-02 08:44:20.98724096 +0000 UTC m=+0.057261665 container create 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:44:21 compute-0 systemd[1]: Started libpod-conmon-9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a.scope.
Oct 02 08:44:21 compute-0 podman[367128]: 2025-10-02 08:44:20.95482486 +0000 UTC m=+0.024845555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:44:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:21 compute-0 podman[367128]: 2025-10-02 08:44:21.093002697 +0000 UTC m=+0.163023372 container init 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:44:21 compute-0 podman[367128]: 2025-10-02 08:44:21.112010819 +0000 UTC m=+0.182031514 container start 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 08:44:21 compute-0 podman[367128]: 2025-10-02 08:44:21.116975104 +0000 UTC m=+0.186995799 container attach 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.179 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.183 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.183 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.184 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.184 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.186 2 INFO nova.compute.manager [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Terminating instance
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.188 2 DEBUG nova.compute.manager [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:44:21 compute-0 kernel: tapaf8bcfc4-36 (unregistering): left promiscuous mode
Oct 02 08:44:21 compute-0 NetworkManager[45129]: <info>  [1759394661.2503] device (tapaf8bcfc4-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:21 compute-0 ovn_controller[152344]: 2025-10-02T08:44:21Z|01095|binding|INFO|Releasing lport af8bcfc4-3690-4b5f-9893-5555fa376203 from this chassis (sb_readonly=0)
Oct 02 08:44:21 compute-0 ovn_controller[152344]: 2025-10-02T08:44:21Z|01096|binding|INFO|Setting lport af8bcfc4-3690-4b5f-9893-5555fa376203 down in Southbound
Oct 02 08:44:21 compute-0 ovn_controller[152344]: 2025-10-02T08:44:21Z|01097|binding|INFO|Removing iface tapaf8bcfc4-36 ovn-installed in OVS
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.318 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:6d:19 10.100.0.5'], port_security=['fa:16:3e:c6:6d:19 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bccc9587-6f96-4032-ae07-56ab00988869', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87949ce5-c546-4e93-ab3f-46b861ef5238 dd3198b8-8ad4-4f59-8253-4227db95b8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e2b1c02-e727-474f-a9ba-53199387490d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=af8bcfc4-3690-4b5f-9893-5555fa376203) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.319 162357 INFO neutron.agent.ovn.metadata.agent [-] Port af8bcfc4-3690-4b5f-9893-5555fa376203 in datapath c7addd6c-480f-45ed-94c2-18d1d2248acb unbound from our chassis
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.320 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c7addd6c-480f-45ed-94c2-18d1d2248acb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.323 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8afc47f-ad50-46d6-8e5c-f74ef01641aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.324 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb namespace which is not needed anymore
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:21 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 02 08:44:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 159 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 02 08:44:21 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Consumed 15.906s CPU time.
Oct 02 08:44:21 compute-0 systemd-machined[214636]: Machine qemu-131-instance-00000069 terminated.
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.443 2 INFO nova.virt.libvirt.driver [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance destroyed successfully.
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.444 2 DEBUG nova.objects.instance [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid bccc9587-6f96-4032-ae07-56ab00988869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.466 2 DEBUG nova.virt.libvirt.vif [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=105,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:43:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-lm9mu0gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:43:23Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=bccc9587-6f96-4032-ae07-56ab00988869,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.467 2 DEBUG nova.network.os_vif_util [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.470 2 DEBUG nova.network.os_vif_util [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.471 2 DEBUG os_vif [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf8bcfc4-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.487 2 INFO os_vif [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36')
Oct 02 08:44:21 compute-0 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [NOTICE]   (365728) : haproxy version is 2.8.14-c23fe91
Oct 02 08:44:21 compute-0 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [NOTICE]   (365728) : path to executable is /usr/sbin/haproxy
Oct 02 08:44:21 compute-0 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [WARNING]  (365728) : Exiting Master process...
Oct 02 08:44:21 compute-0 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [WARNING]  (365728) : Exiting Master process...
Oct 02 08:44:21 compute-0 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [ALERT]    (365728) : Current worker (365730) exited with code 143 (Terminated)
Oct 02 08:44:21 compute-0 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [WARNING]  (365728) : All workers exited. Exiting... (0)
Oct 02 08:44:21 compute-0 systemd[1]: libpod-ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2.scope: Deactivated successfully.
Oct 02 08:44:21 compute-0 podman[367182]: 2025-10-02 08:44:21.553401107 +0000 UTC m=+0.061524199 container died ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2-userdata-shm.mount: Deactivated successfully.
Oct 02 08:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-644ad21fe1b1e836c44e9821eabc2e68b4056b923e74837b9185693307a8e425-merged.mount: Deactivated successfully.
Oct 02 08:44:21 compute-0 podman[367182]: 2025-10-02 08:44:21.630967474 +0000 UTC m=+0.139090536 container cleanup ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 08:44:21 compute-0 systemd[1]: libpod-conmon-ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2.scope: Deactivated successfully.
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.675 2 DEBUG nova.compute.manager [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.678 2 DEBUG nova.compute.manager [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing instance network info cache due to event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.679 2 DEBUG oslo_concurrency.lockutils [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.679 2 DEBUG oslo_concurrency.lockutils [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.679 2 DEBUG nova.network.neutron [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:44:21 compute-0 podman[367229]: 2025-10-02 08:44:21.730218637 +0000 UTC m=+0.066595406 container remove ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.737 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6693c2ef-be59-46c8-a037-928198bafefd]: (4, ('Thu Oct  2 08:44:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb (ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2)\nba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2\nThu Oct  2 08:44:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb (ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2)\nba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.740 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4f87ac-2428-4255-a0c1-13089e436fe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.742 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7addd6c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:21 compute-0 kernel: tapc7addd6c-40: left promiscuous mode
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.780 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[217ecd99-b017-444e-8155-56687a929e6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.798 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394646.7972877, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.799 2 INFO nova.compute.manager [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Stopped (Lifecycle Event)
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2e148327-7f9e-4b5f-8f98-09590d7718a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.807 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f783d9-3176-4c0a-bed6-b371bf6dc65c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.817 2 DEBUG nova.compute.manager [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-unplugged-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.818 2 DEBUG oslo_concurrency.lockutils [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.818 2 DEBUG oslo_concurrency.lockutils [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.818 2 DEBUG oslo_concurrency.lockutils [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.819 2 DEBUG nova.compute.manager [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] No waiting events found dispatching network-vif-unplugged-af8bcfc4-3690-4b5f-9893-5555fa376203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.819 2 DEBUG nova.compute.manager [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-unplugged-af8bcfc4-3690-4b5f-9893-5555fa376203 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.826 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[965d589f-c877-498c-92f4-b949d4ee07bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547988, 'reachable_time': 20763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367249, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.829 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:44:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.830 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1434ef32-3940-483e-b8f5-1e689146ae49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:21 compute-0 systemd[1]: run-netns-ovnmeta\x2dc7addd6c\x2d480f\x2d45ed\x2d94c2\x2d18d1d2248acb.mount: Deactivated successfully.
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.834 2 DEBUG nova.compute.manager [None req-dcc39f44-02ac-482e-ac64-f477158307a6 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.952 2 INFO nova.virt.libvirt.driver [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Deleting instance files /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869_del
Oct 02 08:44:21 compute-0 nova_compute[260603]: 2025-10-02 08:44:21.953 2 INFO nova.virt.libvirt.driver [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Deletion of /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869_del complete
Oct 02 08:44:22 compute-0 nova_compute[260603]: 2025-10-02 08:44:22.018 2 INFO nova.compute.manager [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 02 08:44:22 compute-0 nova_compute[260603]: 2025-10-02 08:44:22.019 2 DEBUG oslo.service.loopingcall [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:44:22 compute-0 nova_compute[260603]: 2025-10-02 08:44:22.020 2 DEBUG nova.compute.manager [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:44:22 compute-0 nova_compute[260603]: 2025-10-02 08:44:22.020 2 DEBUG nova.network.neutron [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:44:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:44:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2575932264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:44:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:44:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2575932264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:44:22 compute-0 compassionate_gould[367145]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:44:22 compute-0 compassionate_gould[367145]: --> relative data size: 1.0
Oct 02 08:44:22 compute-0 compassionate_gould[367145]: --> All data devices are unavailable
Oct 02 08:44:22 compute-0 systemd[1]: libpod-9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a.scope: Deactivated successfully.
Oct 02 08:44:22 compute-0 systemd[1]: libpod-9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a.scope: Consumed 1.057s CPU time.
Oct 02 08:44:22 compute-0 podman[367128]: 2025-10-02 08:44:22.236065044 +0000 UTC m=+1.306085739 container died 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327-merged.mount: Deactivated successfully.
Oct 02 08:44:22 compute-0 podman[367128]: 2025-10-02 08:44:22.312881308 +0000 UTC m=+1.382901973 container remove 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:44:22 compute-0 systemd[1]: libpod-conmon-9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a.scope: Deactivated successfully.
Oct 02 08:44:22 compute-0 sudo[367022]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:22 compute-0 sudo[367282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:22 compute-0 sudo[367282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:22 compute-0 sudo[367282]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:22 compute-0 ceph-mon[74477]: pgmap v1962: 305 pgs: 305 active+clean; 159 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 02 08:44:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2575932264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:44:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2575932264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:44:22 compute-0 sudo[367307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:44:22 compute-0 sudo[367307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:22 compute-0 sudo[367307]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:22 compute-0 sudo[367332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:22 compute-0 sudo[367332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:22 compute-0 sudo[367332]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:22 compute-0 sudo[367357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:44:22 compute-0 sudo[367357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:23 compute-0 podman[367424]: 2025-10-02 08:44:23.221844859 +0000 UTC m=+0.053057924 container create 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:44:23 compute-0 systemd[1]: Started libpod-conmon-4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b.scope.
Oct 02 08:44:23 compute-0 podman[367424]: 2025-10-02 08:44:23.195458708 +0000 UTC m=+0.026671833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:44:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:44:23 compute-0 podman[367424]: 2025-10-02 08:44:23.321291299 +0000 UTC m=+0.152504424 container init 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 08:44:23 compute-0 podman[367424]: 2025-10-02 08:44:23.333174849 +0000 UTC m=+0.164387924 container start 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:44:23 compute-0 podman[367424]: 2025-10-02 08:44:23.337878046 +0000 UTC m=+0.169091181 container attach 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:44:23 compute-0 affectionate_margulis[367440]: 167 167
Oct 02 08:44:23 compute-0 systemd[1]: libpod-4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b.scope: Deactivated successfully.
Oct 02 08:44:23 compute-0 podman[367424]: 2025-10-02 08:44:23.343314686 +0000 UTC m=+0.174527751 container died 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 08:44:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 08:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-28a91eb7152b7677524d3d1ecff4cfe435fe16ca16f6691c6d4898125d405d75-merged.mount: Deactivated successfully.
Oct 02 08:44:23 compute-0 podman[367424]: 2025-10-02 08:44:23.39319206 +0000 UTC m=+0.224405125 container remove 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:44:23 compute-0 systemd[1]: libpod-conmon-4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b.scope: Deactivated successfully.
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.453 2 DEBUG nova.network.neutron [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.475 2 INFO nova.compute.manager [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Took 1.46 seconds to deallocate network for instance.
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.557 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.557 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:23 compute-0 podman[367463]: 2025-10-02 08:44:23.607940353 +0000 UTC m=+0.051654040 container create 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.614 2 DEBUG oslo_concurrency.processutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:23 compute-0 systemd[1]: Started libpod-conmon-7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242.scope.
Oct 02 08:44:23 compute-0 podman[367463]: 2025-10-02 08:44:23.585096311 +0000 UTC m=+0.028810018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:44:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:23 compute-0 podman[367463]: 2025-10-02 08:44:23.721571335 +0000 UTC m=+0.165285022 container init 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:44:23 compute-0 podman[367463]: 2025-10-02 08:44:23.739208995 +0000 UTC m=+0.182922682 container start 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:44:23 compute-0 podman[367463]: 2025-10-02 08:44:23.745662796 +0000 UTC m=+0.189376533 container attach 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.841 2 DEBUG nova.compute.manager [req-5529cff5-d743-4530-88a1-3930e8f7bca5 req-b9e88d1f-2ad6-4039-9126-64abac55380e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-deleted-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.996 2 DEBUG nova.compute.manager [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.996 2 DEBUG oslo_concurrency.lockutils [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.997 2 DEBUG oslo_concurrency.lockutils [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.997 2 DEBUG oslo_concurrency.lockutils [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.998 2 DEBUG nova.compute.manager [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] No waiting events found dispatching network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:44:23 compute-0 nova_compute[260603]: 2025-10-02 08:44:23.998 2 WARNING nova.compute.manager [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received unexpected event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 for instance with vm_state deleted and task_state None.
Oct 02 08:44:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:44:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509340627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.066 2 DEBUG oslo_concurrency.processutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.075 2 DEBUG nova.compute.provider_tree [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.102 2 DEBUG nova.scheduler.client.report [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.146 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.182 2 INFO nova.scheduler.client.report [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance bccc9587-6f96-4032-ae07-56ab00988869
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.264 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.268 2 DEBUG nova.network.neutron [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updated VIF entry in instance network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.269 2 DEBUG nova.network.neutron [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.287 2 DEBUG oslo_concurrency.lockutils [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:44:24 compute-0 nova_compute[260603]: 2025-10-02 08:44:24.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:24 compute-0 ceph-mon[74477]: pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 08:44:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1509340627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:24 compute-0 jovial_yalow[367480]: {
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:     "0": [
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:         {
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "devices": [
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "/dev/loop3"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             ],
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_name": "ceph_lv0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_size": "21470642176",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "name": "ceph_lv0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "tags": {
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cluster_name": "ceph",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.crush_device_class": "",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.encrypted": "0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osd_id": "0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.type": "block",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.vdo": "0"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             },
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "type": "block",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "vg_name": "ceph_vg0"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:         }
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:     ],
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:     "1": [
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:         {
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "devices": [
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "/dev/loop4"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             ],
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_name": "ceph_lv1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_size": "21470642176",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "name": "ceph_lv1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "tags": {
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cluster_name": "ceph",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.crush_device_class": "",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.encrypted": "0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osd_id": "1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.type": "block",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.vdo": "0"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             },
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "type": "block",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "vg_name": "ceph_vg1"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:         }
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:     ],
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:     "2": [
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:         {
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "devices": [
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "/dev/loop5"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             ],
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_name": "ceph_lv2",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_size": "21470642176",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "name": "ceph_lv2",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "tags": {
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.cluster_name": "ceph",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.crush_device_class": "",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.encrypted": "0",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osd_id": "2",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.type": "block",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:                 "ceph.vdo": "0"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             },
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "type": "block",
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:             "vg_name": "ceph_vg2"
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:         }
Oct 02 08:44:24 compute-0 jovial_yalow[367480]:     ]
Oct 02 08:44:24 compute-0 jovial_yalow[367480]: }
Oct 02 08:44:24 compute-0 systemd[1]: libpod-7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242.scope: Deactivated successfully.
Oct 02 08:44:24 compute-0 podman[367463]: 2025-10-02 08:44:24.564625732 +0000 UTC m=+1.008339409 container died 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 08:44:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e-merged.mount: Deactivated successfully.
Oct 02 08:44:24 compute-0 podman[367463]: 2025-10-02 08:44:24.640745184 +0000 UTC m=+1.084458871 container remove 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 08:44:24 compute-0 systemd[1]: libpod-conmon-7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242.scope: Deactivated successfully.
Oct 02 08:44:24 compute-0 sudo[367357]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:24 compute-0 sudo[367521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:24 compute-0 sudo[367521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:24 compute-0 sudo[367521]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:24 compute-0 sudo[367546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:44:24 compute-0 sudo[367546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:24 compute-0 sudo[367546]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:24 compute-0 sudo[367571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:24 compute-0 sudo[367571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:24 compute-0 sudo[367571]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:25 compute-0 sudo[367596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:44:25 compute-0 sudo[367596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 02 08:44:25 compute-0 podman[367664]: 2025-10-02 08:44:25.550842551 +0000 UTC m=+0.058431242 container create cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:44:25 compute-0 systemd[1]: Started libpod-conmon-cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f.scope.
Oct 02 08:44:25 compute-0 podman[367664]: 2025-10-02 08:44:25.532035564 +0000 UTC m=+0.039624295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:44:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:44:25 compute-0 podman[367664]: 2025-10-02 08:44:25.64805346 +0000 UTC m=+0.155642211 container init cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 08:44:25 compute-0 podman[367664]: 2025-10-02 08:44:25.660107017 +0000 UTC m=+0.167695738 container start cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:44:25 compute-0 podman[367664]: 2025-10-02 08:44:25.66439333 +0000 UTC m=+0.171982051 container attach cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 08:44:25 compute-0 great_lovelace[367680]: 167 167
Oct 02 08:44:25 compute-0 systemd[1]: libpod-cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f.scope: Deactivated successfully.
Oct 02 08:44:25 compute-0 podman[367664]: 2025-10-02 08:44:25.669675295 +0000 UTC m=+0.177263996 container died cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 08:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f80e4e92436759268a1a32363e53f404da6ea67cdb9b514860f0639dd81c6af3-merged.mount: Deactivated successfully.
Oct 02 08:44:25 compute-0 podman[367664]: 2025-10-02 08:44:25.71604133 +0000 UTC m=+0.223630041 container remove cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:44:25 compute-0 systemd[1]: libpod-conmon-cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f.scope: Deactivated successfully.
Oct 02 08:44:25 compute-0 podman[367704]: 2025-10-02 08:44:25.968330003 +0000 UTC m=+0.078686754 container create ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:44:26 compute-0 systemd[1]: Started libpod-conmon-ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5.scope.
Oct 02 08:44:26 compute-0 podman[367704]: 2025-10-02 08:44:25.937494242 +0000 UTC m=+0.047851053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:44:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:26 compute-0 podman[367704]: 2025-10-02 08:44:26.0975198 +0000 UTC m=+0.207876591 container init ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:44:26 compute-0 nova_compute[260603]: 2025-10-02 08:44:26.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:26 compute-0 podman[367704]: 2025-10-02 08:44:26.111736173 +0000 UTC m=+0.222092904 container start ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:44:26 compute-0 podman[367704]: 2025-10-02 08:44:26.115927384 +0000 UTC m=+0.226284165 container attach ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 08:44:26 compute-0 nova_compute[260603]: 2025-10-02 08:44:26.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:26 compute-0 ceph-mon[74477]: pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]: {
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "osd_id": 2,
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "type": "bluestore"
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:     },
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "osd_id": 1,
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "type": "bluestore"
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:     },
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "osd_id": 0,
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:         "type": "bluestore"
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]:     }
Oct 02 08:44:27 compute-0 nostalgic_hamilton[367720]: }
Oct 02 08:44:27 compute-0 systemd[1]: libpod-ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5.scope: Deactivated successfully.
Oct 02 08:44:27 compute-0 systemd[1]: libpod-ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5.scope: Consumed 1.208s CPU time.
Oct 02 08:44:27 compute-0 podman[367753]: 2025-10-02 08:44:27.358883305 +0000 UTC m=+0.030023447 container died ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 41 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 02 08:44:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6-merged.mount: Deactivated successfully.
Oct 02 08:44:27 compute-0 podman[367753]: 2025-10-02 08:44:27.410808463 +0000 UTC m=+0.081948595 container remove ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 08:44:27 compute-0 systemd[1]: libpod-conmon-ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5.scope: Deactivated successfully.
Oct 02 08:44:27 compute-0 sudo[367596]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:44:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:44:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:44:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ae49bea7-0394-472f-bb2d-f21922ec9726 does not exist
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ed7ec83d-7982-47aa-9fc7-40e7264f8a23 does not exist
Oct 02 08:44:27 compute-0 sudo[367769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:44:27 compute-0 sudo[367769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:27 compute-0 sudo[367769]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:27 compute-0 sudo[367794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:44:27 compute-0 sudo[367794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:44:27 compute-0 sudo[367794]: pam_unix(sudo:session): session closed for user root
Oct 02 08:44:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.870098) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667870153, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1298, "num_deletes": 250, "total_data_size": 1939184, "memory_usage": 1963776, "flush_reason": "Manual Compaction"}
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667885129, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1910056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40141, "largest_seqno": 41438, "table_properties": {"data_size": 1903944, "index_size": 3379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12049, "raw_average_key_size": 18, "raw_value_size": 1891725, "raw_average_value_size": 2870, "num_data_blocks": 151, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394542, "oldest_key_time": 1759394542, "file_creation_time": 1759394667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 15096 microseconds, and 10334 cpu microseconds.
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.885193) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1910056 bytes OK
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.885219) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.886959) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.886984) EVENT_LOG_v1 {"time_micros": 1759394667886976, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.887008) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1933328, prev total WAL file size 1933328, number of live WAL files 2.
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.888209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1865KB)], [89(9384KB)]
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667888292, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11519692, "oldest_snapshot_seqno": -1}
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6590 keys, 10808407 bytes, temperature: kUnknown
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667982744, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10808407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10761370, "index_size": 29453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 168651, "raw_average_key_size": 25, "raw_value_size": 10640464, "raw_average_value_size": 1614, "num_data_blocks": 1174, "num_entries": 6590, "num_filter_entries": 6590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.983085) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10808407 bytes
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.984510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.8 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.7) write-amplify(5.7) OK, records in: 7102, records dropped: 512 output_compression: NoCompression
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.984541) EVENT_LOG_v1 {"time_micros": 1759394667984527, "job": 52, "event": "compaction_finished", "compaction_time_micros": 94565, "compaction_time_cpu_micros": 47386, "output_level": 6, "num_output_files": 1, "total_output_size": 10808407, "num_input_records": 7102, "num_output_records": 6590, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667985426, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:44:27
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667989217, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.888039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:44:27 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups']
Oct 02 08:44:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:44:28 compute-0 ceph-mon[74477]: pgmap v1965: 305 pgs: 305 active+clean; 41 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 02 08:44:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:44:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:44:28 compute-0 nova_compute[260603]: 2025-10-02 08:44:28.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:28 compute-0 nova_compute[260603]: 2025-10-02 08:44:28.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct 02 08:44:29 compute-0 nova_compute[260603]: 2025-10-02 08:44:29.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:29 compute-0 nova_compute[260603]: 2025-10-02 08:44:29.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:30 compute-0 ceph-mon[74477]: pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct 02 08:44:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 37 op/s
Oct 02 08:44:31 compute-0 nova_compute[260603]: 2025-10-02 08:44:31.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:32 compute-0 nova_compute[260603]: 2025-10-02 08:44:32.110 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394657.1091447, 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:32 compute-0 nova_compute[260603]: 2025-10-02 08:44:32.111 2 INFO nova.compute.manager [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] VM Stopped (Lifecycle Event)
Oct 02 08:44:32 compute-0 nova_compute[260603]: 2025-10-02 08:44:32.175 2 DEBUG nova.compute.manager [None req-10bef5f5-bfc2-4046-a516-2ebb1c41d168 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:32 compute-0 ceph-mon[74477]: pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 37 op/s
Oct 02 08:44:32 compute-0 nova_compute[260603]: 2025-10-02 08:44:32.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:32 compute-0 nova_compute[260603]: 2025-10-02 08:44:32.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:44:32 compute-0 nova_compute[260603]: 2025-10-02 08:44:32.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:44:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 37 op/s
Oct 02 08:44:33 compute-0 nova_compute[260603]: 2025-10-02 08:44:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:34 compute-0 nova_compute[260603]: 2025-10-02 08:44:34.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:34 compute-0 ceph-mon[74477]: pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 37 op/s
Oct 02 08:44:34 compute-0 nova_compute[260603]: 2025-10-02 08:44:34.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:34.827 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:35 compute-0 nova_compute[260603]: 2025-10-02 08:44:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:35 compute-0 nova_compute[260603]: 2025-10-02 08:44:35.556 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:35 compute-0 nova_compute[260603]: 2025-10-02 08:44:35.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:35 compute-0 nova_compute[260603]: 2025-10-02 08:44:35.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:35 compute-0 nova_compute[260603]: 2025-10-02 08:44:35.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:44:35 compute-0 nova_compute[260603]: 2025-10-02 08:44:35.559 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:36 compute-0 podman[367841]: 2025-10-02 08:44:36.039585177 +0000 UTC m=+0.095648162 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 08:44:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:44:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1312186581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:36 compute-0 podman[367840]: 2025-10-02 08:44:36.116534395 +0000 UTC m=+0.172686923 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.124 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.324 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.326 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3816MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.326 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.327 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.413 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.414 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.437 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394661.436321, bccc9587-6f96-4032-ae07-56ab00988869 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.438 2 INFO nova.compute.manager [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] VM Stopped (Lifecycle Event)
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.442 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.457 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.458 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.460 2 DEBUG nova.compute.manager [None req-bfb7a08e-5fb2-4f68-a197-3107b273efce - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.482 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:36 compute-0 ceph-mon[74477]: pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1312186581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.526 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 08:44:36 compute-0 nova_compute[260603]: 2025-10-02 08:44:36.542 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:44:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438127581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:37 compute-0 nova_compute[260603]: 2025-10-02 08:44:37.018 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:37 compute-0 nova_compute[260603]: 2025-10-02 08:44:37.026 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:44:37 compute-0 nova_compute[260603]: 2025-10-02 08:44:37.043 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:44:37 compute-0 nova_compute[260603]: 2025-10-02 08:44:37.068 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:44:37 compute-0 nova_compute[260603]: 2025-10-02 08:44:37.069 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2438127581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:38 compute-0 ceph-mon[74477]: pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:44:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:44:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:39 compute-0 nova_compute[260603]: 2025-10-02 08:44:39.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:40 compute-0 nova_compute[260603]: 2025-10-02 08:44:40.070 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:40 compute-0 nova_compute[260603]: 2025-10-02 08:44:40.071 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:40 compute-0 ceph-mon[74477]: pgmap v1971: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:41 compute-0 nova_compute[260603]: 2025-10-02 08:44:41.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:42.350 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:70:42 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0d051606-6f26-4b92-af35-30ce92246a08', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d051606-6f26-4b92-af35-30ce92246a08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fd76562-7c34-4301-b34a-62f8c08775ba, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cfcee3ef-d64f-4220-a1c8-19d56cff5646) old=Port_Binding(mac=['fa:16:3e:65:70:42 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0d051606-6f26-4b92-af35-30ce92246a08', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d051606-6f26-4b92-af35-30ce92246a08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:44:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:42.352 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cfcee3ef-d64f-4220-a1c8-19d56cff5646 in datapath 0d051606-6f26-4b92-af35-30ce92246a08 updated
Oct 02 08:44:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:42.354 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d051606-6f26-4b92-af35-30ce92246a08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:44:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:42.356 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d4586650-98e4-43e9-a4a1-42c3fd0a9d73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:42 compute-0 ceph-mon[74477]: pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:43 compute-0 podman[367911]: 2025-10-02 08:44:43.033129894 +0000 UTC m=+0.090529812 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 02 08:44:43 compute-0 podman[367910]: 2025-10-02 08:44:43.03971701 +0000 UTC m=+0.091730640 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:44:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:44 compute-0 ceph-mon[74477]: pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.751 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.752 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.773 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.850 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.851 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.857 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.858 2 INFO nova.compute.claims [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:44:44 compute-0 nova_compute[260603]: 2025-10-02 08:44:44.972 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:44:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1731150562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.399 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.408 2 DEBUG nova.compute.provider_tree [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.429 2 DEBUG nova.scheduler.client.report [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.457 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.459 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.509 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.510 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.536 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.555 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:44:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1731150562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.652 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.653 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.654 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Creating image(s)
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.672 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.691 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.710 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.713 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.816 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.817 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.818 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.818 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.839 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.842 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f8c66e48-8186-4434-8a82-a9be6fd98570_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:45 compute-0 nova_compute[260603]: 2025-10-02 08:44:45.884 2 DEBUG nova.policy [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.183 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f8c66e48-8186-4434-8a82-a9be6fd98570_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.222 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.223 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.258 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.265 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.334 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.335 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.343 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.343 2 INFO nova.compute.claims [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.388 2 DEBUG nova.objects.instance [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid f8c66e48-8186-4434-8a82-a9be6fd98570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.401 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.402 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Ensure instance console log exists: /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.402 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.403 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.403 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.515 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:46 compute-0 ceph-mon[74477]: pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:44:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:44:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291319667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:46 compute-0 nova_compute[260603]: 2025-10-02 08:44:46.995 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.002 2 DEBUG nova.compute.provider_tree [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.006 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Successfully created port: 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.024 2 DEBUG nova.scheduler.client.report [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.051 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.052 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.099 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.100 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.117 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.131 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.219 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.222 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.223 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Creating image(s)
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.260 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.294 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.326 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.331 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 59 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 671 KiB/s wr, 12 op/s
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.390 2 DEBUG nova.policy [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e201c8855514748b06d7da4f56ed1b5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75aaa01d4f144326a24dea8ff25b20a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.441 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.442 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.443 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.443 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.470 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.475 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2ea36b69-0b0d-4253-8207-a159c75280b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.525 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:44:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3291319667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.835 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2ea36b69-0b0d-4253-8207-a159c75280b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.885 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] resizing rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.972 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Successfully created port: 46d394ee-978a-48af-81e0-3175e3fedd10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.978 2 DEBUG nova.objects.instance [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ea36b69-0b0d-4253-8207-a159c75280b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.995 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.996 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Ensure instance console log exists: /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.996 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.997 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:47 compute-0 nova_compute[260603]: 2025-10-02 08:44:47.997 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:48 compute-0 ceph-mon[74477]: pgmap v1975: 305 pgs: 305 active+clean; 59 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 671 KiB/s wr, 12 op/s
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.042 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Successfully updated port: 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.060 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.061 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.061 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.161 2 DEBUG nova.compute.manager [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.162 2 DEBUG nova.compute.manager [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing instance network info cache due to event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.163 2 DEBUG oslo_concurrency.lockutils [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.236 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.263 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Successfully updated port: 46d394ee-978a-48af-81e0-3175e3fedd10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.279 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.280 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquired lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.280 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:44:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 103 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 MiB/s wr, 39 op/s
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.386 2 DEBUG nova.compute.manager [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-changed-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.386 2 DEBUG nova.compute.manager [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Refreshing instance network info cache due to event network-changed-46d394ee-978a-48af-81e0-3175e3fedd10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.387 2 DEBUG oslo_concurrency.lockutils [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:49 compute-0 nova_compute[260603]: 2025-10-02 08:44:49.495 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.537 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.576 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.577 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance network_info: |[{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.578 2 DEBUG oslo_concurrency.lockutils [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.579 2 DEBUG nova.network.neutron [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.588 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start _get_guest_xml network_info=[{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.594 2 WARNING nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.599 2 DEBUG nova.virt.libvirt.host [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.599 2 DEBUG nova.virt.libvirt.host [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.603 2 DEBUG nova.virt.libvirt.host [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.604 2 DEBUG nova.virt.libvirt.host [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.604 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.605 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.605 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.605 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.606 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.606 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.606 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.607 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.607 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.607 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.608 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.608 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.612 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:50 compute-0 ceph-mon[74477]: pgmap v1976: 305 pgs: 305 active+clean; 103 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 MiB/s wr, 39 op/s
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.818 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Updating instance_info_cache with network_info: [{"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.851 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Releasing lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.851 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance network_info: |[{"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.851 2 DEBUG oslo_concurrency.lockutils [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.852 2 DEBUG nova.network.neutron [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Refreshing network info cache for port 46d394ee-978a-48af-81e0-3175e3fedd10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.854 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start _get_guest_xml network_info=[{"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.859 2 WARNING nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.864 2 DEBUG nova.virt.libvirt.host [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.864 2 DEBUG nova.virt.libvirt.host [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.867 2 DEBUG nova.virt.libvirt.host [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.867 2 DEBUG nova.virt.libvirt.host [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.868 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.868 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.868 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.868 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.870 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.870 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.870 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:44:50 compute-0 nova_compute[260603]: 2025-10-02 08:44:50.873 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:44:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436933517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.133 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.165 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.170 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:44:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635793004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.330 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.362 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.368 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 103 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 MiB/s wr, 39 op/s
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:44:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457303530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.593 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.596 2 DEBUG nova.virt.libvirt.vif [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:44:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=107,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8lO59ZZnrYbP7wEOczQOvY36PKA+gVe/fckBMMuh4THRBaRJXFK1RXQfVOkub4S1cwnd2kmq3jyKyYaZVJ6dSkF3B7eITAKra64ROntkkxqN5HCVmiUlIAa9236hzUHg==',key_name='tempest-TestSecurityGroupsBasicOps-1941794773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-zb68sy0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:44:45Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=f8c66e48-8186-4434-8a82-a9be6fd98570,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.597 2 DEBUG nova.network.os_vif_util [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.598 2 DEBUG nova.network.os_vif_util [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.600 2 DEBUG nova.objects.instance [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid f8c66e48-8186-4434-8a82-a9be6fd98570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.634 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <uuid>f8c66e48-8186-4434-8a82-a9be6fd98570</uuid>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <name>instance-0000006b</name>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145</nova:name>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:44:50</nova:creationTime>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:port uuid="2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f">
Oct 02 08:44:51 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <system>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="serial">f8c66e48-8186-4434-8a82-a9be6fd98570</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="uuid">f8c66e48-8186-4434-8a82-a9be6fd98570</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </system>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <os>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </os>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <features>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </features>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f8c66e48-8186-4434-8a82-a9be6fd98570_disk">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </source>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </source>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:4e:e4:9d"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <target dev="tap2fe5cc8e-26"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/console.log" append="off"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <video>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </video>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:44:51 compute-0 nova_compute[260603]: </domain>
Oct 02 08:44:51 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.636 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Preparing to wait for external event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.636 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.637 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.637 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.638 2 DEBUG nova.virt.libvirt.vif [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:44:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=107,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8lO59ZZnrYbP7wEOczQOvY36PKA+gVe/fckBMMuh4THRBaRJXFK1RXQfVOkub4S1cwnd2kmq3jyKyYaZVJ6dSkF3B7eITAKra64ROntkkxqN5HCVmiUlIAa9236hzUHg==',key_name='tempest-TestSecurityGroupsBasicOps-1941794773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-zb68sy0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:44:45Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=f8c66e48-8186-4434-8a82-a9be6fd98570,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.639 2 DEBUG nova.network.os_vif_util [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.640 2 DEBUG nova.network.os_vif_util [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.641 2 DEBUG os_vif [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fe5cc8e-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fe5cc8e-26, col_values=(('external_ids', {'iface-id': '2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:e4:9d', 'vm-uuid': 'f8c66e48-8186-4434-8a82-a9be6fd98570'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 NetworkManager[45129]: <info>  [1759394691.6511] manager: (tap2fe5cc8e-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.657 2 INFO os_vif [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26')
Oct 02 08:44:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1436933517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:44:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2635793004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:44:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3457303530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.712 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.713 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.713 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:4e:e4:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.714 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Using config drive
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.741 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:44:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3288734153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.866 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.867 2 DEBUG nova.virt.libvirt.vif [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:44:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-459466961',id=108,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75aaa01d4f144326a24dea8ff25b20a7',ramdisk_id='',reservation_id='r-qjxua0d9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-200952902',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-200952902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:44:47Z,user_data=None,user_id='2e201c8855514748b06d7da4f56ed1b5',uuid=2ea36b69-0b0d-4253-8207-a159c75280b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.868 2 DEBUG nova.network.os_vif_util [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converting VIF {"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.868 2 DEBUG nova.network.os_vif_util [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.869 2 DEBUG nova.objects.instance [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ea36b69-0b0d-4253-8207-a159c75280b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.897 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <uuid>2ea36b69-0b0d-4253-8207-a159c75280b3</uuid>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <name>instance-0000006c</name>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-459466961</nova:name>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:44:50</nova:creationTime>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:user uuid="2e201c8855514748b06d7da4f56ed1b5">tempest-ServersNegativeTestMultiTenantJSON-200952902-project-member</nova:user>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:project uuid="75aaa01d4f144326a24dea8ff25b20a7">tempest-ServersNegativeTestMultiTenantJSON-200952902</nova:project>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <nova:port uuid="46d394ee-978a-48af-81e0-3175e3fedd10">
Oct 02 08:44:51 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <system>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="serial">2ea36b69-0b0d-4253-8207-a159c75280b3</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="uuid">2ea36b69-0b0d-4253-8207-a159c75280b3</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </system>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <os>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </os>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <features>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </features>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/2ea36b69-0b0d-4253-8207-a159c75280b3_disk">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </source>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </source>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:44:51 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:1b:11:18"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <target dev="tap46d394ee-97"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/console.log" append="off"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <video>
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </video>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:44:51 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:44:51 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:44:51 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:44:51 compute-0 nova_compute[260603]: </domain>
Oct 02 08:44:51 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.899 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Preparing to wait for external event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.899 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.899 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.899 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.900 2 DEBUG nova.virt.libvirt.vif [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:44:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-459466961',id=108,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75aaa01d4f144326a24dea8ff25b20a7',ramdisk_id='',reservation_id='r-qjxua0d9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-200952902',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-200952902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:44:47Z,user_data=None,user_id='2e201c8855514748b06d7da4f56ed1b5',uuid=2ea36b69-0b0d-4253-8207-a159c75280b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.901 2 DEBUG nova.network.os_vif_util [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converting VIF {"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.901 2 DEBUG nova.network.os_vif_util [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.902 2 DEBUG os_vif [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46d394ee-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46d394ee-97, col_values=(('external_ids', {'iface-id': '46d394ee-978a-48af-81e0-3175e3fedd10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:11:18', 'vm-uuid': '2ea36b69-0b0d-4253-8207-a159c75280b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 NetworkManager[45129]: <info>  [1759394691.9087] manager: (tap46d394ee-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.915 2 INFO os_vif [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97')
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.980 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.981 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.981 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] No VIF found with MAC fa:16:3e:1b:11:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:44:51 compute-0 nova_compute[260603]: 2025-10-02 08:44:51.982 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Using config drive
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.014 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.198 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Creating config drive at /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.208 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgsy6u3yq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.376 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgsy6u3yq" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.415 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.418 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.472 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Creating config drive at /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.477 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3zus22g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.596 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.598 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Deleting local config drive /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config because it was imported into RBD.
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.648 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3zus22g" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:52 compute-0 kernel: tap2fe5cc8e-26: entered promiscuous mode
Oct 02 08:44:52 compute-0 NetworkManager[45129]: <info>  [1759394692.6554] manager: (tap2fe5cc8e-26): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Oct 02 08:44:52 compute-0 ovn_controller[152344]: 2025-10-02T08:44:52Z|01098|binding|INFO|Claiming lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f for this chassis.
Oct 02 08:44:52 compute-0 ovn_controller[152344]: 2025-10-02T08:44:52Z|01099|binding|INFO|2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f: Claiming fa:16:3e:4e:e4:9d 10.100.0.6
Oct 02 08:44:52 compute-0 ceph-mon[74477]: pgmap v1977: 305 pgs: 305 active+clean; 103 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 MiB/s wr, 39 op/s
Oct 02 08:44:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3288734153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.726 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:e4:9d 10.100.0.6'], port_security=['fa:16:3e:4e:e4:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f8c66e48-8186-4434-8a82-a9be6fd98570', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97c204ff-8b60-40d3-b073-c7cc05dde092 b7a9cf86-0b36-4647-8672-befd80171150', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6baff0c8-9572-4afd-b1a8-e77a5daa40e0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.727 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f in datapath 7f709a68-4708-4cf6-ab7a-ec213a44899e bound to our chassis
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.728 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f709a68-4708-4cf6-ab7a-ec213a44899e
Oct 02 08:44:52 compute-0 systemd-udevd[368562]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.746 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.746 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65b2eb93-87db-4779-9240-55a79f3f9a01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.747 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f709a68-41 in ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.750 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f709a68-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15a7b11e-8550-4918-917e-74334f180b97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.752 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a411b7dd-93d6-4b2b-bd5e-251ec8fa07fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 systemd-machined[214636]: New machine qemu-134-instance-0000006b.
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.756 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:44:52 compute-0 NetworkManager[45129]: <info>  [1759394692.7631] device (tap2fe5cc8e-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:44:52 compute-0 NetworkManager[45129]: <info>  [1759394692.7640] device (tap2fe5cc8e-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.772 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[93286f1d-b3de-4951-933d-3c787184fc6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006b.
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.800 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[341225d8-212f-439f-82c2-92e8b42ea04a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 ovn_controller[152344]: 2025-10-02T08:44:52Z|01100|binding|INFO|Setting lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f ovn-installed in OVS
Oct 02 08:44:52 compute-0 ovn_controller[152344]: 2025-10-02T08:44:52Z|01101|binding|INFO|Setting lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f up in Southbound
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.816 2 DEBUG nova.network.neutron [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updated VIF entry in instance network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.818 2 DEBUG nova.network.neutron [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.826 2 DEBUG nova.network.neutron [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Updated VIF entry in instance network info cache for port 46d394ee-978a-48af-81e0-3175e3fedd10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.827 2 DEBUG nova.network.neutron [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Updating instance_info_cache with network_info: [{"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.845 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bac249f1-5ac0-43ab-8130-f2c95b6357d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.846 2 DEBUG oslo_concurrency.lockutils [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.847 2 DEBUG oslo_concurrency.lockutils [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:44:52 compute-0 systemd-udevd[368569]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:44:52 compute-0 NetworkManager[45129]: <info>  [1759394692.8523] manager: (tap7f709a68-40): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.851 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd80981-7fd2-436e-98ad-92be666761b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.887 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee59424b-8878-4974-81d5-e24de15ac943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.892 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fe423b0f-52ca-417d-9e1e-9caf3d7b6163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 NetworkManager[45129]: <info>  [1759394692.9248] device (tap7f709a68-40): carrier: link connected
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.932 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c80b634-741d-4dad-873c-4607b9180eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.959 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3cb0ea-8a4e-44fe-a630-e8b461527ec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f709a68-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557152, 'reachable_time': 42578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368622, 'error': None, 'target': 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.960 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:44:52 compute-0 nova_compute[260603]: 2025-10-02 08:44:52.962 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Deleting local config drive /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config because it was imported into RBD.
Oct 02 08:44:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e38f8c67-b3e1-4621-a1a8-a71764d16c8a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:8476'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557152, 'tstamp': 557152}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368623, 'error': None, 'target': 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83b7ab20-9720-494a-bb12-45937ca938d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f709a68-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557152, 'reachable_time': 42578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368627, 'error': None, 'target': 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 kernel: tap46d394ee-97: entered promiscuous mode
Oct 02 08:44:53 compute-0 NetworkManager[45129]: <info>  [1759394693.0232] manager: (tap46d394ee-97): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Oct 02 08:44:53 compute-0 ovn_controller[152344]: 2025-10-02T08:44:53Z|01102|binding|INFO|Claiming lport 46d394ee-978a-48af-81e0-3175e3fedd10 for this chassis.
Oct 02 08:44:53 compute-0 ovn_controller[152344]: 2025-10-02T08:44:53Z|01103|binding|INFO|46d394ee-978a-48af-81e0-3175e3fedd10: Claiming fa:16:3e:1b:11:18 10.100.0.5
Oct 02 08:44:53 compute-0 nova_compute[260603]: 2025-10-02 08:44:53.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:53 compute-0 systemd-udevd[368604]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:44:53 compute-0 NetworkManager[45129]: <info>  [1759394693.0362] device (tap46d394ee-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:44:53 compute-0 NetworkManager[45129]: <info>  [1759394693.0372] device (tap46d394ee-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.040 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:11:18 10.100.0.5'], port_security=['fa:16:3e:1b:11:18 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2ea36b69-0b0d-4253-8207-a159c75280b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a70dde6b-eda7-404f-be3f-fdfd21130765', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75aaa01d4f144326a24dea8ff25b20a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b642237-0371-4fa9-9985-5a0296cfa4d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b63137b-fac5-4aa7-bab2-c0c2acc8a529, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=46d394ee-978a-48af-81e0-3175e3fedd10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.048 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0b0118-f6e5-4f8f-b5da-ccdb20a278ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 systemd-machined[214636]: New machine qemu-135-instance-0000006c.
Oct 02 08:44:53 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006c.
Oct 02 08:44:53 compute-0 nova_compute[260603]: 2025-10-02 08:44:53.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:53 compute-0 ovn_controller[152344]: 2025-10-02T08:44:53Z|01104|binding|INFO|Setting lport 46d394ee-978a-48af-81e0-3175e3fedd10 ovn-installed in OVS
Oct 02 08:44:53 compute-0 ovn_controller[152344]: 2025-10-02T08:44:53Z|01105|binding|INFO|Setting lport 46d394ee-978a-48af-81e0-3175e3fedd10 up in Southbound
Oct 02 08:44:53 compute-0 nova_compute[260603]: 2025-10-02 08:44:53.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.116 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18871649-b129-4d65-a5ee-e401022066f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.117 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f709a68-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.117 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.118 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f709a68-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:53 compute-0 nova_compute[260603]: 2025-10-02 08:44:53.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:53 compute-0 NetworkManager[45129]: <info>  [1759394693.1202] manager: (tap7f709a68-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Oct 02 08:44:53 compute-0 kernel: tap7f709a68-40: entered promiscuous mode
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.123 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f709a68-40, col_values=(('external_ids', {'iface-id': 'd2cd5a12-3184-4d72-ac2f-758e53e7613e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:53 compute-0 ovn_controller[152344]: 2025-10-02T08:44:53Z|01106|binding|INFO|Releasing lport d2cd5a12-3184-4d72-ac2f-758e53e7613e from this chassis (sb_readonly=0)
Oct 02 08:44:53 compute-0 nova_compute[260603]: 2025-10-02 08:44:53.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:53 compute-0 nova_compute[260603]: 2025-10-02 08:44:53.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.140 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f709a68-4708-4cf6-ab7a-ec213a44899e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f709a68-4708-4cf6-ab7a-ec213a44899e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.141 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[621297bc-d0f1-4516-bc4f-355d5ded0696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.142 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-7f709a68-4708-4cf6-ab7a-ec213a44899e
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/7f709a68-4708-4cf6-ab7a-ec213a44899e.pid.haproxy
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 7f709a68-4708-4cf6-ab7a-ec213a44899e
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.142 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'env', 'PROCESS_TAG=haproxy-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f709a68-4708-4cf6-ab7a-ec213a44899e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:44:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct 02 08:44:53 compute-0 nova_compute[260603]: 2025-10-02 08:44:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.421 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:44:53 compute-0 podman[368673]: 2025-10-02 08:44:53.503779039 +0000 UTC m=+0.069961241 container create e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 08:44:53 compute-0 systemd[1]: Started libpod-conmon-e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2.scope.
Oct 02 08:44:53 compute-0 podman[368673]: 2025-10-02 08:44:53.477921994 +0000 UTC m=+0.044104216 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:44:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5042a5e1a9fd8d273a882aed1c11acd286e1034728129aad25bceba67731ade/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:53 compute-0 podman[368673]: 2025-10-02 08:44:53.575674311 +0000 UTC m=+0.141856503 container init e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 08:44:53 compute-0 podman[368673]: 2025-10-02 08:44:53.580572414 +0000 UTC m=+0.146754606 container start e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:44:53 compute-0 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [NOTICE]   (368692) : New worker (368694) forked
Oct 02 08:44:53 compute-0 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [NOTICE]   (368692) : Loading success.
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.656 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 46d394ee-978a-48af-81e0-3175e3fedd10 in datapath a70dde6b-eda7-404f-be3f-fdfd21130765 unbound from our chassis
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.659 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a70dde6b-eda7-404f-be3f-fdfd21130765
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.670 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[899d265d-f4d1-4928-bf81-a0af14b56bc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.671 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa70dde6b-e1 in ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.673 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa70dde6b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.673 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05af059d-5ba7-4b93-9de7-d02c0f31f4f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.674 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85732702-c317-4c76-9446-b9db4ac54ed6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.685 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[53f97b12-9a0e-4577-82c0-afa388dba6a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.710 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93b78820-507c-45ff-b0da-7fd28e7623fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.740 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c91275e5-4539-4c03-b20b-fdfd45123e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[864cbf4e-8fa6-4489-ac34-763359edcd8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 NetworkManager[45129]: <info>  [1759394693.7509] manager: (tapa70dde6b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/435)
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.785 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0e31f57c-2c43-4c53-8a31-d674968e068f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.791 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9051dd6e-644c-428f-bb39-0bb4d6efb3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 NetworkManager[45129]: <info>  [1759394693.8173] device (tapa70dde6b-e0): carrier: link connected
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.822 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf5c76d-c24c-4573-92c8-13c3828cd550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.842 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2f84c9-9bba-4a80-b2e3-302673d8e392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa70dde6b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:9a:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557242, 'reachable_time': 43247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368715, 'error': None, 'target': 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.859 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[378bda73-4863-4ae4-a8d3-136710617a8d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:9a19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557242, 'tstamp': 557242}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368716, 'error': None, 'target': 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ded4db-061b-491e-a067-bac65cf948ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa70dde6b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:9a:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557242, 'reachable_time': 43247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368717, 'error': None, 'target': 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.913 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be664e51-4ada-46bd-87d7-a2e21cc00d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.989 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59ef813e-0914-4a38-9e05-e7724c862e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.990 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa70dde6b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.991 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.991 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa70dde6b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:53 compute-0 kernel: tapa70dde6b-e0: entered promiscuous mode
Oct 02 08:44:53 compute-0 NetworkManager[45129]: <info>  [1759394693.9942] manager: (tapa70dde6b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Oct 02 08:44:53 compute-0 nova_compute[260603]: 2025-10-02 08:44:53.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.996 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa70dde6b-e0, col_values=(('external_ids', {'iface-id': 'fdf0eff9-d3e0-4347-9246-553457a706bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:53 compute-0 ovn_controller[152344]: 2025-10-02T08:44:53Z|01107|binding|INFO|Releasing lport fdf0eff9-d3e0-4347-9246-553457a706bc from this chassis (sb_readonly=0)
Oct 02 08:44:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.999 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a70dde6b-eda7-404f-be3f-fdfd21130765.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a70dde6b-eda7-404f-be3f-fdfd21130765.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:54.001 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6129f4e6-8577-435d-99a7-9d4e3acffb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:54.002 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-a70dde6b-eda7-404f-be3f-fdfd21130765
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/a70dde6b-eda7-404f-be3f-fdfd21130765.pid.haproxy
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID a70dde6b-eda7-404f-be3f-fdfd21130765
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:54.004 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'env', 'PROCESS_TAG=haproxy-a70dde6b-eda7-404f-be3f-fdfd21130765', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a70dde6b-eda7-404f-be3f-fdfd21130765.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:54 compute-0 podman[368833]: 2025-10-02 08:44:54.361273177 +0000 UTC m=+0.044704605 container create 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:44:54 compute-0 systemd[1]: Started libpod-conmon-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383.scope.
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8c8c7591665252ab2243dbe65cfdd1efc17d9bbb25418b141b6fa64fb6308b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:44:54 compute-0 podman[368833]: 2025-10-02 08:44:54.336204835 +0000 UTC m=+0.019636273 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:44:54 compute-0 podman[368833]: 2025-10-02 08:44:54.432262589 +0000 UTC m=+0.115694048 container init 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 08:44:54 compute-0 podman[368833]: 2025-10-02 08:44:54.437323987 +0000 UTC m=+0.120755425 container start 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:44:54 compute-0 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [NOTICE]   (368852) : New worker (368854) forked
Oct 02 08:44:54 compute-0 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [NOTICE]   (368852) : Loading success.
Oct 02 08:44:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:54.484 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.511 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.5110915, 2ea36b69-0b0d-4253-8207-a159c75280b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.512 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] VM Started (Lifecycle Event)
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.534 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.538 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.5118325, 2ea36b69-0b0d-4253-8207-a159c75280b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.538 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] VM Paused (Lifecycle Event)
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.564 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.568 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.594 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.689 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.6891563, f8c66e48-8186-4434-8a82-a9be6fd98570 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.690 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] VM Started (Lifecycle Event)
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.709 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.712 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.6900394, f8c66e48-8186-4434-8a82-a9be6fd98570 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.712 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] VM Paused (Lifecycle Event)
Oct 02 08:44:54 compute-0 ceph-mon[74477]: pgmap v1978: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.732 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.735 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.853 2 DEBUG nova.compute.manager [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.853 2 DEBUG oslo_concurrency.lockutils [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.853 2 DEBUG oslo_concurrency.lockutils [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.853 2 DEBUG oslo_concurrency.lockutils [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.854 2 DEBUG nova.compute.manager [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Processing event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.854 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.857 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.860 2 INFO nova.virt.libvirt.driver [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance spawned successfully.
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.860 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.871 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.871 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.8571382, f8c66e48-8186-4434-8a82-a9be6fd98570 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.872 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] VM Resumed (Lifecycle Event)
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.879 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.880 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.880 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.881 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.881 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.882 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.887 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.890 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.910 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.962 2 INFO nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Took 9.31 seconds to spawn the instance on the hypervisor.
Oct 02 08:44:54 compute-0 nova_compute[260603]: 2025-10-02 08:44:54.963 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:55 compute-0 nova_compute[260603]: 2025-10-02 08:44:55.037 2 INFO nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Took 10.21 seconds to build instance.
Oct 02 08:44:55 compute-0 nova_compute[260603]: 2025-10-02 08:44:55.058 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct 02 08:44:56 compute-0 ceph-mon[74477]: pgmap v1979: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct 02 08:44:56 compute-0 nova_compute[260603]: 2025-10-02 08:44:56.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.533 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.534 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.534 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.534 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.535 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] No waiting events found dispatching network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.535 2 WARNING nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received unexpected event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f for instance with vm_state active and task_state None.
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.536 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.536 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.536 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.537 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.537 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Processing event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.537 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.538 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.538 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.539 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.539 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] No waiting events found dispatching network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.540 2 WARNING nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received unexpected event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 for instance with vm_state building and task_state spawning.
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.541 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.551 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.552 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394697.5510383, 2ea36b69-0b0d-4253-8207-a159c75280b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.552 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] VM Resumed (Lifecycle Event)
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.560 2 INFO nova.virt.libvirt.driver [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance spawned successfully.
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.561 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.598 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.604 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.604 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.604 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.605 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.605 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.605 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.610 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.650 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.682 2 INFO nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Took 10.46 seconds to spawn the instance on the hypervisor.
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.683 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.750 2 INFO nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Took 11.44 seconds to build instance.
Oct 02 08:44:57 compute-0 nova_compute[260603]: 2025-10-02 08:44:57.777 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:44:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:44:58 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:44:58.486 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:44:58 compute-0 ceph-mon[74477]: pgmap v1980: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct 02 08:44:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 127 op/s
Oct 02 08:44:59 compute-0 nova_compute[260603]: 2025-10-02 08:44:59.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:59 compute-0 nova_compute[260603]: 2025-10-02 08:44:59.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:59 compute-0 NetworkManager[45129]: <info>  [1759394699.6584] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Oct 02 08:44:59 compute-0 NetworkManager[45129]: <info>  [1759394699.6593] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Oct 02 08:44:59 compute-0 nova_compute[260603]: 2025-10-02 08:44:59.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:44:59 compute-0 ovn_controller[152344]: 2025-10-02T08:44:59Z|01108|binding|INFO|Releasing lport fdf0eff9-d3e0-4347-9246-553457a706bc from this chassis (sb_readonly=0)
Oct 02 08:44:59 compute-0 ovn_controller[152344]: 2025-10-02T08:44:59Z|01109|binding|INFO|Releasing lport d2cd5a12-3184-4d72-ac2f-758e53e7613e from this chassis (sb_readonly=0)
Oct 02 08:44:59 compute-0 nova_compute[260603]: 2025-10-02 08:44:59.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:00 compute-0 nova_compute[260603]: 2025-10-02 08:45:00.658 2 DEBUG nova.compute.manager [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:00 compute-0 nova_compute[260603]: 2025-10-02 08:45:00.659 2 DEBUG nova.compute.manager [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing instance network info cache due to event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:45:00 compute-0 nova_compute[260603]: 2025-10-02 08:45:00.659 2 DEBUG oslo_concurrency.lockutils [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:45:00 compute-0 nova_compute[260603]: 2025-10-02 08:45:00.660 2 DEBUG oslo_concurrency.lockutils [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:45:00 compute-0 nova_compute[260603]: 2025-10-02 08:45:00.660 2 DEBUG nova.network.neutron [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:45:00 compute-0 ceph-mon[74477]: pgmap v1981: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 127 op/s
Oct 02 08:45:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.776 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.777 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.778 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.778 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.778 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.779 2 INFO nova.compute.manager [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Terminating instance
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.780 2 DEBUG nova.compute.manager [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:45:01 compute-0 kernel: tap46d394ee-97 (unregistering): left promiscuous mode
Oct 02 08:45:01 compute-0 NetworkManager[45129]: <info>  [1759394701.8238] device (tap46d394ee-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:45:01 compute-0 ovn_controller[152344]: 2025-10-02T08:45:01Z|01110|binding|INFO|Releasing lport 46d394ee-978a-48af-81e0-3175e3fedd10 from this chassis (sb_readonly=0)
Oct 02 08:45:01 compute-0 ovn_controller[152344]: 2025-10-02T08:45:01Z|01111|binding|INFO|Setting lport 46d394ee-978a-48af-81e0-3175e3fedd10 down in Southbound
Oct 02 08:45:01 compute-0 ovn_controller[152344]: 2025-10-02T08:45:01Z|01112|binding|INFO|Removing iface tap46d394ee-97 ovn-installed in OVS
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.836 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:11:18 10.100.0.5'], port_security=['fa:16:3e:1b:11:18 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2ea36b69-0b0d-4253-8207-a159c75280b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a70dde6b-eda7-404f-be3f-fdfd21130765', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75aaa01d4f144326a24dea8ff25b20a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b642237-0371-4fa9-9985-5a0296cfa4d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b63137b-fac5-4aa7-bab2-c0c2acc8a529, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=46d394ee-978a-48af-81e0-3175e3fedd10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.837 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 46d394ee-978a-48af-81e0-3175e3fedd10 in datapath a70dde6b-eda7-404f-be3f-fdfd21130765 unbound from our chassis
Oct 02 08:45:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.839 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a70dde6b-eda7-404f-be3f-fdfd21130765, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:45:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.841 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c807c2e8-18ec-476f-af60-22a65c19f886]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.842 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 namespace which is not needed anymore
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:01 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 02 08:45:01 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Consumed 5.668s CPU time.
Oct 02 08:45:01 compute-0 systemd-machined[214636]: Machine qemu-135-instance-0000006c terminated.
Oct 02 08:45:01 compute-0 nova_compute[260603]: 2025-10-02 08:45:01.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:01 compute-0 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [NOTICE]   (368852) : haproxy version is 2.8.14-c23fe91
Oct 02 08:45:01 compute-0 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [NOTICE]   (368852) : path to executable is /usr/sbin/haproxy
Oct 02 08:45:01 compute-0 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [WARNING]  (368852) : Exiting Master process...
Oct 02 08:45:01 compute-0 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [WARNING]  (368852) : Exiting Master process...
Oct 02 08:45:01 compute-0 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [ALERT]    (368852) : Current worker (368854) exited with code 143 (Terminated)
Oct 02 08:45:01 compute-0 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [WARNING]  (368852) : All workers exited. Exiting... (0)
Oct 02 08:45:01 compute-0 systemd[1]: libpod-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383.scope: Deactivated successfully.
Oct 02 08:45:01 compute-0 conmon[368848]: conmon 572bf55a670b000ba6c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383.scope/container/memory.events
Oct 02 08:45:01 compute-0 podman[368887]: 2025-10-02 08:45:01.981522668 +0000 UTC m=+0.042628560 container died 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.015 2 INFO nova.virt.libvirt.driver [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance destroyed successfully.
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.015 2 DEBUG nova.objects.instance [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lazy-loading 'resources' on Instance uuid 2ea36b69-0b0d-4253-8207-a159c75280b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8c8c7591665252ab2243dbe65cfdd1efc17d9bbb25418b141b6fa64fb6308b-merged.mount: Deactivated successfully.
Oct 02 08:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383-userdata-shm.mount: Deactivated successfully.
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.033 2 DEBUG nova.virt.libvirt.vif [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:44:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-459466961',id=108,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:44:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75aaa01d4f144326a24dea8ff25b20a7',ramdisk_id='',reservation_id='r-qjxua0d9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-200952902',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-200952902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:44:57Z,user_data=None,user_id='2e201c8855514748b06d7da4f56ed1b5',uuid=2ea36b69-0b0d-4253-8207-a159c75280b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.034 2 DEBUG nova.network.os_vif_util [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converting VIF {"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.035 2 DEBUG nova.network.os_vif_util [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.035 2 DEBUG os_vif [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:45:02 compute-0 podman[368887]: 2025-10-02 08:45:02.038608018 +0000 UTC m=+0.099713900 container cleanup 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.039 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46d394ee-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.049 2 INFO os_vif [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97')
Oct 02 08:45:02 compute-0 systemd[1]: libpod-conmon-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383.scope: Deactivated successfully.
Oct 02 08:45:02 compute-0 podman[368926]: 2025-10-02 08:45:02.143709393 +0000 UTC m=+0.081190571 container remove 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.152 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[547d9847-5a03-4b52-ab69-33eee4bcdaba]: (4, ('Thu Oct  2 08:45:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 (572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383)\n572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383\nThu Oct  2 08:45:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 (572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383)\n572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.154 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af309ce0-2fee-49ae-8391-11ac8fdd3b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.156 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa70dde6b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:02 compute-0 kernel: tapa70dde6b-e0: left promiscuous mode
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72338187-3723-4545-9d03-d8aa421f5dae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.196 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44a07e20-ff41-4113-b55c-5569021be744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.197 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d3cba5-491d-4987-a945-d66505141236]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.212 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e99ebc8f-8a8a-47a9-92eb-520732fe223f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557233, 'reachable_time': 39595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368958, 'error': None, 'target': 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:02 compute-0 systemd[1]: run-netns-ovnmeta\x2da70dde6b\x2deda7\x2d404f\x2dbe3f\x2dfdfd21130765.mount: Deactivated successfully.
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.218 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:45:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.218 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[323135b7-71bd-4f5c-ba40-8c3323159984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.384 2 DEBUG nova.network.neutron [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updated VIF entry in instance network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.384 2 DEBUG nova.network.neutron [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.416 2 DEBUG oslo_concurrency.lockutils [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.664 2 INFO nova.virt.libvirt.driver [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Deleting instance files /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3_del
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.665 2 INFO nova.virt.libvirt.driver [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Deletion of /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3_del complete
Oct 02 08:45:02 compute-0 ceph-mon[74477]: pgmap v1982: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.754 2 INFO nova.compute.manager [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Took 0.97 seconds to destroy the instance on the hypervisor.
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.755 2 DEBUG oslo.service.loopingcall [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.756 2 DEBUG nova.compute.manager [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.756 2 DEBUG nova.network.neutron [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:45:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.914 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-unplugged-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.914 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.915 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.915 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.916 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] No waiting events found dispatching network-vif-unplugged-46d394ee-978a-48af-81e0-3175e3fedd10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.916 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-unplugged-46d394ee-978a-48af-81e0-3175e3fedd10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.917 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.917 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.918 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.918 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.918 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] No waiting events found dispatching network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:45:02 compute-0 nova_compute[260603]: 2025-10-02 08:45:02.919 2 WARNING nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received unexpected event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 for instance with vm_state active and task_state deleting.
Oct 02 08:45:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 109 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 175 op/s
Oct 02 08:45:03 compute-0 nova_compute[260603]: 2025-10-02 08:45:03.533 2 DEBUG nova.network.neutron [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:03 compute-0 nova_compute[260603]: 2025-10-02 08:45:03.557 2 INFO nova.compute.manager [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Took 0.80 seconds to deallocate network for instance.
Oct 02 08:45:03 compute-0 nova_compute[260603]: 2025-10-02 08:45:03.625 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:03 compute-0 nova_compute[260603]: 2025-10-02 08:45:03.626 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:03 compute-0 nova_compute[260603]: 2025-10-02 08:45:03.675 2 DEBUG nova.compute.manager [req-4b63b552-23ac-4d71-8ba6-6360fa2fb894 req-19878b0b-f0cd-4ef6-a0ca-7109e07a4379 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-deleted-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:03 compute-0 nova_compute[260603]: 2025-10-02 08:45:03.694 2 DEBUG oslo_concurrency.processutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:45:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3288667511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:04 compute-0 nova_compute[260603]: 2025-10-02 08:45:04.190 2 DEBUG oslo_concurrency.processutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:04 compute-0 nova_compute[260603]: 2025-10-02 08:45:04.199 2 DEBUG nova.compute.provider_tree [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:45:04 compute-0 nova_compute[260603]: 2025-10-02 08:45:04.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:04 compute-0 nova_compute[260603]: 2025-10-02 08:45:04.419 2 DEBUG nova.scheduler.client.report [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:45:04 compute-0 nova_compute[260603]: 2025-10-02 08:45:04.447 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:04 compute-0 nova_compute[260603]: 2025-10-02 08:45:04.470 2 INFO nova.scheduler.client.report [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Deleted allocations for instance 2ea36b69-0b0d-4253-8207-a159c75280b3
Oct 02 08:45:04 compute-0 nova_compute[260603]: 2025-10-02 08:45:04.552 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:04 compute-0 ceph-mon[74477]: pgmap v1983: 305 pgs: 305 active+clean; 109 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 175 op/s
Oct 02 08:45:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3288667511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:05 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 02 08:45:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 109 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 155 op/s
Oct 02 08:45:06 compute-0 ovn_controller[152344]: 2025-10-02T08:45:06Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:e4:9d 10.100.0.6
Oct 02 08:45:06 compute-0 ovn_controller[152344]: 2025-10-02T08:45:06Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:e4:9d 10.100.0.6
Oct 02 08:45:06 compute-0 ceph-mon[74477]: pgmap v1984: 305 pgs: 305 active+clean; 109 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 155 op/s
Oct 02 08:45:07 compute-0 nova_compute[260603]: 2025-10-02 08:45:07.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:07 compute-0 podman[368984]: 2025-10-02 08:45:07.071496575 +0000 UTC m=+0.116453291 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 08:45:07 compute-0 podman[368983]: 2025-10-02 08:45:07.113649509 +0000 UTC m=+0.158336486 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 08:45:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 97 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 933 KiB/s wr, 182 op/s
Oct 02 08:45:07 compute-0 nova_compute[260603]: 2025-10-02 08:45:07.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:08 compute-0 ceph-mon[74477]: pgmap v1985: 305 pgs: 305 active+clean; 97 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 933 KiB/s wr, 182 op/s
Oct 02 08:45:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 200 op/s
Oct 02 08:45:09 compute-0 nova_compute[260603]: 2025-10-02 08:45:09.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:09 compute-0 ovn_controller[152344]: 2025-10-02T08:45:09Z|01113|binding|INFO|Releasing lport d2cd5a12-3184-4d72-ac2f-758e53e7613e from this chassis (sb_readonly=0)
Oct 02 08:45:09 compute-0 nova_compute[260603]: 2025-10-02 08:45:09.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:10 compute-0 ceph-mon[74477]: pgmap v1986: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 200 op/s
Oct 02 08:45:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 08:45:12 compute-0 nova_compute[260603]: 2025-10-02 08:45:12.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:12 compute-0 nova_compute[260603]: 2025-10-02 08:45:12.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:12 compute-0 ceph-mon[74477]: pgmap v1987: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 08:45:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 02 08:45:14 compute-0 podman[369028]: 2025-10-02 08:45:14.051033567 +0000 UTC m=+0.095783067 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:45:14 compute-0 podman[369027]: 2025-10-02 08:45:14.057822599 +0000 UTC m=+0.105825180 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 08:45:14 compute-0 nova_compute[260603]: 2025-10-02 08:45:14.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:14.809 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:9f:d1 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71241621-328b-4cc6-9189-8a3f2523fa85, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5d7367a8-24c1-4575-a359-215362e30ab8) old=Port_Binding(mac=['fa:16:3e:f0:9f:d1 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:14.812 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5d7367a8-24c1-4575-a359-215362e30ab8 in datapath b4dae717-ce03-4a30-b5fe-ad727373e453 updated
Oct 02 08:45:14 compute-0 ceph-mon[74477]: pgmap v1988: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 02 08:45:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:14.815 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4dae717-ce03-4a30-b5fe-ad727373e453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:45:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:14.820 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa9e47c6-d72f-4b8f-8038-99dec0a6d795]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:45:15 compute-0 ovn_controller[152344]: 2025-10-02T08:45:15Z|01114|binding|INFO|Releasing lport d2cd5a12-3184-4d72-ac2f-758e53e7613e from this chassis (sb_readonly=0)
Oct 02 08:45:15 compute-0 nova_compute[260603]: 2025-10-02 08:45:15.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:16 compute-0 ceph-mon[74477]: pgmap v1989: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:45:17 compute-0 nova_compute[260603]: 2025-10-02 08:45:17.013 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394702.0108202, 2ea36b69-0b0d-4253-8207-a159c75280b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:17 compute-0 nova_compute[260603]: 2025-10-02 08:45:17.013 2 INFO nova.compute.manager [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] VM Stopped (Lifecycle Event)
Oct 02 08:45:17 compute-0 nova_compute[260603]: 2025-10-02 08:45:17.040 2 DEBUG nova.compute.manager [None req-4d1ff898-7376-4e9e-96aa-4e67cba91d91 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:17 compute-0 nova_compute[260603]: 2025-10-02 08:45:17.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:45:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:18 compute-0 ceph-mon[74477]: pgmap v1990: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:45:19 compute-0 nova_compute[260603]: 2025-10-02 08:45:19.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Oct 02 08:45:19 compute-0 nova_compute[260603]: 2025-10-02 08:45:19.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:45:20 compute-0 ceph-mon[74477]: pgmap v1991: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.971 2 DEBUG nova.compute.manager [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.971 2 DEBUG nova.compute.manager [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing instance network info cache due to event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.972 2 DEBUG oslo_concurrency.lockutils [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.972 2 DEBUG oslo_concurrency.lockutils [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.973 2 DEBUG nova.network.neutron [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.988 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.988 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.989 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.989 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.990 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.992 2 INFO nova.compute.manager [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Terminating instance
Oct 02 08:45:20 compute-0 nova_compute[260603]: 2025-10-02 08:45:20.994 2 DEBUG nova.compute.manager [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:45:21 compute-0 kernel: tap2fe5cc8e-26 (unregistering): left promiscuous mode
Oct 02 08:45:21 compute-0 NetworkManager[45129]: <info>  [1759394721.0749] device (tap2fe5cc8e-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:45:21 compute-0 ovn_controller[152344]: 2025-10-02T08:45:21Z|01115|binding|INFO|Releasing lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f from this chassis (sb_readonly=0)
Oct 02 08:45:21 compute-0 ovn_controller[152344]: 2025-10-02T08:45:21Z|01116|binding|INFO|Setting lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f down in Southbound
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 ovn_controller[152344]: 2025-10-02T08:45:21Z|01117|binding|INFO|Removing iface tap2fe5cc8e-26 ovn-installed in OVS
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.097 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:e4:9d 10.100.0.6'], port_security=['fa:16:3e:4e:e4:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f8c66e48-8186-4434-8a82-a9be6fd98570', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97c204ff-8b60-40d3-b073-c7cc05dde092 b7a9cf86-0b36-4647-8672-befd80171150', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6baff0c8-9572-4afd-b1a8-e77a5daa40e0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.099 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f in datapath 7f709a68-4708-4cf6-ab7a-ec213a44899e unbound from our chassis
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.102 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f709a68-4708-4cf6-ab7a-ec213a44899e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.103 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbdfe33-31b8-487b-8e65-dd1678016e80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.105 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e namespace which is not needed anymore
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 02 08:45:21 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Consumed 14.072s CPU time.
Oct 02 08:45:21 compute-0 systemd-machined[214636]: Machine qemu-134-instance-0000006b terminated.
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.249 2 INFO nova.virt.libvirt.driver [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance destroyed successfully.
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.250 2 DEBUG nova.objects.instance [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid f8c66e48-8186-4434-8a82-a9be6fd98570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.269 2 DEBUG nova.virt.libvirt.vif [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:44:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=107,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8lO59ZZnrYbP7wEOczQOvY36PKA+gVe/fckBMMuh4THRBaRJXFK1RXQfVOkub4S1cwnd2kmq3jyKyYaZVJ6dSkF3B7eITAKra64ROntkkxqN5HCVmiUlIAa9236hzUHg==',key_name='tempest-TestSecurityGroupsBasicOps-1941794773',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:44:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-zb68sy0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:44:55Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=f8c66e48-8186-4434-8a82-a9be6fd98570,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.269 2 DEBUG nova.network.os_vif_util [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:45:21 compute-0 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [NOTICE]   (368692) : haproxy version is 2.8.14-c23fe91
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.270 2 DEBUG nova.network.os_vif_util [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:45:21 compute-0 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [NOTICE]   (368692) : path to executable is /usr/sbin/haproxy
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.271 2 DEBUG os_vif [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:45:21 compute-0 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [ALERT]    (368692) : Current worker (368694) exited with code 143 (Terminated)
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fe5cc8e-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:21 compute-0 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [WARNING]  (368692) : All workers exited. Exiting... (0)
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 systemd[1]: libpod-e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2.scope: Deactivated successfully.
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.280 2 INFO os_vif [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26')
Oct 02 08:45:21 compute-0 podman[369092]: 2025-10-02 08:45:21.288463955 +0000 UTC m=+0.059397872 container died e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2-userdata-shm.mount: Deactivated successfully.
Oct 02 08:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5042a5e1a9fd8d273a882aed1c11acd286e1034728129aad25bceba67731ade-merged.mount: Deactivated successfully.
Oct 02 08:45:21 compute-0 podman[369092]: 2025-10-02 08:45:21.339855297 +0000 UTC m=+0.110789174 container cleanup e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:45:21 compute-0 systemd[1]: libpod-conmon-e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2.scope: Deactivated successfully.
Oct 02 08:45:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 08:45:21 compute-0 podman[369146]: 2025-10-02 08:45:21.427355495 +0000 UTC m=+0.059338251 container remove e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.435 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c97c432-fde8-48fd-addc-747368a4ef54]: (4, ('Thu Oct  2 08:45:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e (e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2)\ne32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2\nThu Oct  2 08:45:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e (e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2)\ne32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a51c093a-4625-4979-b40c-1cc2b84f0d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.438 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f709a68-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:21 compute-0 kernel: tap7f709a68-40: left promiscuous mode
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.450 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[537018ee-ce9a-4bc5-b33a-463d22b2cf69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.483 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ced586db-5e89-491d-ade7-850f534c6c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac59f53-9335-4c17-b708-735206188173]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9cbaeed-50c9-4f75-8c9d-b8d975fb1ed6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557144, 'reachable_time': 42358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369161, 'error': None, 'target': 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d7f709a68\x2d4708\x2d4cf6\x2dab7a\x2dec213a44899e.mount: Deactivated successfully.
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.517 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:45:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.517 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7d59673d-8dd7-459a-8e8f-101c00d98369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.693 2 INFO nova.virt.libvirt.driver [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Deleting instance files /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570_del
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.694 2 INFO nova.virt.libvirt.driver [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Deletion of /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570_del complete
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.758 2 INFO nova.compute.manager [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.759 2 DEBUG oslo.service.loopingcall [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.759 2 DEBUG nova.compute.manager [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:45:21 compute-0 nova_compute[260603]: 2025-10-02 08:45:21.760 2 DEBUG nova.network.neutron [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:45:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:45:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1429547899' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:45:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:45:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1429547899' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.498 2 DEBUG nova.network.neutron [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.539 2 INFO nova.compute.manager [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Took 0.78 seconds to deallocate network for instance.
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.620 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.620 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.683 2 DEBUG oslo_concurrency.processutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.740 2 DEBUG nova.compute.manager [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.740 2 DEBUG oslo_concurrency.lockutils [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.741 2 DEBUG oslo_concurrency.lockutils [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.741 2 DEBUG oslo_concurrency.lockutils [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.742 2 DEBUG nova.compute.manager [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] No waiting events found dispatching network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:45:22 compute-0 nova_compute[260603]: 2025-10-02 08:45:22.742 2 WARNING nova.compute.manager [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received unexpected event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f for instance with vm_state deleted and task_state None.
Oct 02 08:45:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:22 compute-0 ceph-mon[74477]: pgmap v1992: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 08:45:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1429547899' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:45:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1429547899' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.162 2 DEBUG nova.compute.manager [req-2a7ddda0-5dbc-4ab4-9350-b2874c045dca req-55d9ba5e-c3f9-4910-aa56-f1f6a065caf4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-vif-deleted-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:45:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712699084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.197 2 DEBUG oslo_concurrency.processutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.203 2 DEBUG nova.compute.provider_tree [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.226 2 DEBUG nova.scheduler.client.report [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.253 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.278 2 INFO nova.scheduler.client.report [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance f8c66e48-8186-4434-8a82-a9be6fd98570
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.350 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.451 2 DEBUG nova.network.neutron [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updated VIF entry in instance network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.452 2 DEBUG nova.network.neutron [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:23 compute-0 nova_compute[260603]: 2025-10-02 08:45:23.468 2 DEBUG oslo_concurrency.lockutils [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:45:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2712699084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:24 compute-0 nova_compute[260603]: 2025-10-02 08:45:24.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:24.527 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:9f:d1 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71241621-328b-4cc6-9189-8a3f2523fa85, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5d7367a8-24c1-4575-a359-215362e30ab8) old=Port_Binding(mac=['fa:16:3e:f0:9f:d1 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:24.528 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5d7367a8-24c1-4575-a359-215362e30ab8 in datapath b4dae717-ce03-4a30-b5fe-ad727373e453 updated
Oct 02 08:45:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:24.529 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4dae717-ce03-4a30-b5fe-ad727373e453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:45:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:24.530 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce66c5aa-a43c-47c9-8298-f09b9d9dbf79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:24 compute-0 ceph-mon[74477]: pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 02 08:45:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:26 compute-0 nova_compute[260603]: 2025-10-02 08:45:26.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:26 compute-0 nova_compute[260603]: 2025-10-02 08:45:26.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:26 compute-0 ceph-mon[74477]: pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:27 compute-0 nova_compute[260603]: 2025-10-02 08:45:27.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:27 compute-0 sudo[369186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:27 compute-0 sudo[369186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:27 compute-0 sudo[369186]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:27 compute-0 sudo[369211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:45:27 compute-0 sudo[369211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:27 compute-0 sudo[369211]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:45:27 compute-0 sudo[369236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:27 compute-0 sudo[369236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:27 compute-0 sudo[369236]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:45:27
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups', 'vms', 'default.rgw.control', 'volumes', 'images']
Oct 02 08:45:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:45:28 compute-0 sudo[369261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:45:28 compute-0 sudo[369261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:45:28 compute-0 sudo[369261]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:45:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:45:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:45:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:45:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:45:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 7a24c125-2ecb-4a3e-8f06-fb802ac63f31 does not exist
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev bfabb4a4-4520-4031-8d5a-c583fafbc2a7 does not exist
Oct 02 08:45:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c502f0d3-ec1a-4f92-b78d-625a431dc724 does not exist
Oct 02 08:45:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:45:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:45:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:45:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:45:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:45:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:45:28 compute-0 sudo[369317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:28 compute-0 sudo[369317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:28 compute-0 sudo[369317]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:28 compute-0 sudo[369342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:45:28 compute-0 sudo[369342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:28 compute-0 sudo[369342]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:28 compute-0 sudo[369367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:28 compute-0 sudo[369367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:28 compute-0 sudo[369367]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:28 compute-0 ceph-mon[74477]: pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:45:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:45:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:45:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:45:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:45:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:45:28 compute-0 sudo[369392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:45:28 compute-0 sudo[369392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:29 compute-0 podman[369457]: 2025-10-02 08:45:29.371506282 +0000 UTC m=+0.053367434 container create 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:45:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:29 compute-0 systemd[1]: Started libpod-conmon-6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e.scope.
Oct 02 08:45:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:45:29 compute-0 podman[369457]: 2025-10-02 08:45:29.350418235 +0000 UTC m=+0.032279457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:45:29 compute-0 podman[369457]: 2025-10-02 08:45:29.453720895 +0000 UTC m=+0.135582087 container init 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:45:29 compute-0 nova_compute[260603]: 2025-10-02 08:45:29.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:29 compute-0 podman[369457]: 2025-10-02 08:45:29.464738588 +0000 UTC m=+0.146599780 container start 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 08:45:29 compute-0 podman[369457]: 2025-10-02 08:45:29.469550728 +0000 UTC m=+0.151411970 container attach 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 08:45:29 compute-0 hopeful_lewin[369474]: 167 167
Oct 02 08:45:29 compute-0 systemd[1]: libpod-6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e.scope: Deactivated successfully.
Oct 02 08:45:29 compute-0 podman[369457]: 2025-10-02 08:45:29.475477023 +0000 UTC m=+0.157338185 container died 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 02 08:45:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-57f329b43aedb0ab1c5ed8b7164e08311fbe0404549df65491e45f2a15b6befa-merged.mount: Deactivated successfully.
Oct 02 08:45:29 compute-0 podman[369457]: 2025-10-02 08:45:29.521325622 +0000 UTC m=+0.203186814 container remove 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:45:29 compute-0 systemd[1]: libpod-conmon-6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e.scope: Deactivated successfully.
Oct 02 08:45:29 compute-0 podman[369498]: 2025-10-02 08:45:29.77344878 +0000 UTC m=+0.067862357 container create e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:45:29 compute-0 systemd[1]: Started libpod-conmon-e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30.scope.
Oct 02 08:45:29 compute-0 podman[369498]: 2025-10-02 08:45:29.749992888 +0000 UTC m=+0.044406515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:45:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:29 compute-0 podman[369498]: 2025-10-02 08:45:29.876343107 +0000 UTC m=+0.170756714 container init e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:45:29 compute-0 podman[369498]: 2025-10-02 08:45:29.891141408 +0000 UTC m=+0.185554985 container start e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:45:29 compute-0 podman[369498]: 2025-10-02 08:45:29.895611287 +0000 UTC m=+0.190024904 container attach e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:45:30 compute-0 ceph-mon[74477]: pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:30 compute-0 sharp_kepler[369515]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:45:30 compute-0 sharp_kepler[369515]: --> relative data size: 1.0
Oct 02 08:45:30 compute-0 sharp_kepler[369515]: --> All data devices are unavailable
Oct 02 08:45:31 compute-0 systemd[1]: libpod-e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30.scope: Deactivated successfully.
Oct 02 08:45:31 compute-0 podman[369498]: 2025-10-02 08:45:31.020026503 +0000 UTC m=+1.314440040 container died e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:45:31 compute-0 systemd[1]: libpod-e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30.scope: Consumed 1.088s CPU time.
Oct 02 08:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1-merged.mount: Deactivated successfully.
Oct 02 08:45:31 compute-0 podman[369498]: 2025-10-02 08:45:31.07540612 +0000 UTC m=+1.369819657 container remove e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:45:31 compute-0 systemd[1]: libpod-conmon-e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30.scope: Deactivated successfully.
Oct 02 08:45:31 compute-0 sudo[369392]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:31 compute-0 sudo[369556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:31 compute-0 sudo[369556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:31 compute-0 sudo[369556]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:31 compute-0 sudo[369581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:45:31 compute-0 sudo[369581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:31 compute-0 sudo[369581]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:31 compute-0 sudo[369606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:31 compute-0 nova_compute[260603]: 2025-10-02 08:45:31.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:31 compute-0 sudo[369606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:31 compute-0 sudo[369606]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:31 compute-0 auditd[702]: Audit daemon rotating log files
Oct 02 08:45:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:31 compute-0 sudo[369631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:45:31 compute-0 sudo[369631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:31 compute-0 nova_compute[260603]: 2025-10-02 08:45:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:31 compute-0 podman[369696]: 2025-10-02 08:45:31.775423928 +0000 UTC m=+0.065975346 container create a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:45:31 compute-0 systemd[1]: Started libpod-conmon-a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610.scope.
Oct 02 08:45:31 compute-0 podman[369696]: 2025-10-02 08:45:31.752582986 +0000 UTC m=+0.043134494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:45:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:45:31 compute-0 podman[369696]: 2025-10-02 08:45:31.884093566 +0000 UTC m=+0.174645064 container init a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:45:31 compute-0 podman[369696]: 2025-10-02 08:45:31.895179861 +0000 UTC m=+0.185731309 container start a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 08:45:31 compute-0 podman[369696]: 2025-10-02 08:45:31.899539397 +0000 UTC m=+0.190090845 container attach a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:45:31 compute-0 happy_kilby[369713]: 167 167
Oct 02 08:45:31 compute-0 systemd[1]: libpod-a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610.scope: Deactivated successfully.
Oct 02 08:45:31 compute-0 podman[369696]: 2025-10-02 08:45:31.907034451 +0000 UTC m=+0.197585899 container died a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 08:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-aef05fd193a24fe1c1ae426f38d282ea8f55016a2f24818cb9c3d2ab48a5d892-merged.mount: Deactivated successfully.
Oct 02 08:45:31 compute-0 podman[369696]: 2025-10-02 08:45:31.965515264 +0000 UTC m=+0.256066712 container remove a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:45:31 compute-0 systemd[1]: libpod-conmon-a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610.scope: Deactivated successfully.
Oct 02 08:45:32 compute-0 podman[369736]: 2025-10-02 08:45:32.222498423 +0000 UTC m=+0.068159205 container create e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:45:32 compute-0 systemd[1]: Started libpod-conmon-e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90.scope.
Oct 02 08:45:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:45:32 compute-0 podman[369736]: 2025-10-02 08:45:32.203055977 +0000 UTC m=+0.048716769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:32 compute-0 podman[369736]: 2025-10-02 08:45:32.313599092 +0000 UTC m=+0.159259874 container init e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 08:45:32 compute-0 podman[369736]: 2025-10-02 08:45:32.325601787 +0000 UTC m=+0.171262539 container start e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 08:45:32 compute-0 podman[369736]: 2025-10-02 08:45:32.329107236 +0000 UTC m=+0.174768018 container attach e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:45:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:32 compute-0 ceph-mon[74477]: pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:33 compute-0 epic_hawking[369752]: {
Oct 02 08:45:33 compute-0 epic_hawking[369752]:     "0": [
Oct 02 08:45:33 compute-0 epic_hawking[369752]:         {
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "devices": [
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "/dev/loop3"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             ],
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_name": "ceph_lv0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_size": "21470642176",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "name": "ceph_lv0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "tags": {
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cluster_name": "ceph",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.crush_device_class": "",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.encrypted": "0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osd_id": "0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.type": "block",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.vdo": "0"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             },
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "type": "block",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "vg_name": "ceph_vg0"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:         }
Oct 02 08:45:33 compute-0 epic_hawking[369752]:     ],
Oct 02 08:45:33 compute-0 epic_hawking[369752]:     "1": [
Oct 02 08:45:33 compute-0 epic_hawking[369752]:         {
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "devices": [
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "/dev/loop4"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             ],
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_name": "ceph_lv1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_size": "21470642176",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "name": "ceph_lv1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "tags": {
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cluster_name": "ceph",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.crush_device_class": "",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.encrypted": "0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osd_id": "1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.type": "block",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.vdo": "0"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             },
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "type": "block",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "vg_name": "ceph_vg1"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:         }
Oct 02 08:45:33 compute-0 epic_hawking[369752]:     ],
Oct 02 08:45:33 compute-0 epic_hawking[369752]:     "2": [
Oct 02 08:45:33 compute-0 epic_hawking[369752]:         {
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "devices": [
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "/dev/loop5"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             ],
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_name": "ceph_lv2",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_size": "21470642176",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "name": "ceph_lv2",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "tags": {
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.cluster_name": "ceph",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.crush_device_class": "",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.encrypted": "0",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osd_id": "2",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.type": "block",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:                 "ceph.vdo": "0"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             },
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "type": "block",
Oct 02 08:45:33 compute-0 epic_hawking[369752]:             "vg_name": "ceph_vg2"
Oct 02 08:45:33 compute-0 epic_hawking[369752]:         }
Oct 02 08:45:33 compute-0 epic_hawking[369752]:     ]
Oct 02 08:45:33 compute-0 epic_hawking[369752]: }
Oct 02 08:45:33 compute-0 systemd[1]: libpod-e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90.scope: Deactivated successfully.
Oct 02 08:45:33 compute-0 podman[369761]: 2025-10-02 08:45:33.180643717 +0000 UTC m=+0.050344570 container died e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf-merged.mount: Deactivated successfully.
Oct 02 08:45:33 compute-0 podman[369761]: 2025-10-02 08:45:33.237065026 +0000 UTC m=+0.106765839 container remove e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 08:45:33 compute-0 systemd[1]: libpod-conmon-e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90.scope: Deactivated successfully.
Oct 02 08:45:33 compute-0 sudo[369631]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:33 compute-0 sudo[369776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:33 compute-0 sudo[369776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:33 compute-0 sudo[369776]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:33 compute-0 sudo[369801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:45:33 compute-0 sudo[369801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:33 compute-0 sudo[369801]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:33 compute-0 nova_compute[260603]: 2025-10-02 08:45:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:33 compute-0 sudo[369826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:33 compute-0 sudo[369826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:33 compute-0 sudo[369826]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:33 compute-0 sudo[369851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:45:33 compute-0 sudo[369851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:33 compute-0 podman[369918]: 2025-10-02 08:45:33.92510854 +0000 UTC m=+0.035165517 container create c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:45:33 compute-0 systemd[1]: Started libpod-conmon-c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3.scope.
Oct 02 08:45:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:45:34 compute-0 podman[369918]: 2025-10-02 08:45:34.002251116 +0000 UTC m=+0.112308113 container init c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 08:45:34 compute-0 podman[369918]: 2025-10-02 08:45:33.910285099 +0000 UTC m=+0.020342076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:45:34 compute-0 podman[369918]: 2025-10-02 08:45:34.008640805 +0000 UTC m=+0.118697782 container start c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 08:45:34 compute-0 podman[369918]: 2025-10-02 08:45:34.011869715 +0000 UTC m=+0.121926702 container attach c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 08:45:34 compute-0 elegant_swanson[369934]: 167 167
Oct 02 08:45:34 compute-0 systemd[1]: libpod-c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3.scope: Deactivated successfully.
Oct 02 08:45:34 compute-0 podman[369918]: 2025-10-02 08:45:34.013855647 +0000 UTC m=+0.123912654 container died c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 08:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5831877b59822680e43142374c88ad214ddf4855f7fc05aaedf7fd09a31263c5-merged.mount: Deactivated successfully.
Oct 02 08:45:34 compute-0 podman[369918]: 2025-10-02 08:45:34.050154688 +0000 UTC m=+0.160211675 container remove c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 02 08:45:34 compute-0 systemd[1]: libpod-conmon-c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3.scope: Deactivated successfully.
Oct 02 08:45:34 compute-0 podman[369958]: 2025-10-02 08:45:34.267230395 +0000 UTC m=+0.048755611 container create 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:45:34 compute-0 systemd[1]: Started libpod-conmon-4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7.scope.
Oct 02 08:45:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:45:34 compute-0 podman[369958]: 2025-10-02 08:45:34.246638003 +0000 UTC m=+0.028163229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:34 compute-0 podman[369958]: 2025-10-02 08:45:34.361303526 +0000 UTC m=+0.142828712 container init 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:45:34 compute-0 podman[369958]: 2025-10-02 08:45:34.37809746 +0000 UTC m=+0.159622666 container start 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 08:45:34 compute-0 podman[369958]: 2025-10-02 08:45:34.382629551 +0000 UTC m=+0.164154767 container attach 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:45:34 compute-0 nova_compute[260603]: 2025-10-02 08:45:34.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:34 compute-0 nova_compute[260603]: 2025-10-02 08:45:34.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:34 compute-0 nova_compute[260603]: 2025-10-02 08:45:34.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:45:34 compute-0 nova_compute[260603]: 2025-10-02 08:45:34.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:45:34 compute-0 nova_compute[260603]: 2025-10-02 08:45:34.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:45:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:34.829 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:34 compute-0 ceph-mon[74477]: pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:45:35 compute-0 festive_yalow[369974]: {
Oct 02 08:45:35 compute-0 festive_yalow[369974]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "osd_id": 2,
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "type": "bluestore"
Oct 02 08:45:35 compute-0 festive_yalow[369974]:     },
Oct 02 08:45:35 compute-0 festive_yalow[369974]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "osd_id": 1,
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "type": "bluestore"
Oct 02 08:45:35 compute-0 festive_yalow[369974]:     },
Oct 02 08:45:35 compute-0 festive_yalow[369974]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "osd_id": 0,
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:45:35 compute-0 festive_yalow[369974]:         "type": "bluestore"
Oct 02 08:45:35 compute-0 festive_yalow[369974]:     }
Oct 02 08:45:35 compute-0 festive_yalow[369974]: }
Oct 02 08:45:35 compute-0 systemd[1]: libpod-4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7.scope: Deactivated successfully.
Oct 02 08:45:35 compute-0 systemd[1]: libpod-4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7.scope: Consumed 1.006s CPU time.
Oct 02 08:45:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:35 compute-0 podman[370007]: 2025-10-02 08:45:35.424454013 +0000 UTC m=+0.033722452 container died 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93-merged.mount: Deactivated successfully.
Oct 02 08:45:35 compute-0 podman[370007]: 2025-10-02 08:45:35.475393081 +0000 UTC m=+0.084661470 container remove 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:45:35 compute-0 systemd[1]: libpod-conmon-4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7.scope: Deactivated successfully.
Oct 02 08:45:35 compute-0 sudo[369851]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:45:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:45:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:45:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:45:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b2f43bc1-5754-4d6b-9843-e490559ca6c2 does not exist
Oct 02 08:45:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 59b70159-d11d-428e-b3be-4868f54871d3 does not exist
Oct 02 08:45:35 compute-0 sudo[370022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:45:35 compute-0 sudo[370022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:35 compute-0 sudo[370022]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:35 compute-0 sudo[370047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:45:35 compute-0 sudo[370047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:45:35 compute-0 sudo[370047]: pam_unix(sudo:session): session closed for user root
Oct 02 08:45:36 compute-0 nova_compute[260603]: 2025-10-02 08:45:36.247 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394721.2456565, f8c66e48-8186-4434-8a82-a9be6fd98570 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:36 compute-0 nova_compute[260603]: 2025-10-02 08:45:36.248 2 INFO nova.compute.manager [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] VM Stopped (Lifecycle Event)
Oct 02 08:45:36 compute-0 nova_compute[260603]: 2025-10-02 08:45:36.268 2 DEBUG nova.compute.manager [None req-ba906367-c0f4-45d0-a88b-af9f65205c7e - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:36 compute-0 nova_compute[260603]: 2025-10-02 08:45:36.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:36 compute-0 nova_compute[260603]: 2025-10-02 08:45:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:36 compute-0 ceph-mon[74477]: pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:45:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:45:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:37 compute-0 nova_compute[260603]: 2025-10-02 08:45:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:37 compute-0 nova_compute[260603]: 2025-10-02 08:45:37.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:37 compute-0 nova_compute[260603]: 2025-10-02 08:45:37.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:37 compute-0 nova_compute[260603]: 2025-10-02 08:45:37.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:37 compute-0 nova_compute[260603]: 2025-10-02 08:45:37.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:45:37 compute-0 nova_compute[260603]: 2025-10-02 08:45:37.545 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:38 compute-0 podman[370094]: 2025-10-02 08:45:38.02473615 +0000 UTC m=+0.085225297 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:45:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:45:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/809159189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.062 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:38 compute-0 podman[370093]: 2025-10-02 08:45:38.10944919 +0000 UTC m=+0.172874639 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.250 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.251 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3813MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.251 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.252 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.334 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.335 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.356 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:38 compute-0 ceph-mon[74477]: pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/809159189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:45:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:45:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:45:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812370792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.843 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.853 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.873 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.899 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:45:38 compute-0 nova_compute[260603]: 2025-10-02 08:45:38.900 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:39 compute-0 nova_compute[260603]: 2025-10-02 08:45:39.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/812370792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:39 compute-0 nova_compute[260603]: 2025-10-02 08:45:39.901 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:39 compute-0 nova_compute[260603]: 2025-10-02 08:45:39.902 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:40 compute-0 ceph-mon[74477]: pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:41 compute-0 nova_compute[260603]: 2025-10-02 08:45:41.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:42 compute-0 ceph-mon[74477]: pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:42 compute-0 nova_compute[260603]: 2025-10-02 08:45:42.598 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:42 compute-0 nova_compute[260603]: 2025-10-02 08:45:42.598 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:42 compute-0 nova_compute[260603]: 2025-10-02 08:45:42.613 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:45:42 compute-0 nova_compute[260603]: 2025-10-02 08:45:42.685 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:42 compute-0 nova_compute[260603]: 2025-10-02 08:45:42.686 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:42 compute-0 nova_compute[260603]: 2025-10-02 08:45:42.695 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:45:42 compute-0 nova_compute[260603]: 2025-10-02 08:45:42.695 2 INFO nova.compute.claims [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:45:42 compute-0 nova_compute[260603]: 2025-10-02 08:45:42.820 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:45:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2575695996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.275 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.281 2 DEBUG nova.compute.provider_tree [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.305 2 DEBUG nova.scheduler.client.report [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.336 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.337 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.404 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.404 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:45:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.434 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.457 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.578 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.580 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.581 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Creating image(s)
Oct 02 08:45:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2575695996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.624 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.659 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.695 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.701 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.807 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.808 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.809 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.809 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.834 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:43 compute-0 nova_compute[260603]: 2025-10-02 08:45:43.840 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.158 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.251 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.331 2 DEBUG nova.policy [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.393 2 DEBUG nova.objects.instance [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid fc71f095-bde6-43da-bec6-e0a30dc1b71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.411 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.412 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Ensure instance console log exists: /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.413 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.413 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.414 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:45:44 compute-0 ceph-mon[74477]: pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.660 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.662 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.681 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.764 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.765 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.774 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.774 2 INFO nova.compute.claims [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:45:44 compute-0 nova_compute[260603]: 2025-10-02 08:45:44.910 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:45 compute-0 podman[370351]: 2025-10-02 08:45:45.049582642 +0000 UTC m=+0.087456266 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 02 08:45:45 compute-0 podman[370349]: 2025-10-02 08:45:45.05687738 +0000 UTC m=+0.105372796 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.390 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Successfully created port: 71e3efd4-125e-40e4-bdda-59df254d21f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:45:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:45:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314088705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.448 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.455 2 DEBUG nova.compute.provider_tree [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.477 2 DEBUG nova.scheduler.client.report [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.503 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.504 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.586 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.587 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.610 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:45:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2314088705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.642 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.754 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.757 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.758 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Creating image(s)
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.803 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.844 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.878 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.883 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.997 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:45 compute-0 nova_compute[260603]: 2025-10-02 08:45:45.999 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.000 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.001 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.046 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.054 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ece43baf-b502-44c4-9065-61d6c3271ae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.108 2 DEBUG nova.policy [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1aa1da2fbc74e148134df6efbe63791', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5461da915a3245579ae75d81001ad2c2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.400 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ece43baf-b502-44c4-9065-61d6c3271ae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.482 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] resizing rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.615 2 DEBUG nova.objects.instance [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'migration_context' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:46 compute-0 ceph-mon[74477]: pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.638 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.638 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Ensure instance console log exists: /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.639 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.639 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.640 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.685 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Successfully updated port: 71e3efd4-125e-40e4-bdda-59df254d21f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.709 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.709 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.709 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.849 2 DEBUG nova.compute.manager [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.850 2 DEBUG nova.compute.manager [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing instance network info cache due to event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:45:46 compute-0 nova_compute[260603]: 2025-10-02 08:45:46.851 2 DEBUG oslo_concurrency.lockutils [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:45:47 compute-0 nova_compute[260603]: 2025-10-02 08:45:47.014 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:45:47 compute-0 nova_compute[260603]: 2025-10-02 08:45:47.040 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Successfully created port: bceec1b4-ab90-4791-a600-a62ec7870928 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:45:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 92 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct 02 08:45:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.028 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Successfully updated port: bceec1b4-ab90-4791-a600-a62ec7870928 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.056 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.056 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquired lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.056 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.175 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.206 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.207 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance network_info: |[{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.208 2 DEBUG oslo_concurrency.lockutils [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.209 2 DEBUG nova.network.neutron [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.214 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start _get_guest_xml network_info=[{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.220 2 WARNING nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.233 2 DEBUG nova.virt.libvirt.host [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.234 2 DEBUG nova.virt.libvirt.host [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.239 2 DEBUG nova.virt.libvirt.host [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.239 2 DEBUG nova.virt.libvirt.host [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.240 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.241 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.242 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.242 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.243 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.243 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.244 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.244 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.244 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.245 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.246 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.246 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.251 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.348 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:45:48 compute-0 ceph-mon[74477]: pgmap v2005: 305 pgs: 305 active+clean; 92 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct 02 08:45:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:45:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383214943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.694 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.730 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.737 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.980 2 DEBUG nova.compute.manager [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-changed-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.981 2 DEBUG nova.compute.manager [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Refreshing instance network info cache due to event network-changed-bceec1b4-ab90-4791-a600-a62ec7870928. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:45:48 compute-0 nova_compute[260603]: 2025-10-02 08:45:48.981 2 DEBUG oslo_concurrency.lockutils [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:45:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:45:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3887349724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.226 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.228 2 DEBUG nova.virt.libvirt.vif [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:45:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=109,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-dp87un5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:45:43Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=fc71f095-bde6-43da-bec6-e0a30dc1b71a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.228 2 DEBUG nova.network.os_vif_util [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.229 2 DEBUG nova.network.os_vif_util [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.230 2 DEBUG nova.objects.instance [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid fc71f095-bde6-43da-bec6-e0a30dc1b71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.249 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <uuid>fc71f095-bde6-43da-bec6-e0a30dc1b71a</uuid>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <name>instance-0000006d</name>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978</nova:name>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:45:48</nova:creationTime>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <nova:port uuid="71e3efd4-125e-40e4-bdda-59df254d21f9">
Oct 02 08:45:49 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <system>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <entry name="serial">fc71f095-bde6-43da-bec6-e0a30dc1b71a</entry>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <entry name="uuid">fc71f095-bde6-43da-bec6-e0a30dc1b71a</entry>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </system>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <os>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   </os>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <features>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   </features>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk">
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       </source>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config">
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       </source>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:45:49 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:e8:06:68"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <target dev="tap71e3efd4-12"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/console.log" append="off"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <video>
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </video>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:45:49 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:45:49 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:45:49 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:45:49 compute-0 nova_compute[260603]: </domain>
Oct 02 08:45:49 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.251 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Preparing to wait for external event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.252 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.252 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.253 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.254 2 DEBUG nova.virt.libvirt.vif [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:45:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=109,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-dp87un5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:45:43Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=fc71f095-bde6-43da-bec6-e0a30dc1b71a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.255 2 DEBUG nova.network.os_vif_util [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.256 2 DEBUG nova.network.os_vif_util [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.257 2 DEBUG os_vif [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.269 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71e3efd4-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap71e3efd4-12, col_values=(('external_ids', {'iface-id': '71e3efd4-125e-40e4-bdda-59df254d21f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:06:68', 'vm-uuid': 'fc71f095-bde6-43da-bec6-e0a30dc1b71a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:49 compute-0 NetworkManager[45129]: <info>  [1759394749.2776] manager: (tap71e3efd4-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.285 2 INFO os_vif [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12')
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.295 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Releasing lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.296 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance network_info: |[{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.297 2 DEBUG oslo_concurrency.lockutils [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.298 2 DEBUG nova.network.neutron [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Refreshing network info cache for port bceec1b4-ab90-4791-a600-a62ec7870928 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.303 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start _get_guest_xml network_info=[{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.309 2 WARNING nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.314 2 DEBUG nova.virt.libvirt.host [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.315 2 DEBUG nova.virt.libvirt.host [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.320 2 DEBUG nova.virt.libvirt.host [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.320 2 DEBUG nova.virt.libvirt.host [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.321 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.321 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.322 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.323 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.323 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.324 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.324 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.324 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.325 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.325 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.326 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.326 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.331 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.417 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.418 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.419 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:e8:06:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.419 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Using config drive
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.453 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1383214943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:45:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3887349724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:45:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:45:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701068349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.817 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.848 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.853 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.908 2 DEBUG nova.network.neutron [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updated VIF entry in instance network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.910 2 DEBUG nova.network.neutron [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.918 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Creating config drive at /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config
Oct 02 08:45:49 compute-0 nova_compute[260603]: 2025-10-02 08:45:49.928 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2l7evrii execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.004 2 DEBUG oslo_concurrency.lockutils [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.099 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2l7evrii" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.127 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.131 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:45:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948405515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.329 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.331 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Deleting local config drive /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config because it was imported into RBD.
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.336 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.339 2 DEBUG nova.virt.libvirt.vif [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvancedOps-773253
92-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:45:45Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.340 2 DEBUG nova.network.os_vif_util [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.341 2 DEBUG nova.network.os_vif_util [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.343 2 DEBUG nova.objects.instance [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.372 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <uuid>ece43baf-b502-44c4-9065-61d6c3271ae4</uuid>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <name>instance-0000006e</name>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <nova:name>tempest-TestServerAdvancedOps-server-1660606380</nova:name>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:45:49</nova:creationTime>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <nova:user uuid="a1aa1da2fbc74e148134df6efbe63791">tempest-TestServerAdvancedOps-77325392-project-member</nova:user>
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <nova:project uuid="5461da915a3245579ae75d81001ad2c2">tempest-TestServerAdvancedOps-77325392</nova:project>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <nova:port uuid="bceec1b4-ab90-4791-a600-a62ec7870928">
Oct 02 08:45:50 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <system>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <entry name="serial">ece43baf-b502-44c4-9065-61d6c3271ae4</entry>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <entry name="uuid">ece43baf-b502-44c4-9065-61d6c3271ae4</entry>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </system>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <os>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   </os>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <features>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   </features>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ece43baf-b502-44c4-9065-61d6c3271ae4_disk">
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       </source>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config">
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       </source>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:45:50 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:b0:e4:55"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <target dev="tapbceec1b4-ab"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/console.log" append="off"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <video>
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </video>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:45:50 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:45:50 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:45:50 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:45:50 compute-0 nova_compute[260603]: </domain>
Oct 02 08:45:50 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.375 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Preparing to wait for external event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.375 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.376 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.376 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.377 2 DEBUG nova.virt.libvirt.vif [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvanced
Ops-77325392-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:45:45Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.378 2 DEBUG nova.network.os_vif_util [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.379 2 DEBUG nova.network.os_vif_util [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.379 2 DEBUG os_vif [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbceec1b4-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbceec1b4-ab, col_values=(('external_ids', {'iface-id': 'bceec1b4-ab90-4791-a600-a62ec7870928', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:e4:55', 'vm-uuid': 'ece43baf-b502-44c4-9065-61d6c3271ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:50 compute-0 NetworkManager[45129]: <info>  [1759394750.3905] manager: (tapbceec1b4-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.400 2 INFO os_vif [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab')
Oct 02 08:45:50 compute-0 kernel: tap71e3efd4-12: entered promiscuous mode
Oct 02 08:45:50 compute-0 NetworkManager[45129]: <info>  [1759394750.4071] manager: (tap71e3efd4-12): new Tun device (/org/freedesktop/NetworkManager/Devices/441)
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 ovn_controller[152344]: 2025-10-02T08:45:50Z|01118|binding|INFO|Claiming lport 71e3efd4-125e-40e4-bdda-59df254d21f9 for this chassis.
Oct 02 08:45:50 compute-0 ovn_controller[152344]: 2025-10-02T08:45:50Z|01119|binding|INFO|71e3efd4-125e-40e4-bdda-59df254d21f9: Claiming fa:16:3e:e8:06:68 10.100.0.4
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.431 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:06:68 10.100.0.4'], port_security=['fa:16:3e:e8:06:68 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fc71f095-bde6-43da-bec6-e0a30dc1b71a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '026f2ff0-b634-4fa0-8159-21403eda59a0 d0f166da-6d6f-4ea3-ab29-33f59ca9931c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ca4751-2801-43b7-bd66-26826481ad08, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=71e3efd4-125e-40e4-bdda-59df254d21f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.432 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 71e3efd4-125e-40e4-bdda-59df254d21f9 in datapath 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 bound to our chassis
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.433 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758
Oct 02 08:45:50 compute-0 systemd-machined[214636]: New machine qemu-136-instance-0000006d.
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.453 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe236f8-9e73-4f2a-92bb-3260b1dcc43d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.454 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ee1fadc-d1 in ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.456 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ee1fadc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.456 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb7f390-8759-4c7d-8e78-3b347e661493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.458 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[03531e11-a79c-454e-8220-c1f9f369c8ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.465 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.466 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.466 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] No VIF found with MAC fa:16:3e:b0:e4:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.466 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Using config drive
Oct 02 08:45:50 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006d.
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.471 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[585f96df-eb54-4842-a60a-3fa89859f5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 systemd-udevd[370795]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.487 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b25505ad-c429-4d9b-9e71-79b70518e4e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_controller[152344]: 2025-10-02T08:45:50Z|01120|binding|INFO|Setting lport 71e3efd4-125e-40e4-bdda-59df254d21f9 ovn-installed in OVS
Oct 02 08:45:50 compute-0 ovn_controller[152344]: 2025-10-02T08:45:50Z|01121|binding|INFO|Setting lport 71e3efd4-125e-40e4-bdda-59df254d21f9 up in Southbound
Oct 02 08:45:50 compute-0 NetworkManager[45129]: <info>  [1759394750.4989] device (tap71e3efd4-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.499 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:50 compute-0 NetworkManager[45129]: <info>  [1759394750.5005] device (tap71e3efd4-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.523 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[214fd372-8ba1-4bbd-aa1d-d730c4b3dbc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.527 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[600e0662-e352-44ba-a3ba-fcd4b3393d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 systemd-udevd[370803]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:45:50 compute-0 NetworkManager[45129]: <info>  [1759394750.5285] manager: (tap0ee1fadc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/442)
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.559 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c8269d31-ac89-4186-a300-25139a232529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.561 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9834d6bc-9b14-483b-87d0-a3f318ffa2c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 NetworkManager[45129]: <info>  [1759394750.5870] device (tap0ee1fadc-d0): carrier: link connected
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.594 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ee3ba2-1da9-4afc-bbef-2a852a4a7190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09fe2fa5-7bad-4c05-abd7-2327ebd7b377]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ee1fadc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:f0:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562918, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370835, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f719a1d-a36d-4ccc-8e5f-43c9821c2d4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe07:f032'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562918, 'tstamp': 562918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370836, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.647 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9813cc85-1888-4ec5-9f93-aac3762c2182]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ee1fadc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:f0:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562918, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370837, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ceph-mon[74477]: pgmap v2006: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:45:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/701068349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:45:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/948405515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.676 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e44439d6-c1b4-4224-a9a9-916ab66e2c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.728 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be2dd7d4-0e5b-4cd7-9cfc-7cd72151d9b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ee1fadc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ee1fadc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:50 compute-0 kernel: tap0ee1fadc-d0: entered promiscuous mode
Oct 02 08:45:50 compute-0 NetworkManager[45129]: <info>  [1759394750.7325] manager: (tap0ee1fadc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.736 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ee1fadc-d0, col_values=(('external_ids', {'iface-id': 'f994afa7-373d-469a-a6b3-0a33b20c9e54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:50 compute-0 ovn_controller[152344]: 2025-10-02T08:45:50Z|01122|binding|INFO|Releasing lport f994afa7-373d-469a-a6b3-0a33b20c9e54 from this chassis (sb_readonly=0)
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.757 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.757 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb4cc5c-6155-41a1-aa44-3c1cefd258d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.758 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758.pid.haproxy
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:45:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.759 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'env', 'PROCESS_TAG=haproxy-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.763 2 DEBUG nova.compute.manager [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.764 2 DEBUG oslo_concurrency.lockutils [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.764 2 DEBUG oslo_concurrency.lockutils [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.764 2 DEBUG oslo_concurrency.lockutils [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:50 compute-0 nova_compute[260603]: 2025-10-02 08:45:50.764 2 DEBUG nova.compute.manager [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Processing event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:45:51 compute-0 podman[370916]: 2025-10-02 08:45:51.132977922 +0000 UTC m=+0.066320588 container create be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:45:51 compute-0 systemd[1]: Started libpod-conmon-be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2.scope.
Oct 02 08:45:51 compute-0 podman[370916]: 2025-10-02 08:45:51.091339025 +0000 UTC m=+0.024681770 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:45:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a8bfea6155d379b6c0d528c11335822e5f651d1b1147fcb20b7f259f8bc47f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:45:51 compute-0 podman[370916]: 2025-10-02 08:45:51.215666091 +0000 UTC m=+0.149008756 container init be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:45:51 compute-0 podman[370916]: 2025-10-02 08:45:51.220984986 +0000 UTC m=+0.154327651 container start be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:45:51 compute-0 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [NOTICE]   (370935) : New worker (370937) forked
Oct 02 08:45:51 compute-0 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [NOTICE]   (370935) : Loading success.
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.296 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394751.295631, fc71f095-bde6-43da-bec6-e0a30dc1b71a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.296 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] VM Started (Lifecycle Event)
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.298 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.301 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.305 2 INFO nova.virt.libvirt.driver [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance spawned successfully.
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.305 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.329 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Creating config drive at /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.338 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrw3cmno execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.387 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.394 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.400 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.401 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.402 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.402 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.403 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.403 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.437 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.438 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394751.296004, fc71f095-bde6-43da-bec6-e0a30dc1b71a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.438 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] VM Paused (Lifecycle Event)
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.479 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.484 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394751.3008535, fc71f095-bde6-43da-bec6-e0a30dc1b71a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.484 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] VM Resumed (Lifecycle Event)
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.491 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrw3cmno" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.519 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.524 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.581 2 INFO nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Took 8.00 seconds to spawn the instance on the hypervisor.
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.582 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.583 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.599 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.632 2 DEBUG nova.network.neutron [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updated VIF entry in instance network info cache for port bceec1b4-ab90-4791-a600-a62ec7870928. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.633 2 DEBUG nova.network.neutron [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.643 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.664 2 DEBUG oslo_concurrency.lockutils [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.684 2 INFO nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Took 9.02 seconds to build instance.
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.703 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.735 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.736 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Deleting local config drive /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config because it was imported into RBD.
Oct 02 08:45:51 compute-0 kernel: tapbceec1b4-ab: entered promiscuous mode
Oct 02 08:45:51 compute-0 NetworkManager[45129]: <info>  [1759394751.8224] manager: (tapbceec1b4-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/444)
Oct 02 08:45:51 compute-0 ovn_controller[152344]: 2025-10-02T08:45:51Z|01123|binding|INFO|Claiming lport bceec1b4-ab90-4791-a600-a62ec7870928 for this chassis.
Oct 02 08:45:51 compute-0 ovn_controller[152344]: 2025-10-02T08:45:51Z|01124|binding|INFO|bceec1b4-ab90-4791-a600-a62ec7870928: Claiming fa:16:3e:b0:e4:55 10.100.0.13
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:51 compute-0 systemd-udevd[370825]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:45:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:51.834 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:51.837 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 bound to our chassis
Oct 02 08:45:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:51.838 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:45:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:51.839 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9a860ddc-bca9-41e0-9795-1333363c9876]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:51 compute-0 NetworkManager[45129]: <info>  [1759394751.8535] device (tapbceec1b4-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:45:51 compute-0 NetworkManager[45129]: <info>  [1759394751.8558] device (tapbceec1b4-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:51 compute-0 ovn_controller[152344]: 2025-10-02T08:45:51Z|01125|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 ovn-installed in OVS
Oct 02 08:45:51 compute-0 ovn_controller[152344]: 2025-10-02T08:45:51Z|01126|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 up in Southbound
Oct 02 08:45:51 compute-0 nova_compute[260603]: 2025-10-02 08:45:51.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:51 compute-0 systemd-machined[214636]: New machine qemu-137-instance-0000006e.
Oct 02 08:45:51 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006e.
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.438 2 DEBUG nova.compute.manager [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.438 2 DEBUG oslo_concurrency.lockutils [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.438 2 DEBUG oslo_concurrency.lockutils [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.438 2 DEBUG oslo_concurrency.lockutils [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.439 2 DEBUG nova.compute.manager [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Processing event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:45:52 compute-0 ceph-mon[74477]: pgmap v2007: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.861 2 DEBUG nova.compute.manager [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.861 2 DEBUG oslo_concurrency.lockutils [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.862 2 DEBUG oslo_concurrency.lockutils [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.862 2 DEBUG oslo_concurrency.lockutils [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.862 2 DEBUG nova.compute.manager [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] No waiting events found dispatching network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.863 2 WARNING nova.compute.manager [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received unexpected event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 for instance with vm_state active and task_state None.
Oct 02 08:45:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.946 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394752.9466424, ece43baf-b502-44c4-9065-61d6c3271ae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.947 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Started (Lifecycle Event)
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.949 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.953 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.956 2 INFO nova.virt.libvirt.driver [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance spawned successfully.
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.957 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.969 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.974 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.979 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.979 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.979 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.980 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.980 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:52 compute-0 nova_compute[260603]: 2025-10-02 08:45:52.981 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.007 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.008 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394752.9473505, ece43baf-b502-44c4-9065-61d6c3271ae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.008 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Paused (Lifecycle Event)
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.044 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.047 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394752.954984, ece43baf-b502-44c4-9065-61d6c3271ae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.047 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Resumed (Lifecycle Event)
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.058 2 INFO nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Took 7.30 seconds to spawn the instance on the hypervisor.
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.058 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.066 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.069 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.104 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.135 2 INFO nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Took 8.39 seconds to build instance.
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.157 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 117 op/s
Oct 02 08:45:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:53.593 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:53 compute-0 nova_compute[260603]: 2025-10-02 08:45:53.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:53.595 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:45:54 compute-0 nova_compute[260603]: 2025-10-02 08:45:54.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:54 compute-0 nova_compute[260603]: 2025-10-02 08:45:54.573 2 DEBUG nova.compute.manager [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:54 compute-0 nova_compute[260603]: 2025-10-02 08:45:54.574 2 DEBUG oslo_concurrency.lockutils [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:54 compute-0 nova_compute[260603]: 2025-10-02 08:45:54.574 2 DEBUG oslo_concurrency.lockutils [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:54 compute-0 nova_compute[260603]: 2025-10-02 08:45:54.574 2 DEBUG oslo_concurrency.lockutils [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:54 compute-0 nova_compute[260603]: 2025-10-02 08:45:54.574 2 DEBUG nova.compute.manager [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:45:54 compute-0 nova_compute[260603]: 2025-10-02 08:45:54.574 2 WARNING nova.compute.manager [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state active and task_state None.
Oct 02 08:45:54 compute-0 ceph-mon[74477]: pgmap v2008: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 117 op/s
Oct 02 08:45:55 compute-0 ovn_controller[152344]: 2025-10-02T08:45:55Z|01127|binding|INFO|Releasing lport f994afa7-373d-469a-a6b3-0a33b20c9e54 from this chassis (sb_readonly=0)
Oct 02 08:45:55 compute-0 NetworkManager[45129]: <info>  [1759394755.1891] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct 02 08:45:55 compute-0 NetworkManager[45129]: <info>  [1759394755.1898] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.279 2 DEBUG nova.objects.instance [None req-1472c8d9-d03a-44bc-a16e-cc3a23084743 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:55 compute-0 ovn_controller[152344]: 2025-10-02T08:45:55Z|01128|binding|INFO|Releasing lport f994afa7-373d-469a-a6b3-0a33b20c9e54 from this chassis (sb_readonly=0)
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.312 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394755.312049, ece43baf-b502-44c4-9065-61d6c3271ae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.312 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Paused (Lifecycle Event)
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.330 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.334 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.350 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 117 op/s
Oct 02 08:45:55 compute-0 kernel: tapbceec1b4-ab (unregistering): left promiscuous mode
Oct 02 08:45:55 compute-0 NetworkManager[45129]: <info>  [1759394755.7242] device (tapbceec1b4-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:45:55 compute-0 ovn_controller[152344]: 2025-10-02T08:45:55Z|01129|binding|INFO|Releasing lport bceec1b4-ab90-4791-a600-a62ec7870928 from this chassis (sb_readonly=0)
Oct 02 08:45:55 compute-0 ovn_controller[152344]: 2025-10-02T08:45:55Z|01130|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 down in Southbound
Oct 02 08:45:55 compute-0 ovn_controller[152344]: 2025-10-02T08:45:55Z|01131|binding|INFO|Removing iface tapbceec1b4-ab ovn-installed in OVS
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:55.742 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:55.743 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 unbound from our chassis
Oct 02 08:45:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:55.743 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:45:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:55.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7524aa5e-5c68-4e8a-bce4-adc4e992015c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:55 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 02 08:45:55 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Consumed 3.465s CPU time.
Oct 02 08:45:55 compute-0 systemd-machined[214636]: Machine qemu-137-instance-0000006e terminated.
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.916 2 DEBUG nova.compute.manager [None req-1472c8d9-d03a-44bc-a16e-cc3a23084743 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.930 2 DEBUG nova.compute.manager [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.931 2 DEBUG nova.compute.manager [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing instance network info cache due to event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.935 2 DEBUG oslo_concurrency.lockutils [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.935 2 DEBUG oslo_concurrency.lockutils [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:45:55 compute-0 nova_compute[260603]: 2025-10-02 08:45:55.937 2 DEBUG nova.network.neutron [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:45:56 compute-0 ceph-mon[74477]: pgmap v2009: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 117 op/s
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.720 2 WARNING nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state None.
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:45:56 compute-0 nova_compute[260603]: 2025-10-02 08:45:56.720 2 WARNING nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state None.
Oct 02 08:45:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Oct 02 08:45:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:45:57 compute-0 nova_compute[260603]: 2025-10-02 08:45:57.972 2 DEBUG nova.network.neutron [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updated VIF entry in instance network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:45:57 compute-0 nova_compute[260603]: 2025-10-02 08:45:57.972 2 DEBUG nova.network.neutron [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:58 compute-0 nova_compute[260603]: 2025-10-02 08:45:58.003 2 DEBUG oslo_concurrency.lockutils [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:45:58 compute-0 nova_compute[260603]: 2025-10-02 08:45:58.317 2 INFO nova.compute.manager [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Resuming
Oct 02 08:45:58 compute-0 nova_compute[260603]: 2025-10-02 08:45:58.318 2 DEBUG nova.objects.instance [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'flavor' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:58 compute-0 nova_compute[260603]: 2025-10-02 08:45:58.356 2 DEBUG oslo_concurrency.lockutils [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:45:58 compute-0 nova_compute[260603]: 2025-10-02 08:45:58.356 2 DEBUG oslo_concurrency.lockutils [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquired lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:45:58 compute-0 nova_compute[260603]: 2025-10-02 08:45:58.357 2 DEBUG nova.network.neutron [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:45:58 compute-0 ceph-mon[74477]: pgmap v2010: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Oct 02 08:45:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.742 2 DEBUG nova.network.neutron [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.762 2 DEBUG oslo_concurrency.lockutils [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Releasing lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.767 2 DEBUG nova.virt.libvirt.vif [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:45:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvancedOps-77325392-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:45:55Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.767 2 DEBUG nova.network.os_vif_util [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.768 2 DEBUG nova.network.os_vif_util [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.768 2 DEBUG os_vif [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbceec1b4-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbceec1b4-ab, col_values=(('external_ids', {'iface-id': 'bceec1b4-ab90-4791-a600-a62ec7870928', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:e4:55', 'vm-uuid': 'ece43baf-b502-44c4-9065-61d6c3271ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.773 2 INFO os_vif [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab')
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.794 2 DEBUG nova.objects.instance [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:45:59 compute-0 kernel: tapbceec1b4-ab: entered promiscuous mode
Oct 02 08:45:59 compute-0 NetworkManager[45129]: <info>  [1759394759.8621] manager: (tapbceec1b4-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/447)
Oct 02 08:45:59 compute-0 ovn_controller[152344]: 2025-10-02T08:45:59Z|01132|binding|INFO|Claiming lport bceec1b4-ab90-4791-a600-a62ec7870928 for this chassis.
Oct 02 08:45:59 compute-0 ovn_controller[152344]: 2025-10-02T08:45:59Z|01133|binding|INFO|bceec1b4-ab90-4791-a600-a62ec7870928: Claiming fa:16:3e:b0:e4:55 10.100.0.13
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:59.871 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:45:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:59.873 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 bound to our chassis
Oct 02 08:45:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:59.873 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:45:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:45:59.874 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[352965a3-bcc6-4c8d-8c83-51b772b8ab09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:59 compute-0 ovn_controller[152344]: 2025-10-02T08:45:59Z|01134|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 up in Southbound
Oct 02 08:45:59 compute-0 ovn_controller[152344]: 2025-10-02T08:45:59Z|01135|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 ovn-installed in OVS
Oct 02 08:45:59 compute-0 nova_compute[260603]: 2025-10-02 08:45:59.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:45:59 compute-0 systemd-machined[214636]: New machine qemu-138-instance-0000006e.
Oct 02 08:45:59 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006e.
Oct 02 08:45:59 compute-0 systemd-udevd[371082]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:45:59 compute-0 NetworkManager[45129]: <info>  [1759394759.9410] device (tapbceec1b4-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:45:59 compute-0 NetworkManager[45129]: <info>  [1759394759.9422] device (tapbceec1b4-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.165 2 DEBUG nova.compute.manager [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.166 2 DEBUG oslo_concurrency.lockutils [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.166 2 DEBUG oslo_concurrency.lockutils [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.166 2 DEBUG oslo_concurrency.lockutils [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.166 2 DEBUG nova.compute.manager [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.166 2 WARNING nova.compute.manager [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state resuming.
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:00 compute-0 ceph-mon[74477]: pgmap v2011: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.711226) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760711258, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1014, "num_deletes": 251, "total_data_size": 1377317, "memory_usage": 1401744, "flush_reason": "Manual Compaction"}
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760720447, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1363402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41439, "largest_seqno": 42452, "table_properties": {"data_size": 1358483, "index_size": 2443, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10926, "raw_average_key_size": 19, "raw_value_size": 1348536, "raw_average_value_size": 2443, "num_data_blocks": 109, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394668, "oldest_key_time": 1759394668, "file_creation_time": 1759394760, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 9258 microseconds, and 4527 cpu microseconds.
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.720486) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1363402 bytes OK
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.720504) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.721867) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.721884) EVENT_LOG_v1 {"time_micros": 1759394760721879, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.721901) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1372512, prev total WAL file size 1372512, number of live WAL files 2.
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.722491) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1331KB)], [92(10MB)]
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760722541, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12171809, "oldest_snapshot_seqno": -1}
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6628 keys, 10508803 bytes, temperature: kUnknown
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760782727, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10508803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10461984, "index_size": 29171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 170150, "raw_average_key_size": 25, "raw_value_size": 10340808, "raw_average_value_size": 1560, "num_data_blocks": 1155, "num_entries": 6628, "num_filter_entries": 6628, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394760, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.783079) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10508803 bytes
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.784276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.5 rd, 174.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(16.6) write-amplify(7.7) OK, records in: 7142, records dropped: 514 output_compression: NoCompression
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.784292) EVENT_LOG_v1 {"time_micros": 1759394760784284, "job": 54, "event": "compaction_finished", "compaction_time_micros": 60402, "compaction_time_cpu_micros": 21748, "output_level": 6, "num_output_files": 1, "total_output_size": 10508803, "num_input_records": 7142, "num_output_records": 6628, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760784891, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760786667, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.722408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:00 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.827 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ece43baf-b502-44c4-9065-61d6c3271ae4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.827 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394760.8269885, ece43baf-b502-44c4-9065-61d6c3271ae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.828 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Started (Lifecycle Event)
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.849 2 DEBUG nova.compute.manager [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.850 2 DEBUG nova.objects.instance [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.856 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.860 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.876 2 INFO nova.virt.libvirt.driver [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance running successfully.
Oct 02 08:46:00 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.879 2 DEBUG nova.virt.libvirt.guest [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.879 2 DEBUG nova.compute.manager [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.886 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.886 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394760.8328013, ece43baf-b502-44c4-9065-61d6c3271ae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.886 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Resumed (Lifecycle Event)
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.907 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.910 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:00 compute-0 nova_compute[260603]: 2025-10-02 08:46:00.943 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:46:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.381 2 DEBUG nova.compute.manager [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.382 2 DEBUG oslo_concurrency.lockutils [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.382 2 DEBUG oslo_concurrency.lockutils [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.382 2 DEBUG oslo_concurrency.lockutils [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.382 2 DEBUG nova.compute.manager [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.382 2 WARNING nova.compute.manager [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state active and task_state None.
Oct 02 08:46:02 compute-0 ceph-mon[74477]: pgmap v2012: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.831 2 DEBUG nova.objects.instance [None req-7ff53994-cb4c-4cc3-a546-c5e4aee8a164 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.856 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394762.8559055, ece43baf-b502-44c4-9065-61d6c3271ae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.856 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Paused (Lifecycle Event)
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.877 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.884 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:02 compute-0 nova_compute[260603]: 2025-10-02 08:46:02.908 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 08:46:03 compute-0 kernel: tapbceec1b4-ab (unregistering): left promiscuous mode
Oct 02 08:46:03 compute-0 NetworkManager[45129]: <info>  [1759394763.3509] device (tapbceec1b4-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:46:03 compute-0 ovn_controller[152344]: 2025-10-02T08:46:03Z|01136|binding|INFO|Releasing lport bceec1b4-ab90-4791-a600-a62ec7870928 from this chassis (sb_readonly=0)
Oct 02 08:46:03 compute-0 ovn_controller[152344]: 2025-10-02T08:46:03Z|01137|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 down in Southbound
Oct 02 08:46:03 compute-0 ovn_controller[152344]: 2025-10-02T08:46:03Z|01138|binding|INFO|Removing iface tapbceec1b4-ab ovn-installed in OVS
Oct 02 08:46:03 compute-0 nova_compute[260603]: 2025-10-02 08:46:03.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.383 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:46:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.386 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 unbound from our chassis
Oct 02 08:46:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.387 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:46:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.388 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9dd12f-36d2-4fb5-8ea8-c10ea8dd2eb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:03 compute-0 ovn_controller[152344]: 2025-10-02T08:46:03Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:06:68 10.100.0.4
Oct 02 08:46:03 compute-0 ovn_controller[152344]: 2025-10-02T08:46:03Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:06:68 10.100.0.4
Oct 02 08:46:03 compute-0 nova_compute[260603]: 2025-10-02 08:46:03.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 165 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Oct 02 08:46:03 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 02 08:46:03 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Consumed 2.878s CPU time.
Oct 02 08:46:03 compute-0 systemd-machined[214636]: Machine qemu-138-instance-0000006e terminated.
Oct 02 08:46:03 compute-0 nova_compute[260603]: 2025-10-02 08:46:03.537 2 DEBUG nova.compute.manager [None req-7ff53994-cb4c-4cc3-a546-c5e4aee8a164 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.597 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.521 2 DEBUG nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.521 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.522 2 WARNING nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state None.
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.523 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.523 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.523 2 DEBUG nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:04 compute-0 nova_compute[260603]: 2025-10-02 08:46:04.523 2 WARNING nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state None.
Oct 02 08:46:04 compute-0 ceph-mon[74477]: pgmap v2013: 305 pgs: 305 active+clean; 165 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Oct 02 08:46:05 compute-0 nova_compute[260603]: 2025-10-02 08:46:05.132 2 INFO nova.compute.manager [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Resuming
Oct 02 08:46:05 compute-0 nova_compute[260603]: 2025-10-02 08:46:05.133 2 DEBUG nova.objects.instance [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'flavor' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:05 compute-0 nova_compute[260603]: 2025-10-02 08:46:05.171 2 DEBUG oslo_concurrency.lockutils [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:46:05 compute-0 nova_compute[260603]: 2025-10-02 08:46:05.172 2 DEBUG oslo_concurrency.lockutils [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquired lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:46:05 compute-0 nova_compute[260603]: 2025-10-02 08:46:05.172 2 DEBUG nova.network.neutron [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:46:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 165 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Oct 02 08:46:05 compute-0 nova_compute[260603]: 2025-10-02 08:46:05.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:06 compute-0 ceph-mon[74477]: pgmap v2014: 305 pgs: 305 active+clean; 165 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Oct 02 08:46:06 compute-0 nova_compute[260603]: 2025-10-02 08:46:06.990 2 DEBUG nova.network.neutron [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.012 2 DEBUG oslo_concurrency.lockutils [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Releasing lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.019 2 DEBUG nova.virt.libvirt.vif [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:45:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvancedOps-77325392-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:46:03Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.019 2 DEBUG nova.network.os_vif_util [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.021 2 DEBUG nova.network.os_vif_util [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.021 2 DEBUG os_vif [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.028 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbceec1b4-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbceec1b4-ab, col_values=(('external_ids', {'iface-id': 'bceec1b4-ab90-4791-a600-a62ec7870928', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:e4:55', 'vm-uuid': 'ece43baf-b502-44c4-9065-61d6c3271ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.030 2 INFO os_vif [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab')
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.063 2 DEBUG nova.objects.instance [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:07 compute-0 kernel: tapbceec1b4-ab: entered promiscuous mode
Oct 02 08:46:07 compute-0 ovn_controller[152344]: 2025-10-02T08:46:07Z|01139|binding|INFO|Claiming lport bceec1b4-ab90-4791-a600-a62ec7870928 for this chassis.
Oct 02 08:46:07 compute-0 ovn_controller[152344]: 2025-10-02T08:46:07Z|01140|binding|INFO|bceec1b4-ab90-4791-a600-a62ec7870928: Claiming fa:16:3e:b0:e4:55 10.100.0.13
Oct 02 08:46:07 compute-0 NetworkManager[45129]: <info>  [1759394767.1689] manager: (tapbceec1b4-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:07.179 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:46:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:07.181 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 bound to our chassis
Oct 02 08:46:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:07.183 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:46:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:07.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0962e167-7cd6-4679-9ab7-6b825e3460ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:07 compute-0 ovn_controller[152344]: 2025-10-02T08:46:07Z|01141|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 up in Southbound
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:07 compute-0 ovn_controller[152344]: 2025-10-02T08:46:07Z|01142|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 ovn-installed in OVS
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:07 compute-0 systemd-udevd[371166]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:46:07 compute-0 systemd-machined[214636]: New machine qemu-139-instance-0000006e.
Oct 02 08:46:07 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-0000006e.
Oct 02 08:46:07 compute-0 NetworkManager[45129]: <info>  [1759394767.2414] device (tapbceec1b4-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:46:07 compute-0 NetworkManager[45129]: <info>  [1759394767.2428] device (tapbceec1b4-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.409 2 DEBUG nova.compute.manager [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.410 2 DEBUG oslo_concurrency.lockutils [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.410 2 DEBUG oslo_concurrency.lockutils [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.411 2 DEBUG oslo_concurrency.lockutils [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.411 2 DEBUG nova.compute.manager [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:07 compute-0 nova_compute[260603]: 2025-10-02 08:46:07.411 2 WARNING nova.compute.manager [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state resuming.
Oct 02 08:46:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 167 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Oct 02 08:46:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.153 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ece43baf-b502-44c4-9065-61d6c3271ae4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.154 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394768.1530848, ece43baf-b502-44c4-9065-61d6c3271ae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.154 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Started (Lifecycle Event)
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.174 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.180 2 DEBUG nova.compute.manager [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.181 2 DEBUG nova.objects.instance [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.185 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.199 2 INFO nova.virt.libvirt.driver [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance running successfully.
Oct 02 08:46:08 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.203 2 DEBUG nova.virt.libvirt.guest [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.204 2 DEBUG nova.compute.manager [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.208 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.208 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394768.1582346, ece43baf-b502-44c4-9065-61d6c3271ae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.209 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Resumed (Lifecycle Event)
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.236 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.242 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:08 compute-0 nova_compute[260603]: 2025-10-02 08:46:08.262 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:46:08 compute-0 ceph-mon[74477]: pgmap v2015: 305 pgs: 305 active+clean; 167 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Oct 02 08:46:09 compute-0 podman[371219]: 2025-10-02 08:46:09.041245656 +0000 UTC m=+0.091857824 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.146 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.147 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.147 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.148 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.148 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:09 compute-0 podman[371218]: 2025-10-02 08:46:09.150002706 +0000 UTC m=+0.201126500 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.150 2 INFO nova.compute.manager [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Terminating instance
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.151 2 DEBUG nova.compute.manager [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:46:09 compute-0 kernel: tapbceec1b4-ab (unregistering): left promiscuous mode
Oct 02 08:46:09 compute-0 NetworkManager[45129]: <info>  [1759394769.1955] device (tapbceec1b4-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:09 compute-0 ovn_controller[152344]: 2025-10-02T08:46:09Z|01143|binding|INFO|Releasing lport bceec1b4-ab90-4791-a600-a62ec7870928 from this chassis (sb_readonly=0)
Oct 02 08:46:09 compute-0 ovn_controller[152344]: 2025-10-02T08:46:09Z|01144|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 down in Southbound
Oct 02 08:46:09 compute-0 ovn_controller[152344]: 2025-10-02T08:46:09Z|01145|binding|INFO|Removing iface tapbceec1b4-ab ovn-installed in OVS
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:09.211 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:46:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:09.211 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 unbound from our chassis
Oct 02 08:46:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:09.212 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 08:46:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:09.213 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a895e600-2c06-423f-aa29-10967d105493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:09 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 02 08:46:09 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Consumed 1.784s CPU time.
Oct 02 08:46:09 compute-0 systemd-machined[214636]: Machine qemu-139-instance-0000006e terminated.
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.390 2 INFO nova.virt.libvirt.driver [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance destroyed successfully.
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.392 2 DEBUG nova.objects.instance [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'resources' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.405 2 DEBUG nova.virt.libvirt.vif [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:45:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvancedOps-77325392-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:46:08Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.406 2 DEBUG nova.network.os_vif_util [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.407 2 DEBUG nova.network.os_vif_util [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.407 2 DEBUG os_vif [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbceec1b4-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.424 2 INFO os_vif [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab')
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.522 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.523 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.523 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.524 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.525 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.525 2 WARNING nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state active and task_state deleting.
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.525 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.526 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.526 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.526 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.527 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.527 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.754 2 INFO nova.virt.libvirt.driver [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Deleting instance files /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4_del
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.754 2 INFO nova.virt.libvirt.driver [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Deletion of /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4_del complete
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.822 2 INFO nova.compute.manager [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Took 0.67 seconds to destroy the instance on the hypervisor.
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.823 2 DEBUG oslo.service.loopingcall [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.823 2 DEBUG nova.compute.manager [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:46:09 compute-0 nova_compute[260603]: 2025-10-02 08:46:09.823 2 DEBUG nova.network.neutron [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:46:10 compute-0 nova_compute[260603]: 2025-10-02 08:46:10.666 2 DEBUG nova.network.neutron [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:10 compute-0 nova_compute[260603]: 2025-10-02 08:46:10.690 2 INFO nova.compute.manager [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Took 0.87 seconds to deallocate network for instance.
Oct 02 08:46:10 compute-0 nova_compute[260603]: 2025-10-02 08:46:10.744 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:10 compute-0 nova_compute[260603]: 2025-10-02 08:46:10.744 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:10 compute-0 ceph-mon[74477]: pgmap v2016: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 02 08:46:10 compute-0 nova_compute[260603]: 2025-10-02 08:46:10.832 2 DEBUG oslo_concurrency.processutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:10 compute-0 nova_compute[260603]: 2025-10-02 08:46:10.896 2 DEBUG nova.compute.manager [req-b77dec63-63e5-4e80-b2fc-a570a32b69b1 req-e84c443e-68c7-40ca-bc21-11321d713542 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-deleted-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:46:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008508401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.253 2 DEBUG oslo_concurrency.processutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.261 2 DEBUG nova.compute.provider_tree [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.279 2 DEBUG nova.scheduler.client.report [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.306 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.338 2 INFO nova.scheduler.client.report [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Deleted allocations for instance ece43baf-b502-44c4-9065-61d6c3271ae4
Oct 02 08:46:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.425 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.733 2 DEBUG nova.compute.manager [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.733 2 DEBUG oslo_concurrency.lockutils [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.734 2 DEBUG oslo_concurrency.lockutils [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.735 2 DEBUG oslo_concurrency.lockutils [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.735 2 DEBUG nova.compute.manager [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:11 compute-0 nova_compute[260603]: 2025-10-02 08:46:11.736 2 WARNING nova.compute.manager [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state deleted and task_state None.
Oct 02 08:46:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4008508401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:12 compute-0 ceph-mon[74477]: pgmap v2017: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct 02 08:46:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:13 compute-0 ovn_controller[152344]: 2025-10-02T08:46:13Z|01146|binding|INFO|Releasing lport f994afa7-373d-469a-a6b3-0a33b20c9e54 from this chassis (sb_readonly=0)
Oct 02 08:46:13 compute-0 nova_compute[260603]: 2025-10-02 08:46:13.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 02 08:46:14 compute-0 nova_compute[260603]: 2025-10-02 08:46:14.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:14 compute-0 nova_compute[260603]: 2025-10-02 08:46:14.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:14 compute-0 ceph-mon[74477]: pgmap v2018: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 02 08:46:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 102 KiB/s wr, 51 op/s
Oct 02 08:46:16 compute-0 podman[371318]: 2025-10-02 08:46:16.020894811 +0000 UTC m=+0.082632267 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:46:16 compute-0 podman[371319]: 2025-10-02 08:46:16.027675403 +0000 UTC m=+0.080186360 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 08:46:16 compute-0 nova_compute[260603]: 2025-10-02 08:46:16.402 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:16 compute-0 nova_compute[260603]: 2025-10-02 08:46:16.403 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:16 compute-0 nova_compute[260603]: 2025-10-02 08:46:16.419 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:46:16 compute-0 nova_compute[260603]: 2025-10-02 08:46:16.483 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:16 compute-0 nova_compute[260603]: 2025-10-02 08:46:16.483 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:16 compute-0 nova_compute[260603]: 2025-10-02 08:46:16.489 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:46:16 compute-0 nova_compute[260603]: 2025-10-02 08:46:16.490 2 INFO nova.compute.claims [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:46:16 compute-0 nova_compute[260603]: 2025-10-02 08:46:16.596 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:16 compute-0 ceph-mon[74477]: pgmap v2019: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 102 KiB/s wr, 51 op/s
Oct 02 08:46:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:46:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223246840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.033 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.043 2 DEBUG nova.compute.provider_tree [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.064 2 DEBUG nova.scheduler.client.report [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.088 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.089 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.136 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.137 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.164 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.187 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.292 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.294 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.294 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Creating image(s)
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.330 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.367 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.403 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.408 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 102 KiB/s wr, 51 op/s
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.452 2 DEBUG nova.policy [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.487 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.488 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.489 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.489 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.514 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.518 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a541632d-06cf-48a9-a44d-19bcce4df36f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4223246840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.879 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a541632d-06cf-48a9-a44d-19bcce4df36f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:17 compute-0 nova_compute[260603]: 2025-10-02 08:46:17.943 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:46:18 compute-0 nova_compute[260603]: 2025-10-02 08:46:18.059 2 DEBUG nova.objects.instance [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid a541632d-06cf-48a9-a44d-19bcce4df36f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:18 compute-0 nova_compute[260603]: 2025-10-02 08:46:18.074 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:46:18 compute-0 nova_compute[260603]: 2025-10-02 08:46:18.075 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Ensure instance console log exists: /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:46:18 compute-0 nova_compute[260603]: 2025-10-02 08:46:18.075 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:18 compute-0 nova_compute[260603]: 2025-10-02 08:46:18.076 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:18 compute-0 nova_compute[260603]: 2025-10-02 08:46:18.076 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:18 compute-0 nova_compute[260603]: 2025-10-02 08:46:18.325 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Successfully created port: 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:46:18 compute-0 ceph-mon[74477]: pgmap v2020: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 102 KiB/s wr, 51 op/s
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.410 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Successfully updated port: 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 151 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.438 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.438 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.439 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.595 2 DEBUG nova.compute.manager [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.596 2 DEBUG nova.compute.manager [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing instance network info cache due to event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.596 2 DEBUG oslo_concurrency.lockutils [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:46:19 compute-0 nova_compute[260603]: 2025-10-02 08:46:19.660 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.598 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.629 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.630 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance network_info: |[{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.630 2 DEBUG oslo_concurrency.lockutils [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.631 2 DEBUG nova.network.neutron [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.637 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start _get_guest_xml network_info=[{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.644 2 WARNING nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.656 2 DEBUG nova.virt.libvirt.host [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.657 2 DEBUG nova.virt.libvirt.host [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.662 2 DEBUG nova.virt.libvirt.host [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.663 2 DEBUG nova.virt.libvirt.host [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.664 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.665 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.665 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.666 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.666 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.667 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.667 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.668 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.668 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.669 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.669 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.670 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:46:20 compute-0 nova_compute[260603]: 2025-10-02 08:46:20.675 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:20 compute-0 ceph-mon[74477]: pgmap v2021: 305 pgs: 305 active+clean; 151 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Oct 02 08:46:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:46:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365795224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.190 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.223 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.227 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 151 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 42 op/s
Oct 02 08:46:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:46:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995743722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.652 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.654 2 DEBUG nova.virt.libvirt.vif [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=111,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-xidjkow3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:46:17Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=a541632d-06cf-48a9-a44d-19bcce4df36f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.654 2 DEBUG nova.network.os_vif_util [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.655 2 DEBUG nova.network.os_vif_util [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.656 2 DEBUG nova.objects.instance [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid a541632d-06cf-48a9-a44d-19bcce4df36f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.682 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <uuid>a541632d-06cf-48a9-a44d-19bcce4df36f</uuid>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <name>instance-0000006f</name>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778</nova:name>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:46:20</nova:creationTime>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <nova:port uuid="4475b0be-d2ac-4bf1-8ad7-e117311654a3">
Oct 02 08:46:21 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <system>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <entry name="serial">a541632d-06cf-48a9-a44d-19bcce4df36f</entry>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <entry name="uuid">a541632d-06cf-48a9-a44d-19bcce4df36f</entry>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </system>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <os>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   </os>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <features>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   </features>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/a541632d-06cf-48a9-a44d-19bcce4df36f_disk">
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       </source>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config">
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       </source>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:46:21 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:56:7c:24"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <target dev="tap4475b0be-d2"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/console.log" append="off"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <video>
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </video>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:46:21 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:46:21 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:46:21 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:46:21 compute-0 nova_compute[260603]: </domain>
Oct 02 08:46:21 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.684 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Preparing to wait for external event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.684 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.685 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.685 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.686 2 DEBUG nova.virt.libvirt.vif [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=111,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-xidjkow3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:46:17Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=a541632d-06cf-48a9-a44d-19bcce4df36f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.687 2 DEBUG nova.network.os_vif_util [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.688 2 DEBUG nova.network.os_vif_util [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.689 2 DEBUG os_vif [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4475b0be-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4475b0be-d2, col_values=(('external_ids', {'iface-id': '4475b0be-d2ac-4bf1-8ad7-e117311654a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:7c:24', 'vm-uuid': 'a541632d-06cf-48a9-a44d-19bcce4df36f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:21 compute-0 NetworkManager[45129]: <info>  [1759394781.7011] manager: (tap4475b0be-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.706 2 INFO os_vif [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2')
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.770 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.771 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.772 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:56:7c:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.773 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Using config drive
Oct 02 08:46:21 compute-0 nova_compute[260603]: 2025-10-02 08:46:21.800 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/365795224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:46:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2995743722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.066 2 DEBUG nova.network.neutron [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updated VIF entry in instance network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.067 2 DEBUG nova.network.neutron [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.092 2 DEBUG oslo_concurrency.lockutils [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:46:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:46:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1313432951' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:46:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:46:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1313432951' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.263 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Creating config drive at /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.273 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pudoozx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.441 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pudoozx" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.482 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.488 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.675 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.677 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Deleting local config drive /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config because it was imported into RBD.
Oct 02 08:46:22 compute-0 kernel: tap4475b0be-d2: entered promiscuous mode
Oct 02 08:46:22 compute-0 NetworkManager[45129]: <info>  [1759394782.7652] manager: (tap4475b0be-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Oct 02 08:46:22 compute-0 systemd-udevd[371679]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:46:22 compute-0 ovn_controller[152344]: 2025-10-02T08:46:22Z|01147|binding|INFO|Claiming lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 for this chassis.
Oct 02 08:46:22 compute-0 ovn_controller[152344]: 2025-10-02T08:46:22Z|01148|binding|INFO|4475b0be-d2ac-4bf1-8ad7-e117311654a3: Claiming fa:16:3e:56:7c:24 10.100.0.12
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.821 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:7c:24 10.100.0.12'], port_security=['fa:16:3e:56:7c:24 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a541632d-06cf-48a9-a44d-19bcce4df36f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0f166da-6d6f-4ea3-ab29-33f59ca9931c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ca4751-2801-43b7-bd66-26826481ad08, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4475b0be-d2ac-4bf1-8ad7-e117311654a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.824 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 in datapath 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 bound to our chassis
Oct 02 08:46:22 compute-0 NetworkManager[45129]: <info>  [1759394782.8299] device (tap4475b0be-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.827 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758
Oct 02 08:46:22 compute-0 NetworkManager[45129]: <info>  [1759394782.8331] device (tap4475b0be-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:46:22 compute-0 ovn_controller[152344]: 2025-10-02T08:46:22Z|01149|binding|INFO|Setting lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 ovn-installed in OVS
Oct 02 08:46:22 compute-0 ovn_controller[152344]: 2025-10-02T08:46:22Z|01150|binding|INFO|Setting lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 up in Southbound
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:22 compute-0 systemd-machined[214636]: New machine qemu-140-instance-0000006f.
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.857 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01d71aa7-3d3b-49ff-a093-bcdbcf86d428]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:22 compute-0 ceph-mon[74477]: pgmap v2022: 305 pgs: 305 active+clean; 151 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 42 op/s
Oct 02 08:46:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1313432951' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:46:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1313432951' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:46:22 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-0000006f.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.894441) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782894527, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 440, "num_deletes": 255, "total_data_size": 330339, "memory_usage": 340056, "flush_reason": "Manual Compaction"}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.899 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[64211ba9-b0f8-4ae1-8cbc-31cc3aa0efc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782900555, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 327308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42453, "largest_seqno": 42892, "table_properties": {"data_size": 324802, "index_size": 606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5995, "raw_average_key_size": 18, "raw_value_size": 319765, "raw_average_value_size": 963, "num_data_blocks": 27, "num_entries": 332, "num_filter_entries": 332, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394761, "oldest_key_time": 1759394761, "file_creation_time": 1759394782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 6160 microseconds, and 2861 cpu microseconds.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.900609) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 327308 bytes OK
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.900633) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902124) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902152) EVENT_LOG_v1 {"time_micros": 1759394782902142, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 327628, prev total WAL file size 339949, number of live WAL files 2.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902949) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353033' seq:72057594037927935, type:22 .. '6C6F676D0031373534' seq:0, type:0; will stop at (end)
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(319KB)], [95(10MB)]
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782902983, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10836111, "oldest_snapshot_seqno": -1}
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.907 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[100fbec6-01d2-4e66-a389-2133f5f36769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.941 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[49019a29-fc45-499f-ad80-b3d6d75750f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6442 keys, 10710675 bytes, temperature: kUnknown
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782950588, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10710675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10664317, "index_size": 29160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 167219, "raw_average_key_size": 25, "raw_value_size": 10545569, "raw_average_value_size": 1637, "num_data_blocks": 1151, "num_entries": 6442, "num_filter_entries": 6442, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.950881) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10710675 bytes
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.952069) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.3 rd, 224.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.0 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(65.8) write-amplify(32.7) OK, records in: 6960, records dropped: 518 output_compression: NoCompression
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.952089) EVENT_LOG_v1 {"time_micros": 1759394782952080, "job": 56, "event": "compaction_finished", "compaction_time_micros": 47677, "compaction_time_cpu_micros": 22249, "output_level": 6, "num_output_files": 1, "total_output_size": 10710675, "num_input_records": 6960, "num_output_records": 6442, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782952283, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782954643, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.955178) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782955282, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 14332, "memory_usage": 20112, "flush_reason": "Manual Compaction"}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782957576, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 13850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42893, "largest_seqno": 43147, "table_properties": {"data_size": 12098, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394782, "oldest_key_time": 1759394782, "file_creation_time": 1759394782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 2435 microseconds, and 1214 cpu microseconds.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.957629) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 13850 bytes OK
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.957650) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.958879) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.958901) EVENT_LOG_v1 {"time_micros": 1759394782958894, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.958923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 12321, prev total WAL file size 12321, number of live WAL files 2.
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.959334) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(13KB)], [98(10MB)]
Oct 02 08:46:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782959370, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 10724525, "oldest_snapshot_seqno": -1}
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.963 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc399ce-fa43-418c-83c2-76f20f9c3b1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ee1fadc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:f0:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562918, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371695, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.982 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e77b0b15-07ed-471f-9dae-5d5def5743b0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0ee1fadc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562930, 'tstamp': 562930}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371697, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0ee1fadc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562932, 'tstamp': 562932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371697, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.983 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ee1fadc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:22 compute-0 nova_compute[260603]: 2025-10-02 08:46:22.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.985 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ee1fadc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.986 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.986 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ee1fadc-d0, col_values=(('external_ids', {'iface-id': 'f994afa7-373d-469a-a6b3-0a33b20c9e54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.986 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6193 keys, 7435108 bytes, temperature: kUnknown
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394783003585, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7435108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7395347, "index_size": 23203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 162237, "raw_average_key_size": 26, "raw_value_size": 7285814, "raw_average_value_size": 1176, "num_data_blocks": 902, "num_entries": 6193, "num_filter_entries": 6193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.003791) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7435108 bytes
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.005092) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.3 rd, 167.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.2 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(1311.2) write-amplify(536.8) OK, records in: 6697, records dropped: 504 output_compression: NoCompression
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.005108) EVENT_LOG_v1 {"time_micros": 1759394783005100, "job": 58, "event": "compaction_finished", "compaction_time_micros": 44270, "compaction_time_cpu_micros": 20456, "output_level": 6, "num_output_files": 1, "total_output_size": 7435108, "num_input_records": 6697, "num_output_records": 6193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394783005214, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394783006935, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.959268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.006989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.006995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.006997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.006999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.007001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.106 2 DEBUG nova.compute.manager [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.106 2 DEBUG oslo_concurrency.lockutils [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.107 2 DEBUG oslo_concurrency.lockutils [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.107 2 DEBUG oslo_concurrency.lockutils [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.107 2 DEBUG nova.compute.manager [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Processing event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:46:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.737 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394783.73671, a541632d-06cf-48a9-a44d-19bcce4df36f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.738 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] VM Started (Lifecycle Event)
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.742 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.747 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.752 2 INFO nova.virt.libvirt.driver [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance spawned successfully.
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.752 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.761 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.767 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.784 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.785 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.786 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.787 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.787 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.788 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.795 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.796 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394783.7369533, a541632d-06cf-48a9-a44d-19bcce4df36f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.796 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] VM Paused (Lifecycle Event)
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.823 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.826 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394783.7464197, a541632d-06cf-48a9-a44d-19bcce4df36f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.826 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] VM Resumed (Lifecycle Event)
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.850 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.853 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.880 2 INFO nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Took 6.59 seconds to spawn the instance on the hypervisor.
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.881 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:23 compute-0 nova_compute[260603]: 2025-10-02 08:46:23.882 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:46:24 compute-0 nova_compute[260603]: 2025-10-02 08:46:24.001 2 INFO nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Took 7.54 seconds to build instance.
Oct 02 08:46:24 compute-0 nova_compute[260603]: 2025-10-02 08:46:24.025 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:24 compute-0 nova_compute[260603]: 2025-10-02 08:46:24.388 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394769.387032, ece43baf-b502-44c4-9065-61d6c3271ae4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:24 compute-0 nova_compute[260603]: 2025-10-02 08:46:24.388 2 INFO nova.compute.manager [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Stopped (Lifecycle Event)
Oct 02 08:46:24 compute-0 nova_compute[260603]: 2025-10-02 08:46:24.407 2 DEBUG nova.compute.manager [None req-d7d789b1-d8f5-4ad4-88ad-193419a98a6e - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:24 compute-0 nova_compute[260603]: 2025-10-02 08:46:24.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:24 compute-0 ceph-mon[74477]: pgmap v2023: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 08:46:25 compute-0 nova_compute[260603]: 2025-10-02 08:46:25.227 2 DEBUG nova.compute.manager [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:25 compute-0 nova_compute[260603]: 2025-10-02 08:46:25.228 2 DEBUG oslo_concurrency.lockutils [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:25 compute-0 nova_compute[260603]: 2025-10-02 08:46:25.229 2 DEBUG oslo_concurrency.lockutils [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:25 compute-0 nova_compute[260603]: 2025-10-02 08:46:25.230 2 DEBUG oslo_concurrency.lockutils [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:25 compute-0 nova_compute[260603]: 2025-10-02 08:46:25.230 2 DEBUG nova.compute.manager [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] No waiting events found dispatching network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:46:25 compute-0 nova_compute[260603]: 2025-10-02 08:46:25.231 2 WARNING nova.compute.manager [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received unexpected event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 for instance with vm_state active and task_state None.
Oct 02 08:46:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 08:46:26 compute-0 nova_compute[260603]: 2025-10-02 08:46:26.343 2 DEBUG nova.compute.manager [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:26 compute-0 nova_compute[260603]: 2025-10-02 08:46:26.345 2 DEBUG nova.compute.manager [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing instance network info cache due to event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:46:26 compute-0 nova_compute[260603]: 2025-10-02 08:46:26.346 2 DEBUG oslo_concurrency.lockutils [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:46:26 compute-0 nova_compute[260603]: 2025-10-02 08:46:26.347 2 DEBUG oslo_concurrency.lockutils [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:46:26 compute-0 nova_compute[260603]: 2025-10-02 08:46:26.347 2 DEBUG nova.network.neutron [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:46:26 compute-0 nova_compute[260603]: 2025-10-02 08:46:26.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:26 compute-0 ceph-mon[74477]: pgmap v2024: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 08:46:27 compute-0 nova_compute[260603]: 2025-10-02 08:46:27.596 2 DEBUG nova.network.neutron [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updated VIF entry in instance network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:46:27 compute-0 nova_compute[260603]: 2025-10-02 08:46:27.598 2 DEBUG nova.network.neutron [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:27 compute-0 nova_compute[260603]: 2025-10-02 08:46:27.617 2 DEBUG oslo_concurrency.lockutils [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:46:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:46:27
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root', '.mgr', 'backups', 'vms', 'volumes', 'default.rgw.meta']
Oct 02 08:46:27 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:46:28 compute-0 nova_compute[260603]: 2025-10-02 08:46:28.081 2 DEBUG nova.compute.manager [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:28 compute-0 nova_compute[260603]: 2025-10-02 08:46:28.082 2 DEBUG nova.compute.manager [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing instance network info cache due to event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:46:28 compute-0 nova_compute[260603]: 2025-10-02 08:46:28.083 2 DEBUG oslo_concurrency.lockutils [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:46:28 compute-0 nova_compute[260603]: 2025-10-02 08:46:28.083 2 DEBUG oslo_concurrency.lockutils [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:46:28 compute-0 nova_compute[260603]: 2025-10-02 08:46:28.084 2 DEBUG nova.network.neutron [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:46:28 compute-0 ceph-mon[74477]: pgmap v2025: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 08:46:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 08:46:29 compute-0 nova_compute[260603]: 2025-10-02 08:46:29.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:29 compute-0 nova_compute[260603]: 2025-10-02 08:46:29.576 2 DEBUG nova.network.neutron [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updated VIF entry in instance network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:46:29 compute-0 nova_compute[260603]: 2025-10-02 08:46:29.576 2 DEBUG nova.network.neutron [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:29 compute-0 nova_compute[260603]: 2025-10-02 08:46:29.590 2 DEBUG oslo_concurrency.lockutils [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:46:30 compute-0 ceph-mon[74477]: pgmap v2026: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 08:46:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 651 KiB/s wr, 86 op/s
Oct 02 08:46:31 compute-0 nova_compute[260603]: 2025-10-02 08:46:31.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:32 compute-0 ceph-mon[74477]: pgmap v2027: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 651 KiB/s wr, 86 op/s
Oct 02 08:46:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 654 KiB/s wr, 86 op/s
Oct 02 08:46:33 compute-0 nova_compute[260603]: 2025-10-02 08:46:33.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:34 compute-0 nova_compute[260603]: 2025-10-02 08:46:34.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:34 compute-0 nova_compute[260603]: 2025-10-02 08:46:34.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:34.829 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:34 compute-0 ceph-mon[74477]: pgmap v2028: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 654 KiB/s wr, 86 op/s
Oct 02 08:46:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 72 op/s
Oct 02 08:46:35 compute-0 nova_compute[260603]: 2025-10-02 08:46:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:35 compute-0 nova_compute[260603]: 2025-10-02 08:46:35.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:46:35 compute-0 nova_compute[260603]: 2025-10-02 08:46:35.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:46:35 compute-0 sudo[371741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:35 compute-0 sudo[371741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:35 compute-0 sudo[371741]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:35 compute-0 sudo[371766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:46:35 compute-0 sudo[371766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:35 compute-0 sudo[371766]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:35 compute-0 sudo[371791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:35 compute-0 sudo[371791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:35 compute-0 sudo[371791]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:35 compute-0 nova_compute[260603]: 2025-10-02 08:46:35.911 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:46:35 compute-0 nova_compute[260603]: 2025-10-02 08:46:35.911 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:46:35 compute-0 nova_compute[260603]: 2025-10-02 08:46:35.912 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:46:35 compute-0 nova_compute[260603]: 2025-10-02 08:46:35.912 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc71f095-bde6-43da-bec6-e0a30dc1b71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:35 compute-0 sudo[371816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:46:35 compute-0 sudo[371816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:36 compute-0 sudo[371816]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:46:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:46:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:46:36 compute-0 ovn_controller[152344]: 2025-10-02T08:46:36Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:7c:24 10.100.0.12
Oct 02 08:46:36 compute-0 ovn_controller[152344]: 2025-10-02T08:46:36Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:7c:24 10.100.0.12
Oct 02 08:46:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:46:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e8ee8fce-37af-4fae-9249-f6f6c981c3de does not exist
Oct 02 08:46:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1cb8fc4c-1299-4188-bbd2-17fab0b47493 does not exist
Oct 02 08:46:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 36f32b54-e701-4f9a-834b-a56b969a35b1 does not exist
Oct 02 08:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:46:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:46:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:46:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:46:36 compute-0 sudo[371874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:36 compute-0 sudo[371874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:36 compute-0 sudo[371874]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:36 compute-0 nova_compute[260603]: 2025-10-02 08:46:36.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:36 compute-0 sudo[371899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:46:36 compute-0 sudo[371899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:36 compute-0 sudo[371899]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:36 compute-0 sudo[371924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:36 compute-0 sudo[371924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:36 compute-0 sudo[371924]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:36 compute-0 sudo[371949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:46:36 compute-0 sudo[371949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 02 08:46:36 compute-0 ceph-mon[74477]: pgmap v2029: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 72 op/s
Oct 02 08:46:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:46:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 02 08:46:36 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 02 08:46:37 compute-0 nova_compute[260603]: 2025-10-02 08:46:37.415 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:37 compute-0 podman[372015]: 2025-10-02 08:46:37.417800608 +0000 UTC m=+0.059860497 container create 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 02 08:46:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 180 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 748 KiB/s wr, 99 op/s
Oct 02 08:46:37 compute-0 nova_compute[260603]: 2025-10-02 08:46:37.440 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:46:37 compute-0 nova_compute[260603]: 2025-10-02 08:46:37.440 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:46:37 compute-0 systemd[1]: Started libpod-conmon-6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c.scope.
Oct 02 08:46:37 compute-0 podman[372015]: 2025-10-02 08:46:37.394635386 +0000 UTC m=+0.036695275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:46:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:46:37 compute-0 podman[372015]: 2025-10-02 08:46:37.546165909 +0000 UTC m=+0.188225848 container init 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:46:37 compute-0 podman[372015]: 2025-10-02 08:46:37.558916336 +0000 UTC m=+0.200976225 container start 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 08:46:37 compute-0 podman[372015]: 2025-10-02 08:46:37.562684294 +0000 UTC m=+0.204744173 container attach 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:46:37 compute-0 suspicious_pare[372032]: 167 167
Oct 02 08:46:37 compute-0 systemd[1]: libpod-6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c.scope: Deactivated successfully.
Oct 02 08:46:37 compute-0 podman[372015]: 2025-10-02 08:46:37.570351422 +0000 UTC m=+0.212411311 container died 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 08:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec71db7b19e3f351e9e910cd27dc24ce2c8f9e3e35b915bf2c1a0f588c33d332-merged.mount: Deactivated successfully.
Oct 02 08:46:37 compute-0 podman[372015]: 2025-10-02 08:46:37.622570591 +0000 UTC m=+0.264630480 container remove 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 08:46:37 compute-0 systemd[1]: libpod-conmon-6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c.scope: Deactivated successfully.
Oct 02 08:46:37 compute-0 podman[372057]: 2025-10-02 08:46:37.869326621 +0000 UTC m=+0.077747533 container create a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 08:46:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:37 compute-0 systemd[1]: Started libpod-conmon-a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f.scope.
Oct 02 08:46:37 compute-0 podman[372057]: 2025-10-02 08:46:37.838953194 +0000 UTC m=+0.047374176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:46:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:46:37 compute-0 ceph-mon[74477]: osdmap e266: 3 total, 3 up, 3 in
Oct 02 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:38 compute-0 podman[372057]: 2025-10-02 08:46:38.001024846 +0000 UTC m=+0.209445798 container init a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 08:46:38 compute-0 podman[372057]: 2025-10-02 08:46:38.01365981 +0000 UTC m=+0.222080722 container start a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 02 08:46:38 compute-0 podman[372057]: 2025-10-02 08:46:38.017835611 +0000 UTC m=+0.226256523 container attach a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:46:38 compute-0 nova_compute[260603]: 2025-10-02 08:46:38.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001223548809174969 of space, bias 1.0, pg target 0.36706464275249073 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:46:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:46:38 compute-0 ceph-mon[74477]: pgmap v2031: 305 pgs: 305 active+clean; 180 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 748 KiB/s wr, 99 op/s
Oct 02 08:46:39 compute-0 epic_wright[372074]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:46:39 compute-0 epic_wright[372074]: --> relative data size: 1.0
Oct 02 08:46:39 compute-0 epic_wright[372074]: --> All data devices are unavailable
Oct 02 08:46:39 compute-0 systemd[1]: libpod-a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f.scope: Deactivated successfully.
Oct 02 08:46:39 compute-0 systemd[1]: libpod-a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f.scope: Consumed 1.005s CPU time.
Oct 02 08:46:39 compute-0 podman[372057]: 2025-10-02 08:46:39.070401727 +0000 UTC m=+1.278822639 container died a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438-merged.mount: Deactivated successfully.
Oct 02 08:46:39 compute-0 podman[372057]: 2025-10-02 08:46:39.129956084 +0000 UTC m=+1.338376966 container remove a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:46:39 compute-0 systemd[1]: libpod-conmon-a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f.scope: Deactivated successfully.
Oct 02 08:46:39 compute-0 sudo[371949]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:39 compute-0 podman[372104]: 2025-10-02 08:46:39.180702675 +0000 UTC m=+0.082398310 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 02 08:46:39 compute-0 sudo[372134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:39 compute-0 sudo[372134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:39 compute-0 sudo[372134]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:39 compute-0 podman[372133]: 2025-10-02 08:46:39.370576784 +0000 UTC m=+0.159265526 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:46:39 compute-0 sudo[372178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:46:39 compute-0 sudo[372178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:39 compute-0 sudo[372178]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:39 compute-0 sudo[372210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:39 compute-0 sudo[372210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:39 compute-0 sudo[372210]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:39 compute-0 sudo[372235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:46:39 compute-0 sudo[372235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.884 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.885 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.909 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:46:39 compute-0 podman[372321]: 2025-10-02 08:46:39.961176752 +0000 UTC m=+0.040496664 container create 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.998 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:39 compute-0 nova_compute[260603]: 2025-10-02 08:46:39.999 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:40 compute-0 systemd[1]: Started libpod-conmon-027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527.scope.
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.007 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.007 2 INFO nova.compute.claims [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:46:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:46:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565884205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:46:40 compute-0 podman[372321]: 2025-10-02 08:46:39.944183412 +0000 UTC m=+0.023503344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.047 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:40 compute-0 podman[372321]: 2025-10-02 08:46:40.05994495 +0000 UTC m=+0.139264922 container init 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 08:46:40 compute-0 podman[372321]: 2025-10-02 08:46:40.074794473 +0000 UTC m=+0.154114425 container start 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 08:46:40 compute-0 podman[372321]: 2025-10-02 08:46:40.079160009 +0000 UTC m=+0.158479951 container attach 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:46:40 compute-0 dazzling_darwin[372337]: 167 167
Oct 02 08:46:40 compute-0 systemd[1]: libpod-027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527.scope: Deactivated successfully.
Oct 02 08:46:40 compute-0 conmon[372337]: conmon 027d9c7a929b4f20abe6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527.scope/container/memory.events
Oct 02 08:46:40 compute-0 podman[372321]: 2025-10-02 08:46:40.084474284 +0000 UTC m=+0.163794236 container died 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-041dad26d87b6184cd87c2fc80071ff5c2598dd3b3fb14a2c39b061d4c86038c-merged.mount: Deactivated successfully.
Oct 02 08:46:40 compute-0 podman[372321]: 2025-10-02 08:46:40.138146318 +0000 UTC m=+0.217466240 container remove 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.142 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.142 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.148 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.149 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:46:40 compute-0 systemd[1]: libpod-conmon-027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527.scope: Deactivated successfully.
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.216 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:40 compute-0 podman[372365]: 2025-10-02 08:46:40.389600805 +0000 UTC m=+0.052502178 container create 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:46:40 compute-0 systemd[1]: Started libpod-conmon-84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4.scope.
Oct 02 08:46:40 compute-0 podman[372365]: 2025-10-02 08:46:40.366539466 +0000 UTC m=+0.029440869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.467 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:46:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.470 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3325MB free_disk=59.897300720214844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.470 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:40 compute-0 podman[372365]: 2025-10-02 08:46:40.484295446 +0000 UTC m=+0.147196809 container init 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:46:40 compute-0 podman[372365]: 2025-10-02 08:46:40.499425018 +0000 UTC m=+0.162326401 container start 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:46:40 compute-0 podman[372365]: 2025-10-02 08:46:40.503451393 +0000 UTC m=+0.166352776 container attach 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 08:46:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:46:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886248276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.676 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.685 2 DEBUG nova.compute.provider_tree [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.709 2 DEBUG nova.scheduler.client.report [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.743 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.745 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.749 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.845 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.845 2 DEBUG nova.network.neutron [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.880 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.885 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance fc71f095-bde6-43da-bec6-e0a30dc1b71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.885 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance a541632d-06cf-48a9-a44d-19bcce4df36f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.886 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance bf1ef571-5f72-49ef-9c3f-f12f88c79a03 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.886 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.887 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:46:40 compute-0 nova_compute[260603]: 2025-10-02 08:46:40.905 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:46:40 compute-0 ceph-mon[74477]: pgmap v2032: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Oct 02 08:46:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2565884205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1886248276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.002 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.101 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.107 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.108 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Creating image(s)
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.146 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.179 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.226 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.231 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "6933fc953345c6934eae28c9b7b7a2121b435e49" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.232 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "6933fc953345c6934eae28c9b7b7a2121b435e49" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]: {
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:     "0": [
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:         {
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "devices": [
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "/dev/loop3"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             ],
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_name": "ceph_lv0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_size": "21470642176",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "name": "ceph_lv0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "tags": {
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cluster_name": "ceph",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.crush_device_class": "",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.encrypted": "0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osd_id": "0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.type": "block",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.vdo": "0"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             },
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "type": "block",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "vg_name": "ceph_vg0"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:         }
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:     ],
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:     "1": [
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:         {
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "devices": [
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "/dev/loop4"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             ],
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_name": "ceph_lv1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_size": "21470642176",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "name": "ceph_lv1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "tags": {
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cluster_name": "ceph",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.crush_device_class": "",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.encrypted": "0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osd_id": "1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.type": "block",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.vdo": "0"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             },
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "type": "block",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "vg_name": "ceph_vg1"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:         }
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:     ],
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:     "2": [
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:         {
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "devices": [
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "/dev/loop5"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             ],
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_name": "ceph_lv2",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_size": "21470642176",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "name": "ceph_lv2",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "tags": {
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.cluster_name": "ceph",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.crush_device_class": "",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.encrypted": "0",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osd_id": "2",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.type": "block",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:                 "ceph.vdo": "0"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             },
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "type": "block",
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:             "vg_name": "ceph_vg2"
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:         }
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]:     ]
Oct 02 08:46:41 compute-0 stupefied_sutherland[372400]: }
Oct 02 08:46:41 compute-0 systemd[1]: libpod-84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4.scope: Deactivated successfully.
Oct 02 08:46:41 compute-0 podman[372365]: 2025-10-02 08:46:41.269780888 +0000 UTC m=+0.932682251 container died 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 08:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940-merged.mount: Deactivated successfully.
Oct 02 08:46:41 compute-0 podman[372365]: 2025-10-02 08:46:41.328273682 +0000 UTC m=+0.991175045 container remove 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 08:46:41 compute-0 systemd[1]: libpod-conmon-84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4.scope: Deactivated successfully.
Oct 02 08:46:41 compute-0 sudo[372235]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.443 2 DEBUG nova.network.neutron [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.445 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:46:41 compute-0 sudo[372496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:41 compute-0 sudo[372496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:41 compute-0 sudo[372496]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:46:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213203243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:46:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9385 writes, 43K keys, 9385 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9385 writes, 9385 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1490 writes, 7453 keys, 1490 commit groups, 1.0 writes per commit group, ingest: 9.21 MB, 0.02 MB/s
                                           Interval WAL: 1490 writes, 1490 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    106.9      0.48              0.20        29    0.017       0      0       0.0       0.0
                                             L6      1/0    7.09 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4    169.7    141.6      1.59              0.74        28    0.057    156K    15K       0.0       0.0
                                            Sum      1/0    7.09 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4    130.1    133.5      2.07              0.94        57    0.036    156K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.0     96.7     94.1      0.85              0.26        16    0.053     55K   4071       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    169.7    141.6      1.59              0.74        28    0.057    156K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    108.0      0.48              0.20        28    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.26 GB read, 0.07 MB/s read, 2.1 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 29.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000321 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1905,28.08 MB,9.23808%) FilterBlock(58,439.61 KB,0.141219%) IndexBlock(58,770.64 KB,0.247559%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.514 2 DEBUG nova.virt.libvirt.imagebackend [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 08:46:41 compute-0 sudo[372521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:46:41 compute-0 sudo[372521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:41 compute-0 sudo[372521]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.588 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.593 2 DEBUG nova.virt.libvirt.imagebackend [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.594 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] cloning images/bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4@snap to None/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.640 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:46:41 compute-0 sudo[372579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:41 compute-0 sudo[372579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:41 compute-0 sudo[372579]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.675 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.720 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "6933fc953345c6934eae28c9b7b7a2121b435e49" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:41 compute-0 sudo[372639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:46:41 compute-0 sudo[372639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.766 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.767 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.900 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] resizing rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:41 compute-0 nova_compute[260603]: 2025-10-02 08:46:41.989 2 DEBUG nova.objects.instance [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lazy-loading 'migration_context' on Instance uuid bf1ef571-5f72-49ef-9c3f-f12f88c79a03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/213203243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.011 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.012 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Ensure instance console log exists: /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.012 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.013 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.013 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.015 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='860376c145374e34a3becd9382e32f37',container_format='bare',created_at=2025-10-02T08:46:35Z,direct_url=<?>,disk_format='raw',id=bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4,min_disk=0,min_ram=0,name='tempest-image-dependency-test-425033708',owner='ced39caa9dc64a9c8f98ed8725b23025',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-02T08:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.019 2 WARNING nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.024 2 DEBUG nova.virt.libvirt.host [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.025 2 DEBUG nova.virt.libvirt.host [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.028 2 DEBUG nova.virt.libvirt.host [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.029 2 DEBUG nova.virt.libvirt.host [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.029 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.029 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='860376c145374e34a3becd9382e32f37',container_format='bare',created_at=2025-10-02T08:46:35Z,direct_url=<?>,disk_format='raw',id=bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4,min_disk=0,min_ram=0,name='tempest-image-dependency-test-425033708',owner='ced39caa9dc64a9c8f98ed8725b23025',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-02T08:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.030 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.030 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.032 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.032 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.032 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.036 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:42 compute-0 podman[372778]: 2025-10-02 08:46:42.142735537 +0000 UTC m=+0.050206076 container create 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:46:42 compute-0 systemd[1]: Started libpod-conmon-28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5.scope.
Oct 02 08:46:42 compute-0 podman[372778]: 2025-10-02 08:46:42.119955747 +0000 UTC m=+0.027426336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:46:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:46:42 compute-0 podman[372778]: 2025-10-02 08:46:42.245592163 +0000 UTC m=+0.153062732 container init 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 08:46:42 compute-0 podman[372778]: 2025-10-02 08:46:42.259108715 +0000 UTC m=+0.166579254 container start 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 08:46:42 compute-0 podman[372778]: 2025-10-02 08:46:42.262761588 +0000 UTC m=+0.170232127 container attach 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:46:42 compute-0 sleepy_einstein[372797]: 167 167
Oct 02 08:46:42 compute-0 systemd[1]: libpod-28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5.scope: Deactivated successfully.
Oct 02 08:46:42 compute-0 podman[372778]: 2025-10-02 08:46:42.266590848 +0000 UTC m=+0.174061397 container died 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 08:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-264bee3033c3083fe32fef8a022ca89767c3005ce0699029a4a9397903a00895-merged.mount: Deactivated successfully.
Oct 02 08:46:42 compute-0 podman[372778]: 2025-10-02 08:46:42.311549859 +0000 UTC m=+0.219020388 container remove 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:46:42 compute-0 systemd[1]: libpod-conmon-28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5.scope: Deactivated successfully.
Oct 02 08:46:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:46:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1792084536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.508 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.535 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.545 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:42 compute-0 podman[372838]: 2025-10-02 08:46:42.565292667 +0000 UTC m=+0.069893299 container create ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 08:46:42 compute-0 systemd[1]: Started libpod-conmon-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope.
Oct 02 08:46:42 compute-0 podman[372838]: 2025-10-02 08:46:42.542442525 +0000 UTC m=+0.047043177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:46:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:46:42 compute-0 podman[372838]: 2025-10-02 08:46:42.659775152 +0000 UTC m=+0.164375774 container init ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:46:42 compute-0 podman[372838]: 2025-10-02 08:46:42.672287482 +0000 UTC m=+0.176888114 container start ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 08:46:42 compute-0 podman[372838]: 2025-10-02 08:46:42.675355828 +0000 UTC m=+0.179956470 container attach ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.769 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.772 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.949 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.950 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.950 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:46:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120015341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.951 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.951 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.952 2 INFO nova.compute.manager [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Terminating instance
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.953 2 DEBUG nova.compute.manager [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.968 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.970 2 DEBUG nova.objects.instance [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf1ef571-5f72-49ef-9c3f-f12f88c79a03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:42 compute-0 nova_compute[260603]: 2025-10-02 08:46:42.990 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <uuid>bf1ef571-5f72-49ef-9c3f-f12f88c79a03</uuid>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <name>instance-00000070</name>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <nova:name>instance-depend-image</nova:name>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:46:42</nova:creationTime>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <nova:user uuid="84fa2fc10124476dac4b89b077888ca1">tempest-ImageDependencyTests-1169999881-project-member</nova:user>
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <nova:project uuid="ced39caa9dc64a9c8f98ed8725b23025">tempest-ImageDependencyTests-1169999881</nova:project>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <system>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <entry name="serial">bf1ef571-5f72-49ef-9c3f-f12f88c79a03</entry>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <entry name="uuid">bf1ef571-5f72-49ef-9c3f-f12f88c79a03</entry>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     </system>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <os>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   </os>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <features>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   </features>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk">
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       </source>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config">
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       </source>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:46:42 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/console.log" append="off"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <video>
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     </video>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:46:42 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:46:42 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:46:42 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:46:42 compute-0 nova_compute[260603]: </domain>
Oct 02 08:46:42 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:46:43 compute-0 kernel: tap4475b0be-d2 (unregistering): left promiscuous mode
Oct 02 08:46:43 compute-0 ceph-mon[74477]: pgmap v2033: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Oct 02 08:46:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1792084536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:46:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1120015341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:46:43 compute-0 NetworkManager[45129]: <info>  [1759394803.0358] device (tap4475b0be-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:46:43 compute-0 ovn_controller[152344]: 2025-10-02T08:46:43Z|01151|binding|INFO|Releasing lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 from this chassis (sb_readonly=0)
Oct 02 08:46:43 compute-0 ovn_controller[152344]: 2025-10-02T08:46:43Z|01152|binding|INFO|Setting lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 down in Southbound
Oct 02 08:46:43 compute-0 ovn_controller[152344]: 2025-10-02T08:46:43Z|01153|binding|INFO|Removing iface tap4475b0be-d2 ovn-installed in OVS
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.049 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:7c:24 10.100.0.12', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a541632d-06cf-48a9-a44d-19bcce4df36f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ca4751-2801-43b7-bd66-26826481ad08, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4475b0be-d2ac-4bf1-8ad7-e117311654a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.051 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 in datapath 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 unbound from our chassis
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.052 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.068 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.069 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.069 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Using config drive
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.075 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8275c703-a6c6-4119-90d3-34f6618645a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:43 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 02 08:46:43 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Consumed 12.583s CPU time.
Oct 02 08:46:43 compute-0 systemd-machined[214636]: Machine qemu-140-instance-0000006f terminated.
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.106 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[eabfa8fa-8081-41fc-a020-cd63ab4a06d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.106 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.109 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fe849cd5-99d7-4da0-8b77-dc5f902c02e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.137 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[71c4aa9d-2d0d-46af-9ea1-22498263d2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.156 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e48e5a52-0a83-4fa4-b436-3177c4271079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ee1fadc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:f0:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562918, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 15, 'inoctets': 1208, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 15, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1208, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 15, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372931, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b77e1b-63d0-4b72-943f-a7f762f4fd5a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0ee1fadc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562930, 'tstamp': 562930}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372932, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0ee1fadc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562932, 'tstamp': 562932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372932, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.175 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ee1fadc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.191 2 INFO nova.virt.libvirt.driver [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance destroyed successfully.
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.192 2 DEBUG nova.objects.instance [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid a541632d-06cf-48a9-a44d-19bcce4df36f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.195 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ee1fadc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.196 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.196 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ee1fadc-d0, col_values=(('external_ids', {'iface-id': 'f994afa7-373d-469a-a6b3-0a33b20c9e54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.197 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.215 2 DEBUG nova.virt.libvirt.vif [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=111,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-xidjkow3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:46:23Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=a541632d-06cf-48a9-a44d-19bcce4df36f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.215 2 DEBUG nova.network.os_vif_util [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.216 2 DEBUG nova.network.os_vif_util [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.216 2 DEBUG os_vif [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4475b0be-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.229 2 INFO os_vif [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2')
Oct 02 08:46:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.6 MiB/s wr, 135 op/s
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.443 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Creating config drive at /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.448 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0jx8jipb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.599 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0jx8jipb" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.621 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.624 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.670 2 INFO nova.virt.libvirt.driver [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Deleting instance files /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f_del
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.671 2 INFO nova.virt.libvirt.driver [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Deletion of /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f_del complete
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]: {
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "osd_id": 2,
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "type": "bluestore"
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:     },
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "osd_id": 1,
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "type": "bluestore"
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:     },
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "osd_id": 0,
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:         "type": "bluestore"
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]:     }
Oct 02 08:46:43 compute-0 vigorous_lamport[372875]: }
Oct 02 08:46:43 compute-0 systemd[1]: libpod-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope: Deactivated successfully.
Oct 02 08:46:43 compute-0 systemd[1]: libpod-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope: Consumed 1.012s CPU time.
Oct 02 08:46:43 compute-0 conmon[372875]: conmon ac59cb75709636a54aa4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope/container/memory.events
Oct 02 08:46:43 compute-0 podman[372838]: 2025-10-02 08:46:43.716787348 +0000 UTC m=+1.221387980 container died ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.724 2 INFO nova.compute.manager [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.725 2 DEBUG oslo.service.loopingcall [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.725 2 DEBUG nova.compute.manager [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.725 2 DEBUG nova.network.neutron [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6-merged.mount: Deactivated successfully.
Oct 02 08:46:43 compute-0 podman[372838]: 2025-10-02 08:46:43.773212117 +0000 UTC m=+1.277812739 container remove ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 08:46:43 compute-0 systemd[1]: libpod-conmon-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope: Deactivated successfully.
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.800 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:43 compute-0 nova_compute[260603]: 2025-10-02 08:46:43.802 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Deleting local config drive /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config because it was imported into RBD.
Oct 02 08:46:43 compute-0 sudo[372639]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:46:43 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:46:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:46:43 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:46:43 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 751ed800-49b0-42b7-8348-7ab20bb2445b does not exist
Oct 02 08:46:43 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev fea4ff8a-d4a4-4bdc-9820-beaa3d20065f does not exist
Oct 02 08:46:43 compute-0 systemd-machined[214636]: New machine qemu-141-instance-00000070.
Oct 02 08:46:43 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-00000070.
Oct 02 08:46:43 compute-0 sudo[373048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:46:43 compute-0 sudo[373048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:43 compute-0 sudo[373048]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:43 compute-0 sudo[373079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:46:43 compute-0 sudo[373079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:46:43 compute-0 sudo[373079]: pam_unix(sudo:session): session closed for user root
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:44 compute-0 ceph-mon[74477]: pgmap v2034: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.6 MiB/s wr, 135 op/s
Oct 02 08:46:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:46:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.834 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394804.8344653, bf1ef571-5f72-49ef-9c3f-f12f88c79a03 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.835 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] VM Resumed (Lifecycle Event)
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.837 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.837 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.841 2 INFO nova.virt.libvirt.driver [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance spawned successfully.
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.841 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.868 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.868 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.869 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.869 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.870 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.870 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.876 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.879 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.910 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.911 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394804.8367307, bf1ef571-5f72-49ef-9c3f-f12f88c79a03 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.911 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] VM Started (Lifecycle Event)
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.958 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.962 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.971 2 INFO nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 3.87 seconds to spawn the instance on the hypervisor.
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.971 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:44 compute-0 nova_compute[260603]: 2025-10-02 08:46:44.996 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.031 2 INFO nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 5.07 seconds to build instance.
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.048 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.374 2 DEBUG nova.network.neutron [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.391 2 INFO nova.compute.manager [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Took 1.67 seconds to deallocate network for instance.
Oct 02 08:46:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.6 MiB/s wr, 135 op/s
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.434 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.434 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.500 2 DEBUG nova.compute.manager [req-19490c9a-4c11-42a5-993a-9124ad55bcbe req-4bd4e116-01d2-4a61-a717-dc313328976b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-vif-deleted-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.514 2 DEBUG oslo_concurrency.processutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:46:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004492399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.977 2 DEBUG oslo_concurrency.processutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:45 compute-0 nova_compute[260603]: 2025-10-02 08:46:45.983 2 DEBUG nova.compute.provider_tree [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:46:46 compute-0 nova_compute[260603]: 2025-10-02 08:46:46.007 2 DEBUG nova.scheduler.client.report [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:46:46 compute-0 nova_compute[260603]: 2025-10-02 08:46:46.031 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:46 compute-0 nova_compute[260603]: 2025-10-02 08:46:46.057 2 INFO nova.scheduler.client.report [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance a541632d-06cf-48a9-a44d-19bcce4df36f
Oct 02 08:46:46 compute-0 nova_compute[260603]: 2025-10-02 08:46:46.180 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:46 compute-0 nova_compute[260603]: 2025-10-02 08:46:46.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:46 compute-0 ceph-mon[74477]: pgmap v2035: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.6 MiB/s wr, 135 op/s
Oct 02 08:46:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4004492399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:47 compute-0 podman[373174]: 2025-10-02 08:46:47.016811905 +0000 UTC m=+0.066155903 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:46:47 compute-0 podman[373173]: 2025-10-02 08:46:47.026238129 +0000 UTC m=+0.080994506 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.425 2 DEBUG nova.compute.manager [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 176 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 2.5 MiB/s wr, 156 op/s
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.449 2 DEBUG nova.compute.manager [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.449 2 DEBUG nova.compute.manager [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing instance network info cache due to event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.450 2 DEBUG oslo_concurrency.lockutils [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.451 2 DEBUG oslo_concurrency.lockutils [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.452 2 DEBUG nova.network.neutron [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.499 2 INFO nova.compute.manager [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] instance snapshotting
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.509 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.510 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.511 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.511 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.512 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.513 2 INFO nova.compute.manager [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Terminating instance
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.515 2 DEBUG nova.compute.manager [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:46:47 compute-0 kernel: tap71e3efd4-12 (unregistering): left promiscuous mode
Oct 02 08:46:47 compute-0 NetworkManager[45129]: <info>  [1759394807.5953] device (tap71e3efd4-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:47 compute-0 ovn_controller[152344]: 2025-10-02T08:46:47Z|01154|binding|INFO|Releasing lport 71e3efd4-125e-40e4-bdda-59df254d21f9 from this chassis (sb_readonly=0)
Oct 02 08:46:47 compute-0 ovn_controller[152344]: 2025-10-02T08:46:47Z|01155|binding|INFO|Setting lport 71e3efd4-125e-40e4-bdda-59df254d21f9 down in Southbound
Oct 02 08:46:47 compute-0 ovn_controller[152344]: 2025-10-02T08:46:47Z|01156|binding|INFO|Removing iface tap71e3efd4-12 ovn-installed in OVS
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.697 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:06:68 10.100.0.4'], port_security=['fa:16:3e:e8:06:68 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fc71f095-bde6-43da-bec6-e0a30dc1b71a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '026f2ff0-b634-4fa0-8159-21403eda59a0 d0f166da-6d6f-4ea3-ab29-33f59ca9931c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ca4751-2801-43b7-bd66-26826481ad08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=71e3efd4-125e-40e4-bdda-59df254d21f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.698 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 71e3efd4-125e-40e4-bdda-59df254d21f9 in datapath 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 unbound from our chassis
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.699 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.699 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a63c090-7936-4e96-962e-e5d7a5a94327]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.700 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 namespace which is not needed anymore
Oct 02 08:46:47 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 02 08:46:47 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Consumed 14.806s CPU time.
Oct 02 08:46:47 compute-0 systemd-machined[214636]: Machine qemu-136-instance-0000006d terminated.
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.770 2 INFO nova.virt.libvirt.driver [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance destroyed successfully.
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.771 2 DEBUG nova.objects.instance [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid fc71f095-bde6-43da-bec6-e0a30dc1b71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.806 2 DEBUG nova.virt.libvirt.vif [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:45:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=109,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:45:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-dp87un5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:45:51Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=fc71f095-bde6-43da-bec6-e0a30dc1b71a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.807 2 DEBUG nova.network.os_vif_util [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.808 2 DEBUG nova.network.os_vif_util [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.808 2 DEBUG os_vif [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71e3efd4-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.817 2 INFO os_vif [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12')
Oct 02 08:46:47 compute-0 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [NOTICE]   (370935) : haproxy version is 2.8.14-c23fe91
Oct 02 08:46:47 compute-0 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [NOTICE]   (370935) : path to executable is /usr/sbin/haproxy
Oct 02 08:46:47 compute-0 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [WARNING]  (370935) : Exiting Master process...
Oct 02 08:46:47 compute-0 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [WARNING]  (370935) : Exiting Master process...
Oct 02 08:46:47 compute-0 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [ALERT]    (370935) : Current worker (370937) exited with code 143 (Terminated)
Oct 02 08:46:47 compute-0 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [WARNING]  (370935) : All workers exited. Exiting... (0)
Oct 02 08:46:47 compute-0 systemd[1]: libpod-be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2.scope: Deactivated successfully.
Oct 02 08:46:47 compute-0 podman[373249]: 2025-10-02 08:46:47.835319366 +0000 UTC m=+0.043416254 container died be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2-userdata-shm.mount: Deactivated successfully.
Oct 02 08:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a8bfea6155d379b6c0d528c11335822e5f651d1b1147fcb20b7f259f8bc47f-merged.mount: Deactivated successfully.
Oct 02 08:46:47 compute-0 podman[373249]: 2025-10-02 08:46:47.872226567 +0000 UTC m=+0.080323445 container cleanup be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 08:46:47 compute-0 systemd[1]: libpod-conmon-be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2.scope: Deactivated successfully.
Oct 02 08:46:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:47 compute-0 podman[373298]: 2025-10-02 08:46:47.940100183 +0000 UTC m=+0.045448628 container remove be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.947 2 INFO nova.virt.libvirt.driver [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Beginning live snapshot process
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[669fc4ec-819d-496d-82db-463ecad03e9d]: (4, ('Thu Oct  2 08:46:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 (be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2)\nbe55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2\nThu Oct  2 08:46:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 (be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2)\nbe55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.949 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c91b23-868e-456b-b176-544ee4a04270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.950 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ee1fadc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:47 compute-0 kernel: tap0ee1fadc-d0: left promiscuous mode
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.960 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac16571f-e5c9-4dda-923a-09c8975724da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:47 compute-0 nova_compute[260603]: 2025-10-02 08:46:47.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.985 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4e291726-a418-4138-8747-ffbb914077ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[acd8beaf-b5e5-47bc-ba3e-5a5356b92b7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:48.009 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df95fe2f-d59e-487e-a05a-b5e75b21dd32]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562912, 'reachable_time': 28143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373313, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:48.011 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:46:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:48.012 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4e166d66-0c27-4bd2-8284-d2d4ae2b306e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:46:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d0ee1fadc\x2dd3e5\x2d4c06\x2db0fc\x2d8a72fa56e758.mount: Deactivated successfully.
Oct 02 08:46:48 compute-0 nova_compute[260603]: 2025-10-02 08:46:48.148 2 INFO nova.virt.libvirt.driver [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Deleting instance files /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a_del
Oct 02 08:46:48 compute-0 nova_compute[260603]: 2025-10-02 08:46:48.149 2 INFO nova.virt.libvirt.driver [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Deletion of /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a_del complete
Oct 02 08:46:48 compute-0 nova_compute[260603]: 2025-10-02 08:46:48.156 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] creating snapshot(d9af03ccea004794b28fd44bb03c004a) on rbd image(bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:46:48 compute-0 nova_compute[260603]: 2025-10-02 08:46:48.215 2 INFO nova.compute.manager [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 02 08:46:48 compute-0 nova_compute[260603]: 2025-10-02 08:46:48.216 2 DEBUG oslo.service.loopingcall [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:46:48 compute-0 nova_compute[260603]: 2025-10-02 08:46:48.217 2 DEBUG nova.compute.manager [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:46:48 compute-0 nova_compute[260603]: 2025-10-02 08:46:48.217 2 DEBUG nova.network.neutron [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:46:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 02 08:46:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 02 08:46:48 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 02 08:46:48 compute-0 ceph-mon[74477]: pgmap v2036: 305 pgs: 305 active+clean; 176 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 2.5 MiB/s wr, 156 op/s
Oct 02 08:46:48 compute-0 nova_compute[260603]: 2025-10-02 08:46:48.910 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] cloning vms/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk@d9af03ccea004794b28fd44bb03c004a to images/f9e3d91e-e186-463d-8f89-5f30afae86fb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.035 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] flattening images/f9e3d91e-e186-463d-8f89-5f30afae86fb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.101 2 DEBUG nova.network.neutron [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.163 2 INFO nova.compute.manager [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Took 0.95 seconds to deallocate network for instance.
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.177 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] removing snapshot(d9af03ccea004794b28fd44bb03c004a) on rbd image(bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.202 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.203 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.258 2 DEBUG nova.network.neutron [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updated VIF entry in instance network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.258 2 DEBUG nova.network.neutron [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.276 2 DEBUG oslo_concurrency.processutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.317 2 DEBUG oslo_concurrency.lockutils [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:46:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 96 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 33 KiB/s wr, 115 op/s
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.558 2 DEBUG nova.compute.manager [req-98e3aa7f-38a7-4ca1-aa90-02ff364df75c req-b2acda4b-3aea-40ea-bdd1-f6c5eb23a91e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-vif-deleted-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.558 2 INFO nova.compute.manager [req-98e3aa7f-38a7-4ca1-aa90-02ff364df75c req-b2acda4b-3aea-40ea-bdd1-f6c5eb23a91e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Neutron deleted interface 71e3efd4-125e-40e4-bdda-59df254d21f9; detaching it from the instance and deleting it from the info cache
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.559 2 DEBUG nova.network.neutron [req-98e3aa7f-38a7-4ca1-aa90-02ff364df75c req-b2acda4b-3aea-40ea-bdd1-f6c5eb23a91e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.594 2 DEBUG nova.compute.manager [req-98e3aa7f-38a7-4ca1-aa90-02ff364df75c req-b2acda4b-3aea-40ea-bdd1-f6c5eb23a91e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Detach interface failed, port_id=71e3efd4-125e-40e4-bdda-59df254d21f9, reason: Instance fc71f095-bde6-43da-bec6-e0a30dc1b71a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 08:46:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:46:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1890090234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.746 2 DEBUG oslo_concurrency.processutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.751 2 DEBUG nova.compute.provider_tree [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.773 2 DEBUG nova.scheduler.client.report [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.799 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.833 2 INFO nova.scheduler.client.report [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance fc71f095-bde6-43da-bec6-e0a30dc1b71a
Oct 02 08:46:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 02 08:46:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 02 08:46:49 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 02 08:46:49 compute-0 ceph-mon[74477]: osdmap e267: 3 total, 3 up, 3 in
Oct 02 08:46:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1890090234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.911 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:49 compute-0 nova_compute[260603]: 2025-10-02 08:46:49.917 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] creating snapshot(snap) on rbd image(f9e3d91e-e186-463d-8f89-5f30afae86fb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:46:50 compute-0 nova_compute[260603]: 2025-10-02 08:46:50.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:46:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 02 08:46:50 compute-0 ceph-mon[74477]: pgmap v2038: 305 pgs: 305 active+clean; 96 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 33 KiB/s wr, 115 op/s
Oct 02 08:46:50 compute-0 ceph-mon[74477]: osdmap e268: 3 total, 3 up, 3 in
Oct 02 08:46:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 02 08:46:50 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 02 08:46:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 96 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 28 KiB/s wr, 123 op/s
Oct 02 08:46:51 compute-0 ceph-mon[74477]: osdmap e269: 3 total, 3 up, 3 in
Oct 02 08:46:52 compute-0 nova_compute[260603]: 2025-10-02 08:46:52.652 2 INFO nova.virt.libvirt.driver [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Snapshot image upload complete
Oct 02 08:46:52 compute-0 nova_compute[260603]: 2025-10-02 08:46:52.653 2 INFO nova.compute.manager [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 5.15 seconds to snapshot the instance on the hypervisor.
Oct 02 08:46:52 compute-0 nova_compute[260603]: 2025-10-02 08:46:52.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:52 compute-0 ceph-mon[74477]: pgmap v2041: 305 pgs: 305 active+clean; 96 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 28 KiB/s wr, 123 op/s
Oct 02 08:46:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:53 compute-0 nova_compute[260603]: 2025-10-02 08:46:53.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 33 KiB/s wr, 220 op/s
Oct 02 08:46:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:53.904 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:46:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:53.905 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:46:53 compute-0 nova_compute[260603]: 2025-10-02 08:46:53.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:54 compute-0 nova_compute[260603]: 2025-10-02 08:46:54.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 02 08:46:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 02 08:46:54 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 02 08:46:54 compute-0 ceph-mon[74477]: pgmap v2042: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 33 KiB/s wr, 220 op/s
Oct 02 08:46:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 6.7 KiB/s wr, 143 op/s
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.661 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.662 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.662 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.663 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.663 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.665 2 INFO nova.compute.manager [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Terminating instance
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.666 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "refresh_cache-bf1ef571-5f72-49ef-9c3f-f12f88c79a03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.667 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquired lock "refresh_cache-bf1ef571-5f72-49ef-9c3f-f12f88c79a03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:46:55 compute-0 nova_compute[260603]: 2025-10-02 08:46:55.667 2 DEBUG nova.network.neutron [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:46:55 compute-0 ceph-mon[74477]: osdmap e270: 3 total, 3 up, 3 in
Oct 02 08:46:56 compute-0 nova_compute[260603]: 2025-10-02 08:46:56.276 2 DEBUG nova.network.neutron [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:46:56 compute-0 nova_compute[260603]: 2025-10-02 08:46:56.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:56 compute-0 nova_compute[260603]: 2025-10-02 08:46:56.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:56 compute-0 nova_compute[260603]: 2025-10-02 08:46:56.704 2 DEBUG nova.network.neutron [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:56 compute-0 nova_compute[260603]: 2025-10-02 08:46:56.729 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Releasing lock "refresh_cache-bf1ef571-5f72-49ef-9c3f-f12f88c79a03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:46:56 compute-0 nova_compute[260603]: 2025-10-02 08:46:56.730 2 DEBUG nova.compute.manager [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:46:56 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 02 08:46:56 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Consumed 1.328s CPU time.
Oct 02 08:46:56 compute-0 systemd-machined[214636]: Machine qemu-141-instance-00000070 terminated.
Oct 02 08:46:56 compute-0 ceph-mon[74477]: pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 6.7 KiB/s wr, 143 op/s
Oct 02 08:46:56 compute-0 nova_compute[260603]: 2025-10-02 08:46:56.954 2 INFO nova.virt.libvirt.driver [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance destroyed successfully.
Oct 02 08:46:56 compute-0 nova_compute[260603]: 2025-10-02 08:46:56.955 2 DEBUG nova.objects.instance [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lazy-loading 'resources' on Instance uuid bf1ef571-5f72-49ef-9c3f-f12f88c79a03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:46:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 7.3 KiB/s wr, 171 op/s
Oct 02 08:46:57 compute-0 nova_compute[260603]: 2025-10-02 08:46:57.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:46:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 02 08:46:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 02 08:46:57 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 02 08:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.155 2 INFO nova.virt.libvirt.driver [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Deleting instance files /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_del
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.156 2 INFO nova.virt.libvirt.driver [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Deletion of /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_del complete
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.190 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394803.1892493, a541632d-06cf-48a9-a44d-19bcce4df36f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.190 2 INFO nova.compute.manager [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] VM Stopped (Lifecycle Event)
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.216 2 DEBUG nova.compute.manager [None req-e5721f86-68d1-43b6-88e8-71e3f3ed6b63 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.236 2 INFO nova.compute.manager [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 1.51 seconds to destroy the instance on the hypervisor.
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.236 2 DEBUG oslo.service.loopingcall [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.237 2 DEBUG nova.compute.manager [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.237 2 DEBUG nova.network.neutron [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.394 2 DEBUG nova.network.neutron [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.407 2 DEBUG nova.network.neutron [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.430 2 INFO nova.compute.manager [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 0.19 seconds to deallocate network for instance.
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.483 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.484 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:46:58 compute-0 nova_compute[260603]: 2025-10-02 08:46:58.543 2 DEBUG oslo_concurrency.processutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:46:58 compute-0 ceph-mon[74477]: pgmap v2045: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 7.3 KiB/s wr, 171 op/s
Oct 02 08:46:58 compute-0 ceph-mon[74477]: osdmap e271: 3 total, 3 up, 3 in
Oct 02 08:46:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:46:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438301172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:46:59 compute-0 nova_compute[260603]: 2025-10-02 08:46:59.015 2 DEBUG oslo_concurrency.processutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:46:59 compute-0 nova_compute[260603]: 2025-10-02 08:46:59.025 2 DEBUG nova.compute.provider_tree [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:46:59 compute-0 nova_compute[260603]: 2025-10-02 08:46:59.051 2 DEBUG nova.scheduler.client.report [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:46:59 compute-0 nova_compute[260603]: 2025-10-02 08:46:59.077 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:59 compute-0 nova_compute[260603]: 2025-10-02 08:46:59.122 2 INFO nova.scheduler.client.report [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Deleted allocations for instance bf1ef571-5f72-49ef-9c3f-f12f88c79a03
Oct 02 08:46:59 compute-0 nova_compute[260603]: 2025-10-02 08:46:59.203 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:46:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 8.2 KiB/s wr, 192 op/s
Oct 02 08:46:59 compute-0 nova_compute[260603]: 2025-10-02 08:46:59.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:46:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:46:59.907 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:46:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/438301172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:00 compute-0 ceph-mon[74477]: pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 8.2 KiB/s wr, 192 op/s
Oct 02 08:47:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 84 op/s
Oct 02 08:47:02 compute-0 nova_compute[260603]: 2025-10-02 08:47:02.769 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394807.7678638, fc71f095-bde6-43da-bec6-e0a30dc1b71a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:02 compute-0 nova_compute[260603]: 2025-10-02 08:47:02.770 2 INFO nova.compute.manager [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] VM Stopped (Lifecycle Event)
Oct 02 08:47:02 compute-0 nova_compute[260603]: 2025-10-02 08:47:02.796 2 DEBUG nova.compute.manager [None req-72e98fc3-c52d-47d4-a34b-8c07326299c9 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:02 compute-0 nova_compute[260603]: 2025-10-02 08:47:02.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 02 08:47:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 02 08:47:02 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 02 08:47:02 compute-0 ceph-mon[74477]: pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 84 op/s
Oct 02 08:47:02 compute-0 ceph-mon[74477]: osdmap e272: 3 total, 3 up, 3 in
Oct 02 08:47:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 4.2 KiB/s wr, 104 op/s
Oct 02 08:47:04 compute-0 nova_compute[260603]: 2025-10-02 08:47:04.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:04 compute-0 ceph-mon[74477]: pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 4.2 KiB/s wr, 104 op/s
Oct 02 08:47:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.4 KiB/s wr, 50 op/s
Oct 02 08:47:06 compute-0 ceph-mon[74477]: pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.4 KiB/s wr, 50 op/s
Oct 02 08:47:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct 02 08:47:07 compute-0 nova_compute[260603]: 2025-10-02 08:47:07.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 02 08:47:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 02 08:47:07 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 02 08:47:08 compute-0 ceph-mon[74477]: pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct 02 08:47:08 compute-0 ceph-mon[74477]: osdmap e273: 3 total, 3 up, 3 in
Oct 02 08:47:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Oct 02 08:47:09 compute-0 nova_compute[260603]: 2025-10-02 08:47:09.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:09 compute-0 nova_compute[260603]: 2025-10-02 08:47:09.667 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:09 compute-0 nova_compute[260603]: 2025-10-02 08:47:09.668 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:09 compute-0 nova_compute[260603]: 2025-10-02 08:47:09.692 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:47:09 compute-0 nova_compute[260603]: 2025-10-02 08:47:09.982 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:09 compute-0 nova_compute[260603]: 2025-10-02 08:47:09.982 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:09 compute-0 nova_compute[260603]: 2025-10-02 08:47:09.990 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:47:09 compute-0 nova_compute[260603]: 2025-10-02 08:47:09.990 2 INFO nova.compute.claims [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:47:10 compute-0 podman[373524]: 2025-10-02 08:47:10.039599788 +0000 UTC m=+0.084886427 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:47:10 compute-0 podman[373523]: 2025-10-02 08:47:10.043966024 +0000 UTC m=+0.101294618 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.090 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:47:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187534461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.506 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.513 2 DEBUG nova.compute.provider_tree [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.539 2 DEBUG nova.scheduler.client.report [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.579 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.580 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.646 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.647 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.671 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.689 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.807 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.810 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.811 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Creating image(s)
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.843 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.867 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.885 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.888 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:10 compute-0 ceph-mon[74477]: pgmap v2054: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Oct 02 08:47:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1187534461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.984 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.985 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.985 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:10 compute-0 nova_compute[260603]: 2025-10-02 08:47:10.986 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.010 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.013 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 36a84233-3256-49b2-ae05-1569eb78b50f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.334 2 DEBUG nova.policy [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.341 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 36a84233-3256-49b2-ae05-1569eb78b50f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.421 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:47:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.522 2 DEBUG nova.objects.instance [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid 36a84233-3256-49b2-ae05-1569eb78b50f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.547 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.547 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Ensure instance console log exists: /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.548 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.548 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.548 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.953 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394816.9528518, bf1ef571-5f72-49ef-9c3f-f12f88c79a03 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.954 2 INFO nova.compute.manager [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] VM Stopped (Lifecycle Event)
Oct 02 08:47:11 compute-0 nova_compute[260603]: 2025-10-02 08:47:11.975 2 DEBUG nova.compute.manager [None req-c26d2266-072f-4fc5-8b63-e26d44f8cd43 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:12 compute-0 nova_compute[260603]: 2025-10-02 08:47:12.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:12 compute-0 ceph-mon[74477]: pgmap v2055: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:47:13 compute-0 nova_compute[260603]: 2025-10-02 08:47:13.215 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Successfully created port: d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:47:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 02 08:47:14 compute-0 nova_compute[260603]: 2025-10-02 08:47:14.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:14 compute-0 nova_compute[260603]: 2025-10-02 08:47:14.746 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Successfully updated port: d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:47:14 compute-0 nova_compute[260603]: 2025-10-02 08:47:14.768 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:47:14 compute-0 nova_compute[260603]: 2025-10-02 08:47:14.768 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:47:14 compute-0 nova_compute[260603]: 2025-10-02 08:47:14.769 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:47:14 compute-0 ceph-mon[74477]: pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 02 08:47:14 compute-0 nova_compute[260603]: 2025-10-02 08:47:14.961 2 DEBUG nova.compute.manager [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:14 compute-0 nova_compute[260603]: 2025-10-02 08:47:14.962 2 DEBUG nova.compute.manager [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing instance network info cache due to event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:47:14 compute-0 nova_compute[260603]: 2025-10-02 08:47:14.962 2 DEBUG oslo_concurrency.lockutils [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:47:15 compute-0 nova_compute[260603]: 2025-10-02 08:47:15.106 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:47:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.824 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.862 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.863 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance network_info: |[{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.864 2 DEBUG oslo_concurrency.lockutils [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.865 2 DEBUG nova.network.neutron [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.870 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start _get_guest_xml network_info=[{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.878 2 WARNING nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.887 2 DEBUG nova.virt.libvirt.host [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.888 2 DEBUG nova.virt.libvirt.host [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.898 2 DEBUG nova.virt.libvirt.host [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.899 2 DEBUG nova.virt.libvirt.host [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.900 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.900 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.902 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.902 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.903 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.903 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.903 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.904 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.904 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.905 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.905 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.906 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:47:16 compute-0 nova_compute[260603]: 2025-10-02 08:47:16.911 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:16 compute-0 ceph-mon[74477]: pgmap v2057: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 02 08:47:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:47:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040014468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.427 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.468 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.473 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:47:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/66998200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.915 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.917 2 DEBUG nova.virt.libvirt.vif [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=113,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-cn883z4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:10Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=36a84233-3256-49b2-ae05-1569eb78b50f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.918 2 DEBUG nova.network.os_vif_util [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.918 2 DEBUG nova.network.os_vif_util [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.920 2 DEBUG nova.objects.instance [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 36a84233-3256-49b2-ae05-1569eb78b50f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.942 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <uuid>36a84233-3256-49b2-ae05-1569eb78b50f</uuid>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <name>instance-00000071</name>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742</nova:name>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:47:16</nova:creationTime>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <nova:port uuid="d1e15184-f166-48f4-a04d-1c6fa2a9f2a2">
Oct 02 08:47:17 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <system>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <entry name="serial">36a84233-3256-49b2-ae05-1569eb78b50f</entry>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <entry name="uuid">36a84233-3256-49b2-ae05-1569eb78b50f</entry>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </system>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <os>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   </os>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <features>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   </features>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/36a84233-3256-49b2-ae05-1569eb78b50f_disk">
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/36a84233-3256-49b2-ae05-1569eb78b50f_disk.config">
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:47:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:5e:d6:a0"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <target dev="tapd1e15184-f1"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/console.log" append="off"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <video>
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </video>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:47:17 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:47:17 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:47:17 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:47:17 compute-0 nova_compute[260603]: </domain>
Oct 02 08:47:17 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.942 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Preparing to wait for external event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.943 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.943 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.943 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.944 2 DEBUG nova.virt.libvirt.vif [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=113,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-cn883z4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:10Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=36a84233-3256-49b2-ae05-1569eb78b50f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.944 2 DEBUG nova.network.os_vif_util [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.944 2 DEBUG nova.network.os_vif_util [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.945 2 DEBUG os_vif [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.945 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.946 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:47:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2040014468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/66998200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1e15184-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1e15184-f1, col_values=(('external_ids', {'iface-id': 'd1e15184-f166-48f4-a04d-1c6fa2a9f2a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:d6:a0', 'vm-uuid': '36a84233-3256-49b2-ae05-1569eb78b50f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:17 compute-0 NetworkManager[45129]: <info>  [1759394837.9529] manager: (tapd1e15184-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:17 compute-0 nova_compute[260603]: 2025-10-02 08:47:17.958 2 INFO os_vif [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1')
Oct 02 08:47:18 compute-0 podman[373817]: 2025-10-02 08:47:18.001477828 +0000 UTC m=+0.063791040 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:47:18 compute-0 podman[373816]: 2025-10-02 08:47:18.00731905 +0000 UTC m=+0.069826547 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.028 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.029 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.029 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:5e:d6:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.029 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Using config drive
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.047 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.606 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Creating config drive at /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.616 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwjiv218d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.680 2 DEBUG nova.network.neutron [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updated VIF entry in instance network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.681 2 DEBUG nova.network.neutron [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.702 2 DEBUG oslo_concurrency.lockutils [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.788 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwjiv218d" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.825 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:18 compute-0 nova_compute[260603]: 2025-10-02 08:47:18.830 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:18 compute-0 ceph-mon[74477]: pgmap v2058: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.031 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.032 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Deleting local config drive /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config because it was imported into RBD.
Oct 02 08:47:19 compute-0 kernel: tapd1e15184-f1: entered promiscuous mode
Oct 02 08:47:19 compute-0 NetworkManager[45129]: <info>  [1759394839.1093] manager: (tapd1e15184-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/452)
Oct 02 08:47:19 compute-0 ovn_controller[152344]: 2025-10-02T08:47:19Z|01157|binding|INFO|Claiming lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 for this chassis.
Oct 02 08:47:19 compute-0 ovn_controller[152344]: 2025-10-02T08:47:19Z|01158|binding|INFO|d1e15184-f166-48f4-a04d-1c6fa2a9f2a2: Claiming fa:16:3e:5e:d6:a0 10.100.0.7
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.126 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:d6:a0 10.100.0.7'], port_security=['fa:16:3e:5e:d6:a0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '36a84233-3256-49b2-ae05-1569eb78b50f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '57638211-3760-47a5-8fdb-d4470031f4cf 580c7ec0-94be-443b-88b7-53672ca4c5d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2af26aac-073a-413a-88d9-fc16235c9487, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.127 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 in datapath 828d3558-d7f1-4d90-8e6f-bf6eff4d744e bound to our chassis
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.129 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 828d3558-d7f1-4d90-8e6f-bf6eff4d744e
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.146 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a34304-4556-423a-9aad-fec55c87cf95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.148 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap828d3558-d1 in ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.150 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap828d3558-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.150 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0159d78-5290-446a-9f14-eb08d75c83f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.151 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0131d353-aa9a-4c70-b876-99b8c3076fdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 systemd-machined[214636]: New machine qemu-142-instance-00000071.
Oct 02 08:47:19 compute-0 systemd-udevd[373934]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.175 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[efe496c4-9649-4272-8a4d-e80d725c5450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 NetworkManager[45129]: <info>  [1759394839.1860] device (tapd1e15184-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:47:19 compute-0 NetworkManager[45129]: <info>  [1759394839.1870] device (tapd1e15184-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:47:19 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000071.
Oct 02 08:47:19 compute-0 ovn_controller[152344]: 2025-10-02T08:47:19Z|01159|binding|INFO|Setting lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 ovn-installed in OVS
Oct 02 08:47:19 compute-0 ovn_controller[152344]: 2025-10-02T08:47:19Z|01160|binding|INFO|Setting lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 up in Southbound
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.207 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[20049194-c7ef-4a83-b591-97f21727e40d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f36631eb-7eaf-469d-8500-6d76e4091729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 NetworkManager[45129]: <info>  [1759394839.2560] manager: (tap828d3558-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.255 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5c1bd5-3301-4bc6-bd64-b0e2aef28a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9141b91a-8df6-49b3-9fc4-78382796b1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.301 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ad830d0c-e8c5-48f6-9caa-f8ccc4616bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 NetworkManager[45129]: <info>  [1759394839.3375] device (tap828d3558-d0): carrier: link connected
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.344 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e26207f9-43fa-4af5-af82-3cf918c1c50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.369 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c11ee92f-c0e3-46db-9df2-5a69225e4851]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap828d3558-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:2f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571794, 'reachable_time': 28448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373965, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.391 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb128c8-6dcd-4242-bde7-87db42be65d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:2f9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571794, 'tstamp': 571794}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373966, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3aaff0b-76af-4e0c-a0dc-bec1400cab1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap828d3558-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:2f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571794, 'reachable_time': 28448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373967, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.449 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b69a6bf2-ea9e-4ebe-b09c-b13bd476241d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50057331-00bf-412d-b8c0-ec1730db58a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.524 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap828d3558-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.524 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.524 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap828d3558-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 kernel: tap828d3558-d0: entered promiscuous mode
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.528 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap828d3558-d0, col_values=(('external_ids', {'iface-id': '8765a176-985e-405d-870e-44dc7d3390b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:19 compute-0 NetworkManager[45129]: <info>  [1759394839.5296] manager: (tap828d3558-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 ovn_controller[152344]: 2025-10-02T08:47:19Z|01161|binding|INFO|Releasing lport 8765a176-985e-405d-870e-44dc7d3390b4 from this chassis (sb_readonly=0)
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.549 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/828d3558-d7f1-4d90-8e6f-bf6eff4d744e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/828d3558-d7f1-4d90-8e6f-bf6eff4d744e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0526f5b0-83bd-45a7-a250-7c843bc27a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.550 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-828d3558-d7f1-4d90-8e6f-bf6eff4d744e
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/828d3558-d7f1-4d90-8e6f-bf6eff4d744e.pid.haproxy
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 828d3558-d7f1-4d90-8e6f-bf6eff4d744e
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:47:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.551 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'env', 'PROCESS_TAG=haproxy-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/828d3558-d7f1-4d90-8e6f-bf6eff4d744e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.638 2 DEBUG nova.compute.manager [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.639 2 DEBUG oslo_concurrency.lockutils [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.639 2 DEBUG oslo_concurrency.lockutils [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.639 2 DEBUG oslo_concurrency.lockutils [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:19 compute-0 nova_compute[260603]: 2025-10-02 08:47:19.639 2 DEBUG nova.compute.manager [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Processing event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:47:19 compute-0 podman[374041]: 2025-10-02 08:47:19.955566523 +0000 UTC m=+0.061354203 container create ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:47:20 compute-0 systemd[1]: Started libpod-conmon-ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985.scope.
Oct 02 08:47:20 compute-0 podman[374041]: 2025-10-02 08:47:19.92820181 +0000 UTC m=+0.033989530 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:47:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:47:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d31a5b55753d611210a316cb1a0d76d716a9026841d73fcc07cd5dc9fe4ce7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:20 compute-0 podman[374041]: 2025-10-02 08:47:20.070816746 +0000 UTC m=+0.176604436 container init ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:47:20 compute-0 podman[374041]: 2025-10-02 08:47:20.081739946 +0000 UTC m=+0.187527636 container start ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 08:47:20 compute-0 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [NOTICE]   (374061) : New worker (374063) forked
Oct 02 08:47:20 compute-0 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [NOTICE]   (374061) : Loading success.
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.185 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394840.1844373, 36a84233-3256-49b2-ae05-1569eb78b50f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.185 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] VM Started (Lifecycle Event)
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.189 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.193 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.198 2 INFO nova.virt.libvirt.driver [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance spawned successfully.
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.199 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.211 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.216 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.227 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.227 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.228 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.228 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.229 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.229 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.260 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.261 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394840.1846619, 36a84233-3256-49b2-ae05-1569eb78b50f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.261 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] VM Paused (Lifecycle Event)
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.287 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.291 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394840.1929514, 36a84233-3256-49b2-ae05-1569eb78b50f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.292 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] VM Resumed (Lifecycle Event)
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.301 2 INFO nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Took 9.49 seconds to spawn the instance on the hypervisor.
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.302 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.310 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.314 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.339 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.382 2 INFO nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Took 10.43 seconds to build instance.
Oct 02 08:47:20 compute-0 nova_compute[260603]: 2025-10-02 08:47:20.401 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:20 compute-0 ceph-mon[74477]: pgmap v2059: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 08:47:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:47:21 compute-0 nova_compute[260603]: 2025-10-02 08:47:21.831 2 DEBUG nova.compute.manager [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:21 compute-0 nova_compute[260603]: 2025-10-02 08:47:21.832 2 DEBUG oslo_concurrency.lockutils [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:21 compute-0 nova_compute[260603]: 2025-10-02 08:47:21.832 2 DEBUG oslo_concurrency.lockutils [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:21 compute-0 nova_compute[260603]: 2025-10-02 08:47:21.833 2 DEBUG oslo_concurrency.lockutils [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:21 compute-0 nova_compute[260603]: 2025-10-02 08:47:21.833 2 DEBUG nova.compute.manager [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] No waiting events found dispatching network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:47:21 compute-0 nova_compute[260603]: 2025-10-02 08:47:21.833 2 WARNING nova.compute.manager [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received unexpected event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 for instance with vm_state active and task_state None.
Oct 02 08:47:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:47:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3657045289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:47:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:47:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3657045289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:47:22 compute-0 nova_compute[260603]: 2025-10-02 08:47:22.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:22 compute-0 nova_compute[260603]: 2025-10-02 08:47:22.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:47:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:22 compute-0 nova_compute[260603]: 2025-10-02 08:47:22.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:23 compute-0 ceph-mon[74477]: pgmap v2060: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:47:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3657045289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:47:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3657045289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:47:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:47:24 compute-0 NetworkManager[45129]: <info>  [1759394844.4262] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Oct 02 08:47:24 compute-0 NetworkManager[45129]: <info>  [1759394844.4271] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:24 compute-0 ovn_controller[152344]: 2025-10-02T08:47:24Z|01162|binding|INFO|Releasing lport 8765a176-985e-405d-870e-44dc7d3390b4 from this chassis (sb_readonly=0)
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.721 2 DEBUG nova.compute.manager [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.722 2 DEBUG nova.compute.manager [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing instance network info cache due to event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.722 2 DEBUG oslo_concurrency.lockutils [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.723 2 DEBUG oslo_concurrency.lockutils [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.724 2 DEBUG nova.network.neutron [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.878 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.878 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:24 compute-0 nova_compute[260603]: 2025-10-02 08:47:24.901 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.023 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.024 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.030 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.031 2 INFO nova.compute.claims [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:47:25 compute-0 ceph-mon[74477]: pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.169 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:47:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:47:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898768290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.672 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.682 2 DEBUG nova.compute.provider_tree [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.699 2 DEBUG nova.scheduler.client.report [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.722 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.723 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.768 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.769 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.787 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.807 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.894 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.896 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.896 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Creating image(s)
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.918 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.944 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.966 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:25 compute-0 nova_compute[260603]: 2025-10-02 08:47:25.971 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.016 2 DEBUG nova.policy [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:47:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2898768290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.048 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.049 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.050 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.050 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.076 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.079 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 42e467eb-b532-4383-91dd-4c8e9f68328c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.412 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 42e467eb-b532-4383-91dd-4c8e9f68328c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.490 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.609 2 DEBUG nova.objects.instance [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.642 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.643 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Ensure instance console log exists: /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.644 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.645 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.646 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.837 2 DEBUG nova.network.neutron [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updated VIF entry in instance network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.839 2 DEBUG nova.network.neutron [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.861 2 DEBUG oslo_concurrency.lockutils [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:47:26 compute-0 nova_compute[260603]: 2025-10-02 08:47:26.952 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Successfully created port: 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:47:27 compute-0 ceph-mon[74477]: pgmap v2062: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:47:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 107 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 85 op/s
Oct 02 08:47:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:47:27 compute-0 nova_compute[260603]: 2025-10-02 08:47:27.989 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Successfully updated port: 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:47:27 compute-0 nova_compute[260603]: 2025-10-02 08:47:27.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:47:27
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log', 'backups']
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:47:28 compute-0 nova_compute[260603]: 2025-10-02 08:47:28.011 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:47:28 compute-0 nova_compute[260603]: 2025-10-02 08:47:28.011 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:47:28 compute-0 nova_compute[260603]: 2025-10-02 08:47:28.012 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:47:28 compute-0 nova_compute[260603]: 2025-10-02 08:47:28.181 2 DEBUG nova.compute.manager [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:28 compute-0 nova_compute[260603]: 2025-10-02 08:47:28.182 2 DEBUG nova.compute.manager [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing instance network info cache due to event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:47:28 compute-0 nova_compute[260603]: 2025-10-02 08:47:28.183 2 DEBUG oslo_concurrency.lockutils [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:47:28 compute-0 ceph-mgr[74774]: client.0 ms_handle_reset on v2:192.168.122.100:6800/860957497
Oct 02 08:47:29 compute-0 ceph-mon[74477]: pgmap v2063: 305 pgs: 305 active+clean; 107 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 85 op/s
Oct 02 08:47:29 compute-0 nova_compute[260603]: 2025-10-02 08:47:29.332 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:47:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 134 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:47:29 compute-0 nova_compute[260603]: 2025-10-02 08:47:29.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:31 compute-0 ceph-mon[74477]: pgmap v2064: 305 pgs: 305 active+clean; 134 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:47:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 134 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:47:32 compute-0 ovn_controller[152344]: 2025-10-02T08:47:32Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:d6:a0 10.100.0.7
Oct 02 08:47:32 compute-0 ovn_controller[152344]: 2025-10-02T08:47:32Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:d6:a0 10.100.0.7
Oct 02 08:47:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:33 compute-0 ceph-mon[74477]: pgmap v2065: 305 pgs: 305 active+clean; 134 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.366 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.393 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.394 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance network_info: |[{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.394 2 DEBUG oslo_concurrency.lockutils [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.395 2 DEBUG nova.network.neutron [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.400 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start _get_guest_xml network_info=[{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.406 2 WARNING nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.412 2 DEBUG nova.virt.libvirt.host [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.413 2 DEBUG nova.virt.libvirt.host [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.421 2 DEBUG nova.virt.libvirt.host [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.422 2 DEBUG nova.virt.libvirt.host [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.424 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.424 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.425 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.426 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.426 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.427 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.427 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.428 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.428 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.429 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.429 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.430 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.435 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 164 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:47:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944547598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.908 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.947 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:33 compute-0 nova_compute[260603]: 2025-10-02 08:47:33.953 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2944547598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:47:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921354071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.403 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.406 2 DEBUG nova.virt.libvirt.vif [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-440315055',display_name='tempest-TestNetworkAdvancedServerOps-server-440315055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-440315055',id=114,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAL7Jay6VGgIbZM/uTD3le9XaJZEa2h1Gi+bt0pESVUIc4MgOYJNkH9P9hwcT+CLKqZhGrKxvTkkVtq+Eg9Dj0D8XlduRBRW3I1LeGVNjAVQ1naigScKF/Jy3yqgcndKBQ==',key_name='tempest-TestNetworkAdvancedServerOps-1091937656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-l13jj000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:25Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=42e467eb-b532-4383-91dd-4c8e9f68328c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.407 2 DEBUG nova.network.os_vif_util [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.408 2 DEBUG nova.network.os_vif_util [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.411 2 DEBUG nova.objects.instance [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.438 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <uuid>42e467eb-b532-4383-91dd-4c8e9f68328c</uuid>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <name>instance-00000072</name>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-440315055</nova:name>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:47:33</nova:creationTime>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <nova:port uuid="98d1bfd0-f123-48c7-a32d-bca6b92ab19d">
Oct 02 08:47:34 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <system>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <entry name="serial">42e467eb-b532-4383-91dd-4c8e9f68328c</entry>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <entry name="uuid">42e467eb-b532-4383-91dd-4c8e9f68328c</entry>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </system>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <os>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   </os>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <features>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   </features>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/42e467eb-b532-4383-91dd-4c8e9f68328c_disk">
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       </source>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config">
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       </source>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:47:34 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:cf:15:11"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <target dev="tap98d1bfd0-f1"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/console.log" append="off"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <video>
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </video>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:47:34 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:47:34 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:47:34 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:47:34 compute-0 nova_compute[260603]: </domain>
Oct 02 08:47:34 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.440 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Preparing to wait for external event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.440 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.441 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.442 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.443 2 DEBUG nova.virt.libvirt.vif [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-440315055',display_name='tempest-TestNetworkAdvancedServerOps-server-440315055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-440315055',id=114,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAL7Jay6VGgIbZM/uTD3le9XaJZEa2h1Gi+bt0pESVUIc4MgOYJNkH9P9hwcT+CLKqZhGrKxvTkkVtq+Eg9Dj0D8XlduRBRW3I1LeGVNjAVQ1naigScKF/Jy3yqgcndKBQ==',key_name='tempest-TestNetworkAdvancedServerOps-1091937656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-l13jj000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:25Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=42e467eb-b532-4383-91dd-4c8e9f68328c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.444 2 DEBUG nova.network.os_vif_util [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.445 2 DEBUG nova.network.os_vif_util [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.446 2 DEBUG os_vif [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98d1bfd0-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98d1bfd0-f1, col_values=(('external_ids', {'iface-id': '98d1bfd0-f123-48c7-a32d-bca6b92ab19d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:15:11', 'vm-uuid': '42e467eb-b532-4383-91dd-4c8e9f68328c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:34 compute-0 NetworkManager[45129]: <info>  [1759394854.4610] manager: (tap98d1bfd0-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.469 2 INFO os_vif [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1')
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.557 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.558 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.558 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:cf:15:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.559 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Using config drive
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.598 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:34.829 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:34.831 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.924 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Creating config drive at /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config
Oct 02 08:47:34 compute-0 nova_compute[260603]: 2025-10-02 08:47:34.935 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_450hxwk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.113 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_450hxwk" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:35 compute-0 ceph-mon[74477]: pgmap v2066: 305 pgs: 305 active+clean; 164 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Oct 02 08:47:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/921354071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.156 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.160 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.349 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.350 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Deleting local config drive /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config because it was imported into RBD.
Oct 02 08:47:35 compute-0 kernel: tap98d1bfd0-f1: entered promiscuous mode
Oct 02 08:47:35 compute-0 NetworkManager[45129]: <info>  [1759394855.3994] manager: (tap98d1bfd0-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Oct 02 08:47:35 compute-0 ovn_controller[152344]: 2025-10-02T08:47:35Z|01163|binding|INFO|Claiming lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d for this chassis.
Oct 02 08:47:35 compute-0 ovn_controller[152344]: 2025-10-02T08:47:35Z|01164|binding|INFO|98d1bfd0-f123-48c7-a32d-bca6b92ab19d: Claiming fa:16:3e:cf:15:11 10.100.0.12
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.413 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:15:11 10.100.0.12'], port_security=['fa:16:3e:cf:15:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '42e467eb-b532-4383-91dd-4c8e9f68328c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c633b73-9b65-49fc-b72d-a746e692a924', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6683c145-41ec-4e92-b711-415f2e27650f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=98d1bfd0-f123-48c7-a32d-bca6b92ab19d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.414 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d in datapath d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 bound to our chassis
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.415 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d4ef4078-5ea1-4ac7-a0f4-0c2647248d03
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.421 2 DEBUG nova.network.neutron [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updated VIF entry in instance network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.422 2 DEBUG nova.network.neutron [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:47:35 compute-0 ovn_controller[152344]: 2025-10-02T08:47:35Z|01165|binding|INFO|Setting lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d ovn-installed in OVS
Oct 02 08:47:35 compute-0 ovn_controller[152344]: 2025-10-02T08:47:35Z|01166|binding|INFO|Setting lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d up in Southbound
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c850792e-4914-4cae-a5a8-6b6683c47d0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.434 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd4ef4078-51 in ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.435 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd4ef4078-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.435 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52788b8f-e16b-4896-a362-90e620d92481]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 systemd-udevd[374399]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b960aa-61db-4817-8f2f-cc3decf9cb64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 systemd-machined[214636]: New machine qemu-143-instance-00000072.
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.445 2 DEBUG oslo_concurrency.lockutils [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.449 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b39a23-f7fa-4304-ba0b-3afb221006bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000072.
Oct 02 08:47:35 compute-0 NetworkManager[45129]: <info>  [1759394855.4536] device (tap98d1bfd0-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:47:35 compute-0 NetworkManager[45129]: <info>  [1759394855.4545] device (tap98d1bfd0-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:47:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 164 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.478 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4bb2b23-eca6-478a-bf5e-417b1a38fac4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.511 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e8d0ff-fb37-4ef2-9b0e-1f14a6bcd688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 NetworkManager[45129]: <info>  [1759394855.5183] manager: (tapd4ef4078-50): new Veth device (/org/freedesktop/NetworkManager/Devices/459)
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.517 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d73c9b12-5236-4b46-8ee0-cef1131c46e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.551 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[94a4b8ca-eb20-469a-b6b6-125d49680f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.554 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb074a5-efe2-480d-92ee-d89c7a1b827d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 NetworkManager[45129]: <info>  [1759394855.5778] device (tapd4ef4078-50): carrier: link connected
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.582 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[870b5c66-04bd-42be-aaa3-e71fb2228b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.599 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf14064b-0350-4de7-84e2-b21832d3e99c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4ef4078-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d2:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573417, 'reachable_time': 38911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374431, 'error': None, 'target': 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f6bdaaa4-be15-48cd-abe0-65d4e3213e58]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:d2ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573417, 'tstamp': 573417}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374439, 'error': None, 'target': 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.628 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0401dbca-8970-4a21-bf4d-afc2ef837afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4ef4078-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d2:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573417, 'reachable_time': 38911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374449, 'error': None, 'target': 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.657 2 DEBUG nova.compute.manager [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.658 2 DEBUG oslo_concurrency.lockutils [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.658 2 DEBUG oslo_concurrency.lockutils [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.659 2 DEBUG oslo_concurrency.lockutils [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.659 2 DEBUG nova.compute.manager [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Processing event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.660 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[89c41aaf-03b3-4f72-a6a0-9ec5e4b635b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0643020-35eb-4ca1-acc9-26a96a3cdf55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.721 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4ef4078-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.722 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.722 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4ef4078-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:35 compute-0 kernel: tapd4ef4078-50: entered promiscuous mode
Oct 02 08:47:35 compute-0 NetworkManager[45129]: <info>  [1759394855.7256] manager: (tapd4ef4078-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd4ef4078-50, col_values=(('external_ids', {'iface-id': 'd2b8281e-e8e6-435e-9d86-c9ccb80a7650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:35 compute-0 ovn_controller[152344]: 2025-10-02T08:47:35Z|01167|binding|INFO|Releasing lport d2b8281e-e8e6-435e-9d86-c9ccb80a7650 from this chassis (sb_readonly=0)
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.734 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d4ef4078-5ea1-4ac7-a0f4-0c2647248d03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d4ef4078-5ea1-4ac7-a0f4-0c2647248d03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.735 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5351be-0682-4b87-96b7-9a3e1085f800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.736 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/d4ef4078-5ea1-4ac7-a0f4-0c2647248d03.pid.haproxy
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID d4ef4078-5ea1-4ac7-a0f4-0c2647248d03
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:47:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.737 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'env', 'PROCESS_TAG=haproxy-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d4ef4078-5ea1-4ac7-a0f4-0c2647248d03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:47:35 compute-0 nova_compute[260603]: 2025-10-02 08:47:35.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:36 compute-0 podman[374507]: 2025-10-02 08:47:36.140222644 +0000 UTC m=+0.066094370 container create 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:47:36 compute-0 podman[374507]: 2025-10-02 08:47:36.102341885 +0000 UTC m=+0.028213631 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:47:36 compute-0 systemd[1]: Started libpod-conmon-9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514.scope.
Oct 02 08:47:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02087fd71ae88db67d2279610b147edf86c03fa40d0a2813a2b3379bd99958ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:36 compute-0 podman[374507]: 2025-10-02 08:47:36.2491621 +0000 UTC m=+0.175033856 container init 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 08:47:36 compute-0 podman[374507]: 2025-10-02 08:47:36.254536758 +0000 UTC m=+0.180408494 container start 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:47:36 compute-0 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [NOTICE]   (374526) : New worker (374528) forked
Oct 02 08:47:36 compute-0 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [NOTICE]   (374526) : Loading success.
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.332 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394856.3316793, 42e467eb-b532-4383-91dd-4c8e9f68328c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.332 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Started (Lifecycle Event)
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.335 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.338 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.342 2 INFO nova.virt.libvirt.driver [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance spawned successfully.
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.342 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.392 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.392 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.393 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.394 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.394 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.395 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.401 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.404 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.459 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.459 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394856.3318453, 42e467eb-b532-4383-91dd-4c8e9f68328c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.460 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Paused (Lifecycle Event)
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.491 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.496 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394856.337718, 42e467eb-b532-4383-91dd-4c8e9f68328c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.496 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Resumed (Lifecycle Event)
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.555 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.560 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.591 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.598 2 INFO nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Took 10.70 seconds to spawn the instance on the hypervisor.
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.598 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.723 2 INFO nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Took 11.75 seconds to build instance.
Oct 02 08:47:36 compute-0 nova_compute[260603]: 2025-10-02 08:47:36.792 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:37 compute-0 ceph-mon[74477]: pgmap v2067: 305 pgs: 305 active+clean; 164 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Oct 02 08:47:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.859 2 DEBUG nova.compute.manager [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.860 2 DEBUG oslo_concurrency.lockutils [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.860 2 DEBUG oslo_concurrency.lockutils [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.860 2 DEBUG oslo_concurrency.lockutils [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.861 2 DEBUG nova.compute.manager [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] No waiting events found dispatching network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:47:37 compute-0 nova_compute[260603]: 2025-10-02 08:47:37.861 2 WARNING nova.compute.manager [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received unexpected event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d for instance with vm_state active and task_state None.
Oct 02 08:47:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011026628736081265 of space, bias 1.0, pg target 0.33079886208243797 quantized to 32 (current 32)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:47:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:47:39 compute-0 ceph-mon[74477]: pgmap v2068: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Oct 02 08:47:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 131 op/s
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:47:39 compute-0 nova_compute[260603]: 2025-10-02 08:47:39.559 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:47:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1044921527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.069 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1044921527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.192 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.193 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.201 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.201 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:47:40 compute-0 podman[374562]: 2025-10-02 08:47:40.224989471 +0000 UTC m=+0.089018395 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 08:47:40 compute-0 podman[374560]: 2025-10-02 08:47:40.245861532 +0000 UTC m=+0.119805195 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.435 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.437 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3388MB free_disk=59.92213439941406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.437 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.438 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.521 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 36a84233-3256-49b2-ae05-1569eb78b50f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.521 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 42e467eb-b532-4383-91dd-4c8e9f68328c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.522 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.522 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:47:40 compute-0 nova_compute[260603]: 2025-10-02 08:47:40.579 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:47:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130836935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:41 compute-0 nova_compute[260603]: 2025-10-02 08:47:41.051 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:41 compute-0 nova_compute[260603]: 2025-10-02 08:47:41.056 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:47:41 compute-0 nova_compute[260603]: 2025-10-02 08:47:41.079 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:47:41 compute-0 nova_compute[260603]: 2025-10-02 08:47:41.105 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:47:41 compute-0 nova_compute[260603]: 2025-10-02 08:47:41.105 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:41 compute-0 ceph-mon[74477]: pgmap v2069: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 131 op/s
Oct 02 08:47:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4130836935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 02 08:47:42 compute-0 nova_compute[260603]: 2025-10-02 08:47:42.770 2 DEBUG nova.compute.manager [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:42 compute-0 nova_compute[260603]: 2025-10-02 08:47:42.771 2 DEBUG nova.compute.manager [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing instance network info cache due to event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:47:42 compute-0 nova_compute[260603]: 2025-10-02 08:47:42.771 2 DEBUG oslo_concurrency.lockutils [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:47:42 compute-0 nova_compute[260603]: 2025-10-02 08:47:42.772 2 DEBUG oslo_concurrency.lockutils [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:47:42 compute-0 nova_compute[260603]: 2025-10-02 08:47:42.772 2 DEBUG nova.network.neutron [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:47:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:43 compute-0 nova_compute[260603]: 2025-10-02 08:47:43.101 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:43 compute-0 nova_compute[260603]: 2025-10-02 08:47:43.102 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:43 compute-0 ceph-mon[74477]: pgmap v2070: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 02 08:47:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 02 08:47:44 compute-0 sudo[374629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:44 compute-0 sudo[374629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:44 compute-0 sudo[374629]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:44 compute-0 sudo[374654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:47:44 compute-0 sudo[374654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:44 compute-0 sudo[374654]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:44 compute-0 sudo[374679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:44 compute-0 sudo[374679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:44 compute-0 sudo[374679]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:44 compute-0 sudo[374704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:47:44 compute-0 sudo[374704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:44 compute-0 ceph-mon[74477]: pgmap v2071: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 02 08:47:44 compute-0 nova_compute[260603]: 2025-10-02 08:47:44.339 2 DEBUG nova.network.neutron [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updated VIF entry in instance network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:47:44 compute-0 nova_compute[260603]: 2025-10-02 08:47:44.341 2 DEBUG nova.network.neutron [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:47:44 compute-0 nova_compute[260603]: 2025-10-02 08:47:44.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:44 compute-0 nova_compute[260603]: 2025-10-02 08:47:44.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:44 compute-0 nova_compute[260603]: 2025-10-02 08:47:44.711 2 DEBUG oslo_concurrency.lockutils [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:47:44 compute-0 sudo[374704]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:47:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:47:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:47:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:47:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:47:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:47:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev dbb651d4-e3f8-4dfe-aae6-12360ca84a28 does not exist
Oct 02 08:47:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f343bc7e-398c-4be1-9c38-28da8e552ede does not exist
Oct 02 08:47:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 24b30a72-4b4a-4f9d-b893-0be446fa8280 does not exist
Oct 02 08:47:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:47:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:47:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:47:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:47:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:47:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:47:44 compute-0 sudo[374761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:44 compute-0 sudo[374761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:44 compute-0 sudo[374761]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:45 compute-0 sudo[374786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:47:45 compute-0 sudo[374786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:45 compute-0 sudo[374786]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:45 compute-0 sudo[374811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:45 compute-0 sudo[374811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:45 compute-0 sudo[374811]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:45 compute-0 sudo[374836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:47:45 compute-0 sudo[374836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:47:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:47:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:47:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:47:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:47:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:47:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 79 op/s
Oct 02 08:47:45 compute-0 podman[374901]: 2025-10-02 08:47:45.480537949 +0000 UTC m=+0.058904088 container create 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:47:45 compute-0 podman[374901]: 2025-10-02 08:47:45.449532022 +0000 UTC m=+0.027898171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:47:45 compute-0 systemd[1]: Started libpod-conmon-42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc.scope.
Oct 02 08:47:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:47:45 compute-0 podman[374901]: 2025-10-02 08:47:45.621798472 +0000 UTC m=+0.200164631 container init 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 08:47:45 compute-0 podman[374901]: 2025-10-02 08:47:45.631728991 +0000 UTC m=+0.210095160 container start 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:47:45 compute-0 eager_wescoff[374917]: 167 167
Oct 02 08:47:45 compute-0 systemd[1]: libpod-42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc.scope: Deactivated successfully.
Oct 02 08:47:45 compute-0 podman[374901]: 2025-10-02 08:47:45.645556301 +0000 UTC m=+0.223922460 container attach 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:47:45 compute-0 podman[374901]: 2025-10-02 08:47:45.646191552 +0000 UTC m=+0.224557681 container died 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3485d28d97893cdef468bf895e64e0ee4dbe013e1e7f79b0088cee076c75613f-merged.mount: Deactivated successfully.
Oct 02 08:47:45 compute-0 podman[374901]: 2025-10-02 08:47:45.762135725 +0000 UTC m=+0.340501864 container remove 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 08:47:45 compute-0 systemd[1]: libpod-conmon-42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc.scope: Deactivated successfully.
Oct 02 08:47:45 compute-0 podman[374943]: 2025-10-02 08:47:45.968775357 +0000 UTC m=+0.060330282 container create 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 08:47:46 compute-0 systemd[1]: Started libpod-conmon-1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8.scope.
Oct 02 08:47:46 compute-0 podman[374943]: 2025-10-02 08:47:45.944933943 +0000 UTC m=+0.036488898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:47:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:46 compute-0 podman[374943]: 2025-10-02 08:47:46.103411642 +0000 UTC m=+0.194966637 container init 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 08:47:46 compute-0 podman[374943]: 2025-10-02 08:47:46.113806607 +0000 UTC m=+0.205361542 container start 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:47:46 compute-0 podman[374943]: 2025-10-02 08:47:46.117726979 +0000 UTC m=+0.209281944 container attach 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 08:47:46 compute-0 ceph-mon[74477]: pgmap v2072: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 79 op/s
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.592 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.593 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.614 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.684 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.685 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.693 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.693 2 INFO nova.compute.claims [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:47:46 compute-0 nova_compute[260603]: 2025-10-02 08:47:46.866 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:47 compute-0 strange_grothendieck[374960]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:47:47 compute-0 strange_grothendieck[374960]: --> relative data size: 1.0
Oct 02 08:47:47 compute-0 strange_grothendieck[374960]: --> All data devices are unavailable
Oct 02 08:47:47 compute-0 systemd[1]: libpod-1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8.scope: Deactivated successfully.
Oct 02 08:47:47 compute-0 podman[374943]: 2025-10-02 08:47:47.219031544 +0000 UTC m=+1.310586479 container died 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 08:47:47 compute-0 systemd[1]: libpod-1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8.scope: Consumed 1.014s CPU time.
Oct 02 08:47:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506-merged.mount: Deactivated successfully.
Oct 02 08:47:47 compute-0 podman[374943]: 2025-10-02 08:47:47.277128315 +0000 UTC m=+1.368683220 container remove 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:47:47 compute-0 systemd[1]: libpod-conmon-1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8.scope: Deactivated successfully.
Oct 02 08:47:47 compute-0 sudo[374836]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:47:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397444312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.343 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.348 2 DEBUG nova.compute.provider_tree [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.366 2 DEBUG nova.scheduler.client.report [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:47:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3397444312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:47:47 compute-0 sudo[375020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:47 compute-0 sudo[375020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:47 compute-0 sudo[375020]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.387 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.387 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.433 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.433 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:47:47 compute-0 sudo[375047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:47:47 compute-0 sudo[375047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:47 compute-0 sudo[375047]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 175 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.477 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:47:47 compute-0 sudo[375072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:47 compute-0 sudo[375072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:47 compute-0 sudo[375072]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.506 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:47:47 compute-0 sudo[375097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:47:47 compute-0 sudo[375097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.595 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.596 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.596 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Creating image(s)
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.617 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.641 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.663 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.666 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.706 2 DEBUG nova.policy [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.747 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.747 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.748 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.748 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.767 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:47 compute-0 nova_compute[260603]: 2025-10-02 08:47:47.774 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 262d5706-35bc-45eb-809b-217369a86015_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:47 compute-0 podman[375237]: 2025-10-02 08:47:47.886770118 +0000 UTC m=+0.036790778 container create 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:47:47 compute-0 systemd[1]: Started libpod-conmon-964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1.scope.
Oct 02 08:47:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:47:47 compute-0 podman[375237]: 2025-10-02 08:47:47.872448141 +0000 UTC m=+0.022468831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:47:47 compute-0 podman[375237]: 2025-10-02 08:47:47.985102402 +0000 UTC m=+0.135123092 container init 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:47:47 compute-0 podman[375237]: 2025-10-02 08:47:47.992492962 +0000 UTC m=+0.142513632 container start 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:47:47 compute-0 affectionate_jones[375270]: 167 167
Oct 02 08:47:47 compute-0 systemd[1]: libpod-964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1.scope: Deactivated successfully.
Oct 02 08:47:48 compute-0 podman[375237]: 2025-10-02 08:47:48.007453419 +0000 UTC m=+0.157474089 container attach 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:47:48 compute-0 podman[375237]: 2025-10-02 08:47:48.007962005 +0000 UTC m=+0.157982675 container died 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 08:47:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa3e9fcb635c190bf709619d3f3e995eee7d68e27fa16e364fc4f4c0614f2f9f-merged.mount: Deactivated successfully.
Oct 02 08:47:48 compute-0 podman[375237]: 2025-10-02 08:47:48.048956532 +0000 UTC m=+0.198977202 container remove 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:47:48 compute-0 systemd[1]: libpod-conmon-964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1.scope: Deactivated successfully.
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.067 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 262d5706-35bc-45eb-809b-217369a86015_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:48 compute-0 podman[375287]: 2025-10-02 08:47:48.112741 +0000 UTC m=+0.059253977 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:47:48 compute-0 podman[375282]: 2025-10-02 08:47:48.138687219 +0000 UTC m=+0.087202259 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.168 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image 262d5706-35bc-45eb-809b-217369a86015_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:47:48 compute-0 podman[375370]: 2025-10-02 08:47:48.224445392 +0000 UTC m=+0.039944276 container create 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:47:48 compute-0 systemd[1]: Started libpod-conmon-21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60.scope.
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.272 2 DEBUG nova.objects.instance [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid 262d5706-35bc-45eb-809b-217369a86015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:47:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.298 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.298 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Ensure instance console log exists: /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.298 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.299 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.299 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:48 compute-0 podman[375370]: 2025-10-02 08:47:48.207715001 +0000 UTC m=+0.023213905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:47:48 compute-0 podman[375370]: 2025-10-02 08:47:48.316566833 +0000 UTC m=+0.132065727 container init 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:47:48 compute-0 podman[375370]: 2025-10-02 08:47:48.323926853 +0000 UTC m=+0.139425737 container start 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:47:48 compute-0 podman[375370]: 2025-10-02 08:47:48.327321638 +0000 UTC m=+0.142820552 container attach 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 08:47:48 compute-0 ceph-mon[74477]: pgmap v2073: 305 pgs: 305 active+clean; 175 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Oct 02 08:47:48 compute-0 ovn_controller[152344]: 2025-10-02T08:47:48Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:15:11 10.100.0.12
Oct 02 08:47:48 compute-0 ovn_controller[152344]: 2025-10-02T08:47:48Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:15:11 10.100.0.12
Oct 02 08:47:48 compute-0 nova_compute[260603]: 2025-10-02 08:47:48.686 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Successfully created port: f66c7ec4-73d7-4000-99de-ce8f1674d0ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]: {
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:     "0": [
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:         {
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "devices": [
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "/dev/loop3"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             ],
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_name": "ceph_lv0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_size": "21470642176",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "name": "ceph_lv0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "tags": {
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cluster_name": "ceph",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.crush_device_class": "",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.encrypted": "0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osd_id": "0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.type": "block",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.vdo": "0"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             },
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "type": "block",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "vg_name": "ceph_vg0"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:         }
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:     ],
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:     "1": [
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:         {
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "devices": [
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "/dev/loop4"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             ],
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_name": "ceph_lv1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_size": "21470642176",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "name": "ceph_lv1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "tags": {
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cluster_name": "ceph",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.crush_device_class": "",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.encrypted": "0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osd_id": "1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.type": "block",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.vdo": "0"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             },
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "type": "block",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "vg_name": "ceph_vg1"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:         }
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:     ],
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:     "2": [
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:         {
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "devices": [
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "/dev/loop5"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             ],
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_name": "ceph_lv2",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_size": "21470642176",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "name": "ceph_lv2",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "tags": {
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.cluster_name": "ceph",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.crush_device_class": "",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.encrypted": "0",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osd_id": "2",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.type": "block",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:                 "ceph.vdo": "0"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             },
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "type": "block",
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:             "vg_name": "ceph_vg2"
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:         }
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]:     ]
Oct 02 08:47:49 compute-0 vigilant_mclaren[375422]: }
Oct 02 08:47:49 compute-0 systemd[1]: libpod-21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60.scope: Deactivated successfully.
Oct 02 08:47:49 compute-0 podman[375370]: 2025-10-02 08:47:49.070388889 +0000 UTC m=+0.885887773 container died 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 08:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec-merged.mount: Deactivated successfully.
Oct 02 08:47:49 compute-0 podman[375370]: 2025-10-02 08:47:49.127968874 +0000 UTC m=+0.943467768 container remove 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 08:47:49 compute-0 systemd[1]: libpod-conmon-21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60.scope: Deactivated successfully.
Oct 02 08:47:49 compute-0 sudo[375097]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:49 compute-0 sudo[375444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:49 compute-0 sudo[375444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:49 compute-0 sudo[375444]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:49 compute-0 sudo[375469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:47:49 compute-0 sudo[375469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:49 compute-0 sudo[375469]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:49 compute-0 sudo[375494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:49 compute-0 sudo[375494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:49 compute-0 sudo[375494]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:49 compute-0 sudo[375519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:47:49 compute-0 sudo[375519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 208 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Oct 02 08:47:49 compute-0 nova_compute[260603]: 2025-10-02 08:47:49.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:49 compute-0 nova_compute[260603]: 2025-10-02 08:47:49.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:49 compute-0 podman[375589]: 2025-10-02 08:47:49.906975353 +0000 UTC m=+0.086553508 container create 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:47:49 compute-0 podman[375589]: 2025-10-02 08:47:49.86160977 +0000 UTC m=+0.041187985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:47:49 compute-0 systemd[1]: Started libpod-conmon-67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee.scope.
Oct 02 08:47:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:47:50 compute-0 podman[375589]: 2025-10-02 08:47:50.027458309 +0000 UTC m=+0.207036464 container init 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:47:50 compute-0 podman[375589]: 2025-10-02 08:47:50.039969799 +0000 UTC m=+0.219547934 container start 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:47:50 compute-0 elegant_herschel[375606]: 167 167
Oct 02 08:47:50 compute-0 systemd[1]: libpod-67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee.scope: Deactivated successfully.
Oct 02 08:47:50 compute-0 podman[375589]: 2025-10-02 08:47:50.062919844 +0000 UTC m=+0.242498009 container attach 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:47:50 compute-0 podman[375589]: 2025-10-02 08:47:50.064038859 +0000 UTC m=+0.243616994 container died 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-99868da62706d5862c7ab285abaaef153d1f3d27fa6ff00fd21e3e473a4e6d4b-merged.mount: Deactivated successfully.
Oct 02 08:47:50 compute-0 podman[375589]: 2025-10-02 08:47:50.121944784 +0000 UTC m=+0.301522919 container remove 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:47:50 compute-0 systemd[1]: libpod-conmon-67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee.scope: Deactivated successfully.
Oct 02 08:47:50 compute-0 podman[375629]: 2025-10-02 08:47:50.340826386 +0000 UTC m=+0.059895798 container create ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 08:47:50 compute-0 podman[375629]: 2025-10-02 08:47:50.313640999 +0000 UTC m=+0.032710381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:47:50 compute-0 systemd[1]: Started libpod-conmon-ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f.scope.
Oct 02 08:47:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:47:50 compute-0 podman[375629]: 2025-10-02 08:47:50.477331441 +0000 UTC m=+0.196400913 container init ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 02 08:47:50 compute-0 podman[375629]: 2025-10-02 08:47:50.488685075 +0000 UTC m=+0.207754437 container start ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:47:50 compute-0 podman[375629]: 2025-10-02 08:47:50.494966301 +0000 UTC m=+0.214035783 container attach ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 08:47:50 compute-0 nova_compute[260603]: 2025-10-02 08:47:50.520 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Successfully updated port: f66c7ec4-73d7-4000-99de-ce8f1674d0ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:47:50 compute-0 ceph-mon[74477]: pgmap v2074: 305 pgs: 305 active+clean; 208 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Oct 02 08:47:50 compute-0 nova_compute[260603]: 2025-10-02 08:47:50.538 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:47:50 compute-0 nova_compute[260603]: 2025-10-02 08:47:50.539 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:47:50 compute-0 nova_compute[260603]: 2025-10-02 08:47:50.539 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:47:50 compute-0 nova_compute[260603]: 2025-10-02 08:47:50.649 2 DEBUG nova.compute.manager [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-changed-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:50 compute-0 nova_compute[260603]: 2025-10-02 08:47:50.649 2 DEBUG nova.compute.manager [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Refreshing instance network info cache due to event network-changed-f66c7ec4-73d7-4000-99de-ce8f1674d0ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:47:50 compute-0 nova_compute[260603]: 2025-10-02 08:47:50.649 2 DEBUG oslo_concurrency.lockutils [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:47:51 compute-0 nova_compute[260603]: 2025-10-02 08:47:51.336 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:47:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 208 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 848 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Oct 02 08:47:51 compute-0 zealous_franklin[375646]: {
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "osd_id": 2,
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "type": "bluestore"
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:     },
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "osd_id": 1,
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "type": "bluestore"
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:     },
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "osd_id": 0,
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:         "type": "bluestore"
Oct 02 08:47:51 compute-0 zealous_franklin[375646]:     }
Oct 02 08:47:51 compute-0 zealous_franklin[375646]: }
Oct 02 08:47:51 compute-0 systemd[1]: libpod-ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f.scope: Deactivated successfully.
Oct 02 08:47:51 compute-0 podman[375629]: 2025-10-02 08:47:51.515289383 +0000 UTC m=+1.234358775 container died ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:47:51 compute-0 systemd[1]: libpod-ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f.scope: Consumed 1.024s CPU time.
Oct 02 08:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d-merged.mount: Deactivated successfully.
Oct 02 08:47:51 compute-0 podman[375629]: 2025-10-02 08:47:51.699925017 +0000 UTC m=+1.418994389 container remove ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:47:51 compute-0 systemd[1]: libpod-conmon-ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f.scope: Deactivated successfully.
Oct 02 08:47:51 compute-0 sudo[375519]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:47:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:47:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:47:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:47:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ab93ea77-0581-43bd-8acb-bdd586dde5da does not exist
Oct 02 08:47:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8e59717d-8c65-4e80-a33a-2482cc521d9b does not exist
Oct 02 08:47:51 compute-0 sudo[375694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:47:51 compute-0 sudo[375694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:51 compute-0 sudo[375694]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:51 compute-0 sudo[375719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:47:51 compute-0 sudo[375719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:47:51 compute-0 sudo[375719]: pam_unix(sudo:session): session closed for user root
Oct 02 08:47:52 compute-0 ceph-mon[74477]: pgmap v2075: 305 pgs: 305 active+clean; 208 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 848 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Oct 02 08:47:52 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:47:52 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:47:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.421 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updating instance_info_cache with network_info: [{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.444 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.445 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance network_info: |[{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.446 2 DEBUG oslo_concurrency.lockutils [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.446 2 DEBUG nova.network.neutron [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Refreshing network info cache for port f66c7ec4-73d7-4000-99de-ce8f1674d0ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.455 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start _get_guest_xml network_info=[{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.462 2 WARNING nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:47:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 924 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.471 2 DEBUG nova.virt.libvirt.host [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.473 2 DEBUG nova.virt.libvirt.host [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.483 2 DEBUG nova.virt.libvirt.host [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.484 2 DEBUG nova.virt.libvirt.host [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.485 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.486 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.486 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.487 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.487 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.488 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.488 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.489 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.489 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.490 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.491 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.491 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.496 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:47:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047987986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.952 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.991 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:53 compute-0 nova_compute[260603]: 2025-10-02 08:47:53.997 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.223 2 INFO nova.compute.manager [None req-9ec27381-5145-4842-8cdb-d031083c76d8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Get console output
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.233 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:47:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:47:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248806711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.434 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.437 2 DEBUG nova.virt.libvirt.vif [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=115,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-igjgp1fk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:47Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=262d5706-35bc-45eb-809b-217369a86015,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.438 2 DEBUG nova.network.os_vif_util [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.439 2 DEBUG nova.network.os_vif_util [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.441 2 DEBUG nova.objects.instance [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 262d5706-35bc-45eb-809b-217369a86015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.460 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <uuid>262d5706-35bc-45eb-809b-217369a86015</uuid>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <name>instance-00000073</name>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723</nova:name>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:47:53</nova:creationTime>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <nova:port uuid="f66c7ec4-73d7-4000-99de-ce8f1674d0ee">
Oct 02 08:47:54 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <system>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <entry name="serial">262d5706-35bc-45eb-809b-217369a86015</entry>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <entry name="uuid">262d5706-35bc-45eb-809b-217369a86015</entry>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </system>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <os>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   </os>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <features>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   </features>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/262d5706-35bc-45eb-809b-217369a86015_disk">
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       </source>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/262d5706-35bc-45eb-809b-217369a86015_disk.config">
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       </source>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:47:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:cf:dd:2b"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <target dev="tapf66c7ec4-73"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/console.log" append="off"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <video>
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </video>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:47:54 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:47:54 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:47:54 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:47:54 compute-0 nova_compute[260603]: </domain>
Oct 02 08:47:54 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.462 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Preparing to wait for external event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.463 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.463 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.464 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.465 2 DEBUG nova.virt.libvirt.vif [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=115,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-igjgp1fk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:47Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=262d5706-35bc-45eb-809b-217369a86015,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.466 2 DEBUG nova.network.os_vif_util [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.467 2 DEBUG nova.network.os_vif_util [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.468 2 DEBUG os_vif [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66c7ec4-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf66c7ec4-73, col_values=(('external_ids', {'iface-id': 'f66c7ec4-73d7-4000-99de-ce8f1674d0ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:dd:2b', 'vm-uuid': '262d5706-35bc-45eb-809b-217369a86015'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:54 compute-0 NetworkManager[45129]: <info>  [1759394874.4806] manager: (tapf66c7ec4-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.491 2 INFO os_vif [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73')
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.525 2 INFO nova.compute.manager [None req-5a2b120a-4a0a-4e5e-bab7-d231244045c0 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Pausing
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.526 2 DEBUG nova.objects.instance [None req-5a2b120a-4a0a-4e5e-bab7-d231244045c0 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.545 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.546 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.546 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:cf:dd:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.546 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Using config drive
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.572 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.604 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394874.600825, 42e467eb-b532-4383-91dd-4c8e9f68328c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.605 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Paused (Lifecycle Event)
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.606 2 DEBUG nova.compute.manager [None req-5a2b120a-4a0a-4e5e-bab7-d231244045c0 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.645 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.648 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:47:54 compute-0 nova_compute[260603]: 2025-10-02 08:47:54.702 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 08:47:54 compute-0 ceph-mon[74477]: pgmap v2076: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 924 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct 02 08:47:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2047987986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3248806711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:47:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 08:47:55 compute-0 nova_compute[260603]: 2025-10-02 08:47:55.675 2 DEBUG nova.network.neutron [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updated VIF entry in instance network info cache for port f66c7ec4-73d7-4000-99de-ce8f1674d0ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:47:55 compute-0 nova_compute[260603]: 2025-10-02 08:47:55.676 2 DEBUG nova.network.neutron [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updating instance_info_cache with network_info: [{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:47:55 compute-0 nova_compute[260603]: 2025-10-02 08:47:55.696 2 DEBUG oslo_concurrency.lockutils [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:47:55 compute-0 nova_compute[260603]: 2025-10-02 08:47:55.960 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Creating config drive at /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config
Oct 02 08:47:55 compute-0 nova_compute[260603]: 2025-10-02 08:47:55.968 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpasia5lxx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.114 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpasia5lxx" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.136 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.139 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config 262d5706-35bc-45eb-809b-217369a86015_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.331 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config 262d5706-35bc-45eb-809b-217369a86015_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.333 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Deleting local config drive /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config because it was imported into RBD.
Oct 02 08:47:56 compute-0 kernel: tapf66c7ec4-73: entered promiscuous mode
Oct 02 08:47:56 compute-0 NetworkManager[45129]: <info>  [1759394876.3940] manager: (tapf66c7ec4-73): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:56 compute-0 ovn_controller[152344]: 2025-10-02T08:47:56Z|01168|binding|INFO|Claiming lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee for this chassis.
Oct 02 08:47:56 compute-0 ovn_controller[152344]: 2025-10-02T08:47:56Z|01169|binding|INFO|f66c7ec4-73d7-4000-99de-ce8f1674d0ee: Claiming fa:16:3e:cf:dd:2b 10.100.0.6
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.455 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:dd:2b 10.100.0.6'], port_security=['fa:16:3e:cf:dd:2b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '262d5706-35bc-45eb-809b-217369a86015', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '580c7ec0-94be-443b-88b7-53672ca4c5d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2af26aac-073a-413a-88d9-fc16235c9487, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f66c7ec4-73d7-4000-99de-ce8f1674d0ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.457 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f66c7ec4-73d7-4000-99de-ce8f1674d0ee in datapath 828d3558-d7f1-4d90-8e6f-bf6eff4d744e bound to our chassis
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.459 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 828d3558-d7f1-4d90-8e6f-bf6eff4d744e
Oct 02 08:47:56 compute-0 ovn_controller[152344]: 2025-10-02T08:47:56Z|01170|binding|INFO|Setting lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee ovn-installed in OVS
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:56 compute-0 ovn_controller[152344]: 2025-10-02T08:47:56Z|01171|binding|INFO|Setting lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee up in Southbound
Oct 02 08:47:56 compute-0 systemd-udevd[375879]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:56 compute-0 systemd-machined[214636]: New machine qemu-144-instance-00000073.
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.478 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c87b5bcc-6d17-42ce-a3d0-991ae5f82e42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:56 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000073.
Oct 02 08:47:56 compute-0 NetworkManager[45129]: <info>  [1759394876.4929] device (tapf66c7ec4-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:47:56 compute-0 NetworkManager[45129]: <info>  [1759394876.4943] device (tapf66c7ec4-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.510 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9abfe7-7d4a-4fa5-807a-1907d60fce57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.514 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fb48a8f7-b9ea-433e-8449-145c7c496fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.546 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[66586c2e-8f90-4675-ab83-fbb29066f7b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.565 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00bb8a2b-845e-4d07-bce7-6621f6ccbdfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap828d3558-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:2f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571794, 'reachable_time': 28448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375893, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.587 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e37b9cca-0fd0-437a-95c0-a993b5041ea0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap828d3558-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571808, 'tstamp': 571808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375895, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap828d3558-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571811, 'tstamp': 571811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375895, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.589 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap828d3558-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.592 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap828d3558-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.593 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.593 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap828d3558-d0, col_values=(('external_ids', {'iface-id': '8765a176-985e-405d-870e-44dc7d3390b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.593 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.712 2 DEBUG nova.compute.manager [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.713 2 DEBUG oslo_concurrency.lockutils [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.713 2 DEBUG oslo_concurrency.lockutils [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.714 2 DEBUG oslo_concurrency.lockutils [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.714 2 DEBUG nova.compute.manager [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Processing event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:47:56 compute-0 nova_compute[260603]: 2025-10-02 08:47:56.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.794 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:47:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.795 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:47:56 compute-0 ceph-mon[74477]: pgmap v2077: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 08:47:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 08:47:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.063 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394878.0632343, 262d5706-35bc-45eb-809b-217369a86015 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.064 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] VM Started (Lifecycle Event)
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.066 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.068 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.072 2 INFO nova.virt.libvirt.driver [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance spawned successfully.
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.073 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.093 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.104 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.109 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.109 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.110 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.110 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.111 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.111 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.134 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.134 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394878.0640945, 262d5706-35bc-45eb-809b-217369a86015 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.135 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] VM Paused (Lifecycle Event)
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.158 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.161 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394878.0682342, 262d5706-35bc-45eb-809b-217369a86015 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.161 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] VM Resumed (Lifecycle Event)
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.169 2 INFO nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Took 10.57 seconds to spawn the instance on the hypervisor.
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.170 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.193 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.198 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.218 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.227 2 INFO nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Took 11.57 seconds to build instance.
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.249 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.519 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.522 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.545 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.559 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.559 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Image id 420393e6-d62b-4055-afb9-674967e2c2b0 yields fingerprint 55fe19af44c773772fc736fd085017e37d622236 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.559 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] image 420393e6-d62b-4055-afb9-674967e2c2b0 at (/var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236): checking
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.560 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] image 420393e6-d62b-4055-afb9-674967e2c2b0 at (/var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.561 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.562 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] 36a84233-3256-49b2-ae05-1569eb78b50f is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.562 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] 42e467eb-b532-4383-91dd-4c8e9f68328c is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.562 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] 262d5706-35bc-45eb-809b-217369a86015 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.562 2 WARNING nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.563 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Active base files: /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.563 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Removable base files: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.563 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.564 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.564 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.564 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.564 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 02 08:47:58 compute-0 ceph-mon[74477]: pgmap v2078: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.871 2 DEBUG nova.compute.manager [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.872 2 DEBUG oslo_concurrency.lockutils [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.872 2 DEBUG oslo_concurrency.lockutils [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.873 2 DEBUG oslo_concurrency.lockutils [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.873 2 DEBUG nova.compute.manager [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] No waiting events found dispatching network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.873 2 WARNING nova.compute.manager [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received unexpected event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee for instance with vm_state active and task_state None.
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.974 2 INFO nova.compute.manager [None req-e596b27b-850c-4bef-accc-8a451fc19782 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Get console output
Oct 02 08:47:58 compute-0 nova_compute[260603]: 2025-10-02 08:47:58.979 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.164 2 INFO nova.compute.manager [None req-a4e6974c-f2bc-427e-aeb3-0a344cb24f99 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Unpausing
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.165 2 DEBUG nova.objects.instance [None req-a4e6974c-f2bc-427e-aeb3-0a344cb24f99 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.204 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394879.2045522, 42e467eb-b532-4383-91dd-4c8e9f68328c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.205 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Resumed (Lifecycle Event)
Oct 02 08:47:59 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.209 2 DEBUG nova.virt.libvirt.guest [None req-a4e6974c-f2bc-427e-aeb3-0a344cb24f99 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.210 2 DEBUG nova.compute.manager [None req-a4e6974c-f2bc-427e-aeb3-0a344cb24f99 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.220 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.223 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.254 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 02 08:47:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.9 MiB/s wr, 87 op/s
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:47:59 compute-0 nova_compute[260603]: 2025-10-02 08:47:59.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.135 2 INFO nova.compute.manager [None req-33c9c66d-08f1-4e56-9025-db9ae7300f6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Get console output
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.142 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.933 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.934 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.935 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.935 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.935 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.937 2 INFO nova.compute.manager [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Terminating instance
Oct 02 08:48:00 compute-0 ceph-mon[74477]: pgmap v2079: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.9 MiB/s wr, 87 op/s
Oct 02 08:48:00 compute-0 nova_compute[260603]: 2025-10-02 08:48:00.939 2 DEBUG nova.compute.manager [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:48:01 compute-0 kernel: tap98d1bfd0-f1 (unregistering): left promiscuous mode
Oct 02 08:48:01 compute-0 NetworkManager[45129]: <info>  [1759394881.0105] device (tap98d1bfd0-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.013 2 DEBUG nova.compute.manager [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.014 2 DEBUG nova.compute.manager [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing instance network info cache due to event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.015 2 DEBUG oslo_concurrency.lockutils [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.015 2 DEBUG oslo_concurrency.lockutils [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.016 2 DEBUG nova.network.neutron [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:48:01 compute-0 ovn_controller[152344]: 2025-10-02T08:48:01Z|01172|binding|INFO|Releasing lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d from this chassis (sb_readonly=0)
Oct 02 08:48:01 compute-0 ovn_controller[152344]: 2025-10-02T08:48:01Z|01173|binding|INFO|Setting lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d down in Southbound
Oct 02 08:48:01 compute-0 ovn_controller[152344]: 2025-10-02T08:48:01Z|01174|binding|INFO|Removing iface tap98d1bfd0-f1 ovn-installed in OVS
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.038 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:15:11 10.100.0.12'], port_security=['fa:16:3e:cf:15:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '42e467eb-b532-4383-91dd-4c8e9f68328c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c633b73-9b65-49fc-b72d-a746e692a924', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6683c145-41ec-4e92-b711-415f2e27650f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=98d1bfd0-f123-48c7-a32d-bca6b92ab19d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.039 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d in datapath d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 unbound from our chassis
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.040 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.042 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[76db684d-8073-4173-8463-6502c2223e07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.043 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 namespace which is not needed anymore
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:01 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct 02 08:48:01 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Consumed 12.577s CPU time.
Oct 02 08:48:01 compute-0 systemd-machined[214636]: Machine qemu-143-instance-00000072 terminated.
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.184 2 INFO nova.virt.libvirt.driver [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance destroyed successfully.
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.184 2 DEBUG nova.objects.instance [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.197 2 DEBUG nova.virt.libvirt.vif [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:47:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-440315055',display_name='tempest-TestNetworkAdvancedServerOps-server-440315055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-440315055',id=114,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAL7Jay6VGgIbZM/uTD3le9XaJZEa2h1Gi+bt0pESVUIc4MgOYJNkH9P9hwcT+CLKqZhGrKxvTkkVtq+Eg9Dj0D8XlduRBRW3I1LeGVNjAVQ1naigScKF/Jy3yqgcndKBQ==',key_name='tempest-TestNetworkAdvancedServerOps-1091937656',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:47:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-l13jj000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:47:59Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=42e467eb-b532-4383-91dd-4c8e9f68328c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.197 2 DEBUG nova.network.os_vif_util [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.198 2 DEBUG nova.network.os_vif_util [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.198 2 DEBUG os_vif [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98d1bfd0-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.207 2 INFO os_vif [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1')
Oct 02 08:48:01 compute-0 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [NOTICE]   (374526) : haproxy version is 2.8.14-c23fe91
Oct 02 08:48:01 compute-0 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [NOTICE]   (374526) : path to executable is /usr/sbin/haproxy
Oct 02 08:48:01 compute-0 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [ALERT]    (374526) : Current worker (374528) exited with code 143 (Terminated)
Oct 02 08:48:01 compute-0 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [WARNING]  (374526) : All workers exited. Exiting... (0)
Oct 02 08:48:01 compute-0 systemd[1]: libpod-9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514.scope: Deactivated successfully.
Oct 02 08:48:01 compute-0 podman[375962]: 2025-10-02 08:48:01.232183994 +0000 UTC m=+0.096637674 container died 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514-userdata-shm.mount: Deactivated successfully.
Oct 02 08:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-02087fd71ae88db67d2279610b147edf86c03fa40d0a2813a2b3379bd99958ff-merged.mount: Deactivated successfully.
Oct 02 08:48:01 compute-0 podman[375962]: 2025-10-02 08:48:01.348620623 +0000 UTC m=+0.213074303 container cleanup 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:48:01 compute-0 systemd[1]: libpod-conmon-9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514.scope: Deactivated successfully.
Oct 02 08:48:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Oct 02 08:48:01 compute-0 podman[376018]: 2025-10-02 08:48:01.483513646 +0000 UTC m=+0.112378993 container remove 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.490 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f5f212-4e8b-4d49-92f2-06ffb10a5b89]: (4, ('Thu Oct  2 08:48:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 (9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514)\n9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514\nThu Oct  2 08:48:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 (9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514)\n9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85641175-4e00-404a-bd79-9420d88c6620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.493 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4ef4078-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:01 compute-0 kernel: tapd4ef4078-50: left promiscuous mode
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe04e92-e97b-4327-897a-b4506f291ddc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.544 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9276d924-f755-4249-82d7-68dba1531b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.546 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90f6e09b-a16c-4a84-8592-50297e0c3561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.563 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0456f3-05f8-452f-8ca2-fbd76e454401]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573410, 'reachable_time': 21827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376033, 'error': None, 'target': 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:01 compute-0 systemd[1]: run-netns-ovnmeta\x2dd4ef4078\x2d5ea1\x2d4ac7\x2da0f4\x2d0c2647248d03.mount: Deactivated successfully.
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.568 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:48:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.568 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f6540b3b-4f6a-49fa-be99-eadb779b92c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.708 2 DEBUG nova.compute.manager [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-unplugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.708 2 DEBUG oslo_concurrency.lockutils [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.708 2 DEBUG oslo_concurrency.lockutils [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.708 2 DEBUG oslo_concurrency.lockutils [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.709 2 DEBUG nova.compute.manager [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] No waiting events found dispatching network-vif-unplugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:48:01 compute-0 nova_compute[260603]: 2025-10-02 08:48:01.709 2 DEBUG nova.compute.manager [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-unplugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.529 2 INFO nova.virt.libvirt.driver [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Deleting instance files /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c_del
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.530 2 INFO nova.virt.libvirt.driver [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Deletion of /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c_del complete
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.594 2 INFO nova.compute.manager [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Took 1.65 seconds to destroy the instance on the hypervisor.
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.595 2 DEBUG oslo.service.loopingcall [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.595 2 DEBUG nova.compute.manager [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.595 2 DEBUG nova.network.neutron [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.824 2 DEBUG nova.network.neutron [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updated VIF entry in instance network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.824 2 DEBUG nova.network.neutron [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:02 compute-0 nova_compute[260603]: 2025-10-02 08:48:02.852 2 DEBUG oslo_concurrency.lockutils [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:48:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:03 compute-0 ceph-mon[74477]: pgmap v2080: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.096 2 DEBUG nova.compute.manager [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-changed-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.097 2 DEBUG nova.compute.manager [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Refreshing instance network info cache due to event network-changed-f66c7ec4-73d7-4000-99de-ce8f1674d0ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.098 2 DEBUG oslo_concurrency.lockutils [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.098 2 DEBUG oslo_concurrency.lockutils [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.099 2 DEBUG nova.network.neutron [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Refreshing network info cache for port f66c7ec4-73d7-4000-99de-ce8f1674d0ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:48:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 167 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 134 op/s
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.812 2 DEBUG nova.compute.manager [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.813 2 DEBUG oslo_concurrency.lockutils [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.813 2 DEBUG oslo_concurrency.lockutils [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.814 2 DEBUG oslo_concurrency.lockutils [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.814 2 DEBUG nova.compute.manager [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] No waiting events found dispatching network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.814 2 WARNING nova.compute.manager [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received unexpected event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d for instance with vm_state active and task_state deleting.
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.965 2 DEBUG nova.network.neutron [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:03 compute-0 nova_compute[260603]: 2025-10-02 08:48:03.992 2 INFO nova.compute.manager [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Took 1.40 seconds to deallocate network for instance.
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.038 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.038 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.149 2 DEBUG oslo_concurrency.processutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:48:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754006147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.626 2 DEBUG oslo_concurrency.processutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.633 2 DEBUG nova.compute.provider_tree [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.653 2 DEBUG nova.scheduler.client.report [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.676 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.728 2 INFO nova.scheduler.client.report [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance 42e467eb-b532-4383-91dd-4c8e9f68328c
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.795 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.997 2 DEBUG nova.network.neutron [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updated VIF entry in instance network info cache for port f66c7ec4-73d7-4000-99de-ce8f1674d0ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:48:04 compute-0 nova_compute[260603]: 2025-10-02 08:48:04.998 2 DEBUG nova.network.neutron [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updating instance_info_cache with network_info: [{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:05 compute-0 nova_compute[260603]: 2025-10-02 08:48:05.024 2 DEBUG oslo_concurrency.lockutils [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:48:05 compute-0 ceph-mon[74477]: pgmap v2081: 305 pgs: 305 active+clean; 167 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 134 op/s
Oct 02 08:48:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3754006147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 167 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 99 op/s
Oct 02 08:48:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:05.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:06 compute-0 nova_compute[260603]: 2025-10-02 08:48:06.020 2 DEBUG nova.compute.manager [req-ce18ae08-d95c-4958-b2dc-fbcf2d35b8b0 req-fda6b97e-b93d-4724-95ec-bc081a4518b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-deleted-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:06 compute-0 nova_compute[260603]: 2025-10-02 08:48:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:07 compute-0 ceph-mon[74477]: pgmap v2082: 305 pgs: 305 active+clean; 167 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 99 op/s
Oct 02 08:48:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 167 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 102 op/s
Oct 02 08:48:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:09 compute-0 ovn_controller[152344]: 2025-10-02T08:48:09Z|01175|binding|INFO|Releasing lport 8765a176-985e-405d-870e-44dc7d3390b4 from this chassis (sb_readonly=0)
Oct 02 08:48:09 compute-0 ceph-mon[74477]: pgmap v2083: 305 pgs: 305 active+clean; 167 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 102 op/s
Oct 02 08:48:09 compute-0 nova_compute[260603]: 2025-10-02 08:48:09.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 102 op/s
Oct 02 08:48:09 compute-0 nova_compute[260603]: 2025-10-02 08:48:09.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:11 compute-0 podman[376059]: 2025-10-02 08:48:11.032976459 +0000 UTC m=+0.092093151 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:48:11 compute-0 podman[376058]: 2025-10-02 08:48:11.052701744 +0000 UTC m=+0.121769886 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:48:11 compute-0 ceph-mon[74477]: pgmap v2084: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 102 op/s
Oct 02 08:48:11 compute-0 nova_compute[260603]: 2025-10-02 08:48:11.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.8 KiB/s wr, 96 op/s
Oct 02 08:48:12 compute-0 ovn_controller[152344]: 2025-10-02T08:48:12Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:dd:2b 10.100.0.6
Oct 02 08:48:12 compute-0 ovn_controller[152344]: 2025-10-02T08:48:12Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:dd:2b 10.100.0.6
Oct 02 08:48:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:13 compute-0 ceph-mon[74477]: pgmap v2085: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.8 KiB/s wr, 96 op/s
Oct 02 08:48:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Oct 02 08:48:14 compute-0 nova_compute[260603]: 2025-10-02 08:48:14.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:15 compute-0 ceph-mon[74477]: pgmap v2086: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Oct 02 08:48:15 compute-0 nova_compute[260603]: 2025-10-02 08:48:15.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:48:16 compute-0 nova_compute[260603]: 2025-10-02 08:48:16.184 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394881.182366, 42e467eb-b532-4383-91dd-4c8e9f68328c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:48:16 compute-0 nova_compute[260603]: 2025-10-02 08:48:16.185 2 INFO nova.compute.manager [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Stopped (Lifecycle Event)
Oct 02 08:48:16 compute-0 nova_compute[260603]: 2025-10-02 08:48:16.222 2 DEBUG nova.compute.manager [None req-b062d4bc-de0d-48c5-9350-761caf36b850 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:48:16 compute-0 nova_compute[260603]: 2025-10-02 08:48:16.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:17 compute-0 ceph-mon[74477]: pgmap v2087: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:48:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 08:48:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:18 compute-0 ceph-mon[74477]: pgmap v2088: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 08:48:18 compute-0 podman[376099]: 2025-10-02 08:48:18.977529588 +0000 UTC m=+0.050405873 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:48:18 compute-0 podman[376100]: 2025-10-02 08:48:18.977612471 +0000 UTC m=+0.047554473 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 08:48:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.703 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.704 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.704 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.704 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.705 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.706 2 INFO nova.compute.manager [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Terminating instance
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.708 2 DEBUG nova.compute.manager [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:48:19 compute-0 kernel: tapf66c7ec4-73 (unregistering): left promiscuous mode
Oct 02 08:48:19 compute-0 NetworkManager[45129]: <info>  [1759394899.8899] device (tapf66c7ec4-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:48:19 compute-0 ovn_controller[152344]: 2025-10-02T08:48:19Z|01176|binding|INFO|Releasing lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee from this chassis (sb_readonly=0)
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:19 compute-0 ovn_controller[152344]: 2025-10-02T08:48:19Z|01177|binding|INFO|Setting lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee down in Southbound
Oct 02 08:48:19 compute-0 ovn_controller[152344]: 2025-10-02T08:48:19Z|01178|binding|INFO|Removing iface tapf66c7ec4-73 ovn-installed in OVS
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.909 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:dd:2b 10.100.0.6'], port_security=['fa:16:3e:cf:dd:2b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '262d5706-35bc-45eb-809b-217369a86015', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7705270d-7f1f-45f4-8a90-d1c36237d9ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2af26aac-073a-413a-88d9-fc16235c9487, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f66c7ec4-73d7-4000-99de-ce8f1674d0ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:48:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.911 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f66c7ec4-73d7-4000-99de-ce8f1674d0ee in datapath 828d3558-d7f1-4d90-8e6f-bf6eff4d744e unbound from our chassis
Oct 02 08:48:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.914 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 828d3558-d7f1-4d90-8e6f-bf6eff4d744e
Oct 02 08:48:19 compute-0 nova_compute[260603]: 2025-10-02 08:48:19.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e11d49fa-ec98-4ccb-96e8-263dcc63f573]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.984 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8200707b-2fa0-482d-a6e9-51658e5dc193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.988 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[32a64ec6-cc98-4f5f-953d-8f4e9b66cb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:20 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct 02 08:48:20 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Consumed 14.596s CPU time.
Oct 02 08:48:20 compute-0 systemd-machined[214636]: Machine qemu-144-instance-00000073 terminated.
Oct 02 08:48:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.030 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3630da86-e508-416c-89d3-cc141673b66e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.062 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af8390a5-b8ac-48d2-88e8-2f234553294a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap828d3558-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:2f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571794, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376149, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.088 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7e17f7-b3a0-41c4-a89f-3cb1e0967cca]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap828d3558-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571808, 'tstamp': 571808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376150, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap828d3558-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571811, 'tstamp': 571811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376150, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.091 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap828d3558-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.102 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap828d3558-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.103 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:48:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.103 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap828d3558-d0, col_values=(('external_ids', {'iface-id': '8765a176-985e-405d-870e-44dc7d3390b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.104 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.154 2 INFO nova.virt.libvirt.driver [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance destroyed successfully.
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.154 2 DEBUG nova.objects.instance [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid 262d5706-35bc-45eb-809b-217369a86015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.182 2 DEBUG nova.virt.libvirt.vif [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:47:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=115,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:47:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-igjgp1fk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:47:58Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=262d5706-35bc-45eb-809b-217369a86015,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.183 2 DEBUG nova.network.os_vif_util [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.183 2 DEBUG nova.network.os_vif_util [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.184 2 DEBUG os_vif [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66c7ec4-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:20 compute-0 nova_compute[260603]: 2025-10-02 08:48:20.192 2 INFO os_vif [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73')
Oct 02 08:48:20 compute-0 ceph-mon[74477]: pgmap v2089: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:48:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:48:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:48:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 34K writes, 136K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 34K writes, 12K syncs, 2.76 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4613 writes, 18K keys, 4613 commit groups, 1.0 writes per commit group, ingest: 20.20 MB, 0.03 MB/s
                                           Interval WAL: 4613 writes, 1808 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:48:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:48:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859926937' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:48:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:48:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859926937' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:48:22 compute-0 nova_compute[260603]: 2025-10-02 08:48:22.560 2 INFO nova.virt.libvirt.driver [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Deleting instance files /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015_del
Oct 02 08:48:22 compute-0 nova_compute[260603]: 2025-10-02 08:48:22.561 2 INFO nova.virt.libvirt.driver [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Deletion of /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015_del complete
Oct 02 08:48:22 compute-0 nova_compute[260603]: 2025-10-02 08:48:22.614 2 INFO nova.compute.manager [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Took 2.91 seconds to destroy the instance on the hypervisor.
Oct 02 08:48:22 compute-0 nova_compute[260603]: 2025-10-02 08:48:22.615 2 DEBUG oslo.service.loopingcall [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:48:22 compute-0 nova_compute[260603]: 2025-10-02 08:48:22.615 2 DEBUG nova.compute.manager [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:48:22 compute-0 nova_compute[260603]: 2025-10-02 08:48:22.615 2 DEBUG nova.network.neutron [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:48:22 compute-0 ceph-mon[74477]: pgmap v2090: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:48:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1859926937' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:48:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1859926937' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:48:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 08:48:23 compute-0 nova_compute[260603]: 2025-10-02 08:48:23.720 2 DEBUG nova.network.neutron [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:23 compute-0 nova_compute[260603]: 2025-10-02 08:48:23.741 2 INFO nova.compute.manager [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Took 1.13 seconds to deallocate network for instance.
Oct 02 08:48:23 compute-0 nova_compute[260603]: 2025-10-02 08:48:23.810 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:23 compute-0 nova_compute[260603]: 2025-10-02 08:48:23.811 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:23 compute-0 nova_compute[260603]: 2025-10-02 08:48:23.936 2 DEBUG nova.compute.manager [req-6580af2b-f44b-4c91-9aed-df03b7a27e2b req-7ac875a6-217d-4c66-bcdc-8b9448c49860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-vif-deleted-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:23 compute-0 nova_compute[260603]: 2025-10-02 08:48:23.939 2 DEBUG oslo_concurrency.processutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:48:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785040472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.417 2 DEBUG oslo_concurrency.processutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.424 2 DEBUG nova.compute.provider_tree [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.442 2 DEBUG nova.scheduler.client.report [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.467 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.512 2 INFO nova.scheduler.client.report [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance 262d5706-35bc-45eb-809b-217369a86015
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.516 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.517 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.539 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.564 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.565 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:48:24 compute-0 ceph-mon[74477]: pgmap v2091: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 08:48:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/785040472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.678 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.696 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.696 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.706 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.707 2 INFO nova.compute.claims [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:48:24 compute-0 nova_compute[260603]: 2025-10-02 08:48:24.847 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:48:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436378918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.374 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.383 2 DEBUG nova.compute.provider_tree [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.425 2 DEBUG nova.scheduler.client.report [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.465 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.467 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:48:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.534 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.534 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.555 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.584 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:48:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1436378918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.697 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.699 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.699 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Creating image(s)
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.725 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.751 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.777 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.782 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.872 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.873 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.874 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.874 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.898 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:48:25 compute-0 nova_compute[260603]: 2025-10-02 08:48:25.902 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 dcf324a5-8e22-40e4-8f75-469fe7a04756_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.217 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 dcf324a5-8e22-40e4-8f75-469fe7a04756_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.308 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.437 2 DEBUG nova.objects.instance [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid dcf324a5-8e22-40e4-8f75-469fe7a04756 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.442 2 DEBUG nova.policy [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:48:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:48:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 141K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.75 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5854 writes, 21K keys, 5854 commit groups, 1.0 writes per commit group, ingest: 22.85 MB, 0.04 MB/s
                                           Interval WAL: 5854 writes, 2385 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.528 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.528 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Ensure instance console log exists: /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.529 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.529 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:26 compute-0 nova_compute[260603]: 2025-10-02 08:48:26.529 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:26 compute-0 ceph-mon[74477]: pgmap v2092: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 02 08:48:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 127 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 215 KiB/s wr, 30 op/s
Oct 02 08:48:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:48:28
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', '.mgr']
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.429 2 DEBUG nova.compute.manager [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.430 2 DEBUG nova.compute.manager [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing instance network info cache due to event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.430 2 DEBUG oslo_concurrency.lockutils [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.430 2 DEBUG oslo_concurrency.lockutils [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.431 2 DEBUG nova.network.neutron [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.592 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.592 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.593 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.593 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.594 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.596 2 INFO nova.compute.manager [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Terminating instance
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.598 2 DEBUG nova.compute.manager [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:48:28 compute-0 ceph-mon[74477]: pgmap v2093: 305 pgs: 305 active+clean; 127 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 215 KiB/s wr, 30 op/s
Oct 02 08:48:28 compute-0 kernel: tapd1e15184-f1 (unregistering): left promiscuous mode
Oct 02 08:48:28 compute-0 NetworkManager[45129]: <info>  [1759394908.8122] device (tapd1e15184-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:28 compute-0 ovn_controller[152344]: 2025-10-02T08:48:28Z|01179|binding|INFO|Releasing lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 from this chassis (sb_readonly=0)
Oct 02 08:48:28 compute-0 ovn_controller[152344]: 2025-10-02T08:48:28Z|01180|binding|INFO|Setting lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 down in Southbound
Oct 02 08:48:28 compute-0 ovn_controller[152344]: 2025-10-02T08:48:28Z|01181|binding|INFO|Removing iface tapd1e15184-f1 ovn-installed in OVS
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.835 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:d6:a0 10.100.0.7'], port_security=['fa:16:3e:5e:d6:a0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '36a84233-3256-49b2-ae05-1569eb78b50f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '57638211-3760-47a5-8fdb-d4470031f4cf 580c7ec0-94be-443b-88b7-53672ca4c5d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2af26aac-073a-413a-88d9-fc16235c9487, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:48:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.837 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 in datapath 828d3558-d7f1-4d90-8e6f-bf6eff4d744e unbound from our chassis
Oct 02 08:48:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.838 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 828d3558-d7f1-4d90-8e6f-bf6eff4d744e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:48:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.839 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[38ba11c1-ea1f-4f1a-b1f9-db377a3f2892]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.840 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e namespace which is not needed anymore
Oct 02 08:48:28 compute-0 nova_compute[260603]: 2025-10-02 08:48:28.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:28 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct 02 08:48:28 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Consumed 15.690s CPU time.
Oct 02 08:48:28 compute-0 systemd-machined[214636]: Machine qemu-142-instance-00000071 terminated.
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.044 2 INFO nova.virt.libvirt.driver [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance destroyed successfully.
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.045 2 DEBUG nova.objects.instance [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid 36a84233-3256-49b2-ae05-1569eb78b50f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.064 2 DEBUG nova.virt.libvirt.vif [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:47:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=113,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:47:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-cn883z4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:47:20Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=36a84233-3256-49b2-ae05-1569eb78b50f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.065 2 DEBUG nova.network.os_vif_util [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.066 2 DEBUG nova.network.os_vif_util [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.066 2 DEBUG os_vif [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e15184-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:48:29 compute-0 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [NOTICE]   (374061) : haproxy version is 2.8.14-c23fe91
Oct 02 08:48:29 compute-0 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [NOTICE]   (374061) : path to executable is /usr/sbin/haproxy
Oct 02 08:48:29 compute-0 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [WARNING]  (374061) : Exiting Master process...
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.109 2 INFO os_vif [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1')
Oct 02 08:48:29 compute-0 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [ALERT]    (374061) : Current worker (374063) exited with code 143 (Terminated)
Oct 02 08:48:29 compute-0 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [WARNING]  (374061) : All workers exited. Exiting... (0)
Oct 02 08:48:29 compute-0 systemd[1]: libpod-ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985.scope: Deactivated successfully.
Oct 02 08:48:29 compute-0 podman[376417]: 2025-10-02 08:48:29.119679702 +0000 UTC m=+0.139521300 container died ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 08:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-09d31a5b55753d611210a316cb1a0d76d716a9026841d73fcc07cd5dc9fe4ce7-merged.mount: Deactivated successfully.
Oct 02 08:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985-userdata-shm.mount: Deactivated successfully.
Oct 02 08:48:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.561 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Successfully created port: a488a1b0-7749-499c-971f-662c8a9ee29b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:48:29 compute-0 podman[376417]: 2025-10-02 08:48:29.580112062 +0000 UTC m=+0.599953660 container cleanup ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 08:48:29 compute-0 systemd[1]: libpod-conmon-ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985.scope: Deactivated successfully.
Oct 02 08:48:29 compute-0 podman[376477]: 2025-10-02 08:48:29.886959637 +0000 UTC m=+0.272862546 container remove ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:48:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.897 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3a46b3-ae57-44c4-baac-5eacf027ad52]: (4, ('Thu Oct  2 08:48:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e (ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985)\nef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985\nThu Oct  2 08:48:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e (ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985)\nef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.900 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22b81870-e924-483c-8341-26eae47a9223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.901 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap828d3558-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:29 compute-0 kernel: tap828d3558-d0: left promiscuous mode
Oct 02 08:48:29 compute-0 nova_compute[260603]: 2025-10-02 08:48:29.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[21d9573a-74bd-4f31-9b33-c0aa561e8ee9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.985 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f90a738f-6234-4536-ad62-8557e0d5df91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17192e48-24a6-4046-976b-5ed11c176890]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:30.015 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[826f24ea-4fce-4eaf-acc8-bd713ae7fada]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571784, 'reachable_time': 31758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376492, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:30.019 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:48:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:30.019 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6610980e-9ead-4b32-969e-7805e4fce4da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d828d3558\x2dd7f1\x2d4d90\x2d8e6f\x2dbf6eff4d744e.mount: Deactivated successfully.
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.074 2 DEBUG nova.compute.manager [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-unplugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.075 2 DEBUG oslo_concurrency.lockutils [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.075 2 DEBUG oslo_concurrency.lockutils [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.076 2 DEBUG oslo_concurrency.lockutils [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.076 2 DEBUG nova.compute.manager [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] No waiting events found dispatching network-vif-unplugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.076 2 DEBUG nova.compute.manager [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-unplugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.437 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Successfully updated port: a488a1b0-7749-499c-971f-662c8a9ee29b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.450 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.450 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.450 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:48:30 compute-0 nova_compute[260603]: 2025-10-02 08:48:30.662 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:48:30 compute-0 ceph-mon[74477]: pgmap v2094: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.072 2 DEBUG nova.compute.manager [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.073 2 DEBUG nova.compute.manager [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing instance network info cache due to event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.074 2 DEBUG oslo_concurrency.lockutils [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.355 2 DEBUG nova.network.neutron [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updated VIF entry in instance network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.356 2 DEBUG nova.network.neutron [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.391 2 DEBUG oslo_concurrency.lockutils [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:48:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:48:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 111K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.77 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3746 writes, 14K keys, 3746 commit groups, 1.0 writes per commit group, ingest: 15.33 MB, 0.03 MB/s
                                           Interval WAL: 3746 writes, 1490 syncs, 2.51 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.441 2 INFO nova.virt.libvirt.driver [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Deleting instance files /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f_del
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.442 2 INFO nova.virt.libvirt.driver [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Deletion of /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f_del complete
Oct 02 08:48:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.623 2 INFO nova.compute.manager [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Took 3.02 seconds to destroy the instance on the hypervisor.
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.624 2 DEBUG oslo.service.loopingcall [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.624 2 DEBUG nova.compute.manager [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:48:31 compute-0 nova_compute[260603]: 2025-10-02 08:48:31.624 2 DEBUG nova.network.neutron [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.162 2 DEBUG nova.compute.manager [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.163 2 DEBUG oslo_concurrency.lockutils [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.163 2 DEBUG oslo_concurrency.lockutils [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.164 2 DEBUG oslo_concurrency.lockutils [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.164 2 DEBUG nova.compute.manager [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] No waiting events found dispatching network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.164 2 WARNING nova.compute.manager [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received unexpected event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 for instance with vm_state active and task_state deleting.
Oct 02 08:48:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.501 2 DEBUG nova.network.neutron [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.532 2 INFO nova.compute.manager [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Took 0.91 seconds to deallocate network for instance.
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.548 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.586 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.586 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.587 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.587 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance network_info: |[{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.588 2 DEBUG oslo_concurrency.lockutils [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.588 2 DEBUG nova.network.neutron [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.592 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start _get_guest_xml network_info=[{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.601 2 WARNING nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.605 2 DEBUG nova.virt.libvirt.host [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.605 2 DEBUG nova.virt.libvirt.host [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.612 2 DEBUG nova.virt.libvirt.host [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.612 2 DEBUG nova.virt.libvirt.host [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.613 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.613 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.613 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.615 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.615 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.615 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.615 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.618 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:32 compute-0 nova_compute[260603]: 2025-10-02 08:48:32.686 2 DEBUG oslo_concurrency.processutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:32 compute-0 ceph-mon[74477]: pgmap v2095: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 08:48:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:48:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1259085605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.057 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.090 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.095 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:48:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1406254739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.189 2 DEBUG oslo_concurrency.processutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.199 2 DEBUG nova.compute.provider_tree [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.221 2 DEBUG nova.scheduler.client.report [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.248 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.269 2 INFO nova.scheduler.client.report [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance 36a84233-3256-49b2-ae05-1569eb78b50f
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.338 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 02 08:48:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:48:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220778805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.544 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.546 2 DEBUG nova.virt.libvirt.vif [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:48:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-457680791',display_name='tempest-TestNetworkAdvancedServerOps-server-457680791',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-457680791',id=116,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWb168hMylok2hbHIreuvgrfJq9EYhmSfhH2YBEWlnwrl2sEeWEt1hD3O/DPiepMH4x8+byvmbAISpkoWCoZjQ5z/Keocqhs6SeEf4SxPYxes1ihT4KVXi2eUj3jZBOrQ==',key_name='tempest-TestNetworkAdvancedServerOps-74461562',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-mqv16bk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:48:25Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=dcf324a5-8e22-40e4-8f75-469fe7a04756,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.547 2 DEBUG nova.network.os_vif_util [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.548 2 DEBUG nova.network.os_vif_util [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.551 2 DEBUG nova.objects.instance [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcf324a5-8e22-40e4-8f75-469fe7a04756 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.574 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <uuid>dcf324a5-8e22-40e4-8f75-469fe7a04756</uuid>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <name>instance-00000074</name>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-457680791</nova:name>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:48:32</nova:creationTime>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <nova:port uuid="a488a1b0-7749-499c-971f-662c8a9ee29b">
Oct 02 08:48:33 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <system>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <entry name="serial">dcf324a5-8e22-40e4-8f75-469fe7a04756</entry>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <entry name="uuid">dcf324a5-8e22-40e4-8f75-469fe7a04756</entry>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </system>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <os>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   </os>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <features>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   </features>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/dcf324a5-8e22-40e4-8f75-469fe7a04756_disk">
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       </source>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config">
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       </source>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:48:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:f7:28:cb"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <target dev="tapa488a1b0-77"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/console.log" append="off"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <video>
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </video>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:48:33 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:48:33 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:48:33 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:48:33 compute-0 nova_compute[260603]: </domain>
Oct 02 08:48:33 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.576 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Preparing to wait for external event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.577 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.578 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.578 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.579 2 DEBUG nova.virt.libvirt.vif [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:48:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-457680791',display_name='tempest-TestNetworkAdvancedServerOps-server-457680791',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-457680791',id=116,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWb168hMylok2hbHIreuvgrfJq9EYhmSfhH2YBEWlnwrl2sEeWEt1hD3O/DPiepMH4x8+byvmbAISpkoWCoZjQ5z/Keocqhs6SeEf4SxPYxes1ihT4KVXi2eUj3jZBOrQ==',key_name='tempest-TestNetworkAdvancedServerOps-74461562',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-mqv16bk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:48:25Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=dcf324a5-8e22-40e4-8f75-469fe7a04756,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.580 2 DEBUG nova.network.os_vif_util [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.581 2 DEBUG nova.network.os_vif_util [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.582 2 DEBUG os_vif [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa488a1b0-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.591 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa488a1b0-77, col_values=(('external_ids', {'iface-id': 'a488a1b0-7749-499c-971f-662c8a9ee29b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:28:cb', 'vm-uuid': 'dcf324a5-8e22-40e4-8f75-469fe7a04756'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:33 compute-0 NetworkManager[45129]: <info>  [1759394913.5952] manager: (tapa488a1b0-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.602 2 INFO os_vif [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77')
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.700 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.700 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.700 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:f7:28:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.701 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Using config drive
Oct 02 08:48:33 compute-0 nova_compute[260603]: 2025-10-02 08:48:33.724 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:48:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1259085605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:48:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1406254739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1220778805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.262 2 DEBUG nova.compute.manager [req-28893072-43d2-43f1-9273-7958dcd1a527 req-69b395ab-68ef-43f6-98fd-d0edc2973f3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-deleted-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.405 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Creating config drive at /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.410 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv84zgs72 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.571 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv84zgs72" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.597 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.601 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.709 2 DEBUG nova.network.neutron [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updated VIF entry in instance network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.710 2 DEBUG nova.network.neutron [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.729 2 DEBUG oslo_concurrency.lockutils [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:48:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:34.829 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.998 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:34 compute-0 nova_compute[260603]: 2025-10-02 08:48:34.999 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Deleting local config drive /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config because it was imported into RBD.
Oct 02 08:48:35 compute-0 kernel: tapa488a1b0-77: entered promiscuous mode
Oct 02 08:48:35 compute-0 NetworkManager[45129]: <info>  [1759394915.0792] manager: (tapa488a1b0-77): new Tun device (/org/freedesktop/NetworkManager/Devices/464)
Oct 02 08:48:35 compute-0 ovn_controller[152344]: 2025-10-02T08:48:35Z|01182|binding|INFO|Claiming lport a488a1b0-7749-499c-971f-662c8a9ee29b for this chassis.
Oct 02 08:48:35 compute-0 ovn_controller[152344]: 2025-10-02T08:48:35Z|01183|binding|INFO|a488a1b0-7749-499c-971f-662c8a9ee29b: Claiming fa:16:3e:f7:28:cb 10.100.0.12
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.089 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:28:cb 10.100.0.12'], port_security=['fa:16:3e:f7:28:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dcf324a5-8e22-40e4-8f75-469fe7a04756', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0bfd547f-13c2-4a12-bb9c-fb5dbeb2009f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47273294-f17c-4793-9e46-5229ffbf5fc3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a488a1b0-7749-499c-971f-662c8a9ee29b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.091 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a488a1b0-7749-499c-971f-662c8a9ee29b in datapath d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 bound to our chassis
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.094 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct 02 08:48:35 compute-0 systemd-udevd[376650]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.107 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddba6b86-0ed8-4067-baf8-b8876254450e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.108 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd95e5b25-31 in ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:48:35 compute-0 ovn_controller[152344]: 2025-10-02T08:48:35Z|01184|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b ovn-installed in OVS
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.111 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd95e5b25-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.111 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f6593aba-3809-4a46-b57f-1ce996d9846b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 ovn_controller[152344]: 2025-10-02T08:48:35Z|01185|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b up in Southbound
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.113 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c45d16f-fa2a-4753-ad7d-2c1bbf227d3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:35 compute-0 NetworkManager[45129]: <info>  [1759394915.1310] device (tapa488a1b0-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.130 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[db02ce65-4a31-4576-a7fb-9b0de8e7d40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 NetworkManager[45129]: <info>  [1759394915.1326] device (tapa488a1b0-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:48:35 compute-0 ceph-mon[74477]: pgmap v2096: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 02 08:48:35 compute-0 systemd-machined[214636]: New machine qemu-145-instance-00000074.
Oct 02 08:48:35 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000074.
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.151 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394900.1503158, 262d5706-35bc-45eb-809b-217369a86015 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.151 2 INFO nova.compute.manager [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] VM Stopped (Lifecycle Event)
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c8f99-590e-4bab-be90-9fbf24e4df3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.186 2 DEBUG nova.compute.manager [None req-1a6a91b6-1281-4bf0-a683-ee45c4687ac9 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.204 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9f5603-f5f3-41b4-836d-1e5a459936bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 systemd-udevd[376656]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.210 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[543998da-6663-45e5-ab69-5e53a6c136b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 NetworkManager[45129]: <info>  [1759394915.2120] manager: (tapd95e5b25-30): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[55c5ea34-b454-40ba-ac0d-0c43536960e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.252 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[11105366-9c01-4251-844e-128c85784b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 NetworkManager[45129]: <info>  [1759394915.2828] device (tapd95e5b25-30): carrier: link connected
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.290 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d551d929-0066-44ab-8e58-221304e10fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.315 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[40429f81-a23b-4512-a0ed-10d0b75eacda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd95e5b25-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:8b:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579388, 'reachable_time': 32942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376685, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.342 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[091661e7-bfc1-4b1c-8456-bde449e7d501]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:8b57'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579388, 'tstamp': 579388}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376686, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2410d7d5-7a6f-4da8-abe4-d3092d703913]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd95e5b25-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:8b:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579388, 'reachable_time': 32942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376687, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.392 2 DEBUG nova.compute.manager [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.393 2 DEBUG oslo_concurrency.lockutils [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.394 2 DEBUG oslo_concurrency.lockutils [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.394 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ae53ff-d8c7-48bf-b7c8-54f49dde58c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.394 2 DEBUG oslo_concurrency.lockutils [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.395 2 DEBUG nova.compute.manager [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Processing event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.468 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7262b31-cea3-4575-a205-a772add8c659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.469 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95e5b25-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.469 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.470 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd95e5b25-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:35 compute-0 kernel: tapd95e5b25-30: entered promiscuous mode
Oct 02 08:48:35 compute-0 NetworkManager[45129]: <info>  [1759394915.4728] manager: (tapd95e5b25-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.476 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd95e5b25-30, col_values=(('external_ids', {'iface-id': 'b700b6c1-06ca-44eb-af9b-8fe8d11dd204'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:35 compute-0 ovn_controller[152344]: 2025-10-02T08:48:35Z|01186|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct 02 08:48:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:48:35 compute-0 nova_compute[260603]: 2025-10-02 08:48:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.499 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.500 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6159529a-06dd-480f-b67e-45569a86aea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.501 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:48:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.501 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'env', 'PROCESS_TAG=haproxy-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:48:35 compute-0 podman[376761]: 2025-10-02 08:48:35.86900706 +0000 UTC m=+0.033661731 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:48:36 compute-0 podman[376761]: 2025-10-02 08:48:36.124490002 +0000 UTC m=+0.289144643 container create b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.137 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.139 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394916.1372008, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.140 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Started (Lifecycle Event)
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.144 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.150 2 INFO nova.virt.libvirt.driver [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance spawned successfully.
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.151 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.192 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:48:36 compute-0 systemd[1]: Started libpod-conmon-b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7.scope.
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.202 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.207 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.208 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.209 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.211 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.212 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.213 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:48:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fddb7fd1c24079af1e2d9c723ccd3113c5ea87904fa4f101a52504ba886f8ae2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.247 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.248 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394916.1383905, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.248 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Paused (Lifecycle Event)
Oct 02 08:48:36 compute-0 podman[376761]: 2025-10-02 08:48:36.270609827 +0000 UTC m=+0.435264568 container init b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:48:36 compute-0 podman[376761]: 2025-10-02 08:48:36.277073178 +0000 UTC m=+0.441727859 container start b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.278 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.283 2 INFO nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Took 10.59 seconds to spawn the instance on the hypervisor.
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.284 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.288 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394916.1412237, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.289 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Resumed (Lifecycle Event)
Oct 02 08:48:36 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [NOTICE]   (376780) : New worker (376782) forked
Oct 02 08:48:36 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [NOTICE]   (376780) : Loading success.
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.355 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.364 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.392 2 INFO nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Took 11.72 seconds to build instance.
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.408 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:36 compute-0 nova_compute[260603]: 2025-10-02 08:48:36.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:37 compute-0 ceph-mon[74477]: pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:48:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 02 08:48:37 compute-0 nova_compute[260603]: 2025-10-02 08:48:37.533 2 DEBUG nova.compute.manager [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:37 compute-0 nova_compute[260603]: 2025-10-02 08:48:37.534 2 DEBUG oslo_concurrency.lockutils [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:37 compute-0 nova_compute[260603]: 2025-10-02 08:48:37.534 2 DEBUG oslo_concurrency.lockutils [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:37 compute-0 nova_compute[260603]: 2025-10-02 08:48:37.535 2 DEBUG oslo_concurrency.lockutils [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:37 compute-0 nova_compute[260603]: 2025-10-02 08:48:37.535 2 DEBUG nova.compute.manager [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:48:37 compute-0 nova_compute[260603]: 2025-10-02 08:48:37.536 2 WARNING nova.compute.manager [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.
Oct 02 08:48:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:38 compute-0 ceph-mon[74477]: pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 02 08:48:38 compute-0 nova_compute[260603]: 2025-10-02 08:48:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:38 compute-0 nova_compute[260603]: 2025-10-02 08:48:38.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:48:38 compute-0 nova_compute[260603]: 2025-10-02 08:48:38.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:48:38 compute-0 nova_compute[260603]: 2025-10-02 08:48:38.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:48:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:48:39 compute-0 nova_compute[260603]: 2025-10-02 08:48:39.375 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:48:39 compute-0 nova_compute[260603]: 2025-10-02 08:48:39.376 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:48:39 compute-0 nova_compute[260603]: 2025-10-02 08:48:39.376 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:48:39 compute-0 nova_compute[260603]: 2025-10-02 08:48:39.377 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dcf324a5-8e22-40e4-8f75-469fe7a04756 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:48:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Oct 02 08:48:39 compute-0 nova_compute[260603]: 2025-10-02 08:48:39.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:40 compute-0 ceph-mon[74477]: pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.423 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.446 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.446 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.447 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.549 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:41 compute-0 ovn_controller[152344]: 2025-10-02T08:48:41Z|01187|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:41 compute-0 ovn_controller[152344]: 2025-10-02T08:48:41Z|01188|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct 02 08:48:41 compute-0 nova_compute[260603]: 2025-10-02 08:48:41.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:48:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968116699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:42 compute-0 podman[376814]: 2025-10-02 08:48:42.019261264 +0000 UTC m=+0.074632347 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.056 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:42 compute-0 podman[376813]: 2025-10-02 08:48:42.074400242 +0000 UTC m=+0.134163602 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.163 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.164 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.412 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.414 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3671MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.415 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.415 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.625 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance dcf324a5-8e22-40e4-8f75-469fe7a04756 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.626 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.626 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:48:42 compute-0 ceph-mon[74477]: pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Oct 02 08:48:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3968116699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:42 compute-0 nova_compute[260603]: 2025-10-02 08:48:42.913 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:48:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:48:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857537261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.391 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.402 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.440 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.464 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.465 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:48:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 08:48:43 compute-0 ovn_controller[152344]: 2025-10-02T08:48:43Z|01189|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct 02 08:48:43 compute-0 NetworkManager[45129]: <info>  [1759394923.6028] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:43 compute-0 NetworkManager[45129]: <info>  [1759394923.6043] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:43 compute-0 ovn_controller[152344]: 2025-10-02T08:48:43Z|01190|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1857537261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.826 2 DEBUG nova.compute.manager [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.826 2 DEBUG nova.compute.manager [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing instance network info cache due to event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.827 2 DEBUG oslo_concurrency.lockutils [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.827 2 DEBUG oslo_concurrency.lockutils [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:48:43 compute-0 nova_compute[260603]: 2025-10-02 08:48:43.827 2 DEBUG nova.network.neutron [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:48:44 compute-0 nova_compute[260603]: 2025-10-02 08:48:44.040 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394909.0397034, 36a84233-3256-49b2-ae05-1569eb78b50f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:48:44 compute-0 nova_compute[260603]: 2025-10-02 08:48:44.041 2 INFO nova.compute.manager [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] VM Stopped (Lifecycle Event)
Oct 02 08:48:44 compute-0 nova_compute[260603]: 2025-10-02 08:48:44.072 2 DEBUG nova.compute.manager [None req-8f7e35af-a9cf-4a23-8418-903e0e1ebf3f - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:48:44 compute-0 nova_compute[260603]: 2025-10-02 08:48:44.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:44 compute-0 ceph-mon[74477]: pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 08:48:45 compute-0 nova_compute[260603]: 2025-10-02 08:48:45.330 2 DEBUG nova.network.neutron [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updated VIF entry in instance network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:48:45 compute-0 nova_compute[260603]: 2025-10-02 08:48:45.332 2 DEBUG nova.network.neutron [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:45 compute-0 nova_compute[260603]: 2025-10-02 08:48:45.354 2 DEBUG oslo_concurrency.lockutils [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:48:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:48:46 compute-0 ceph-mon[74477]: pgmap v2102: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:48:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 90 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 366 KiB/s wr, 80 op/s
Oct 02 08:48:47 compute-0 nova_compute[260603]: 2025-10-02 08:48:47.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:47 compute-0 nova_compute[260603]: 2025-10-02 08:48:47.535 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:47 compute-0 nova_compute[260603]: 2025-10-02 08:48:47.535 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 08:48:47 compute-0 nova_compute[260603]: 2025-10-02 08:48:47.558 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 08:48:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:48 compute-0 nova_compute[260603]: 2025-10-02 08:48:48.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:48 compute-0 ovn_controller[152344]: 2025-10-02T08:48:48Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:28:cb 10.100.0.12
Oct 02 08:48:48 compute-0 ovn_controller[152344]: 2025-10-02T08:48:48Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:28:cb 10.100.0.12
Oct 02 08:48:48 compute-0 ceph-mon[74477]: pgmap v2103: 305 pgs: 305 active+clean; 90 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 366 KiB/s wr, 80 op/s
Oct 02 08:48:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 101 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 91 op/s
Oct 02 08:48:49 compute-0 nova_compute[260603]: 2025-10-02 08:48:49.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:49 compute-0 nova_compute[260603]: 2025-10-02 08:48:49.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:50 compute-0 podman[376882]: 2025-10-02 08:48:50.033507576 +0000 UTC m=+0.093050851 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 08:48:50 compute-0 podman[376883]: 2025-10-02 08:48:50.046056377 +0000 UTC m=+0.098038486 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 08:48:51 compute-0 ceph-mon[74477]: pgmap v2104: 305 pgs: 305 active+clean; 101 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 91 op/s
Oct 02 08:48:51 compute-0 nova_compute[260603]: 2025-10-02 08:48:51.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 101 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.6 MiB/s wr, 35 op/s
Oct 02 08:48:51 compute-0 nova_compute[260603]: 2025-10-02 08:48:51.526 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:48:51 compute-0 nova_compute[260603]: 2025-10-02 08:48:51.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:52 compute-0 sudo[376923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:52 compute-0 sudo[376923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:52 compute-0 sudo[376923]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:52 compute-0 sudo[376948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:48:52 compute-0 sudo[376948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:52 compute-0 sudo[376948]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:52 compute-0 sudo[376973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:52 compute-0 sudo[376973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:52 compute-0 sudo[376973]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:52 compute-0 sudo[376998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:48:52 compute-0 sudo[376998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:52 compute-0 sudo[376998]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:52 compute-0 sudo[377055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:52 compute-0 sudo[377055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:52 compute-0 sudo[377055]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:52 compute-0 sudo[377080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:48:52 compute-0 sudo[377080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:52 compute-0 sudo[377080]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:53 compute-0 sudo[377105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:53 compute-0 sudo[377105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:53 compute-0 sudo[377105]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:53 compute-0 ceph-mon[74477]: pgmap v2105: 305 pgs: 305 active+clean; 101 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.6 MiB/s wr, 35 op/s
Oct 02 08:48:53 compute-0 sudo[377130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 08:48:53 compute-0 sudo[377130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:53 compute-0 sudo[377130]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:48:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:48:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:48:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:48:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:48:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:48:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:48:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:48:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:48:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:48:53 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 80ca897a-2a6f-4045-b742-b09557197a1f does not exist
Oct 02 08:48:53 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6e470c5f-2c64-43d4-ae4a-68e02830ca84 does not exist
Oct 02 08:48:53 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4756f1fa-bd74-47f6-a516-d26bd9afcdb3 does not exist
Oct 02 08:48:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:48:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:48:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:48:53 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:48:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:48:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:48:53 compute-0 sudo[377173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:53 compute-0 sudo[377173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:53 compute-0 sudo[377173]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:48:53 compute-0 sudo[377198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:48:53 compute-0 sudo[377198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:53 compute-0 sudo[377198]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:53 compute-0 sudo[377223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:53 compute-0 sudo[377223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:53 compute-0 sudo[377223]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:53 compute-0 nova_compute[260603]: 2025-10-02 08:48:53.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:53 compute-0 sudo[377248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:48:53 compute-0 sudo[377248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:54 compute-0 podman[377312]: 2025-10-02 08:48:54.149878047 +0000 UTC m=+0.075316068 container create 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:48:54 compute-0 systemd[1]: Started libpod-conmon-1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22.scope.
Oct 02 08:48:54 compute-0 podman[377312]: 2025-10-02 08:48:54.124776975 +0000 UTC m=+0.050215066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:48:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:48:54 compute-0 podman[377312]: 2025-10-02 08:48:54.257554344 +0000 UTC m=+0.182992405 container init 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:48:54 compute-0 podman[377312]: 2025-10-02 08:48:54.271811838 +0000 UTC m=+0.197249859 container start 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 08:48:54 compute-0 podman[377312]: 2025-10-02 08:48:54.275373979 +0000 UTC m=+0.200812040 container attach 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:48:54 compute-0 sharp_lederberg[377328]: 167 167
Oct 02 08:48:54 compute-0 systemd[1]: libpod-1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22.scope: Deactivated successfully.
Oct 02 08:48:54 compute-0 conmon[377328]: conmon 1e5e73cee0d021d034b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22.scope/container/memory.events
Oct 02 08:48:54 compute-0 podman[377312]: 2025-10-02 08:48:54.285240566 +0000 UTC m=+0.210678617 container died 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 08:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0773656a8574ad998601fb10fe258eab4b68a0bc582e0d46483210c058a3cd85-merged.mount: Deactivated successfully.
Oct 02 08:48:54 compute-0 podman[377312]: 2025-10-02 08:48:54.340577301 +0000 UTC m=+0.266015342 container remove 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 08:48:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:48:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:48:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:48:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:48:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:48:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:48:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:48:54 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:48:54 compute-0 ceph-mon[74477]: pgmap v2106: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:48:54 compute-0 systemd[1]: libpod-conmon-1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22.scope: Deactivated successfully.
Oct 02 08:48:54 compute-0 nova_compute[260603]: 2025-10-02 08:48:54.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:54 compute-0 nova_compute[260603]: 2025-10-02 08:48:54.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:54 compute-0 podman[377351]: 2025-10-02 08:48:54.608541224 +0000 UTC m=+0.055954936 container create b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:48:54 compute-0 systemd[1]: Started libpod-conmon-b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f.scope.
Oct 02 08:48:54 compute-0 podman[377351]: 2025-10-02 08:48:54.589443358 +0000 UTC m=+0.036857070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:48:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:54 compute-0 podman[377351]: 2025-10-02 08:48:54.726891372 +0000 UTC m=+0.174305104 container init b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:48:54 compute-0 podman[377351]: 2025-10-02 08:48:54.739907797 +0000 UTC m=+0.187321499 container start b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 08:48:54 compute-0 podman[377351]: 2025-10-02 08:48:54.743398856 +0000 UTC m=+0.190812568 container attach b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 08:48:55 compute-0 nova_compute[260603]: 2025-10-02 08:48:55.151 2 INFO nova.compute.manager [None req-0a34b517-eede-4798-83ed-981f97945204 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Get console output
Oct 02 08:48:55 compute-0 nova_compute[260603]: 2025-10-02 08:48:55.157 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:48:55 compute-0 nova_compute[260603]: 2025-10-02 08:48:55.438 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:48:55 compute-0 nova_compute[260603]: 2025-10-02 08:48:55.439 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:48:55 compute-0 nova_compute[260603]: 2025-10-02 08:48:55.439 2 INFO nova.compute.manager [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Rebooting instance
Oct 02 08:48:55 compute-0 nova_compute[260603]: 2025-10-02 08:48:55.453 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:48:55 compute-0 nova_compute[260603]: 2025-10-02 08:48:55.453 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:48:55 compute-0 nova_compute[260603]: 2025-10-02 08:48:55.453 2 DEBUG nova.network.neutron [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:48:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 08:48:55 compute-0 magical_meitner[377368]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:48:55 compute-0 magical_meitner[377368]: --> relative data size: 1.0
Oct 02 08:48:55 compute-0 magical_meitner[377368]: --> All data devices are unavailable
Oct 02 08:48:55 compute-0 systemd[1]: libpod-b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f.scope: Deactivated successfully.
Oct 02 08:48:55 compute-0 podman[377351]: 2025-10-02 08:48:55.819832937 +0000 UTC m=+1.267246629 container died b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:48:55 compute-0 systemd[1]: libpod-b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f.scope: Consumed 1.013s CPU time.
Oct 02 08:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0-merged.mount: Deactivated successfully.
Oct 02 08:48:55 compute-0 podman[377351]: 2025-10-02 08:48:55.872197239 +0000 UTC m=+1.319610921 container remove b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 08:48:55 compute-0 systemd[1]: libpod-conmon-b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f.scope: Deactivated successfully.
Oct 02 08:48:55 compute-0 sudo[377248]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:55 compute-0 sudo[377412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:56 compute-0 sudo[377412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:56 compute-0 sudo[377412]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:56 compute-0 sudo[377437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:48:56 compute-0 sudo[377437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:56 compute-0 sudo[377437]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:56 compute-0 sudo[377462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:56 compute-0 sudo[377462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:56 compute-0 sudo[377462]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:56 compute-0 sudo[377487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:48:56 compute-0 sudo[377487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:56 compute-0 ovn_controller[152344]: 2025-10-02T08:48:56Z|01191|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct 02 08:48:56 compute-0 nova_compute[260603]: 2025-10-02 08:48:56.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:56 compute-0 ceph-mon[74477]: pgmap v2107: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 08:48:56 compute-0 podman[377549]: 2025-10-02 08:48:56.610958446 +0000 UTC m=+0.054303484 container create 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:48:56 compute-0 nova_compute[260603]: 2025-10-02 08:48:56.623 2 DEBUG nova.network.neutron [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:48:56 compute-0 nova_compute[260603]: 2025-10-02 08:48:56.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:56 compute-0 nova_compute[260603]: 2025-10-02 08:48:56.644 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:48:56 compute-0 nova_compute[260603]: 2025-10-02 08:48:56.645 2 DEBUG nova.compute.manager [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:48:56 compute-0 systemd[1]: Started libpod-conmon-1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e.scope.
Oct 02 08:48:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:48:56 compute-0 podman[377549]: 2025-10-02 08:48:56.578556216 +0000 UTC m=+0.021901294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:48:56 compute-0 podman[377549]: 2025-10-02 08:48:56.691068683 +0000 UTC m=+0.134413771 container init 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:48:56 compute-0 podman[377549]: 2025-10-02 08:48:56.698574237 +0000 UTC m=+0.141919275 container start 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:48:56 compute-0 podman[377549]: 2025-10-02 08:48:56.702177669 +0000 UTC m=+0.145522717 container attach 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 02 08:48:56 compute-0 magical_heyrovsky[377565]: 167 167
Oct 02 08:48:56 compute-0 systemd[1]: libpod-1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e.scope: Deactivated successfully.
Oct 02 08:48:56 compute-0 podman[377549]: 2025-10-02 08:48:56.704280834 +0000 UTC m=+0.147625882 container died 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 08:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f65d13435db74fe3922cd185474521f9207d2b03e08c90c726053f6dd44ccbd6-merged.mount: Deactivated successfully.
Oct 02 08:48:56 compute-0 podman[377549]: 2025-10-02 08:48:56.75578873 +0000 UTC m=+0.199133768 container remove 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:48:56 compute-0 systemd[1]: libpod-conmon-1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e.scope: Deactivated successfully.
Oct 02 08:48:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:56.904 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:48:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:56.907 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:48:56 compute-0 nova_compute[260603]: 2025-10-02 08:48:56.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:56 compute-0 podman[377590]: 2025-10-02 08:48:56.950173478 +0000 UTC m=+0.053764006 container create a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:48:56 compute-0 systemd[1]: Started libpod-conmon-a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42.scope.
Oct 02 08:48:57 compute-0 podman[377590]: 2025-10-02 08:48:56.925461258 +0000 UTC m=+0.029051886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:48:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:57 compute-0 podman[377590]: 2025-10-02 08:48:57.042202616 +0000 UTC m=+0.145793154 container init a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:48:57 compute-0 ovn_controller[152344]: 2025-10-02T08:48:57Z|01192|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct 02 08:48:57 compute-0 podman[377590]: 2025-10-02 08:48:57.05193725 +0000 UTC m=+0.155527808 container start a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 08:48:57 compute-0 podman[377590]: 2025-10-02 08:48:57.05737848 +0000 UTC m=+0.160969008 container attach a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 08:48:57 compute-0 nova_compute[260603]: 2025-10-02 08:48:57.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:48:57 compute-0 hardcore_cray[377608]: {
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:     "0": [
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:         {
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "devices": [
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "/dev/loop3"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             ],
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_name": "ceph_lv0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_size": "21470642176",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "name": "ceph_lv0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "tags": {
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cluster_name": "ceph",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.crush_device_class": "",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.encrypted": "0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osd_id": "0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.type": "block",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.vdo": "0"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             },
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "type": "block",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "vg_name": "ceph_vg0"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:         }
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:     ],
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:     "1": [
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:         {
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "devices": [
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "/dev/loop4"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             ],
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_name": "ceph_lv1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_size": "21470642176",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "name": "ceph_lv1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "tags": {
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cluster_name": "ceph",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.crush_device_class": "",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.encrypted": "0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osd_id": "1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.type": "block",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.vdo": "0"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             },
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "type": "block",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "vg_name": "ceph_vg1"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:         }
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:     ],
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:     "2": [
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:         {
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "devices": [
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "/dev/loop5"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             ],
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_name": "ceph_lv2",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_size": "21470642176",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "name": "ceph_lv2",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "tags": {
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.cluster_name": "ceph",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.crush_device_class": "",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.encrypted": "0",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osd_id": "2",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.type": "block",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:                 "ceph.vdo": "0"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             },
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "type": "block",
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:             "vg_name": "ceph_vg2"
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:         }
Oct 02 08:48:57 compute-0 hardcore_cray[377608]:     ]
Oct 02 08:48:57 compute-0 hardcore_cray[377608]: }
Oct 02 08:48:57 compute-0 systemd[1]: libpod-a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42.scope: Deactivated successfully.
Oct 02 08:48:57 compute-0 podman[377590]: 2025-10-02 08:48:57.882848739 +0000 UTC m=+0.986439307 container died a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07-merged.mount: Deactivated successfully.
Oct 02 08:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:48:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:48:57 compute-0 podman[377590]: 2025-10-02 08:48:57.956497894 +0000 UTC m=+1.060088432 container remove a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 08:48:57 compute-0 systemd[1]: libpod-conmon-a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42.scope: Deactivated successfully.
Oct 02 08:48:57 compute-0 sudo[377487]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:58 compute-0 sudo[377628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:58 compute-0 sudo[377628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:58 compute-0 sudo[377628]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:58 compute-0 sudo[377653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:48:58 compute-0 sudo[377653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:58 compute-0 sudo[377653]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:58 compute-0 sudo[377678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:48:58 compute-0 sudo[377678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:58 compute-0 sudo[377678]: pam_unix(sudo:session): session closed for user root
Oct 02 08:48:58 compute-0 sudo[377703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:48:58 compute-0 sudo[377703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:48:58 compute-0 ceph-mon[74477]: pgmap v2108: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:48:58 compute-0 nova_compute[260603]: 2025-10-02 08:48:58.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:58 compute-0 podman[377768]: 2025-10-02 08:48:58.802493572 +0000 UTC m=+0.057992818 container create 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 08:48:58 compute-0 systemd[1]: Started libpod-conmon-364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484.scope.
Oct 02 08:48:58 compute-0 podman[377768]: 2025-10-02 08:48:58.772423075 +0000 UTC m=+0.027922371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:48:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:48:58 compute-0 podman[377768]: 2025-10-02 08:48:58.897968088 +0000 UTC m=+0.153467334 container init 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 08:48:58 compute-0 podman[377768]: 2025-10-02 08:48:58.910389485 +0000 UTC m=+0.165888701 container start 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:48:58 compute-0 podman[377768]: 2025-10-02 08:48:58.914333758 +0000 UTC m=+0.169833004 container attach 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 08:48:58 compute-0 hungry_torvalds[377784]: 167 167
Oct 02 08:48:58 compute-0 systemd[1]: libpod-364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484.scope: Deactivated successfully.
Oct 02 08:48:58 compute-0 podman[377768]: 2025-10-02 08:48:58.918119296 +0000 UTC m=+0.173618532 container died 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ff3841355b070469f04cfc3bebeb0bcbc90716130470634b99bca7f4bf5c452-merged.mount: Deactivated successfully.
Oct 02 08:48:58 compute-0 podman[377768]: 2025-10-02 08:48:58.965555435 +0000 UTC m=+0.221054691 container remove 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 08:48:58 compute-0 systemd[1]: libpod-conmon-364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484.scope: Deactivated successfully.
Oct 02 08:48:59 compute-0 kernel: tapa488a1b0-77 (unregistering): left promiscuous mode
Oct 02 08:48:59 compute-0 NetworkManager[45129]: <info>  [1759394939.0865] device (tapa488a1b0-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:48:59 compute-0 ovn_controller[152344]: 2025-10-02T08:48:59Z|01193|binding|INFO|Releasing lport a488a1b0-7749-499c-971f-662c8a9ee29b from this chassis (sb_readonly=0)
Oct 02 08:48:59 compute-0 ovn_controller[152344]: 2025-10-02T08:48:59Z|01194|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b down in Southbound
Oct 02 08:48:59 compute-0 ovn_controller[152344]: 2025-10-02T08:48:59Z|01195|binding|INFO|Removing iface tapa488a1b0-77 ovn-installed in OVS
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.101 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:28:cb 10.100.0.12'], port_security=['fa:16:3e:f7:28:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dcf324a5-8e22-40e4-8f75-469fe7a04756', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bfd547f-13c2-4a12-bb9c-fb5dbeb2009f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47273294-f17c-4793-9e46-5229ffbf5fc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a488a1b0-7749-499c-971f-662c8a9ee29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.103 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a488a1b0-7749-499c-971f-662c8a9ee29b in datapath d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 unbound from our chassis
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.105 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.109 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b598bcd2-95f1-495b-8aac-fe743d828448]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.110 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 namespace which is not needed anymore
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct 02 08:48:59 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Consumed 13.311s CPU time.
Oct 02 08:48:59 compute-0 systemd-machined[214636]: Machine qemu-145-instance-00000074 terminated.
Oct 02 08:48:59 compute-0 podman[377819]: 2025-10-02 08:48:59.23006934 +0000 UTC m=+0.051270829 container create 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:48:59 compute-0 systemd[1]: Started libpod-conmon-1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca.scope.
Oct 02 08:48:59 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [NOTICE]   (376780) : haproxy version is 2.8.14-c23fe91
Oct 02 08:48:59 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [NOTICE]   (376780) : path to executable is /usr/sbin/haproxy
Oct 02 08:48:59 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [WARNING]  (376780) : Exiting Master process...
Oct 02 08:48:59 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [WARNING]  (376780) : Exiting Master process...
Oct 02 08:48:59 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [ALERT]    (376780) : Current worker (376782) exited with code 143 (Terminated)
Oct 02 08:48:59 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [WARNING]  (376780) : All workers exited. Exiting... (0)
Oct 02 08:48:59 compute-0 systemd[1]: libpod-b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7.scope: Deactivated successfully.
Oct 02 08:48:59 compute-0 podman[377843]: 2025-10-02 08:48:59.289370968 +0000 UTC m=+0.061214609 container died b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:48:59 compute-0 podman[377819]: 2025-10-02 08:48:59.208884669 +0000 UTC m=+0.030086208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:48:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 podman[377819]: 2025-10-02 08:48:59.332544953 +0000 UTC m=+0.153746492 container init 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7-userdata-shm.mount: Deactivated successfully.
Oct 02 08:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fddb7fd1c24079af1e2d9c723ccd3113c5ea87904fa4f101a52504ba886f8ae2-merged.mount: Deactivated successfully.
Oct 02 08:48:59 compute-0 podman[377819]: 2025-10-02 08:48:59.341475492 +0000 UTC m=+0.162676981 container start 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:48:59 compute-0 podman[377819]: 2025-10-02 08:48:59.348554472 +0000 UTC m=+0.169755971 container attach 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:48:59 compute-0 podman[377843]: 2025-10-02 08:48:59.352696632 +0000 UTC m=+0.124540233 container cleanup b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:48:59 compute-0 systemd[1]: libpod-conmon-b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7.scope: Deactivated successfully.
Oct 02 08:48:59 compute-0 podman[377894]: 2025-10-02 08:48:59.434412648 +0000 UTC m=+0.052070814 container remove b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.439 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[522e2ff1-d730-478e-9ebc-975335a7d88f]: (4, ('Thu Oct  2 08:48:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 (b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7)\nb9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7\nThu Oct  2 08:48:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 (b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7)\nb9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59b82b6e-e40f-4dc5-b306-c2641f06af0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.442 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95e5b25-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 kernel: tapd95e5b25-30: left promiscuous mode
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.471 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff8fd5e-567f-4195-8463-b48d684c133e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.504 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44513d22-08a3-4436-b4f1-541a5ed82d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.506 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7e99d8-b79f-4763-bd4a-fd082486b45b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d51af675-adfd-4f92-baff-0503a3d66289]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579379, 'reachable_time': 41621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377913, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.527 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.528 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[65e38658-fdf7-43bb-bf2e-4169bb8c68f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.791 2 INFO nova.virt.libvirt.driver [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance shutdown successfully.
Oct 02 08:48:59 compute-0 systemd[1]: run-netns-ovnmeta\x2dd95e5b25\x2d314c\x2d4a6b\x2db468\x2d4c3f1e9ba0f6.mount: Deactivated successfully.
Oct 02 08:48:59 compute-0 kernel: tapa488a1b0-77: entered promiscuous mode
Oct 02 08:48:59 compute-0 systemd-udevd[377807]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:48:59 compute-0 NetworkManager[45129]: <info>  [1759394939.8891] manager: (tapa488a1b0-77): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 ovn_controller[152344]: 2025-10-02T08:48:59Z|01196|binding|INFO|Claiming lport a488a1b0-7749-499c-971f-662c8a9ee29b for this chassis.
Oct 02 08:48:59 compute-0 ovn_controller[152344]: 2025-10-02T08:48:59Z|01197|binding|INFO|a488a1b0-7749-499c-971f-662c8a9ee29b: Claiming fa:16:3e:f7:28:cb 10.100.0.12
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.906 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:28:cb 10.100.0.12'], port_security=['fa:16:3e:f7:28:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dcf324a5-8e22-40e4-8f75-469fe7a04756', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bfd547f-13c2-4a12-bb9c-fb5dbeb2009f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47273294-f17c-4793-9e46-5229ffbf5fc3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a488a1b0-7749-499c-971f-662c8a9ee29b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:48:59 compute-0 NetworkManager[45129]: <info>  [1759394939.9130] device (tapa488a1b0-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.915 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a488a1b0-7749-499c-971f-662c8a9ee29b in datapath d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 bound to our chassis
Oct 02 08:48:59 compute-0 NetworkManager[45129]: <info>  [1759394939.9161] device (tapa488a1b0-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.918 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct 02 08:48:59 compute-0 ovn_controller[152344]: 2025-10-02T08:48:59Z|01198|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b ovn-installed in OVS
Oct 02 08:48:59 compute-0 ovn_controller[152344]: 2025-10-02T08:48:59Z|01199|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b up in Southbound
Oct 02 08:48:59 compute-0 nova_compute[260603]: 2025-10-02 08:48:59.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.938 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78381410-e5d4-4edc-9089-b8e4055714e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.939 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd95e5b25-31 in ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.942 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd95e5b25-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.942 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d42c4d6-621c-4a94-b772-f2ee4d427433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.943 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60fd623a-7343-4960-a613-fc7dfcf51554]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:48:59 compute-0 systemd-machined[214636]: New machine qemu-146-instance-00000074.
Oct 02 08:48:59 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000074.
Oct 02 08:48:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.970 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5b437fc2-82ab-4870-b776-305368459c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[04e7e87c-db0c-4fc0-bca0-85874a25568c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.057 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcd384c-4c3f-4978-8568-2be0fdfad640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 NetworkManager[45129]: <info>  [1759394940.0670] manager: (tapd95e5b25-30): new Veth device (/org/freedesktop/NetworkManager/Devices/470)
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.065 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e91748-0fc6-40da-923d-33ee6b9f32d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.128 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f7638d28-a26a-4ffd-9e45-1e83916ae162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.136 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0cda82-1a60-4054-bebb-0134d19e4b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 NetworkManager[45129]: <info>  [1759394940.1771] device (tapd95e5b25-30): carrier: link connected
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.178 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[58b241f8-e791-4893-aedb-c3c2ee936a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.198 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3627bc0f-dbaf-4ce0-9676-52bcd4a3919d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd95e5b25-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:8b:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581877, 'reachable_time': 26454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377971, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.228 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5587b8-e04b-42e2-8962-93f21aa277b5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:8b57'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581877, 'tstamp': 581877}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377975, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.251 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4022229d-5c4b-4aa9-94fe-ea60d2431a33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd95e5b25-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:8b:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581877, 'reachable_time': 26454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377979, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.284 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[291d21ef-7e76-4f4e-b082-e9648a66e32d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.344 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0bb68a2-9e83-4f61-be6f-dfa19c42a9d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95e5b25-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.349 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd95e5b25-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:00 compute-0 nova_compute[260603]: 2025-10-02 08:49:00.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:00 compute-0 NetworkManager[45129]: <info>  [1759394940.3521] manager: (tapd95e5b25-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Oct 02 08:49:00 compute-0 stupefied_easley[377861]: {
Oct 02 08:49:00 compute-0 kernel: tapd95e5b25-30: entered promiscuous mode
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "osd_id": 2,
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "type": "bluestore"
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:     },
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "osd_id": 1,
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "type": "bluestore"
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:     },
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "osd_id": 0,
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:         "type": "bluestore"
Oct 02 08:49:00 compute-0 stupefied_easley[377861]:     }
Oct 02 08:49:00 compute-0 stupefied_easley[377861]: }
Oct 02 08:49:00 compute-0 nova_compute[260603]: 2025-10-02 08:49:00.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.358 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd95e5b25-30, col_values=(('external_ids', {'iface-id': 'b700b6c1-06ca-44eb-af9b-8fe8d11dd204'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:00 compute-0 nova_compute[260603]: 2025-10-02 08:49:00.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:00 compute-0 ovn_controller[152344]: 2025-10-02T08:49:00Z|01200|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct 02 08:49:00 compute-0 nova_compute[260603]: 2025-10-02 08:49:00.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.380 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.380 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18967c0e-c9e0-46ff-93ff-22e701180bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.381 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:49:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.382 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'env', 'PROCESS_TAG=haproxy-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:49:00 compute-0 systemd[1]: libpod-1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca.scope: Deactivated successfully.
Oct 02 08:49:00 compute-0 systemd[1]: libpod-1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca.scope: Consumed 1.026s CPU time.
Oct 02 08:49:00 compute-0 podman[377819]: 2025-10-02 08:49:00.389865729 +0000 UTC m=+1.211067218 container died 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 08:49:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319-merged.mount: Deactivated successfully.
Oct 02 08:49:00 compute-0 podman[377819]: 2025-10-02 08:49:00.440167746 +0000 UTC m=+1.261369225 container remove 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:49:00 compute-0 systemd[1]: libpod-conmon-1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca.scope: Deactivated successfully.
Oct 02 08:49:00 compute-0 sudo[377703]: pam_unix(sudo:session): session closed for user root
Oct 02 08:49:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:49:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:49:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:49:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:49:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4121721e-2cb2-4a64-8548-0b62cadafded does not exist
Oct 02 08:49:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3000fde1-6e69-43ac-a66c-1771d86fe92d does not exist
Oct 02 08:49:00 compute-0 sudo[378009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:49:00 compute-0 sudo[378009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:49:00 compute-0 sudo[378009]: pam_unix(sudo:session): session closed for user root
Oct 02 08:49:00 compute-0 ceph-mon[74477]: pgmap v2109: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 02 08:49:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:49:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:49:00 compute-0 sudo[378034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:49:00 compute-0 sudo[378034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:49:00 compute-0 sudo[378034]: pam_unix(sudo:session): session closed for user root
Oct 02 08:49:00 compute-0 podman[378118]: 2025-10-02 08:49:00.792709775 +0000 UTC m=+0.056449201 container create 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:49:00 compute-0 systemd[1]: Started libpod-conmon-52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d.scope.
Oct 02 08:49:00 compute-0 podman[378118]: 2025-10-02 08:49:00.763536855 +0000 UTC m=+0.027276291 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:49:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7278ca594e124300e4494e2616a3c5112ca3087c80f96b44c8579fb6561b13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:49:00 compute-0 podman[378118]: 2025-10-02 08:49:00.889420999 +0000 UTC m=+0.153160445 container init 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 08:49:00 compute-0 podman[378118]: 2025-10-02 08:49:00.901449504 +0000 UTC m=+0.165188930 container start 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 08:49:00 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [NOTICE]   (378142) : New worker (378144) forked
Oct 02 08:49:00 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [NOTICE]   (378142) : Loading success.
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.298 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for dcf324a5-8e22-40e4-8f75-469fe7a04756 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.299 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394941.2981935, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.299 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Resumed (Lifecycle Event)
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.306 2 INFO nova.virt.libvirt.driver [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance running successfully.
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.307 2 INFO nova.virt.libvirt.driver [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance soft rebooted successfully.
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.307 2 DEBUG nova.compute.manager [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.339 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.344 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.377 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] During sync_power_state the instance has a pending task (reboot_started). Skip.
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.377 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394941.298443, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.378 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Started (Lifecycle Event)
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.388 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.400 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:01 compute-0 nova_compute[260603]: 2025-10-02 08:49:01.404 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:49:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 594 KiB/s wr, 32 op/s
Oct 02 08:49:02 compute-0 ceph-mon[74477]: pgmap v2110: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 594 KiB/s wr, 32 op/s
Oct 02 08:49:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 594 KiB/s wr, 99 op/s
Oct 02 08:49:03 compute-0 nova_compute[260603]: 2025-10-02 08:49:03.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:03.909 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:04 compute-0 nova_compute[260603]: 2025-10-02 08:49:04.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:04 compute-0 nova_compute[260603]: 2025-10-02 08:49:04.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:04 compute-0 ceph-mon[74477]: pgmap v2111: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 594 KiB/s wr, 99 op/s
Oct 02 08:49:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 70 op/s
Oct 02 08:49:05 compute-0 nova_compute[260603]: 2025-10-02 08:49:05.534 2 DEBUG nova.compute.manager [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:05 compute-0 nova_compute[260603]: 2025-10-02 08:49:05.534 2 DEBUG oslo_concurrency.lockutils [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:05 compute-0 nova_compute[260603]: 2025-10-02 08:49:05.535 2 DEBUG oslo_concurrency.lockutils [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:05 compute-0 nova_compute[260603]: 2025-10-02 08:49:05.535 2 DEBUG oslo_concurrency.lockutils [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:05 compute-0 nova_compute[260603]: 2025-10-02 08:49:05.535 2 DEBUG nova.compute.manager [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:05 compute-0 nova_compute[260603]: 2025-10-02 08:49:05.536 2 WARNING nova.compute.manager [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.
Oct 02 08:49:06 compute-0 ceph-mon[74477]: pgmap v2112: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 70 op/s
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.693 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.693 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.694 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.694 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.695 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.695 2 WARNING nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.695 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.696 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.696 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.696 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.697 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.697 2 WARNING nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.697 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.698 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.698 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.699 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.699 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:07 compute-0 nova_compute[260603]: 2025-10-02 08:49:07.700 2 WARNING nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.
Oct 02 08:49:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:08 compute-0 nova_compute[260603]: 2025-10-02 08:49:08.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:08 compute-0 ceph-mon[74477]: pgmap v2113: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Oct 02 08:49:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 71 op/s
Oct 02 08:49:09 compute-0 nova_compute[260603]: 2025-10-02 08:49:09.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:10 compute-0 ceph-mon[74477]: pgmap v2114: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 71 op/s
Oct 02 08:49:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 69 op/s
Oct 02 08:49:12 compute-0 ceph-mon[74477]: pgmap v2115: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 69 op/s
Oct 02 08:49:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:13 compute-0 podman[378154]: 2025-10-02 08:49:13.03542401 +0000 UTC m=+0.068159216 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 02 08:49:13 compute-0 podman[378153]: 2025-10-02 08:49:13.057511689 +0000 UTC m=+0.096833890 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 08:49:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 111 op/s
Oct 02 08:49:13 compute-0 ovn_controller[152344]: 2025-10-02T08:49:13Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:28:cb 10.100.0.12
Oct 02 08:49:13 compute-0 nova_compute[260603]: 2025-10-02 08:49:13.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:14 compute-0 nova_compute[260603]: 2025-10-02 08:49:14.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:14 compute-0 ceph-mon[74477]: pgmap v2116: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 111 op/s
Oct 02 08:49:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 12 KiB/s wr, 45 op/s
Oct 02 08:49:16 compute-0 ceph-mon[74477]: pgmap v2117: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 12 KiB/s wr, 45 op/s
Oct 02 08:49:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 12 KiB/s wr, 46 op/s
Oct 02 08:49:17 compute-0 nova_compute[260603]: 2025-10-02 08:49:17.804 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:17 compute-0 nova_compute[260603]: 2025-10-02 08:49:17.804 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:17 compute-0 nova_compute[260603]: 2025-10-02 08:49:17.828 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:49:17 compute-0 nova_compute[260603]: 2025-10-02 08:49:17.908 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:17 compute-0 nova_compute[260603]: 2025-10-02 08:49:17.909 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:17 compute-0 nova_compute[260603]: 2025-10-02 08:49:17.921 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:49:17 compute-0 nova_compute[260603]: 2025-10-02 08:49:17.921 2 INFO nova.compute.claims [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:49:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.040 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:49:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575857951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.489 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.498 2 DEBUG nova.compute.provider_tree [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.515 2 DEBUG nova.scheduler.client.report [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.538 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.539 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.594 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.595 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.611 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.626 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.639 2 INFO nova.compute.manager [None req-9e0f94d9-bdca-4ea0-944b-96670c2a4bdc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Get console output
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.646 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.701 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.702 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.702 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Creating image(s)
Oct 02 08:49:18 compute-0 ceph-mon[74477]: pgmap v2118: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 12 KiB/s wr, 46 op/s
Oct 02 08:49:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3575857951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.730 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.757 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.785 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.789 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.830 2 DEBUG nova.policy [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba06f614b3ef4ffe8b1a2d1b848554a9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fe0d9bd380934b368dfa78fea95edb29', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.875 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.876 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.876 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.876 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.896 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:18 compute-0 nova_compute[260603]: 2025-10-02 08:49:18.899 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.203 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.256 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] resizing rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.333 2 DEBUG nova.objects.instance [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d39bd73-ba92-45a5-9d69-f1f1901aa895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.350 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.351 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Ensure instance console log exists: /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.351 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.352 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.352 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 44 op/s
Oct 02 08:49:19 compute-0 nova_compute[260603]: 2025-10-02 08:49:19.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.043 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.043 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.044 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.045 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.045 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.047 2 INFO nova.compute.manager [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Terminating instance
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.049 2 DEBUG nova.compute.manager [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:49:20 compute-0 kernel: tapa488a1b0-77 (unregistering): left promiscuous mode
Oct 02 08:49:20 compute-0 NetworkManager[45129]: <info>  [1759394960.1108] device (tapa488a1b0-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.123 2 DEBUG nova.compute.manager [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.124 2 DEBUG nova.compute.manager [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing instance network info cache due to event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.125 2 DEBUG oslo_concurrency.lockutils [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:49:20 compute-0 ovn_controller[152344]: 2025-10-02T08:49:20Z|01201|binding|INFO|Releasing lport a488a1b0-7749-499c-971f-662c8a9ee29b from this chassis (sb_readonly=0)
Oct 02 08:49:20 compute-0 ovn_controller[152344]: 2025-10-02T08:49:20Z|01202|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b down in Southbound
Oct 02 08:49:20 compute-0 ovn_controller[152344]: 2025-10-02T08:49:20Z|01203|binding|INFO|Removing iface tapa488a1b0-77 ovn-installed in OVS
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.126 2 DEBUG oslo_concurrency.lockutils [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.127 2 DEBUG nova.network.neutron [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.137 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:28:cb 10.100.0.12'], port_security=['fa:16:3e:f7:28:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dcf324a5-8e22-40e4-8f75-469fe7a04756', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0bfd547f-13c2-4a12-bb9c-fb5dbeb2009f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47273294-f17c-4793-9e46-5229ffbf5fc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a488a1b0-7749-499c-971f-662c8a9ee29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.140 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a488a1b0-7749-499c-971f-662c8a9ee29b in datapath d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 unbound from our chassis
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.143 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.146 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[689cf18c-dd50-4f64-8872-07821d5e2707]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.147 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 namespace which is not needed anymore
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct 02 08:49:20 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000074.scope: Consumed 12.713s CPU time.
Oct 02 08:49:20 compute-0 systemd-machined[214636]: Machine qemu-146-instance-00000074 terminated.
Oct 02 08:49:20 compute-0 podman[378385]: 2025-10-02 08:49:20.248352647 +0000 UTC m=+0.102267739 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:49:20 compute-0 podman[378389]: 2025-10-02 08:49:20.249368948 +0000 UTC m=+0.096916331 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.291 2 INFO nova.virt.libvirt.driver [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance destroyed successfully.
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.292 2 DEBUG nova.objects.instance [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid dcf324a5-8e22-40e4-8f75-469fe7a04756 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.307 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Successfully created port: 8add63d4-81e0-4c57-a383-69310529c53b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.315 2 DEBUG nova.virt.libvirt.vif [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:48:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-457680791',display_name='tempest-TestNetworkAdvancedServerOps-server-457680791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-457680791',id=116,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWb168hMylok2hbHIreuvgrfJq9EYhmSfhH2YBEWlnwrl2sEeWEt1hD3O/DPiepMH4x8+byvmbAISpkoWCoZjQ5z/Keocqhs6SeEf4SxPYxes1ihT4KVXi2eUj3jZBOrQ==',key_name='tempest-TestNetworkAdvancedServerOps-74461562',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:48:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-mqv16bk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:49:01Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=dcf324a5-8e22-40e4-8f75-469fe7a04756,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.316 2 DEBUG nova.network.os_vif_util [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.317 2 DEBUG nova.network.os_vif_util [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.318 2 DEBUG os_vif [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa488a1b0-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.327 2 INFO os_vif [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77')
Oct 02 08:49:20 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [NOTICE]   (378142) : haproxy version is 2.8.14-c23fe91
Oct 02 08:49:20 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [NOTICE]   (378142) : path to executable is /usr/sbin/haproxy
Oct 02 08:49:20 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [WARNING]  (378142) : Exiting Master process...
Oct 02 08:49:20 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [ALERT]    (378142) : Current worker (378144) exited with code 143 (Terminated)
Oct 02 08:49:20 compute-0 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [WARNING]  (378142) : All workers exited. Exiting... (0)
Oct 02 08:49:20 compute-0 systemd[1]: libpod-52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d.scope: Deactivated successfully.
Oct 02 08:49:20 compute-0 podman[378446]: 2025-10-02 08:49:20.36169482 +0000 UTC m=+0.079218310 container died 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d-userdata-shm.mount: Deactivated successfully.
Oct 02 08:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a7278ca594e124300e4494e2616a3c5112ca3087c80f96b44c8579fb6561b13-merged.mount: Deactivated successfully.
Oct 02 08:49:20 compute-0 podman[378446]: 2025-10-02 08:49:20.437476482 +0000 UTC m=+0.154999982 container cleanup 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:49:20 compute-0 systemd[1]: libpod-conmon-52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d.scope: Deactivated successfully.
Oct 02 08:49:20 compute-0 podman[378500]: 2025-10-02 08:49:20.544673863 +0000 UTC m=+0.070984714 container remove 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.553 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[27c3f038-50c2-4684-b8e2-1c2b1236c2e4]: (4, ('Thu Oct  2 08:49:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 (52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d)\n52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d\nThu Oct  2 08:49:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 (52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d)\n52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.556 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[68dc94ca-185f-4b4a-8848-095ef33c6f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.557 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95e5b25-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:20 compute-0 kernel: tapd95e5b25-30: left promiscuous mode
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.579 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10702663-6a96-4f17-aa29-95c5323872ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.604 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c9a666-a4ca-487a-b665-bdb0d49e14cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.606 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6eaf865-b026-4e8f-bee2-dd7c1d819aaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.628 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e79c700e-baf1-40d1-999e-77de93e591f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581865, 'reachable_time': 41659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378516, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.631 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:49:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.631 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b80a77fd-16c6-489e-bf67-c35f7f315204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:20 compute-0 systemd[1]: run-netns-ovnmeta\x2dd95e5b25\x2d314c\x2d4a6b\x2db468\x2d4c3f1e9ba0f6.mount: Deactivated successfully.
Oct 02 08:49:20 compute-0 ceph-mon[74477]: pgmap v2119: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 44 op/s
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.770 2 INFO nova.virt.libvirt.driver [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Deleting instance files /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756_del
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.771 2 INFO nova.virt.libvirt.driver [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Deletion of /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756_del complete
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.975 2 INFO nova.compute.manager [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.976 2 DEBUG oslo.service.loopingcall [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.976 2 DEBUG nova.compute.manager [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:49:20 compute-0 nova_compute[260603]: 2025-10-02 08:49:20.977 2 DEBUG nova.network.neutron [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.297 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Successfully updated port: 8add63d4-81e0-4c57-a383-69310529c53b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.314 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.314 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquired lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.314 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.424 2 DEBUG nova.compute.manager [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-changed-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.425 2 DEBUG nova.compute.manager [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Refreshing instance network info cache due to event network-changed-8add63d4-81e0-4c57-a383-69310529c53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.425 2 DEBUG oslo_concurrency.lockutils [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:49:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 44 op/s
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.505 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.622 2 DEBUG nova.network.neutron [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updated VIF entry in instance network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.623 2 DEBUG nova.network.neutron [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.643 2 DEBUG oslo_concurrency.lockutils [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.741 2 DEBUG nova.network.neutron [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.760 2 INFO nova.compute.manager [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Took 0.78 seconds to deallocate network for instance.
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.807 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.808 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:21 compute-0 nova_compute[260603]: 2025-10-02 08:49:21.871 2 DEBUG oslo_concurrency.processutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:49:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2480251863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:49:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:49:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2480251863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.212 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.212 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.213 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.213 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.213 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.213 2 WARNING nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state deleted and task_state None.
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.214 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.214 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.214 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.214 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.215 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.215 2 WARNING nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state deleted and task_state None.
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.215 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-deleted-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:49:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380823278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.432 2 DEBUG oslo_concurrency.processutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.437 2 DEBUG nova.compute.provider_tree [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.455 2 DEBUG nova.scheduler.client.report [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.484 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.512 2 INFO nova.scheduler.client.report [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance dcf324a5-8e22-40e4-8f75-469fe7a04756
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.568 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:22 compute-0 ceph-mon[74477]: pgmap v2120: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 44 op/s
Oct 02 08:49:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2480251863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:49:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2480251863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:49:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1380823278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.908 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.926 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Releasing lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.927 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance network_info: |[{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.927 2 DEBUG oslo_concurrency.lockutils [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.928 2 DEBUG nova.network.neutron [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Refreshing network info cache for port 8add63d4-81e0-4c57-a383-69310529c53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.933 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start _get_guest_xml network_info=[{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.939 2 WARNING nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:49:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.951 2 DEBUG nova.virt.libvirt.host [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.953 2 DEBUG nova.virt.libvirt.host [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.959 2 DEBUG nova.virt.libvirt.host [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.959 2 DEBUG nova.virt.libvirt.host [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.960 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.960 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.961 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.962 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.962 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.962 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.963 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.963 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.964 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.964 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.964 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.965 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:49:22 compute-0 nova_compute[260603]: 2025-10-02 08:49:22.970 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:49:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958514284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:49:23 compute-0 nova_compute[260603]: 2025-10-02 08:49:23.444 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:23 compute-0 nova_compute[260603]: 2025-10-02 08:49:23.473 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:23 compute-0 nova_compute[260603]: 2025-10-02 08:49:23.477 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 88 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:49:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2958514284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:49:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:49:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1686313764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.001 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.003 2 DEBUG nova.virt.libvirt.vif [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1419201510',display_name='tempest-TestServerBasicOps-server-1419201510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1419201510',id=117,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxIiQkCuUb4hsjuixWL10UGBkKUMnwTZeKLoBPdzp7C/XvwvYOMF2Ky32VWHt/hKjva7kq8MtJ3oMOLi7ZAeECvKOs+ESXsUFj3RXVpjLuEPsjWq3UA/7zA5pWt+U9M2g==',key_name='tempest-TestServerBasicOps-36899864',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe0d9bd380934b368dfa78fea95edb29',ramdisk_id='',reservation_id='r-xw0wp2kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1506272321',owner_user_name='tempest-TestServerBasicOps-1506272321-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:49:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba06f614b3ef4ffe8b1a2d1b848554a9',uuid=5d39bd73-ba92-45a5-9d69-f1f1901aa895,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.003 2 DEBUG nova.network.os_vif_util [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converting VIF {"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.005 2 DEBUG nova.network.os_vif_util [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.006 2 DEBUG nova.objects.instance [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d39bd73-ba92-45a5-9d69-f1f1901aa895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.027 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <uuid>5d39bd73-ba92-45a5-9d69-f1f1901aa895</uuid>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <name>instance-00000075</name>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <nova:name>tempest-TestServerBasicOps-server-1419201510</nova:name>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:49:22</nova:creationTime>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <nova:user uuid="ba06f614b3ef4ffe8b1a2d1b848554a9">tempest-TestServerBasicOps-1506272321-project-member</nova:user>
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <nova:project uuid="fe0d9bd380934b368dfa78fea95edb29">tempest-TestServerBasicOps-1506272321</nova:project>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <nova:port uuid="8add63d4-81e0-4c57-a383-69310529c53b">
Oct 02 08:49:24 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <system>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <entry name="serial">5d39bd73-ba92-45a5-9d69-f1f1901aa895</entry>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <entry name="uuid">5d39bd73-ba92-45a5-9d69-f1f1901aa895</entry>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </system>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <os>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   </os>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <features>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   </features>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk">
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       </source>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config">
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       </source>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:49:24 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:c1:8f:fc"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <target dev="tap8add63d4-81"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/console.log" append="off"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <video>
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </video>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:49:24 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:49:24 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:49:24 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:49:24 compute-0 nova_compute[260603]: </domain>
Oct 02 08:49:24 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.028 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Preparing to wait for external event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.028 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.029 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.029 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.030 2 DEBUG nova.virt.libvirt.vif [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1419201510',display_name='tempest-TestServerBasicOps-server-1419201510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1419201510',id=117,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxIiQkCuUb4hsjuixWL10UGBkKUMnwTZeKLoBPdzp7C/XvwvYOMF2Ky32VWHt/hKjva7kq8MtJ3oMOLi7ZAeECvKOs+ESXsUFj3RXVpjLuEPsjWq3UA/7zA5pWt+U9M2g==',key_name='tempest-TestServerBasicOps-36899864',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe0d9bd380934b368dfa78fea95edb29',ramdisk_id='',reservation_id='r-xw0wp2kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1506272321',owner_user_name='tempest-TestServerBasicOps-1506272321-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:49:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba06f614b3ef4ffe8b1a2d1b848554a9',uuid=5d39bd73-ba92-45a5-9d69-f1f1901aa895,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.030 2 DEBUG nova.network.os_vif_util [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converting VIF {"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.031 2 DEBUG nova.network.os_vif_util [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.031 2 DEBUG os_vif [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8add63d4-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8add63d4-81, col_values=(('external_ids', {'iface-id': '8add63d4-81e0-4c57-a383-69310529c53b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:8f:fc', 'vm-uuid': '5d39bd73-ba92-45a5-9d69-f1f1901aa895'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:24 compute-0 NetworkManager[45129]: <info>  [1759394964.0390] manager: (tap8add63d4-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.045 2 INFO os_vif [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81')
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.112 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.112 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.112 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] No VIF found with MAC fa:16:3e:c1:8f:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.113 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Using config drive
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.137 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.527 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Creating config drive at /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.540 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp6hsavpa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.697 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp6hsavpa" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.738 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.742 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:24 compute-0 ceph-mon[74477]: pgmap v2121: 305 pgs: 305 active+clean; 88 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:49:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1686313764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.946 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:24 compute-0 nova_compute[260603]: 2025-10-02 08:49:24.947 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Deleting local config drive /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config because it was imported into RBD.
Oct 02 08:49:25 compute-0 kernel: tap8add63d4-81: entered promiscuous mode
Oct 02 08:49:25 compute-0 NetworkManager[45129]: <info>  [1759394965.0019] manager: (tap8add63d4-81): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Oct 02 08:49:25 compute-0 systemd-udevd[378671]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:49:25 compute-0 ovn_controller[152344]: 2025-10-02T08:49:25Z|01204|binding|INFO|Claiming lport 8add63d4-81e0-4c57-a383-69310529c53b for this chassis.
Oct 02 08:49:25 compute-0 ovn_controller[152344]: 2025-10-02T08:49:25Z|01205|binding|INFO|8add63d4-81e0-4c57-a383-69310529c53b: Claiming fa:16:3e:c1:8f:fc 10.100.0.11
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.046 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:8f:fc 10.100.0.11'], port_security=['fa:16:3e:c1:8f:fc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5d39bd73-ba92-45a5-9d69-f1f1901aa895', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe0d9bd380934b368dfa78fea95edb29', 'neutron:revision_number': '2', 'neutron:security_group_ids': '685b2755-a769-4920-a3c6-7a8b24097a10 ece867f3-4b20-4c54-9ee4-8b5852490079', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86495746-eacb-4aba-a3c0-a2e0aa4fb357, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8add63d4-81e0-4c57-a383-69310529c53b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:49:25 compute-0 NetworkManager[45129]: <info>  [1759394965.0499] device (tap8add63d4-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:49:25 compute-0 NetworkManager[45129]: <info>  [1759394965.0511] device (tap8add63d4-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.049 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8add63d4-81e0-4c57-a383-69310529c53b in datapath 86d7b4c5-3981-43ff-aa9f-1933a61f6bec bound to our chassis
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.051 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 86d7b4c5-3981-43ff-aa9f-1933a61f6bec
Oct 02 08:49:25 compute-0 ovn_controller[152344]: 2025-10-02T08:49:25Z|01206|binding|INFO|Setting lport 8add63d4-81e0-4c57-a383-69310529c53b ovn-installed in OVS
Oct 02 08:49:25 compute-0 ovn_controller[152344]: 2025-10-02T08:49:25Z|01207|binding|INFO|Setting lport 8add63d4-81e0-4c57-a383-69310529c53b up in Southbound
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:25 compute-0 systemd-machined[214636]: New machine qemu-147-instance-00000075.
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.069 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c42dae59-00df-4f02-bc0a-aaf355ef6fb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.070 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap86d7b4c5-31 in ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.072 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap86d7b4c5-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.072 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cad07def-168d-4b57-8f21-1e9a83007c36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.074 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0ef425-9da7-4f3c-a8f8-1066ee72944e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000075.
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.087 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[aa347af6-be91-4a06-b618-da841f37fe81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.111 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[73338777-9583-4860-9606-9525d97164b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.153 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[25e561ff-6a78-43cf-9939-71dd8fd3fffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[baa997c8-76e3-4985-822f-47474983ee1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 NetworkManager[45129]: <info>  [1759394965.1599] manager: (tap86d7b4c5-30): new Veth device (/org/freedesktop/NetworkManager/Devices/474)
Oct 02 08:49:25 compute-0 systemd-udevd[378675]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.200 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e01b61-8a1b-4cb0-9419-158c5ccf201c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.203 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[677b4c58-1337-4d52-98f5-2818ebe5a55f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 NetworkManager[45129]: <info>  [1759394965.2278] device (tap86d7b4c5-30): carrier: link connected
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.238 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a3563eee-b264-433d-bfed-22bd71bb2cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[daf4e84c-e429-4f90-bafe-1c81f121d92b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86d7b4c5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:68:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584383, 'reachable_time': 17605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378707, 'error': None, 'target': 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.276 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc5f05d-f360-4e8e-af83-e19d1306f0cc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:68b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584383, 'tstamp': 584383}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378708, 'error': None, 'target': 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.299 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[64d8f2d0-18e5-429e-8b86-62d69a1c49f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86d7b4c5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:68:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584383, 'reachable_time': 17605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378709, 'error': None, 'target': 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7507d5-6374-488d-a2bc-0f7765db5bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[650fc49f-bf8c-4780-a932-38ad23af1075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.401 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86d7b4c5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.401 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.402 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86d7b4c5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:25 compute-0 NetworkManager[45129]: <info>  [1759394965.4036] manager: (tap86d7b4c5-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Oct 02 08:49:25 compute-0 kernel: tap86d7b4c5-30: entered promiscuous mode
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.406 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap86d7b4c5-30, col_values=(('external_ids', {'iface-id': 'cff74aee-8cf5-4a00-8641-877b88ad7399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:25 compute-0 ovn_controller[152344]: 2025-10-02T08:49:25Z|01208|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.421 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/86d7b4c5-3981-43ff-aa9f-1933a61f6bec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/86d7b4c5-3981-43ff-aa9f-1933a61f6bec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.422 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfd5f37-f6de-4b64-bcdf-ad6cb8684b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.422 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-86d7b4c5-3981-43ff-aa9f-1933a61f6bec
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/86d7b4c5-3981-43ff-aa9f-1933a61f6bec.pid.haproxy
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 86d7b4c5-3981-43ff-aa9f-1933a61f6bec
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:49:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.423 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'env', 'PROCESS_TAG=haproxy-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/86d7b4c5-3981-43ff-aa9f-1933a61f6bec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:49:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 88 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.718 2 DEBUG nova.network.neutron [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updated VIF entry in instance network info cache for port 8add63d4-81e0-4c57-a383-69310529c53b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.719 2 DEBUG nova.network.neutron [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.739 2 DEBUG oslo_concurrency.lockutils [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:49:25 compute-0 podman[378783]: 2025-10-02 08:49:25.741275344 +0000 UTC m=+0.020334205 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.861 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394965.86115, 5d39bd73-ba92-45a5-9d69-f1f1901aa895 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.862 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] VM Started (Lifecycle Event)
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.883 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.887 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394965.8613257, 5d39bd73-ba92-45a5-9d69-f1f1901aa895 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.887 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] VM Paused (Lifecycle Event)
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.905 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.908 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:49:25 compute-0 nova_compute[260603]: 2025-10-02 08:49:25.944 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:49:25 compute-0 podman[378783]: 2025-10-02 08:49:25.997985235 +0000 UTC m=+0.277044116 container create f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:49:26 compute-0 systemd[1]: Started libpod-conmon-f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff.scope.
Oct 02 08:49:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b46779ccf78646eb1fa1b179b99646c119a819bb4ab4cf83b813387cadabddf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:49:26 compute-0 podman[378783]: 2025-10-02 08:49:26.434621944 +0000 UTC m=+0.713680835 container init f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:49:26 compute-0 podman[378783]: 2025-10-02 08:49:26.447443304 +0000 UTC m=+0.726502195 container start f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:49:26 compute-0 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [NOTICE]   (378802) : New worker (378804) forked
Oct 02 08:49:26 compute-0 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [NOTICE]   (378802) : Loading success.
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.500 2 DEBUG nova.compute.manager [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.500 2 DEBUG oslo_concurrency.lockutils [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.501 2 DEBUG oslo_concurrency.lockutils [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.501 2 DEBUG oslo_concurrency.lockutils [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.502 2 DEBUG nova.compute.manager [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Processing event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.503 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.509 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394966.5093637, 5d39bd73-ba92-45a5-9d69-f1f1901aa895 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.510 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] VM Resumed (Lifecycle Event)
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.512 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:49:26 compute-0 ovn_controller[152344]: 2025-10-02T08:49:26Z|01209|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.516 2 INFO nova.virt.libvirt.driver [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance spawned successfully.
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.516 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.537 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.544 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.560 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.561 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.561 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.561 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.562 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.562 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.570 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.649 2 INFO nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Took 7.95 seconds to spawn the instance on the hypervisor.
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.650 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:26 compute-0 ovn_controller[152344]: 2025-10-02T08:49:26Z|01210|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.730 2 INFO nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Took 8.86 seconds to build instance.
Oct 02 08:49:26 compute-0 nova_compute[260603]: 2025-10-02 08:49:26.753 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:26 compute-0 ceph-mon[74477]: pgmap v2122: 305 pgs: 305 active+clean; 88 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 02 08:49:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 88 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 08:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:49:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:49:28
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.control', 'backups', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:49:28 compute-0 nova_compute[260603]: 2025-10-02 08:49:28.638 2 DEBUG nova.compute.manager [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:28 compute-0 nova_compute[260603]: 2025-10-02 08:49:28.638 2 DEBUG oslo_concurrency.lockutils [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:28 compute-0 nova_compute[260603]: 2025-10-02 08:49:28.639 2 DEBUG oslo_concurrency.lockutils [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:28 compute-0 nova_compute[260603]: 2025-10-02 08:49:28.639 2 DEBUG oslo_concurrency.lockutils [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:28 compute-0 nova_compute[260603]: 2025-10-02 08:49:28.639 2 DEBUG nova.compute.manager [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] No waiting events found dispatching network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:28 compute-0 nova_compute[260603]: 2025-10-02 08:49:28.640 2 WARNING nova.compute.manager [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received unexpected event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b for instance with vm_state active and task_state None.
Oct 02 08:49:28 compute-0 NetworkManager[45129]: <info>  [1759394968.9183] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Oct 02 08:49:28 compute-0 NetworkManager[45129]: <info>  [1759394968.9193] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Oct 02 08:49:28 compute-0 nova_compute[260603]: 2025-10-02 08:49:28.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:28 compute-0 ceph-mon[74477]: pgmap v2123: 305 pgs: 305 active+clean; 88 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 08:49:29 compute-0 ovn_controller[152344]: 2025-10-02T08:49:29Z|01211|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct 02 08:49:29 compute-0 nova_compute[260603]: 2025-10-02 08:49:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:29 compute-0 ovn_controller[152344]: 2025-10-02T08:49:29Z|01212|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct 02 08:49:29 compute-0 nova_compute[260603]: 2025-10-02 08:49:29.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct 02 08:49:29 compute-0 nova_compute[260603]: 2025-10-02 08:49:29.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:29 compute-0 nova_compute[260603]: 2025-10-02 08:49:29.723 2 DEBUG nova.compute.manager [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-changed-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:29 compute-0 nova_compute[260603]: 2025-10-02 08:49:29.724 2 DEBUG nova.compute.manager [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Refreshing instance network info cache due to event network-changed-8add63d4-81e0-4c57-a383-69310529c53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:49:29 compute-0 nova_compute[260603]: 2025-10-02 08:49:29.724 2 DEBUG oslo_concurrency.lockutils [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:49:29 compute-0 nova_compute[260603]: 2025-10-02 08:49:29.724 2 DEBUG oslo_concurrency.lockutils [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:49:29 compute-0 nova_compute[260603]: 2025-10-02 08:49:29.725 2 DEBUG nova.network.neutron [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Refreshing network info cache for port 8add63d4-81e0-4c57-a383-69310529c53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:49:30 compute-0 ceph-mon[74477]: pgmap v2124: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct 02 08:49:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.947567) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394971947600, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1915, "num_deletes": 254, "total_data_size": 3044248, "memory_usage": 3091120, "flush_reason": "Manual Compaction"}
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394971961170, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2980023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43148, "largest_seqno": 45062, "table_properties": {"data_size": 2971193, "index_size": 5452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18353, "raw_average_key_size": 20, "raw_value_size": 2953498, "raw_average_value_size": 3288, "num_data_blocks": 242, "num_entries": 898, "num_filter_entries": 898, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394783, "oldest_key_time": 1759394783, "file_creation_time": 1759394971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 13639 microseconds, and 7331 cpu microseconds.
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.961207) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2980023 bytes OK
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.961225) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.963033) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.963080) EVENT_LOG_v1 {"time_micros": 1759394971963073, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.963100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 3036061, prev total WAL file size 3036061, number of live WAL files 2.
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.964241) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2910KB)], [101(7260KB)]
Oct 02 08:49:31 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394971964342, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10415131, "oldest_snapshot_seqno": -1}
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6568 keys, 8699750 bytes, temperature: kUnknown
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394972036562, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8699750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8656166, "index_size": 26064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 170820, "raw_average_key_size": 26, "raw_value_size": 8538892, "raw_average_value_size": 1300, "num_data_blocks": 1019, "num_entries": 6568, "num_filter_entries": 6568, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.036973) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8699750 bytes
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.038184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.9 rd, 120.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 7091, records dropped: 523 output_compression: NoCompression
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.038206) EVENT_LOG_v1 {"time_micros": 1759394972038196, "job": 60, "event": "compaction_finished", "compaction_time_micros": 72372, "compaction_time_cpu_micros": 41589, "output_level": 6, "num_output_files": 1, "total_output_size": 8699750, "num_input_records": 7091, "num_output_records": 6568, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394972038946, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394972040630, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.964025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:49:32 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:49:32 compute-0 nova_compute[260603]: 2025-10-02 08:49:32.059 2 DEBUG nova.network.neutron [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updated VIF entry in instance network info cache for port 8add63d4-81e0-4c57-a383-69310529c53b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:49:32 compute-0 nova_compute[260603]: 2025-10-02 08:49:32.059 2 DEBUG nova.network.neutron [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:32 compute-0 nova_compute[260603]: 2025-10-02 08:49:32.081 2 DEBUG oslo_concurrency.lockutils [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:49:32 compute-0 ceph-mon[74477]: pgmap v2125: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct 02 08:49:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:32 compute-0 nova_compute[260603]: 2025-10-02 08:49:32.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Oct 02 08:49:34 compute-0 nova_compute[260603]: 2025-10-02 08:49:34.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:34 compute-0 nova_compute[260603]: 2025-10-02 08:49:34.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:34.832 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:34.833 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:35 compute-0 ceph-mon[74477]: pgmap v2126: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Oct 02 08:49:35 compute-0 nova_compute[260603]: 2025-10-02 08:49:35.290 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394960.2894125, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:35 compute-0 nova_compute[260603]: 2025-10-02 08:49:35.291 2 INFO nova.compute.manager [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Stopped (Lifecycle Event)
Oct 02 08:49:35 compute-0 nova_compute[260603]: 2025-10-02 08:49:35.319 2 DEBUG nova.compute.manager [None req-3b454f31-9788-4b82-a393-7c16e2a80b01 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 146 op/s
Oct 02 08:49:36 compute-0 nova_compute[260603]: 2025-10-02 08:49:36.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:37 compute-0 ceph-mon[74477]: pgmap v2127: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 146 op/s
Oct 02 08:49:37 compute-0 nova_compute[260603]: 2025-10-02 08:49:37.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 147 op/s
Oct 02 08:49:37 compute-0 nova_compute[260603]: 2025-10-02 08:49:37.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:38 compute-0 ovn_controller[152344]: 2025-10-02T08:49:38Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:8f:fc 10.100.0.11
Oct 02 08:49:38 compute-0 ovn_controller[152344]: 2025-10-02T08:49:38Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:8f:fc 10.100.0.11
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:49:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:49:39 compute-0 ceph-mon[74477]: pgmap v2128: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 147 op/s
Oct 02 08:49:39 compute-0 nova_compute[260603]: 2025-10-02 08:49:39.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 102 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 154 op/s
Oct 02 08:49:39 compute-0 nova_compute[260603]: 2025-10-02 08:49:39.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:39 compute-0 nova_compute[260603]: 2025-10-02 08:49:39.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:49:39 compute-0 nova_compute[260603]: 2025-10-02 08:49:39.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:49:39 compute-0 nova_compute[260603]: 2025-10-02 08:49:39.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:40 compute-0 nova_compute[260603]: 2025-10-02 08:49:40.435 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:49:40 compute-0 nova_compute[260603]: 2025-10-02 08:49:40.436 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:49:40 compute-0 nova_compute[260603]: 2025-10-02 08:49:40.436 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:49:40 compute-0 nova_compute[260603]: 2025-10-02 08:49:40.437 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d39bd73-ba92-45a5-9d69-f1f1901aa895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:49:41 compute-0 ceph-mon[74477]: pgmap v2129: 305 pgs: 305 active+clean; 102 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 154 op/s
Oct 02 08:49:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 102 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 112 op/s
Oct 02 08:49:42 compute-0 nova_compute[260603]: 2025-10-02 08:49:42.940 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:42 compute-0 nova_compute[260603]: 2025-10-02 08:49:42.940 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:42 compute-0 nova_compute[260603]: 2025-10-02 08:49:42.957 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:49:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.029 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.029 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.040 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.040 2 INFO nova.compute.claims [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.048 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:43 compute-0 ceph-mon[74477]: pgmap v2130: 305 pgs: 305 active+clean; 102 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 112 op/s
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.065 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.065 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.066 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.066 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.066 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.090 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.135 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.153 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.153 2 DEBUG nova.compute.provider_tree [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.169 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.205 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.263 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 02 08:49:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:49:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688360750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.733 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.742 2 DEBUG nova.compute.provider_tree [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.758 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.780 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.782 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.789 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.790 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.790 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.791 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.882 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.882 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.901 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:49:43 compute-0 nova_compute[260603]: 2025-10-02 08:49:43.919 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.015 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.018 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.019 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating image(s)
Oct 02 08:49:44 compute-0 podman[378842]: 2025-10-02 08:49:44.028985913 +0000 UTC m=+0.083905466 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 08:49:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3688360750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:44 compute-0 podman[378841]: 2025-10-02 08:49:44.065812371 +0000 UTC m=+0.124198112 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.074 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.098 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.121 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.124 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.171 2 DEBUG nova.policy [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.211 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.212 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.213 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.213 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.238 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.241 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:49:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649381719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.296 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.400 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.400 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.604 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.666 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.757 2 DEBUG nova.objects.instance [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.774 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.774 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Ensure instance console log exists: /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.775 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.775 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.775 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.792 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.793 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3594MB free_disk=59.9428596496582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.793 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.794 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.878 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5d39bd73-ba92-45a5-9d69-f1f1901aa895 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.878 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.878 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.879 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:49:44 compute-0 nova_compute[260603]: 2025-10-02 08:49:44.948 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:45 compute-0 ceph-mon[74477]: pgmap v2131: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 02 08:49:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2649381719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:49:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434462225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:45 compute-0 nova_compute[260603]: 2025-10-02 08:49:45.376 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:45 compute-0 nova_compute[260603]: 2025-10-02 08:49:45.382 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:49:45 compute-0 nova_compute[260603]: 2025-10-02 08:49:45.402 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:49:45 compute-0 nova_compute[260603]: 2025-10-02 08:49:45.431 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:49:45 compute-0 nova_compute[260603]: 2025-10-02 08:49:45.431 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:45 compute-0 nova_compute[260603]: 2025-10-02 08:49:45.498 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Successfully created port: ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:49:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:49:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1434462225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:46.355 162644 DEBUG eventlet.wsgi.server [-] (162644) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:46.357 162644 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: Accept: */*
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: Connection: close
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: Content-Type: text/plain
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: Host: 169.254.169.254
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: User-Agent: curl/7.84.0
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: X-Forwarded-For: 10.100.0.11
Oct 02 08:49:46 compute-0 ovn_metadata_agent[162328]: X-Ovn-Network-Id: 86d7b4c5-3981-43ff-aa9f-1933a61f6bec __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 08:49:46 compute-0 nova_compute[260603]: 2025-10-02 08:49:46.547 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Successfully updated port: ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:49:46 compute-0 nova_compute[260603]: 2025-10-02 08:49:46.569 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:49:46 compute-0 nova_compute[260603]: 2025-10-02 08:49:46.569 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:49:46 compute-0 nova_compute[260603]: 2025-10-02 08:49:46.570 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:49:46 compute-0 nova_compute[260603]: 2025-10-02 08:49:46.676 2 DEBUG nova.compute.manager [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:46 compute-0 nova_compute[260603]: 2025-10-02 08:49:46.677 2 DEBUG nova.compute.manager [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing instance network info cache due to event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:49:46 compute-0 nova_compute[260603]: 2025-10-02 08:49:46.678 2 DEBUG oslo_concurrency.lockutils [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:49:46 compute-0 nova_compute[260603]: 2025-10-02 08:49:46.738 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:49:47 compute-0 ceph-mon[74477]: pgmap v2132: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:49:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 142 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:47.812 162644 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:47.813 162644 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4563913
Oct 02 08:49:47 compute-0 haproxy-metadata-proxy-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378804]: 10.100.0.11:42998 [02/Oct/2025:08:49:46.353] listener listener/metadata 0/0/0/1460/1460 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.844 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.865 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.865 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance network_info: |[{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.866 2 DEBUG oslo_concurrency.lockutils [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.866 2 DEBUG nova.network.neutron [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.874 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start _get_guest_xml network_info=[{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.882 2 WARNING nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.895 2 DEBUG nova.virt.libvirt.host [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.896 2 DEBUG nova.virt.libvirt.host [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.901 2 DEBUG nova.virt.libvirt.host [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.902 2 DEBUG nova.virt.libvirt.host [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.903 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.903 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.904 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.905 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.905 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.906 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.906 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.907 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.908 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.908 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.909 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.909 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:49:47 compute-0 nova_compute[260603]: 2025-10-02 08:49:47.916 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:47.936 162644 DEBUG eventlet.wsgi.server [-] (162644) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:47.938 162644 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: Accept: */*
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: Connection: close
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: Content-Length: 100
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: Content-Type: application/x-www-form-urlencoded
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: Host: 169.254.169.254
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: User-Agent: curl/7.84.0
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: X-Forwarded-For: 10.100.0.11
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: X-Ovn-Network-Id: 86d7b4c5-3981-43ff-aa9f-1933a61f6bec
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:47 compute-0 ovn_metadata_agent[162328]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 08:49:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:48.211 162644 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 08:49:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:48.211 162644 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2735717
Oct 02 08:49:48 compute-0 haproxy-metadata-proxy-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378804]: 10.100.0.11:43000 [02/Oct/2025:08:49:47.935] listener listener/metadata 0/0/0/275/275 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 02 08:49:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:49:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567280329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.389 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.411 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.415 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:49:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507854933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.830 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.833 2 DEBUG nova.virt.libvirt.vif [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:49:43Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.833 2 DEBUG nova.network.os_vif_util [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.835 2 DEBUG nova.network.os_vif_util [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.836 2 DEBUG nova.objects.instance [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.854 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <uuid>9d79c6a7-edec-4a60-af9f-7e1401bb9a64</uuid>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <name>instance-00000076</name>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1109879286</nova:name>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:49:47</nova:creationTime>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <nova:port uuid="ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da">
Oct 02 08:49:48 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <system>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <entry name="serial">9d79c6a7-edec-4a60-af9f-7e1401bb9a64</entry>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <entry name="uuid">9d79c6a7-edec-4a60-af9f-7e1401bb9a64</entry>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </system>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <os>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   </os>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <features>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   </features>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk">
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       </source>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config">
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       </source>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:49:48 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:4a:e6:c0"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <target dev="tapac5b8c6c-8d"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/console.log" append="off"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <video>
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </video>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:49:48 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:49:48 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:49:48 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:49:48 compute-0 nova_compute[260603]: </domain>
Oct 02 08:49:48 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.856 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Preparing to wait for external event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.856 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.857 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.857 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.859 2 DEBUG nova.virt.libvirt.vif [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:49:43Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.859 2 DEBUG nova.network.os_vif_util [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.860 2 DEBUG nova.network.os_vif_util [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.861 2 DEBUG os_vif [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac5b8c6c-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac5b8c6c-8d, col_values=(('external_ids', {'iface-id': 'ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:e6:c0', 'vm-uuid': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:48 compute-0 NetworkManager[45129]: <info>  [1759394988.8718] manager: (tapac5b8c6c-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.878 2 INFO os_vif [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d')
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.933 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.933 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.933 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:4a:e6:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.934 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Using config drive
Oct 02 08:49:48 compute-0 nova_compute[260603]: 2025-10-02 08:49:48.955 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:49 compute-0 ceph-mon[74477]: pgmap v2133: 305 pgs: 305 active+clean; 142 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Oct 02 08:49:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/567280329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:49:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/507854933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.413 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating config drive at /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.425 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_t0f4_lv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.507 2 DEBUG nova.network.neutron [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updated VIF entry in instance network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.508 2 DEBUG nova.network.neutron [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.526 2 DEBUG oslo_concurrency.lockutils [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.600 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_t0f4_lv" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.628 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:49:49 compute-0 nova_compute[260603]: 2025-10-02 08:49:49.632 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.006 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.007 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deleting local config drive /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config because it was imported into RBD.
Oct 02 08:49:50 compute-0 NetworkManager[45129]: <info>  [1759394990.0800] manager: (tapac5b8c6c-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Oct 02 08:49:50 compute-0 kernel: tapac5b8c6c-8d: entered promiscuous mode
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 ovn_controller[152344]: 2025-10-02T08:49:50Z|01213|binding|INFO|Claiming lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for this chassis.
Oct 02 08:49:50 compute-0 ovn_controller[152344]: 2025-10-02T08:49:50Z|01214|binding|INFO|ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da: Claiming fa:16:3e:4a:e6:c0 10.100.0.11
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.094 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.097 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 bound to our chassis
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.100 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct 02 08:49:50 compute-0 ovn_controller[152344]: 2025-10-02T08:49:50Z|01215|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da ovn-installed in OVS
Oct 02 08:49:50 compute-0 ovn_controller[152344]: 2025-10-02T08:49:50Z|01216|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da up in Southbound
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.122 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d00cbb-2835-49d4-bc7d-0e940b821f07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.125 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ceec3f2-31 in ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.128 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ceec3f2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.129 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[adc4be6a-9f3a-45dc-a0fa-02cebea16b1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.130 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc65f45-db93-4b16-8279-6d6e2d23ff1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 systemd-udevd[379234]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.150 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5c3918-dcd8-42e5-8502-7887cffbd381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 systemd-machined[214636]: New machine qemu-148-instance-00000076.
Oct 02 08:49:50 compute-0 NetworkManager[45129]: <info>  [1759394990.1612] device (tapac5b8c6c-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:49:50 compute-0 NetworkManager[45129]: <info>  [1759394990.1632] device (tapac5b8c6c-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:49:50 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000076.
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.182 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.184 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.184 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.184 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.185 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78dfade3-b60e-48fb-a8c5-621bbb51fe66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.189 2 INFO nova.compute.manager [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Terminating instance
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.191 2 DEBUG nova.compute.manager [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.226 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ec50fa2c-2c99-427e-b7a9-b5f653fede3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[43dba610-764d-45fa-9f3b-bb4d59d584a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 NetworkManager[45129]: <info>  [1759394990.2384] manager: (tap9ceec3f2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.291 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dd459b67-f39f-4a34-a722-cf77c4ec4c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.295 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[27e4c914-fce8-4680-9121-d5c49fe211ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 NetworkManager[45129]: <info>  [1759394990.3232] device (tap9ceec3f2-30): carrier: link connected
Oct 02 08:49:50 compute-0 kernel: tap8add63d4-81 (unregistering): left promiscuous mode
Oct 02 08:49:50 compute-0 NetworkManager[45129]: <info>  [1759394990.3290] device (tap8add63d4-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.331 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[21c3d667-6bd5-4408-86c4-620dbfafa5ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 ovn_controller[152344]: 2025-10-02T08:49:50Z|01217|binding|INFO|Releasing lport 8add63d4-81e0-4c57-a383-69310529c53b from this chassis (sb_readonly=0)
Oct 02 08:49:50 compute-0 ovn_controller[152344]: 2025-10-02T08:49:50Z|01218|binding|INFO|Setting lport 8add63d4-81e0-4c57-a383-69310529c53b down in Southbound
Oct 02 08:49:50 compute-0 ovn_controller[152344]: 2025-10-02T08:49:50Z|01219|binding|INFO|Removing iface tap8add63d4-81 ovn-installed in OVS
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.349 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:8f:fc 10.100.0.11'], port_security=['fa:16:3e:c1:8f:fc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5d39bd73-ba92-45a5-9d69-f1f1901aa895', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe0d9bd380934b368dfa78fea95edb29', 'neutron:revision_number': '4', 'neutron:security_group_ids': '685b2755-a769-4920-a3c6-7a8b24097a10 ece867f3-4b20-4c54-9ee4-8b5852490079', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86495746-eacb-4aba-a3c0-a2e0aa4fb357, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8add63d4-81e0-4c57-a383-69310529c53b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.363 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fc62bef0-e6de-4c72-9a0a-adcb03041d6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ceec3f2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:37:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586892, 'reachable_time': 35326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379269, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.382 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ade3528d-eb43-4ce5-8e62-39e83f15e377]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:3769'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586892, 'tstamp': 586892}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379285, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct 02 08:49:50 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Consumed 13.955s CPU time.
Oct 02 08:49:50 compute-0 systemd-machined[214636]: Machine qemu-147-instance-00000075 terminated.
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.397 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a6843-bd85-4aff-82ca-377ea5238575]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ceec3f2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:37:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586892, 'reachable_time': 35326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379291, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.426 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.426 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.440 2 INFO nova.virt.libvirt.driver [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance destroyed successfully.
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.440 2 DEBUG nova.objects.instance [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lazy-loading 'resources' on Instance uuid 5d39bd73-ba92-45a5-9d69-f1f1901aa895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:49:50 compute-0 podman[379270]: 2025-10-02 08:49:50.447148417 +0000 UTC m=+0.086877079 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.447 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e05240d-41b7-4a57-828a-47d79bb8426b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.454 2 DEBUG nova.virt.libvirt.vif [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1419201510',display_name='tempest-TestServerBasicOps-server-1419201510',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1419201510',id=117,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxIiQkCuUb4hsjuixWL10UGBkKUMnwTZeKLoBPdzp7C/XvwvYOMF2Ky32VWHt/hKjva7kq8MtJ3oMOLi7ZAeECvKOs+ESXsUFj3RXVpjLuEPsjWq3UA/7zA5pWt+U9M2g==',key_name='tempest-TestServerBasicOps-36899864',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:49:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe0d9bd380934b368dfa78fea95edb29',ramdisk_id='',reservation_id='r-xw0wp2kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1506272321',owner_user_name='tempest-TestServerBasicOps-1506272321-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba06f614b3ef4ffe8b1a2d1b848554a9',uuid=5d39bd73-ba92-45a5-9d69-f1f1901aa895,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": 
"fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.455 2 DEBUG nova.network.os_vif_util [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converting VIF {"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.456 2 DEBUG nova.network.os_vif_util [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.457 2 DEBUG os_vif [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8add63d4-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.465 2 INFO os_vif [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81')
Oct 02 08:49:50 compute-0 podman[379266]: 2025-10-02 08:49:50.466576882 +0000 UTC m=+0.110303628 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.512 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2744bb08-9665-465a-b159-4d3c25082605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.515 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ceec3f2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.515 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.516 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ceec3f2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 NetworkManager[45129]: <info>  [1759394990.5186] manager: (tap9ceec3f2-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct 02 08:49:50 compute-0 kernel: tap9ceec3f2-30: entered promiscuous mode
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.523 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ceec3f2-30, col_values=(('external_ids', {'iface-id': '251c75ac-afb0-42a1-a89b-23c6beb7b3f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 ovn_controller[152344]: 2025-10-02T08:49:50Z|01220|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.545 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.547 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba5e835-5478-4192-af3e-f71e531e60ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.548 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:49:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.548 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'env', 'PROCESS_TAG=haproxy-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ceec3f2-374d-41aa-a1df-26773af00fd7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG nova.compute.manager [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-unplugged-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG oslo_concurrency.lockutils [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG oslo_concurrency.lockutils [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG oslo_concurrency.lockutils [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG nova.compute.manager [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] No waiting events found dispatching network-vif-unplugged-8add63d4-81e0-4c57-a383-69310529c53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.779 2 DEBUG nova.compute.manager [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-unplugged-8add63d4-81e0-4c57-a383-69310529c53b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.930 2 DEBUG nova.compute.manager [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.930 2 DEBUG oslo_concurrency.lockutils [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.930 2 DEBUG oslo_concurrency.lockutils [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.931 2 DEBUG oslo_concurrency.lockutils [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:50 compute-0 nova_compute[260603]: 2025-10-02 08:49:50.931 2 DEBUG nova.compute.manager [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Processing event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:49:50 compute-0 podman[379409]: 2025-10-02 08:49:50.946842422 +0000 UTC m=+0.100307698 container create c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:49:50 compute-0 podman[379409]: 2025-10-02 08:49:50.867898742 +0000 UTC m=+0.021364028 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:49:51 compute-0 systemd[1]: Started libpod-conmon-c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957.scope.
Oct 02 08:49:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37bbf101800d2f34c5533d4db9bfd73f0e4cf42dd09496c7c2a5063e08e8353/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:49:51 compute-0 podman[379409]: 2025-10-02 08:49:51.075496312 +0000 UTC m=+0.228961588 container init c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:49:51 compute-0 podman[379409]: 2025-10-02 08:49:51.080979092 +0000 UTC m=+0.234444358 container start c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.106 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.108 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394991.1060913, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.108 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Started (Lifecycle Event)
Oct 02 08:49:51 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [NOTICE]   (379429) : New worker (379431) forked
Oct 02 08:49:51 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [NOTICE]   (379429) : Loading success.
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.111 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.115 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance spawned successfully.
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.115 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:49:51 compute-0 ceph-mon[74477]: pgmap v2134: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.144 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.147 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.153 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.153 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.153 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.154 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.154 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.155 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.179 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.179 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394991.1078343, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.179 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Paused (Lifecycle Event)
Oct 02 08:49:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.191 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8add63d4-81e0-4c57-a383-69310529c53b in datapath 86d7b4c5-3981-43ff-aa9f-1933a61f6bec unbound from our chassis
Oct 02 08:49:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.194 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 86d7b4c5-3981-43ff-aa9f-1933a61f6bec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:49:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[66c0e86f-75d6-4b8c-b8c5-c0616234e6f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.196 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec namespace which is not needed anymore
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.213 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.216 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394991.1107974, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.216 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Resumed (Lifecycle Event)
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.234 2 INFO nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Took 7.22 seconds to spawn the instance on the hypervisor.
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.234 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.241 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.244 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.268 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.298 2 INFO nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Took 8.30 seconds to build instance.
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.316 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:51 compute-0 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [NOTICE]   (378802) : haproxy version is 2.8.14-c23fe91
Oct 02 08:49:51 compute-0 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [NOTICE]   (378802) : path to executable is /usr/sbin/haproxy
Oct 02 08:49:51 compute-0 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [WARNING]  (378802) : Exiting Master process...
Oct 02 08:49:51 compute-0 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [ALERT]    (378802) : Current worker (378804) exited with code 143 (Terminated)
Oct 02 08:49:51 compute-0 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [WARNING]  (378802) : All workers exited. Exiting... (0)
Oct 02 08:49:51 compute-0 systemd[1]: libpod-f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff.scope: Deactivated successfully.
Oct 02 08:49:51 compute-0 podman[379457]: 2025-10-02 08:49:51.364596682 +0000 UTC m=+0.081502771 container died f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 08:49:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 2.6 MiB/s wr, 68 op/s
Oct 02 08:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff-userdata-shm.mount: Deactivated successfully.
Oct 02 08:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b46779ccf78646eb1fa1b179b99646c119a819bb4ab4cf83b813387cadabddf-merged.mount: Deactivated successfully.
Oct 02 08:49:51 compute-0 podman[379457]: 2025-10-02 08:49:51.775898902 +0000 UTC m=+0.492804971 container cleanup f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 08:49:51 compute-0 systemd[1]: libpod-conmon-f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff.scope: Deactivated successfully.
Oct 02 08:49:51 compute-0 podman[379488]: 2025-10-02 08:49:51.977272969 +0000 UTC m=+0.175222003 container remove f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 08:49:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.984 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[810e3064-810a-4fc0-a8e8-3177dbbed25e]: (4, ('Thu Oct  2 08:49:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec (f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff)\nf19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff\nThu Oct  2 08:49:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec (f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff)\nf19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddb11cc-a963-4721-8788-ba67fc78e10f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.987 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86d7b4c5-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:49:51 compute-0 nova_compute[260603]: 2025-10-02 08:49:51.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:52 compute-0 kernel: tap86d7b4c5-30: left promiscuous mode
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.014 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[154e541f-f4b4-47f5-a448-a78d2b81f18c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.041 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b12b94f-0498-4e34-a39e-c13cfb06b96a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.043 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a672946e-c6ff-4911-9b3d-ba7721eacf54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.059 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[959b027f-5cf5-4314-83d8-9b1213440f50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584374, 'reachable_time': 32110, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379504, 'error': None, 'target': 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d86d7b4c5\x2d3981\x2d43ff\x2daa9f\x2d1933a61f6bec.mount: Deactivated successfully.
Oct 02 08:49:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.063 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:49:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.063 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff89304-4cc3-4fe9-965b-94e455a19d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.425 2 INFO nova.virt.libvirt.driver [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Deleting instance files /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895_del
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.426 2 INFO nova.virt.libvirt.driver [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Deletion of /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895_del complete
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.503 2 INFO nova.compute.manager [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Took 2.31 seconds to destroy the instance on the hypervisor.
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.503 2 DEBUG oslo.service.loopingcall [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.503 2 DEBUG nova.compute.manager [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.504 2 DEBUG nova.network.neutron [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.860 2 DEBUG nova.compute.manager [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.860 2 DEBUG oslo_concurrency.lockutils [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.861 2 DEBUG oslo_concurrency.lockutils [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.861 2 DEBUG oslo_concurrency.lockutils [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.861 2 DEBUG nova.compute.manager [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] No waiting events found dispatching network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:52 compute-0 nova_compute[260603]: 2025-10-02 08:49:52.862 2 WARNING nova.compute.manager [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received unexpected event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b for instance with vm_state active and task_state deleting.
Oct 02 08:49:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.012 2 DEBUG nova.compute.manager [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.012 2 DEBUG oslo_concurrency.lockutils [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.012 2 DEBUG oslo_concurrency.lockutils [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.013 2 DEBUG oslo_concurrency.lockutils [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.013 2 DEBUG nova.compute.manager [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.013 2 WARNING nova.compute.manager [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state None.
Oct 02 08:49:53 compute-0 ceph-mon[74477]: pgmap v2135: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 2.6 MiB/s wr, 68 op/s
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.273 2 DEBUG nova.network.neutron [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.306 2 INFO nova.compute.manager [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Took 0.80 seconds to deallocate network for instance.
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.377 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.377 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.465 2 DEBUG oslo_concurrency.processutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:49:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 88 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 158 op/s
Oct 02 08:49:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:49:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646485759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.912 2 DEBUG oslo_concurrency.processutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.918 2 DEBUG nova.compute.provider_tree [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.937 2 DEBUG nova.scheduler.client.report [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:49:53 compute-0 nova_compute[260603]: 2025-10-02 08:49:53.965 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:54 compute-0 nova_compute[260603]: 2025-10-02 08:49:54.003 2 INFO nova.scheduler.client.report [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Deleted allocations for instance 5d39bd73-ba92-45a5-9d69-f1f1901aa895
Oct 02 08:49:54 compute-0 nova_compute[260603]: 2025-10-02 08:49:54.098 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:49:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1646485759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:49:54 compute-0 nova_compute[260603]: 2025-10-02 08:49:54.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:54 compute-0 nova_compute[260603]: 2025-10-02 08:49:54.950 2 DEBUG nova.compute.manager [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:54 compute-0 nova_compute[260603]: 2025-10-02 08:49:54.950 2 DEBUG nova.compute.manager [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing instance network info cache due to event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:49:54 compute-0 nova_compute[260603]: 2025-10-02 08:49:54.950 2 DEBUG oslo_concurrency.lockutils [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:49:54 compute-0 nova_compute[260603]: 2025-10-02 08:49:54.950 2 DEBUG oslo_concurrency.lockutils [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:49:54 compute-0 nova_compute[260603]: 2025-10-02 08:49:54.951 2 DEBUG nova.network.neutron [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:49:55 compute-0 nova_compute[260603]: 2025-10-02 08:49:55.116 2 DEBUG nova.compute.manager [req-342cf86a-e5b1-4f2e-95f9-d47dcaa79c23 req-5bbe38a5-c62d-4fd8-b2cf-fc1153fc6cba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-deleted-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:49:55 compute-0 ceph-mon[74477]: pgmap v2136: 305 pgs: 305 active+clean; 88 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 158 op/s
Oct 02 08:49:55 compute-0 nova_compute[260603]: 2025-10-02 08:49:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:49:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 88 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Oct 02 08:49:56 compute-0 ceph-mon[74477]: pgmap v2137: 305 pgs: 305 active+clean; 88 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Oct 02 08:49:56 compute-0 nova_compute[260603]: 2025-10-02 08:49:56.653 2 DEBUG nova.network.neutron [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updated VIF entry in instance network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:49:56 compute-0 nova_compute[260603]: 2025-10-02 08:49:56.654 2 DEBUG nova.network.neutron [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:49:56 compute-0 nova_compute[260603]: 2025-10-02 08:49:56.678 2 DEBUG oslo_concurrency.lockutils [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:49:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 02 08:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:49:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:49:58 compute-0 ceph-mon[74477]: pgmap v2138: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 02 08:49:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 117 op/s
Oct 02 08:49:59 compute-0 nova_compute[260603]: 2025-10-02 08:49:59.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:00 compute-0 nova_compute[260603]: 2025-10-02 08:50:00.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:00 compute-0 ovn_controller[152344]: 2025-10-02T08:50:00Z|01221|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct 02 08:50:00 compute-0 ceph-mon[74477]: pgmap v2139: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 117 op/s
Oct 02 08:50:00 compute-0 nova_compute[260603]: 2025-10-02 08:50:00.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:00 compute-0 sudo[379527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:00 compute-0 sudo[379527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:00 compute-0 sudo[379527]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:00 compute-0 sudo[379552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:50:00 compute-0 sudo[379552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:00 compute-0 sudo[379552]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:00 compute-0 sudo[379577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:00 compute-0 sudo[379577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:00 compute-0 sudo[379577]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:00 compute-0 sudo[379602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 08:50:00 compute-0 sudo[379602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:01 compute-0 sudo[379602]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:50:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:50:01 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:01 compute-0 sudo[379647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:01 compute-0 sudo[379647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:01 compute-0 sudo[379647]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:01 compute-0 sudo[379672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:50:01 compute-0 sudo[379672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:01 compute-0 sudo[379672]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 104 op/s
Oct 02 08:50:01 compute-0 sudo[379697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:01 compute-0 sudo[379697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:01 compute-0 sudo[379697]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:01 compute-0 sudo[379722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:50:01 compute-0 sudo[379722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:01 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 02 08:50:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:02.096 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:50:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:02.097 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:50:02 compute-0 nova_compute[260603]: 2025-10-02 08:50:02.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:02 compute-0 sudo[379722]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:02 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:02 compute-0 ceph-mon[74477]: pgmap v2140: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 104 op/s
Oct 02 08:50:02 compute-0 sudo[379777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:02 compute-0 sudo[379777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:02 compute-0 sudo[379777]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:02 compute-0 sudo[379802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:50:02 compute-0 sudo[379802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:02 compute-0 sudo[379802]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:02 compute-0 sudo[379827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:02 compute-0 sudo[379827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:02 compute-0 sudo[379827]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:02 compute-0 sudo[379852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- inventory --format=json-pretty --filter-for-batch
Oct 02 08:50:02 compute-0 sudo[379852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:02 compute-0 podman[379916]: 2025-10-02 08:50:02.969542231 +0000 UTC m=+0.049060090 container create 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 08:50:03 compute-0 systemd[1]: Started libpod-conmon-7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b.scope.
Oct 02 08:50:03 compute-0 podman[379916]: 2025-10-02 08:50:02.946607116 +0000 UTC m=+0.026124945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:50:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:03 compute-0 podman[379916]: 2025-10-02 08:50:03.071787028 +0000 UTC m=+0.151304867 container init 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 08:50:03 compute-0 podman[379916]: 2025-10-02 08:50:03.084158013 +0000 UTC m=+0.163675852 container start 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 08:50:03 compute-0 podman[379916]: 2025-10-02 08:50:03.088167989 +0000 UTC m=+0.167685828 container attach 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:50:03 compute-0 ecstatic_merkle[379932]: 167 167
Oct 02 08:50:03 compute-0 systemd[1]: libpod-7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b.scope: Deactivated successfully.
Oct 02 08:50:03 compute-0 conmon[379932]: conmon 7b8982814b215b24fc0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b.scope/container/memory.events
Oct 02 08:50:03 compute-0 podman[379916]: 2025-10-02 08:50:03.095990263 +0000 UTC m=+0.175508122 container died 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 08:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3cf6c2866a2f68fd2902759bcb50b1496189a9b885a97acb09ee7afadd136fa-merged.mount: Deactivated successfully.
Oct 02 08:50:03 compute-0 podman[379916]: 2025-10-02 08:50:03.148075866 +0000 UTC m=+0.227593675 container remove 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 08:50:03 compute-0 systemd[1]: libpod-conmon-7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b.scope: Deactivated successfully.
Oct 02 08:50:03 compute-0 podman[379956]: 2025-10-02 08:50:03.308707092 +0000 UTC m=+0.037273962 container create c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:50:03 compute-0 systemd[1]: Started libpod-conmon-c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d.scope.
Oct 02 08:50:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:03 compute-0 podman[379956]: 2025-10-02 08:50:03.292381183 +0000 UTC m=+0.020948073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:50:03 compute-0 podman[379956]: 2025-10-02 08:50:03.399928626 +0000 UTC m=+0.128495576 container init c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 08:50:03 compute-0 podman[379956]: 2025-10-02 08:50:03.410051521 +0000 UTC m=+0.138618401 container start c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:50:03 compute-0 podman[379956]: 2025-10-02 08:50:03.413831559 +0000 UTC m=+0.142398479 container attach c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:50:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 115 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Oct 02 08:50:03 compute-0 ovn_controller[152344]: 2025-10-02T08:50:03Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:e6:c0 10.100.0.11
Oct 02 08:50:03 compute-0 ovn_controller[152344]: 2025-10-02T08:50:03Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:e6:c0 10.100.0.11
Oct 02 08:50:04 compute-0 ceph-mon[74477]: pgmap v2141: 305 pgs: 305 active+clean; 115 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Oct 02 08:50:04 compute-0 nova_compute[260603]: 2025-10-02 08:50:04.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:04 compute-0 beautiful_moore[379972]: [
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:     {
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         "available": false,
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         "ceph_device": false,
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         "lsm_data": {},
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         "lvs": [],
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         "path": "/dev/sr0",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         "rejected_reasons": [
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "Has a FileSystem",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "Insufficient space (<5GB)"
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         ],
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         "sys_api": {
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "actuators": null,
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "device_nodes": "sr0",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "devname": "sr0",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "human_readable_size": "482.00 KB",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "id_bus": "ata",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "model": "QEMU DVD-ROM",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "nr_requests": "2",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "parent": "/dev/sr0",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "partitions": {},
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "path": "/dev/sr0",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "removable": "1",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "rev": "2.5+",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "ro": "0",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "rotational": "0",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "sas_address": "",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "sas_device_handle": "",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "scheduler_mode": "mq-deadline",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "sectors": 0,
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "sectorsize": "2048",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "size": 493568.0,
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "support_discard": "2048",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "type": "disk",
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:             "vendor": "QEMU"
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:         }
Oct 02 08:50:04 compute-0 beautiful_moore[379972]:     }
Oct 02 08:50:04 compute-0 beautiful_moore[379972]: ]
Oct 02 08:50:04 compute-0 systemd[1]: libpod-c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d.scope: Deactivated successfully.
Oct 02 08:50:04 compute-0 podman[379956]: 2025-10-02 08:50:04.81272427 +0000 UTC m=+1.541291140 container died c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 08:50:04 compute-0 systemd[1]: libpod-c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d.scope: Consumed 1.441s CPU time.
Oct 02 08:50:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df-merged.mount: Deactivated successfully.
Oct 02 08:50:04 compute-0 podman[379956]: 2025-10-02 08:50:04.866509487 +0000 UTC m=+1.595076347 container remove c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:50:04 compute-0 systemd[1]: libpod-conmon-c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d.scope: Deactivated successfully.
Oct 02 08:50:04 compute-0 sudo[379852]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:50:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:50:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:50:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:50:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:50:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:50:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:50:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:04 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5b62f3a9-4742-4945-a567-2d43f25a8f7e does not exist
Oct 02 08:50:04 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8670188f-637f-4bbf-87af-d53351442408 does not exist
Oct 02 08:50:04 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ecaba74a-f8da-4ee3-a6d6-655a6715fa19 does not exist
Oct 02 08:50:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:50:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:50:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:50:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:50:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:50:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:50:05 compute-0 sudo[381935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:05 compute-0 sudo[381935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:05 compute-0 sudo[381935]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:05 compute-0 sudo[381960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:50:05 compute-0 sudo[381960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:05 compute-0 sudo[381960]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:05 compute-0 sudo[381985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:05 compute-0 sudo[381985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:05 compute-0 sudo[381985]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:05 compute-0 sudo[382010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:50:05 compute-0 sudo[382010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:05 compute-0 nova_compute[260603]: 2025-10-02 08:50:05.433 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394990.432356, 5d39bd73-ba92-45a5-9d69-f1f1901aa895 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:50:05 compute-0 nova_compute[260603]: 2025-10-02 08:50:05.433 2 INFO nova.compute.manager [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] VM Stopped (Lifecycle Event)
Oct 02 08:50:05 compute-0 nova_compute[260603]: 2025-10-02 08:50:05.454 2 DEBUG nova.compute.manager [None req-2c0da99a-a2dc-4f3b-beb9-419ec3a3e696 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:05 compute-0 nova_compute[260603]: 2025-10-02 08:50:05.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 115 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 08:50:05 compute-0 podman[382076]: 2025-10-02 08:50:05.846822352 +0000 UTC m=+0.076275629 container create c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:50:05 compute-0 systemd[1]: Started libpod-conmon-c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659.scope.
Oct 02 08:50:05 compute-0 podman[382076]: 2025-10-02 08:50:05.818034265 +0000 UTC m=+0.047487592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:50:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:50:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:50:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:50:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:50:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:50:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:05 compute-0 podman[382076]: 2025-10-02 08:50:05.961678682 +0000 UTC m=+0.191132009 container init c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 08:50:05 compute-0 podman[382076]: 2025-10-02 08:50:05.974320926 +0000 UTC m=+0.203774213 container start c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:50:05 compute-0 podman[382076]: 2025-10-02 08:50:05.978469794 +0000 UTC m=+0.207923141 container attach c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 08:50:05 compute-0 flamboyant_chaum[382092]: 167 167
Oct 02 08:50:05 compute-0 systemd[1]: libpod-c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659.scope: Deactivated successfully.
Oct 02 08:50:05 compute-0 conmon[382092]: conmon c56141c9c1065709bb96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659.scope/container/memory.events
Oct 02 08:50:05 compute-0 podman[382076]: 2025-10-02 08:50:05.987205517 +0000 UTC m=+0.216658794 container died c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-26b29776e7fbfe51b456914e1587bfc46720df1555a887b94ce27d601e1a7524-merged.mount: Deactivated successfully.
Oct 02 08:50:06 compute-0 podman[382076]: 2025-10-02 08:50:06.063066082 +0000 UTC m=+0.292519369 container remove c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:50:06 compute-0 systemd[1]: libpod-conmon-c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659.scope: Deactivated successfully.
Oct 02 08:50:06 compute-0 ovn_controller[152344]: 2025-10-02T08:50:06Z|01222|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct 02 08:50:06 compute-0 podman[382118]: 2025-10-02 08:50:06.291774341 +0000 UTC m=+0.051928010 container create 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:50:06 compute-0 nova_compute[260603]: 2025-10-02 08:50:06.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:06 compute-0 systemd[1]: Started libpod-conmon-37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2.scope.
Oct 02 08:50:06 compute-0 podman[382118]: 2025-10-02 08:50:06.273842181 +0000 UTC m=+0.033995870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:50:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:06 compute-0 podman[382118]: 2025-10-02 08:50:06.424829946 +0000 UTC m=+0.184983715 container init 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:50:06 compute-0 podman[382118]: 2025-10-02 08:50:06.436223252 +0000 UTC m=+0.196376921 container start 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:50:06 compute-0 podman[382118]: 2025-10-02 08:50:06.440380202 +0000 UTC m=+0.200533911 container attach 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:50:06 compute-0 ceph-mon[74477]: pgmap v2142: 305 pgs: 305 active+clean; 115 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 08:50:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 732 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:50:07 compute-0 friendly_nash[382133]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:50:07 compute-0 friendly_nash[382133]: --> relative data size: 1.0
Oct 02 08:50:07 compute-0 friendly_nash[382133]: --> All data devices are unavailable
Oct 02 08:50:07 compute-0 systemd[1]: libpod-37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2.scope: Deactivated successfully.
Oct 02 08:50:07 compute-0 systemd[1]: libpod-37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2.scope: Consumed 1.120s CPU time.
Oct 02 08:50:07 compute-0 podman[382118]: 2025-10-02 08:50:07.609894124 +0000 UTC m=+1.370047833 container died 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 08:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8-merged.mount: Deactivated successfully.
Oct 02 08:50:07 compute-0 podman[382118]: 2025-10-02 08:50:07.681984211 +0000 UTC m=+1.442137880 container remove 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:50:07 compute-0 systemd[1]: libpod-conmon-37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2.scope: Deactivated successfully.
Oct 02 08:50:07 compute-0 sudo[382010]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:07 compute-0 sudo[382177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:07 compute-0 sudo[382177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:07 compute-0 sudo[382177]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:07 compute-0 sudo[382202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:50:07 compute-0 sudo[382202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:07 compute-0 sudo[382202]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:07 compute-0 sudo[382227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:07 compute-0 sudo[382227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:07 compute-0 sudo[382227]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:08 compute-0 sudo[382252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:50:08 compute-0 sudo[382252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:08 compute-0 podman[382317]: 2025-10-02 08:50:08.548932012 +0000 UTC m=+0.079511259 container create 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:50:08 compute-0 systemd[1]: Started libpod-conmon-06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00.scope.
Oct 02 08:50:08 compute-0 podman[382317]: 2025-10-02 08:50:08.517311627 +0000 UTC m=+0.047890944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:50:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:08 compute-0 podman[382317]: 2025-10-02 08:50:08.657499607 +0000 UTC m=+0.188078874 container init 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 08:50:08 compute-0 podman[382317]: 2025-10-02 08:50:08.674678042 +0000 UTC m=+0.205257299 container start 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:50:08 compute-0 podman[382317]: 2025-10-02 08:50:08.679299986 +0000 UTC m=+0.209879313 container attach 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:50:08 compute-0 flamboyant_shockley[382333]: 167 167
Oct 02 08:50:08 compute-0 systemd[1]: libpod-06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00.scope: Deactivated successfully.
Oct 02 08:50:08 compute-0 podman[382317]: 2025-10-02 08:50:08.689266397 +0000 UTC m=+0.219845644 container died 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f269de981be268fe3ab139d5945030c2f083d1c7f0a1c420d7c1fbbc2f17af5-merged.mount: Deactivated successfully.
Oct 02 08:50:08 compute-0 podman[382317]: 2025-10-02 08:50:08.73461154 +0000 UTC m=+0.265190797 container remove 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:50:08 compute-0 systemd[1]: libpod-conmon-06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00.scope: Deactivated successfully.
Oct 02 08:50:08 compute-0 ceph-mon[74477]: pgmap v2143: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 732 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:50:08 compute-0 podman[382358]: 2025-10-02 08:50:08.991803546 +0000 UTC m=+0.055471950 container create b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:50:09 compute-0 systemd[1]: Started libpod-conmon-b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df.scope.
Oct 02 08:50:09 compute-0 podman[382358]: 2025-10-02 08:50:08.969363777 +0000 UTC m=+0.033032211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:50:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:09 compute-0 podman[382358]: 2025-10-02 08:50:09.115106079 +0000 UTC m=+0.178774493 container init b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:50:09 compute-0 podman[382358]: 2025-10-02 08:50:09.129679574 +0000 UTC m=+0.193347968 container start b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:50:09 compute-0 podman[382358]: 2025-10-02 08:50:09.132689307 +0000 UTC m=+0.196357731 container attach b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:50:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:50:09 compute-0 nova_compute[260603]: 2025-10-02 08:50:09.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]: {
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:     "0": [
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:         {
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "devices": [
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "/dev/loop3"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             ],
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_name": "ceph_lv0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_size": "21470642176",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "name": "ceph_lv0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "tags": {
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cluster_name": "ceph",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.crush_device_class": "",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.encrypted": "0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osd_id": "0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.type": "block",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.vdo": "0"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             },
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "type": "block",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "vg_name": "ceph_vg0"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:         }
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:     ],
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:     "1": [
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:         {
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "devices": [
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "/dev/loop4"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             ],
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_name": "ceph_lv1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_size": "21470642176",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "name": "ceph_lv1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "tags": {
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cluster_name": "ceph",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.crush_device_class": "",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.encrypted": "0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osd_id": "1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.type": "block",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.vdo": "0"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             },
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "type": "block",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "vg_name": "ceph_vg1"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:         }
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:     ],
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:     "2": [
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:         {
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "devices": [
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "/dev/loop5"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             ],
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_name": "ceph_lv2",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_size": "21470642176",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "name": "ceph_lv2",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "tags": {
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.cluster_name": "ceph",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.crush_device_class": "",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.encrypted": "0",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osd_id": "2",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.type": "block",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:                 "ceph.vdo": "0"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             },
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "type": "block",
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:             "vg_name": "ceph_vg2"
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:         }
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]:     ]
Oct 02 08:50:09 compute-0 practical_chandrasekhar[382374]: }
Oct 02 08:50:09 compute-0 podman[382358]: 2025-10-02 08:50:09.92749474 +0000 UTC m=+0.991163144 container died b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 08:50:09 compute-0 systemd[1]: libpod-b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df.scope: Deactivated successfully.
Oct 02 08:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f-merged.mount: Deactivated successfully.
Oct 02 08:50:09 compute-0 podman[382358]: 2025-10-02 08:50:09.983488655 +0000 UTC m=+1.047157059 container remove b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:50:09 compute-0 systemd[1]: libpod-conmon-b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df.scope: Deactivated successfully.
Oct 02 08:50:10 compute-0 sudo[382252]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:10 compute-0 nova_compute[260603]: 2025-10-02 08:50:10.035 2 INFO nova.compute.manager [None req-62e34207-d7bd-4cdd-bc8f-52699248f1b8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Get console output
Oct 02 08:50:10 compute-0 nova_compute[260603]: 2025-10-02 08:50:10.044 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:50:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:10.099 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:10 compute-0 sudo[382394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:10 compute-0 sudo[382394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:10 compute-0 sudo[382394]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:10 compute-0 sudo[382419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:50:10 compute-0 sudo[382419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:10 compute-0 sudo[382419]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:10 compute-0 sudo[382444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:10 compute-0 sudo[382444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:10 compute-0 sudo[382444]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:10 compute-0 sudo[382469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:50:10 compute-0 sudo[382469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:10 compute-0 nova_compute[260603]: 2025-10-02 08:50:10.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:10 compute-0 podman[382532]: 2025-10-02 08:50:10.704883469 +0000 UTC m=+0.071511070 container create ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:50:10 compute-0 systemd[1]: Started libpod-conmon-ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee.scope.
Oct 02 08:50:10 compute-0 podman[382532]: 2025-10-02 08:50:10.676493955 +0000 UTC m=+0.043121606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:50:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:10 compute-0 podman[382532]: 2025-10-02 08:50:10.820420931 +0000 UTC m=+0.187048582 container init ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 08:50:10 compute-0 podman[382532]: 2025-10-02 08:50:10.828192293 +0000 UTC m=+0.194819864 container start ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:50:10 compute-0 podman[382532]: 2025-10-02 08:50:10.831459785 +0000 UTC m=+0.198087426 container attach ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:50:10 compute-0 upbeat_goldberg[382548]: 167 167
Oct 02 08:50:10 compute-0 systemd[1]: libpod-ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee.scope: Deactivated successfully.
Oct 02 08:50:10 compute-0 podman[382532]: 2025-10-02 08:50:10.833627502 +0000 UTC m=+0.200255073 container died ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:50:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b034d43316f54d882f8c54ac9b58d7b608eb24a58a4f6bbed67e07adde9bc807-merged.mount: Deactivated successfully.
Oct 02 08:50:10 compute-0 podman[382532]: 2025-10-02 08:50:10.872947318 +0000 UTC m=+0.239574919 container remove ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:50:10 compute-0 systemd[1]: libpod-conmon-ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee.scope: Deactivated successfully.
Oct 02 08:50:10 compute-0 ceph-mon[74477]: pgmap v2144: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:50:11 compute-0 podman[382571]: 2025-10-02 08:50:11.124449707 +0000 UTC m=+0.068732763 container create 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:50:11 compute-0 systemd[1]: Started libpod-conmon-602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442.scope.
Oct 02 08:50:11 compute-0 podman[382571]: 2025-10-02 08:50:11.097036573 +0000 UTC m=+0.041319679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:50:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:11 compute-0 podman[382571]: 2025-10-02 08:50:11.241537846 +0000 UTC m=+0.185820942 container init 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:50:11 compute-0 podman[382571]: 2025-10-02 08:50:11.253496239 +0000 UTC m=+0.197779295 container start 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 08:50:11 compute-0 podman[382571]: 2025-10-02 08:50:11.258573968 +0000 UTC m=+0.202857024 container attach 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.355 2 INFO nova.compute.manager [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Rebuilding instance
Oct 02 08:50:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.620 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.642 2 DEBUG nova.compute.manager [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.695 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.711 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.727 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.740 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.752 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:50:11 compute-0 nova_compute[260603]: 2025-10-02 08:50:11.757 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]: {
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "osd_id": 2,
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "type": "bluestore"
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:     },
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "osd_id": 1,
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "type": "bluestore"
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:     },
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "osd_id": 0,
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:         "type": "bluestore"
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]:     }
Oct 02 08:50:12 compute-0 hopeful_kapitsa[382587]: }
Oct 02 08:50:12 compute-0 systemd[1]: libpod-602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442.scope: Deactivated successfully.
Oct 02 08:50:12 compute-0 podman[382571]: 2025-10-02 08:50:12.383367295 +0000 UTC m=+1.327650311 container died 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:50:12 compute-0 systemd[1]: libpod-602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442.scope: Consumed 1.133s CPU time.
Oct 02 08:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6-merged.mount: Deactivated successfully.
Oct 02 08:50:12 compute-0 podman[382571]: 2025-10-02 08:50:12.43099906 +0000 UTC m=+1.375282076 container remove 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 08:50:12 compute-0 systemd[1]: libpod-conmon-602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442.scope: Deactivated successfully.
Oct 02 08:50:12 compute-0 sudo[382469]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:50:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:50:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:12 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8f87997f-15bc-4d2f-b6d5-6ccdf8bbb3e5 does not exist
Oct 02 08:50:12 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev bc63ba61-d6a6-4bbf-aef5-94461a7f65a9 does not exist
Oct 02 08:50:12 compute-0 sudo[382632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:50:12 compute-0 sudo[382632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:12 compute-0 sudo[382632]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:12 compute-0 sudo[382657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:50:12 compute-0 sudo[382657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:50:12 compute-0 sudo[382657]: pam_unix(sudo:session): session closed for user root
Oct 02 08:50:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:12 compute-0 ceph-mon[74477]: pgmap v2145: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:50:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:50:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 02 08:50:13 compute-0 nova_compute[260603]: 2025-10-02 08:50:13.936 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:13 compute-0 nova_compute[260603]: 2025-10-02 08:50:13.961 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 08:50:13 compute-0 nova_compute[260603]: 2025-10-02 08:50:13.962 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:13 compute-0 nova_compute[260603]: 2025-10-02 08:50:13.962 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:13 compute-0 nova_compute[260603]: 2025-10-02 08:50:13.963 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (rebuilding). Skip.
Oct 02 08:50:13 compute-0 nova_compute[260603]: 2025-10-02 08:50:13.963 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:14 compute-0 kernel: tapac5b8c6c-8d (unregistering): left promiscuous mode
Oct 02 08:50:14 compute-0 NetworkManager[45129]: <info>  [1759395014.1013] device (tapac5b8c6c-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01223|binding|INFO|Releasing lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da from this chassis (sb_readonly=0)
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01224|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da down in Southbound
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01225|binding|INFO|Removing iface tapac5b8c6c-8d ovn-installed in OVS
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.121 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.123 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 unbound from our chassis
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.128 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ceec3f2-374d-41aa-a1df-26773af00fd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.129 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a60d360-e4b7-4778-824b-2853845d5aee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.130 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 namespace which is not needed anymore
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 02 08:50:14 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Consumed 12.960s CPU time.
Oct 02 08:50:14 compute-0 systemd-machined[214636]: Machine qemu-148-instance-00000076 terminated.
Oct 02 08:50:14 compute-0 podman[382686]: 2025-10-02 08:50:14.207570143 +0000 UTC m=+0.076063482 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 08:50:14 compute-0 podman[382682]: 2025-10-02 08:50:14.219642779 +0000 UTC m=+0.095242289 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 08:50:14 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [NOTICE]   (379429) : haproxy version is 2.8.14-c23fe91
Oct 02 08:50:14 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [NOTICE]   (379429) : path to executable is /usr/sbin/haproxy
Oct 02 08:50:14 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [ALERT]    (379429) : Current worker (379431) exited with code 143 (Terminated)
Oct 02 08:50:14 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [WARNING]  (379429) : All workers exited. Exiting... (0)
Oct 02 08:50:14 compute-0 systemd[1]: libpod-c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957.scope: Deactivated successfully.
Oct 02 08:50:14 compute-0 podman[382750]: 2025-10-02 08:50:14.285543623 +0000 UTC m=+0.047232773 container died c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:50:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957-userdata-shm.mount: Deactivated successfully.
Oct 02 08:50:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d37bbf101800d2f34c5533d4db9bfd73f0e4cf42dd09496c7c2a5063e08e8353-merged.mount: Deactivated successfully.
Oct 02 08:50:14 compute-0 kernel: tapac5b8c6c-8d: entered promiscuous mode
Oct 02 08:50:14 compute-0 NetworkManager[45129]: <info>  [1759395014.3334] manager: (tapac5b8c6c-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Oct 02 08:50:14 compute-0 systemd-udevd[382715]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 kernel: tapac5b8c6c-8d (unregistering): left promiscuous mode
Oct 02 08:50:14 compute-0 podman[382750]: 2025-10-02 08:50:14.370056967 +0000 UTC m=+0.131746107 container cleanup c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01226|binding|INFO|Claiming lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for this chassis.
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01227|binding|INFO|ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da: Claiming fa:16:3e:4a:e6:c0 10.100.0.11
Oct 02 08:50:14 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.379 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:50:14 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct 02 08:50:14 compute-0 systemd[1]: libpod-conmon-c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957.scope: Deactivated successfully.
Oct 02 08:50:14 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct 02 08:50:14 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01228|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da ovn-installed in OVS
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01229|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da up in Southbound
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01230|binding|INFO|Releasing lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da from this chassis (sb_readonly=1)
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01231|binding|INFO|Removing iface tapac5b8c6c-8d ovn-installed in OVS
Oct 02 08:50:14 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01232|if_status|INFO|Dropped 2 log messages in last 593 seconds (most recently, 593 seconds ago) due to excessive rate
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01233|if_status|INFO|Not setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da down as sb is readonly
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01234|binding|INFO|Releasing lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da from this chassis (sb_readonly=0)
Oct 02 08:50:14 compute-0 ovn_controller[152344]: 2025-10-02T08:50:14Z|01235|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da down in Southbound
Oct 02 08:50:14 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.410 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:50:14 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct 02 08:50:14 compute-0 podman[382786]: 2025-10-02 08:50:14.458718291 +0000 UTC m=+0.057380530 container remove c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.467 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[66023eb1-d838-42c6-8b55-8698448d288e]: (4, ('Thu Oct  2 08:50:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 (c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957)\nc6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957\nThu Oct  2 08:50:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 (c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957)\nc6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.470 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1722dd0-01b2-4848-aa06-e729af998e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.471 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ceec3f2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 kernel: tap9ceec3f2-30: left promiscuous mode
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.491 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d5844144-2517-4c72-b705-ed6808908010]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.518 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[300e3f10-7c71-4417-bbb2-2d50884894d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.519 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b0b92c-d3d3-4929-aee2-5140f8056497]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.537 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd164ea1-542e-4257-aa7d-b4fb2625a1fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586882, 'reachable_time': 18168, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382820, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.540 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:50:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d9ceec3f2\x2d374d\x2d41aa\x2da1df\x2d26773af00fd7.mount: Deactivated successfully.
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.540 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a79f6504-f9f9-434e-88e3-2244e4aed85f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.540 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 unbound from our chassis
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.541 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ceec3f2-374d-41aa-a1df-26773af00fd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.542 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17e9d969-2ac6-4484-90f0-c6926c202b04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.542 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 unbound from our chassis
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.543 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ceec3f2-374d-41aa-a1df-26773af00fd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:50:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.543 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa98740-9096-4017-b253-b1b23223dbe0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.589 2 DEBUG nova.compute.manager [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.589 2 DEBUG oslo_concurrency.lockutils [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.590 2 DEBUG oslo_concurrency.lockutils [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.591 2 DEBUG oslo_concurrency.lockutils [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.591 2 DEBUG nova.compute.manager [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.591 2 WARNING nova.compute.manager [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuilding.
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.781 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance shutdown successfully after 3 seconds.
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.793 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance destroyed successfully.
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.799 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance destroyed successfully.
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.800 2 DEBUG nova.virt.libvirt.vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:49:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:10Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.800 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.801 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.801 2 DEBUG os_vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac5b8c6c-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:14 compute-0 nova_compute[260603]: 2025-10-02 08:50:14.808 2 INFO os_vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d')
Oct 02 08:50:14 compute-0 ceph-mon[74477]: pgmap v2146: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.199 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deleting instance files /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_del
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.200 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deletion of /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_del complete
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.373 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.374 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating image(s)
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.399 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.429 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.456 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.462 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 117 KiB/s wr, 22 op/s
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.590 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.592 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.594 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.594 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.630 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:15 compute-0 nova_compute[260603]: 2025-10-02 08:50:15.638 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.031 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.117 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.207 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.208 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Ensure instance console log exists: /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.209 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.209 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.210 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.212 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start _get_guest_xml network_info=[{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.217 2 WARNING nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.224 2 DEBUG nova.virt.libvirt.host [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.224 2 DEBUG nova.virt.libvirt.host [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.228 2 DEBUG nova.virt.libvirt.host [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.228 2 DEBUG nova.virt.libvirt.host [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.229 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.229 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.229 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.230 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.230 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.230 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.231 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.231 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.231 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.232 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.232 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.232 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.232 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.249 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:50:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1970559105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:50:16 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.688 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:16 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.689 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.689 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.690 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.690 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.690 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.690 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.691 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.691 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.691 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.691 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.692 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.692 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.692 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.692 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.692 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.693 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.693 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.693 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.693 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.694 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.694 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.694 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.694 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.695 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.695 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.695 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.695 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.696 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.696 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.696 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.716 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:16 compute-0 nova_compute[260603]: 2025-10-02 08:50:16.720 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:16 compute-0 ceph-mon[74477]: pgmap v2147: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 117 KiB/s wr, 22 op/s
Oct 02 08:50:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1970559105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:50:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:50:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921506983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.199 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.201 2 DEBUG nova.virt.libvirt.vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:49:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:15Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.201 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.202 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.204 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <uuid>9d79c6a7-edec-4a60-af9f-7e1401bb9a64</uuid>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <name>instance-00000076</name>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1109879286</nova:name>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:50:16</nova:creationTime>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <nova:port uuid="ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da">
Oct 02 08:50:17 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <system>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <entry name="serial">9d79c6a7-edec-4a60-af9f-7e1401bb9a64</entry>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <entry name="uuid">9d79c6a7-edec-4a60-af9f-7e1401bb9a64</entry>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </system>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <os>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   </os>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <features>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   </features>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk">
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config">
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       </source>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:50:17 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:4a:e6:c0"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <target dev="tapac5b8c6c-8d"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/console.log" append="off"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <video>
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </video>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:50:17 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:50:17 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:50:17 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:50:17 compute-0 nova_compute[260603]: </domain>
Oct 02 08:50:17 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.206 2 DEBUG nova.virt.libvirt.vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:49:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:15Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.206 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.206 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.207 2 DEBUG os_vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac5b8c6c-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac5b8c6c-8d, col_values=(('external_ids', {'iface-id': 'ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:e6:c0', 'vm-uuid': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:17 compute-0 NetworkManager[45129]: <info>  [1759395017.2151] manager: (tapac5b8c6c-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.223 2 INFO os_vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d')
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.290 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.290 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.290 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:4a:e6:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.291 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Using config drive
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.315 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.338 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.363 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'keypairs' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 117 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.757 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating config drive at /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.761 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuj986w_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.911 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuj986w_1" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.947 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:17 compute-0 nova_compute[260603]: 2025-10-02 08:50:17.952 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/921506983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.126 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.128 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deleting local config drive /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config because it was imported into RBD.
Oct 02 08:50:18 compute-0 kernel: tapac5b8c6c-8d: entered promiscuous mode
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:18 compute-0 ovn_controller[152344]: 2025-10-02T08:50:18Z|01236|binding|INFO|Claiming lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for this chassis.
Oct 02 08:50:18 compute-0 ovn_controller[152344]: 2025-10-02T08:50:18Z|01237|binding|INFO|ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da: Claiming fa:16:3e:4a:e6:c0 10.100.0.11
Oct 02 08:50:18 compute-0 NetworkManager[45129]: <info>  [1759395018.2170] manager: (tapac5b8c6c-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/484)
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.227 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '7', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.228 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 bound to our chassis
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.230 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct 02 08:50:18 compute-0 ovn_controller[152344]: 2025-10-02T08:50:18Z|01238|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da ovn-installed in OVS
Oct 02 08:50:18 compute-0 ovn_controller[152344]: 2025-10-02T08:50:18Z|01239|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da up in Southbound
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.247 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff382342-4245-4b44-95bf-ca89ac9a969c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.248 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ceec3f2-31 in ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.250 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ceec3f2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.250 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90003908-baf5-4885-8b81-e340303b453c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.251 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6fd401-b748-40bb-8146-b108c17639fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.270 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[683c7532-0388-44a6-8ea2-fcba3e6ff38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 systemd-machined[214636]: New machine qemu-149-instance-00000076.
Oct 02 08:50:18 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000076.
Oct 02 08:50:18 compute-0 systemd-udevd[383145]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.296 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ef92e6-943c-4c8e-bc7a-8907f82c01e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 NetworkManager[45129]: <info>  [1759395018.3103] device (tapac5b8c6c-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:50:18 compute-0 NetworkManager[45129]: <info>  [1759395018.3154] device (tapac5b8c6c-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.329 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7e60b02f-3f93-479f-9cb1-5f844630a2b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 NetworkManager[45129]: <info>  [1759395018.3367] manager: (tap9ceec3f2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/485)
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb6e294-741f-4a7d-abde-331491b33e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.374 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[721862ff-03bc-4967-9174-049467a9809b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.377 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1b18d1a1-f4ef-4c17-8082-74bbb33db28a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 NetworkManager[45129]: <info>  [1759395018.4009] device (tap9ceec3f2-30): carrier: link connected
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.408 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf12e2fe-d5cd-4c3b-93f6-44155b81f6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff38eda5-db3a-421f-9cff-5748e7aa008f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ceec3f2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:37:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589700, 'reachable_time': 28367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383175, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.442 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dea6a320-98e8-44fb-a439-dfc2430292d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:3769'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589700, 'tstamp': 589700}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383176, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.460 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[81cbb336-8628-493a-b932-7049b575a4b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ceec3f2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:37:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589700, 'reachable_time': 28367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383177, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.491 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[96951f0c-283a-4a65-8651-c9e2280af6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.552 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1357741e-6786-4389-952e-23777d2bdfa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.554 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ceec3f2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.554 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.554 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ceec3f2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:18 compute-0 NetworkManager[45129]: <info>  [1759395018.5571] manager: (tap9ceec3f2-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Oct 02 08:50:18 compute-0 kernel: tap9ceec3f2-30: entered promiscuous mode
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.561 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ceec3f2-30, col_values=(('external_ids', {'iface-id': '251c75ac-afb0-42a1-a89b-23c6beb7b3f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:18 compute-0 ovn_controller[152344]: 2025-10-02T08:50:18Z|01240|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.584 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[edc36f49-752c-4158-afcd-9e7e968a0f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.585 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:50:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.586 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'env', 'PROCESS_TAG=haproxy-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ceec3f2-374d-41aa-a1df-26773af00fd7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.791 2 DEBUG nova.compute.manager [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.791 2 DEBUG oslo_concurrency.lockutils [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.792 2 DEBUG oslo_concurrency.lockutils [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.792 2 DEBUG oslo_concurrency.lockutils [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.793 2 DEBUG nova.compute.manager [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:18 compute-0 nova_compute[260603]: 2025-10-02 08:50:18.793 2 WARNING nova.compute.manager [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct 02 08:50:19 compute-0 ceph-mon[74477]: pgmap v2148: 305 pgs: 305 active+clean; 117 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:50:19 compute-0 podman[383251]: 2025-10-02 08:50:19.070969339 +0000 UTC m=+0.058977760 container create a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.085 2 DEBUG nova.compute.manager [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.086 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.086 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.087 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395019.0855837, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.087 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Resumed (Lifecycle Event)
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.091 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance spawned successfully.
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.091 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.109 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.113 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.114 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.114 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.114 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.115 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.115 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.119 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:50:19 compute-0 systemd[1]: Started libpod-conmon-a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677.scope.
Oct 02 08:50:19 compute-0 podman[383251]: 2025-10-02 08:50:19.040424927 +0000 UTC m=+0.028433378 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.151 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.151 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395019.085648, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.152 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Started (Lifecycle Event)
Oct 02 08:50:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784dcec7337b162a43bf6659c2be12ccdf58165f84be64f967db39bb20ec5671/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.172 2 DEBUG nova.compute.manager [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.174 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.181 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:50:19 compute-0 podman[383251]: 2025-10-02 08:50:19.18139195 +0000 UTC m=+0.169400381 container init a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:50:19 compute-0 podman[383251]: 2025-10-02 08:50:19.18713893 +0000 UTC m=+0.175147341 container start a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:50:19 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [NOTICE]   (383270) : New worker (383272) forked
Oct 02 08:50:19 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [NOTICE]   (383270) : Loading success.
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.215 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.240 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.240 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.240 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.301 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 02 08:50:19 compute-0 nova_compute[260603]: 2025-10-02 08:50:19.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:20 compute-0 nova_compute[260603]: 2025-10-02 08:50:20.917 2 DEBUG nova.compute.manager [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:20 compute-0 nova_compute[260603]: 2025-10-02 08:50:20.918 2 DEBUG oslo_concurrency.lockutils [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:20 compute-0 nova_compute[260603]: 2025-10-02 08:50:20.918 2 DEBUG oslo_concurrency.lockutils [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:20 compute-0 nova_compute[260603]: 2025-10-02 08:50:20.919 2 DEBUG oslo_concurrency.lockutils [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:20 compute-0 nova_compute[260603]: 2025-10-02 08:50:20.919 2 DEBUG nova.compute.manager [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:20 compute-0 nova_compute[260603]: 2025-10-02 08:50:20.920 2 WARNING nova.compute.manager [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state None.
Oct 02 08:50:20 compute-0 podman[383282]: 2025-10-02 08:50:20.994239354 +0000 UTC m=+0.063268883 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:50:21 compute-0 podman[383281]: 2025-10-02 08:50:21.010211602 +0000 UTC m=+0.072147490 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:50:21 compute-0 ceph-mon[74477]: pgmap v2149: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 02 08:50:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 02 08:50:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:50:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2010036876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:50:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:50:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2010036876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:50:22 compute-0 nova_compute[260603]: 2025-10-02 08:50:22.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:23 compute-0 ceph-mon[74477]: pgmap v2150: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 02 08:50:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2010036876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:50:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2010036876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:50:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Oct 02 08:50:24 compute-0 nova_compute[260603]: 2025-10-02 08:50:24.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:24 compute-0 nova_compute[260603]: 2025-10-02 08:50:24.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:25 compute-0 ceph-mon[74477]: pgmap v2151: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Oct 02 08:50:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 08:50:26 compute-0 nova_compute[260603]: 2025-10-02 08:50:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:26 compute-0 nova_compute[260603]: 2025-10-02 08:50:26.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:50:27 compute-0 ceph-mon[74477]: pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 08:50:27 compute-0 nova_compute[260603]: 2025-10-02 08:50:27.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 08:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:50:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:50:28
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', 'volumes', 'images', '.mgr', 'backups']
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:50:29 compute-0 ceph-mon[74477]: pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 08:50:29 compute-0 nova_compute[260603]: 2025-10-02 08:50:29.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 86 KiB/s wr, 95 op/s
Oct 02 08:50:29 compute-0 nova_compute[260603]: 2025-10-02 08:50:29.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:30 compute-0 ceph-mon[74477]: pgmap v2154: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 86 KiB/s wr, 95 op/s
Oct 02 08:50:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 70 op/s
Oct 02 08:50:32 compute-0 nova_compute[260603]: 2025-10-02 08:50:32.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:32 compute-0 ceph-mon[74477]: pgmap v2155: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 70 op/s
Oct 02 08:50:32 compute-0 ovn_controller[152344]: 2025-10-02T08:50:32Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:e6:c0 10.100.0.11
Oct 02 08:50:32 compute-0 ovn_controller[152344]: 2025-10-02T08:50:32Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:e6:c0 10.100.0.11
Oct 02 08:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 109 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct 02 08:50:34 compute-0 nova_compute[260603]: 2025-10-02 08:50:34.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:34 compute-0 ceph-mon[74477]: pgmap v2156: 305 pgs: 305 active+clean; 109 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct 02 08:50:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:34.831 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:34.832 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:34.833 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 109 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 187 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Oct 02 08:50:36 compute-0 nova_compute[260603]: 2025-10-02 08:50:36.431 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:36 compute-0 nova_compute[260603]: 2025-10-02 08:50:36.432 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:36 compute-0 nova_compute[260603]: 2025-10-02 08:50:36.466 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:50:36 compute-0 nova_compute[260603]: 2025-10-02 08:50:36.647 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:36 compute-0 nova_compute[260603]: 2025-10-02 08:50:36.648 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:36 compute-0 nova_compute[260603]: 2025-10-02 08:50:36.658 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:50:36 compute-0 nova_compute[260603]: 2025-10-02 08:50:36.659 2 INFO nova.compute.claims [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:50:36 compute-0 ceph-mon[74477]: pgmap v2157: 305 pgs: 305 active+clean; 109 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 187 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Oct 02 08:50:36 compute-0 nova_compute[260603]: 2025-10-02 08:50:36.920 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:50:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949143917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.398 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.405 2 DEBUG nova.compute.provider_tree [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.421 2 DEBUG nova.scheduler.client.report [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.444 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.445 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.499 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.500 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.524 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:50:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 117 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.540 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.654 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.655 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.655 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Creating image(s)
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.682 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.709 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.735 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.739 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.807 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.808 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.810 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.810 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.845 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:37 compute-0 nova_compute[260603]: 2025-10-02 08:50:37.850 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3949143917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:50:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.437 2 DEBUG nova.policy [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aceb8b0273154f1abe964d78a6261936', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.526 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.619 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] resizing rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000754758111121964 of space, bias 1.0, pg target 0.22642743333658918 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.864 2 DEBUG nova.objects.instance [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'migration_context' on Instance uuid d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:50:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:50:38 compute-0 ceph-mon[74477]: pgmap v2158: 305 pgs: 305 active+clean; 117 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.962 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.963 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Ensure instance console log exists: /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.964 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.965 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:38 compute-0 nova_compute[260603]: 2025-10-02 08:50:38.966 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 121 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 08:50:39 compute-0 nova_compute[260603]: 2025-10-02 08:50:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:39 compute-0 nova_compute[260603]: 2025-10-02 08:50:39.921 2 INFO nova.compute.manager [None req-df2a7b0e-c963-487f-8157-7cd0aa470225 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Get console output
Oct 02 08:50:39 compute-0 nova_compute[260603]: 2025-10-02 08:50:39.930 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.415 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Successfully created port: 236df88e-d54e-410f-9a5c-cf5fa95debd1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.553 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.791 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.792 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.793 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:50:40 compute-0 nova_compute[260603]: 2025-10-02 08:50:40.793 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:40 compute-0 ceph-mon[74477]: pgmap v2159: 305 pgs: 305 active+clean; 121 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.131 2 DEBUG nova.compute.manager [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.132 2 DEBUG nova.compute.manager [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing instance network info cache due to event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.133 2 DEBUG oslo_concurrency.lockutils [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.288 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.289 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.290 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.290 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.291 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.293 2 INFO nova.compute.manager [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Terminating instance
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.295 2 DEBUG nova.compute.manager [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:50:41 compute-0 kernel: tapac5b8c6c-8d (unregistering): left promiscuous mode
Oct 02 08:50:41 compute-0 NetworkManager[45129]: <info>  [1759395041.3603] device (tapac5b8c6c-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 ovn_controller[152344]: 2025-10-02T08:50:41Z|01241|binding|INFO|Releasing lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da from this chassis (sb_readonly=0)
Oct 02 08:50:41 compute-0 ovn_controller[152344]: 2025-10-02T08:50:41Z|01242|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da down in Southbound
Oct 02 08:50:41 compute-0 ovn_controller[152344]: 2025-10-02T08:50:41Z|01243|binding|INFO|Removing iface tapac5b8c6c-8d ovn-installed in OVS
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 ovn_controller[152344]: 2025-10-02T08:50:41Z|01244|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.392 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '8', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.393 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 unbound from our chassis
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.396 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ceec3f2-374d-41aa-a1df-26773af00fd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.397 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9a892080-696f-490c-b88c-be52912aac2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.398 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 namespace which is not needed anymore
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 121 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 08:50:41 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 02 08:50:41 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Consumed 13.178s CPU time.
Oct 02 08:50:41 compute-0 systemd-machined[214636]: Machine qemu-149-instance-00000076 terminated.
Oct 02 08:50:41 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [NOTICE]   (383270) : haproxy version is 2.8.14-c23fe91
Oct 02 08:50:41 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [NOTICE]   (383270) : path to executable is /usr/sbin/haproxy
Oct 02 08:50:41 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [WARNING]  (383270) : Exiting Master process...
Oct 02 08:50:41 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [WARNING]  (383270) : Exiting Master process...
Oct 02 08:50:41 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [ALERT]    (383270) : Current worker (383272) exited with code 143 (Terminated)
Oct 02 08:50:41 compute-0 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [WARNING]  (383270) : All workers exited. Exiting... (0)
Oct 02 08:50:41 compute-0 systemd[1]: libpod-a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677.scope: Deactivated successfully.
Oct 02 08:50:41 compute-0 podman[383537]: 2025-10-02 08:50:41.576014687 +0000 UTC m=+0.067586158 container died a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:50:41 compute-0 ovn_controller[152344]: 2025-10-02T08:50:41Z|01245|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-784dcec7337b162a43bf6659c2be12ccdf58165f84be64f967db39bb20ec5671-merged.mount: Deactivated successfully.
Oct 02 08:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677-userdata-shm.mount: Deactivated successfully.
Oct 02 08:50:41 compute-0 podman[383537]: 2025-10-02 08:50:41.621795183 +0000 UTC m=+0.113366644 container cleanup a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:50:41 compute-0 systemd[1]: libpod-conmon-a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677.scope: Deactivated successfully.
Oct 02 08:50:41 compute-0 podman[383567]: 2025-10-02 08:50:41.686457649 +0000 UTC m=+0.043488646 container remove a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.695 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a379b2-4343-4318-85d6-f4c313c6a8ba]: (4, ('Thu Oct  2 08:50:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 (a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677)\na6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677\nThu Oct  2 08:50:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 (a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677)\na6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.696 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c949c3d-922a-4876-9d5f-8c22c793e2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.697 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ceec3f2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:41 compute-0 kernel: tap9ceec3f2-30: left promiscuous mode
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.716 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9f9edf-1f47-497e-9fc3-e3bff2879afe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:41 compute-0 NetworkManager[45129]: <info>  [1759395041.7206] manager: (tapac5b8c6c-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.740 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[126b60cf-557c-4b1a-b805-66ca636611ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0d28ab-f27c-4a7d-9a95-f9a670d07534]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.740 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance destroyed successfully.
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.740 2 DEBUG nova.objects.instance [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.754 2 DEBUG nova.virt.libvirt.vif [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:50:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:50:19Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.755 2 DEBUG nova.network.os_vif_util [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.755 2 DEBUG nova.network.os_vif_util [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.756 2 DEBUG os_vif [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.755 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[41b679ca-aa1b-4a58-ad9b-c87ecb44e4fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589692, 'reachable_time': 16748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383593, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac5b8c6c-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d9ceec3f2\x2d374d\x2d41aa\x2da1df\x2d26773af00fd7.mount: Deactivated successfully.
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.758 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:50:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.758 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4620b791-48c9-4548-ac8e-7ba7c8ee43d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.768 2 INFO os_vif [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d')
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.797 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Successfully updated port: 236df88e-d54e-410f-9a5c-cf5fa95debd1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.826 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.826 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.826 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.915 2 DEBUG nova.compute.manager [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.915 2 DEBUG nova.compute.manager [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing instance network info cache due to event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:50:41 compute-0 nova_compute[260603]: 2025-10-02 08:50:41.917 2 DEBUG oslo_concurrency.lockutils [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.025 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.161 2 INFO nova.virt.libvirt.driver [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deleting instance files /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_del
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.163 2 INFO nova.virt.libvirt.driver [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deletion of /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_del complete
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.212 2 INFO nova.compute.manager [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.213 2 DEBUG oslo.service.loopingcall [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.214 2 DEBUG nova.compute.manager [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.214 2 DEBUG nova.network.neutron [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.475 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.505 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.506 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.507 2 DEBUG oslo_concurrency.lockutils [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.507 2 DEBUG nova.network.neutron [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.509 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.510 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.539 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.539 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.540 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.540 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.541 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.801 2 DEBUG nova.network.neutron [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.826 2 INFO nova.compute.manager [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Took 0.61 seconds to deallocate network for instance.
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.879 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.879 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:42 compute-0 nova_compute[260603]: 2025-10-02 08:50:42.941 2 DEBUG oslo_concurrency.processutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:42 compute-0 ceph-mon[74477]: pgmap v2160: 305 pgs: 305 active+clean; 121 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 08:50:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:50:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729854280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.012 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.320 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.321 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.321 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.321 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.321 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.322 2 WARNING nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state deleted and task_state None.
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.322 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.322 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.322 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.323 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.323 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.323 2 WARNING nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state deleted and task_state None.
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.323 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-deleted-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.354 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.355 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3710MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.355 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:50:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/877712512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.470 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.482 2 DEBUG oslo_concurrency.processutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.489 2 DEBUG nova.compute.provider_tree [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.499 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.500 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance network_info: |[{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.500 2 DEBUG oslo_concurrency.lockutils [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.501 2 DEBUG nova.network.neutron [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.506 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start _get_guest_xml network_info=[{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.512 2 DEBUG nova.scheduler.client.report [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.519 2 WARNING nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.529 2 DEBUG nova.virt.libvirt.host [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.530 2 DEBUG nova.virt.libvirt.host [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.533 2 DEBUG nova.virt.libvirt.host [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.534 2 DEBUG nova.virt.libvirt.host [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.535 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.535 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.536 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.536 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.537 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.537 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.538 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.538 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.539 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.539 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.539 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.540 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:50:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.545 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.582 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.624 2 INFO nova.scheduler.client.report [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance 9d79c6a7-edec-4a60-af9f-7e1401bb9a64
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.660 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.660 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.661 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.691 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:43 compute-0 nova_compute[260603]: 2025-10-02 08:50:43.723 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1729854280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:50:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/877712512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:50:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:50:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287723593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.057 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.083 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.086 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:50:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346735743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.160 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.166 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.187 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.212 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.212 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.300 2 DEBUG nova.network.neutron [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updated VIF entry in instance network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.301 2 DEBUG nova.network.neutron [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.320 2 DEBUG oslo_concurrency.lockutils [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:50:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:50:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411603649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.509 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.510 2 DEBUG nova.virt.libvirt.vif [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:50:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1240552507',display_name='tempest-TestSnapshotPattern-server-1240552507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1240552507',id=119,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-s694dn34',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:37Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=d5e3e825-fcee-4f1b-8c05-5a9ee07013d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.511 2 DEBUG nova.network.os_vif_util [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.512 2 DEBUG nova.network.os_vif_util [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.513 2 DEBUG nova.objects.instance [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.532 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <uuid>d5e3e825-fcee-4f1b-8c05-5a9ee07013d7</uuid>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <name>instance-00000077</name>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <nova:name>tempest-TestSnapshotPattern-server-1240552507</nova:name>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:50:43</nova:creationTime>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <nova:user uuid="aceb8b0273154f1abe964d78a6261936">tempest-TestSnapshotPattern-495107275-project-member</nova:user>
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <nova:project uuid="bfcc44155f2d45ff9f66fe254a7b21c7">tempest-TestSnapshotPattern-495107275</nova:project>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <nova:port uuid="236df88e-d54e-410f-9a5c-cf5fa95debd1">
Oct 02 08:50:44 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <system>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <entry name="serial">d5e3e825-fcee-4f1b-8c05-5a9ee07013d7</entry>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <entry name="uuid">d5e3e825-fcee-4f1b-8c05-5a9ee07013d7</entry>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </system>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <os>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   </os>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <features>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   </features>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk">
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       </source>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config">
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       </source>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:50:44 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:de:8a:1b"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <target dev="tap236df88e-d5"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/console.log" append="off"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <video>
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </video>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:50:44 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:50:44 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:50:44 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:50:44 compute-0 nova_compute[260603]: </domain>
Oct 02 08:50:44 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.534 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Preparing to wait for external event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.534 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.534 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.535 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.536 2 DEBUG nova.virt.libvirt.vif [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:50:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1240552507',display_name='tempest-TestSnapshotPattern-server-1240552507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1240552507',id=119,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-s694dn34',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:37Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=d5e3e825-fcee-4f1b-8c05-5a9ee07013d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.536 2 DEBUG nova.network.os_vif_util [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.537 2 DEBUG nova.network.os_vif_util [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.537 2 DEBUG os_vif [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap236df88e-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap236df88e-d5, col_values=(('external_ids', {'iface-id': '236df88e-d54e-410f-9a5c-cf5fa95debd1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:8a:1b', 'vm-uuid': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:44 compute-0 NetworkManager[45129]: <info>  [1759395044.5850] manager: (tap236df88e-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.592 2 INFO os_vif [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5')
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.657 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.657 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.658 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No VIF found with MAC fa:16:3e:de:8a:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.659 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Using config drive
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.689 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:44 compute-0 nova_compute[260603]: 2025-10-02 08:50:44.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:44 compute-0 ceph-mon[74477]: pgmap v2161: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 02 08:50:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1287723593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:50:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3346735743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:50:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1411603649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:50:45 compute-0 podman[383765]: 2025-10-02 08:50:45.019016921 +0000 UTC m=+0.071726697 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:50:45 compute-0 podman[383764]: 2025-10-02 08:50:45.101298996 +0000 UTC m=+0.159812873 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.221 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.221 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.305 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Creating config drive at /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.311 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylwz_x1z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.358 2 DEBUG nova.network.neutron [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updated VIF entry in instance network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.359 2 DEBUG nova.network.neutron [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.389 2 DEBUG oslo_concurrency.lockutils [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.475 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylwz_x1z" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.512 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.515 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:50:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 1.9 MiB/s wr, 72 op/s
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.707 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.709 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Deleting local config drive /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config because it was imported into RBD.
Oct 02 08:50:45 compute-0 kernel: tap236df88e-d5: entered promiscuous mode
Oct 02 08:50:45 compute-0 NetworkManager[45129]: <info>  [1759395045.7892] manager: (tap236df88e-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/489)
Oct 02 08:50:45 compute-0 ovn_controller[152344]: 2025-10-02T08:50:45Z|01246|binding|INFO|Claiming lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 for this chassis.
Oct 02 08:50:45 compute-0 ovn_controller[152344]: 2025-10-02T08:50:45Z|01247|binding|INFO|236df88e-d54e-410f-9a5c-cf5fa95debd1: Claiming fa:16:3e:de:8a:1b 10.100.0.3
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.803 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:8a:1b 10.100.0.3'], port_security=['fa:16:3e:de:8a:1b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=236df88e-d54e-410f-9a5c-cf5fa95debd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.806 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 236df88e-d54e-410f-9a5c-cf5fa95debd1 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 bound to our chassis
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.809 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e8372a1-ae46-455d-aa74-54645f729e73
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.830 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48421315-5aab-479b-91e1-b150e9a6a6dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.831 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e8372a1-a1 in ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.835 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e8372a1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.835 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[74c5386c-18b3-4192-9056-e9c1a1d45cd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.837 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[77461d47-cdc4-4279-9fd7-2d2600e07881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:45 compute-0 systemd-udevd[383861]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:50:45 compute-0 systemd-machined[214636]: New machine qemu-150-instance-00000077.
Oct 02 08:50:45 compute-0 NetworkManager[45129]: <info>  [1759395045.8674] device (tap236df88e-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:50:45 compute-0 NetworkManager[45129]: <info>  [1759395045.8688] device (tap236df88e-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.868 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[00cc90bc-6c4a-4c3a-9213-d49c4a9302ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:45 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000077.
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.892 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed7406e-440b-4885-93d3-c28230f7ae47]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:45 compute-0 ovn_controller[152344]: 2025-10-02T08:50:45Z|01248|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 ovn-installed in OVS
Oct 02 08:50:45 compute-0 ovn_controller[152344]: 2025-10-02T08:50:45Z|01249|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 up in Southbound
Oct 02 08:50:45 compute-0 nova_compute[260603]: 2025-10-02 08:50:45.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.933 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[45f4aeaf-5270-460a-879a-7dbdc28d4e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:45 compute-0 systemd-udevd[383865]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:50:45 compute-0 NetworkManager[45129]: <info>  [1759395045.9426] manager: (tap5e8372a1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/490)
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.941 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a433757-8104-447e-8d57-5b841c49f079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.988 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[281ef230-e4c8-48d7-959b-36a15a853f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.090 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dff92745-5d42-422e-9ee5-534faf4bcf42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 NetworkManager[45129]: <info>  [1759395046.1165] device (tap5e8372a1-a0): carrier: link connected
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.122 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7fde9017-4f5e-4337-a9c5-d4b532f1ea99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d76d5d0-1b31-4b07-b6ed-1313a04a8b76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e8372a1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:60:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592471, 'reachable_time': 17032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383894, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.151 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[caca82ee-6a87-43ec-b12e-eaaaba235ac6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feee:6077'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592471, 'tstamp': 592471}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383895, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.165 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[043c76d5-fc45-4ce0-bb60-e1a80e4629c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e8372a1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:60:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592471, 'reachable_time': 17032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383896, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ff9946-e15e-4b93-9620-3c52956ea417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.271 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[68be29d0-c12c-4e11-8ebc-8397ef8bf2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.276 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8372a1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.277 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.278 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e8372a1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:46 compute-0 kernel: tap5e8372a1-a0: entered promiscuous mode
Oct 02 08:50:46 compute-0 NetworkManager[45129]: <info>  [1759395046.2821] manager: (tap5e8372a1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.286 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e8372a1-a0, col_values=(('external_ids', {'iface-id': '6972524a-7a8a-4c19-a841-08e51ccd9aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:46 compute-0 ovn_controller[152344]: 2025-10-02T08:50:46Z|01250|binding|INFO|Releasing lport 6972524a-7a8a-4c19-a841-08e51ccd9aaa from this chassis (sb_readonly=0)
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.291 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e8372a1-ae46-455d-aa74-54645f729e73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e8372a1-ae46-455d-aa74-54645f729e73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.294 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2af8a7-8d37-41f7-9738-50e79e0d7a78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.295 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-5e8372a1-ae46-455d-aa74-54645f729e73
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/5e8372a1-ae46-455d-aa74-54645f729e73.pid.haproxy
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 5e8372a1-ae46-455d-aa74-54645f729e73
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:50:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.296 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'env', 'PROCESS_TAG=haproxy-5e8372a1-ae46-455d-aa74-54645f729e73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e8372a1-ae46-455d-aa74-54645f729e73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.698 2 DEBUG nova.compute.manager [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.699 2 DEBUG oslo_concurrency.lockutils [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.701 2 DEBUG oslo_concurrency.lockutils [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.701 2 DEBUG oslo_concurrency.lockutils [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.702 2 DEBUG nova.compute.manager [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Processing event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:50:46 compute-0 podman[383970]: 2025-10-02 08:50:46.757429404 +0000 UTC m=+0.069995582 container create 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 08:50:46 compute-0 systemd[1]: Started libpod-conmon-36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a.scope.
Oct 02 08:50:46 compute-0 podman[383970]: 2025-10-02 08:50:46.725243202 +0000 UTC m=+0.037809420 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:50:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55629b6f151fe9b01df698b94aebf7c6e06eef4cd3a7619588ea9563a64ff3ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.854 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395046.8535216, d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.855 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] VM Started (Lifecycle Event)
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.860 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.864 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.868 2 INFO nova.virt.libvirt.driver [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance spawned successfully.
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.869 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:50:46 compute-0 podman[383970]: 2025-10-02 08:50:46.874008358 +0000 UTC m=+0.186574616 container init 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.876 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.882 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:50:46 compute-0 podman[383970]: 2025-10-02 08:50:46.883592757 +0000 UTC m=+0.196158975 container start 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.896 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.897 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.898 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.899 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.899 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.900 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:50:46 compute-0 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [NOTICE]   (383989) : New worker (383991) forked
Oct 02 08:50:46 compute-0 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [NOTICE]   (383989) : Loading success.
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.952 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.952 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395046.8539078, d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.952 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] VM Paused (Lifecycle Event)
Oct 02 08:50:46 compute-0 ceph-mon[74477]: pgmap v2162: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 1.9 MiB/s wr, 72 op/s
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.989 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.993 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395046.8630984, d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:50:46 compute-0 nova_compute[260603]: 2025-10-02 08:50:46.993 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] VM Resumed (Lifecycle Event)
Oct 02 08:50:47 compute-0 nova_compute[260603]: 2025-10-02 08:50:47.016 2 INFO nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 9.36 seconds to spawn the instance on the hypervisor.
Oct 02 08:50:47 compute-0 nova_compute[260603]: 2025-10-02 08:50:47.017 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:47 compute-0 nova_compute[260603]: 2025-10-02 08:50:47.020 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:47 compute-0 nova_compute[260603]: 2025-10-02 08:50:47.025 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:50:47 compute-0 nova_compute[260603]: 2025-10-02 08:50:47.056 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:50:47 compute-0 nova_compute[260603]: 2025-10-02 08:50:47.081 2 INFO nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 10.48 seconds to build instance.
Oct 02 08:50:47 compute-0 nova_compute[260603]: 2025-10-02 08:50:47.100 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct 02 08:50:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:48 compute-0 nova_compute[260603]: 2025-10-02 08:50:48.838 2 DEBUG nova.compute.manager [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:48 compute-0 nova_compute[260603]: 2025-10-02 08:50:48.839 2 DEBUG oslo_concurrency.lockutils [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:50:48 compute-0 nova_compute[260603]: 2025-10-02 08:50:48.839 2 DEBUG oslo_concurrency.lockutils [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:50:48 compute-0 nova_compute[260603]: 2025-10-02 08:50:48.840 2 DEBUG oslo_concurrency.lockutils [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:50:48 compute-0 nova_compute[260603]: 2025-10-02 08:50:48.840 2 DEBUG nova.compute.manager [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] No waiting events found dispatching network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:50:48 compute-0 nova_compute[260603]: 2025-10-02 08:50:48.840 2 WARNING nova.compute.manager [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received unexpected event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 for instance with vm_state active and task_state None.
Oct 02 08:50:48 compute-0 ceph-mon[74477]: pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct 02 08:50:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 807 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 02 08:50:49 compute-0 nova_compute[260603]: 2025-10-02 08:50:49.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:49 compute-0 nova_compute[260603]: 2025-10-02 08:50:49.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:50 compute-0 nova_compute[260603]: 2025-10-02 08:50:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:50 compute-0 nova_compute[260603]: 2025-10-02 08:50:50.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:50 compute-0 NetworkManager[45129]: <info>  [1759395050.9093] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Oct 02 08:50:50 compute-0 NetworkManager[45129]: <info>  [1759395050.9104] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Oct 02 08:50:51 compute-0 ceph-mon[74477]: pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 807 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 02 08:50:51 compute-0 nova_compute[260603]: 2025-10-02 08:50:51.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:51 compute-0 ovn_controller[152344]: 2025-10-02T08:50:51Z|01251|binding|INFO|Releasing lport 6972524a-7a8a-4c19-a841-08e51ccd9aaa from this chassis (sb_readonly=0)
Oct 02 08:50:51 compute-0 nova_compute[260603]: 2025-10-02 08:50:51.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 795 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 08:50:51 compute-0 nova_compute[260603]: 2025-10-02 08:50:51.763 2 DEBUG nova.compute.manager [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:50:51 compute-0 nova_compute[260603]: 2025-10-02 08:50:51.763 2 DEBUG nova.compute.manager [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing instance network info cache due to event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:50:51 compute-0 nova_compute[260603]: 2025-10-02 08:50:51.764 2 DEBUG oslo_concurrency.lockutils [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:50:51 compute-0 nova_compute[260603]: 2025-10-02 08:50:51.764 2 DEBUG oslo_concurrency.lockutils [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:50:51 compute-0 nova_compute[260603]: 2025-10-02 08:50:51.764 2 DEBUG nova.network.neutron [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:50:52 compute-0 podman[384002]: 2025-10-02 08:50:52.035483083 +0000 UTC m=+0.082098599 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 08:50:52 compute-0 podman[384001]: 2025-10-02 08:50:52.054682102 +0000 UTC m=+0.101238947 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:50:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:53 compute-0 ceph-mon[74477]: pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 795 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 08:50:53 compute-0 nova_compute[260603]: 2025-10-02 08:50:53.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:50:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 08:50:53 compute-0 nova_compute[260603]: 2025-10-02 08:50:53.608 2 DEBUG nova.network.neutron [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updated VIF entry in instance network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:50:53 compute-0 nova_compute[260603]: 2025-10-02 08:50:53.609 2 DEBUG nova.network.neutron [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:50:53 compute-0 nova_compute[260603]: 2025-10-02 08:50:53.635 2 DEBUG oslo_concurrency.lockutils [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:50:54 compute-0 nova_compute[260603]: 2025-10-02 08:50:54.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:54 compute-0 nova_compute[260603]: 2025-10-02 08:50:54.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:54 compute-0 nova_compute[260603]: 2025-10-02 08:50:54.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:55 compute-0 ceph-mon[74477]: pgmap v2166: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 08:50:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:50:56 compute-0 nova_compute[260603]: 2025-10-02 08:50:56.736 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395041.7356882, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:50:56 compute-0 nova_compute[260603]: 2025-10-02 08:50:56.736 2 INFO nova.compute.manager [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Stopped (Lifecycle Event)
Oct 02 08:50:56 compute-0 nova_compute[260603]: 2025-10-02 08:50:56.758 2 DEBUG nova.compute.manager [None req-6f09ed6a-0597-4818-a383-fbc03c66b1ad - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:50:57 compute-0 ceph-mon[74477]: pgmap v2167: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:50:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 81 op/s
Oct 02 08:50:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:50:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:50:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:50:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:50:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:50:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:50:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:50:58 compute-0 ovn_controller[152344]: 2025-10-02T08:50:58Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:8a:1b 10.100.0.3
Oct 02 08:50:58 compute-0 ovn_controller[152344]: 2025-10-02T08:50:58Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:8a:1b 10.100.0.3
Oct 02 08:50:59 compute-0 ceph-mon[74477]: pgmap v2168: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 81 op/s
Oct 02 08:50:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 102 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct 02 08:50:59 compute-0 nova_compute[260603]: 2025-10-02 08:50:59.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:50:59 compute-0 nova_compute[260603]: 2025-10-02 08:50:59.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:00 compute-0 nova_compute[260603]: 2025-10-02 08:51:00.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:01 compute-0 ceph-mon[74477]: pgmap v2169: 305 pgs: 305 active+clean; 102 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct 02 08:51:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 102 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 54 op/s
Oct 02 08:51:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:02.318 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:51:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:02.320 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:51:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:02.321 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:02 compute-0 nova_compute[260603]: 2025-10-02 08:51:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:03 compute-0 ceph-mon[74477]: pgmap v2170: 305 pgs: 305 active+clean; 102 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 54 op/s
Oct 02 08:51:03 compute-0 nova_compute[260603]: 2025-10-02 08:51:03.374 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:03 compute-0 nova_compute[260603]: 2025-10-02 08:51:03.375 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:03 compute-0 nova_compute[260603]: 2025-10-02 08:51:03.395 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:51:03 compute-0 nova_compute[260603]: 2025-10-02 08:51:03.492 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:03 compute-0 nova_compute[260603]: 2025-10-02 08:51:03.493 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:03 compute-0 nova_compute[260603]: 2025-10-02 08:51:03.505 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:51:03 compute-0 nova_compute[260603]: 2025-10-02 08:51:03.505 2 INFO nova.compute.claims [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:51:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 121 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 02 08:51:03 compute-0 nova_compute[260603]: 2025-10-02 08:51:03.658 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:51:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4090223476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.127 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.137 2 DEBUG nova.compute.provider_tree [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.158 2 DEBUG nova.scheduler.client.report [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.187 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.188 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.250 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.250 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.274 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.295 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.415 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.416 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.417 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Creating image(s)
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.448 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.479 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.509 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.514 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.622 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.624 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.625 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.625 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.658 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.663 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.718 2 DEBUG nova.policy [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:04 compute-0 nova_compute[260603]: 2025-10-02 08:51:04.992 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:05 compute-0 nova_compute[260603]: 2025-10-02 08:51:05.050 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:51:05 compute-0 ceph-mon[74477]: pgmap v2171: 305 pgs: 305 active+clean; 121 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 02 08:51:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4090223476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:51:05 compute-0 nova_compute[260603]: 2025-10-02 08:51:05.175 2 DEBUG nova.objects.instance [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:05 compute-0 nova_compute[260603]: 2025-10-02 08:51:05.194 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:51:05 compute-0 nova_compute[260603]: 2025-10-02 08:51:05.195 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Ensure instance console log exists: /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:51:05 compute-0 nova_compute[260603]: 2025-10-02 08:51:05.195 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:05 compute-0 nova_compute[260603]: 2025-10-02 08:51:05.196 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:05 compute-0 nova_compute[260603]: 2025-10-02 08:51:05.197 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 121 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:51:05 compute-0 nova_compute[260603]: 2025-10-02 08:51:05.557 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Successfully created port: ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:51:06 compute-0 nova_compute[260603]: 2025-10-02 08:51:06.645 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Successfully updated port: ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:51:06 compute-0 nova_compute[260603]: 2025-10-02 08:51:06.671 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:51:06 compute-0 nova_compute[260603]: 2025-10-02 08:51:06.672 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:51:06 compute-0 nova_compute[260603]: 2025-10-02 08:51:06.672 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:51:06 compute-0 nova_compute[260603]: 2025-10-02 08:51:06.875 2 DEBUG nova.compute.manager [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:06 compute-0 nova_compute[260603]: 2025-10-02 08:51:06.876 2 DEBUG nova.compute.manager [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing instance network info cache due to event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:51:06 compute-0 nova_compute[260603]: 2025-10-02 08:51:06.877 2 DEBUG oslo_concurrency.lockutils [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:51:06 compute-0 nova_compute[260603]: 2025-10-02 08:51:06.927 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:51:07 compute-0 ceph-mon[74477]: pgmap v2172: 305 pgs: 305 active+clean; 121 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:51:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 154 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 3.4 MiB/s wr, 67 op/s
Oct 02 08:51:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.286 2 DEBUG nova.compute.manager [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.347 2 INFO nova.compute.manager [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] instance snapshotting
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.637 2 INFO nova.virt.libvirt.driver [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Beginning live snapshot process
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.690 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.717 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.718 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance network_info: |[{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.719 2 DEBUG oslo_concurrency.lockutils [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.719 2 DEBUG nova.network.neutron [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.725 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start _get_guest_xml network_info=[{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.818 2 DEBUG nova.virt.libvirt.imagebackend [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.824 2 WARNING nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.830 2 DEBUG nova.virt.libvirt.host [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.831 2 DEBUG nova.virt.libvirt.host [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.833 2 DEBUG nova.virt.libvirt.host [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.834 2 DEBUG nova.virt.libvirt.host [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.834 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.834 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.835 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.835 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.835 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.835 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.836 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.836 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.836 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.836 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.837 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.837 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:51:08 compute-0 nova_compute[260603]: 2025-10-02 08:51:08.839 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.098 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] creating snapshot(1105335cf2ba49d4b4fa4d2098cd8e03) on rbd image(d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:51:09 compute-0 ceph-mon[74477]: pgmap v2173: 305 pgs: 305 active+clean; 154 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 3.4 MiB/s wr, 67 op/s
Oct 02 08:51:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:51:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274241326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.316 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.339 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.343 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 167 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 08:51:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:51:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117980269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.902 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.905 2 DEBUG nova.virt.libvirt.vif [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:51:04Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.906 2 DEBUG nova.network.os_vif_util [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.908 2 DEBUG nova.network.os_vif_util [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.911 2 DEBUG nova.objects.instance [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.930 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <uuid>0239090c-f1eb-4b8b-8f45-94efee345fa5</uuid>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <name>instance-00000078</name>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1642420573</nova:name>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:51:08</nova:creationTime>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <nova:port uuid="ceb50499-7e3c-4d47-a2dc-05ce86dbbde1">
Oct 02 08:51:09 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <system>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <entry name="serial">0239090c-f1eb-4b8b-8f45-94efee345fa5</entry>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <entry name="uuid">0239090c-f1eb-4b8b-8f45-94efee345fa5</entry>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </system>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <os>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   </os>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <features>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   </features>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/0239090c-f1eb-4b8b-8f45-94efee345fa5_disk">
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       </source>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config">
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       </source>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:51:09 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:80:51:fb"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <target dev="tapceb50499-7e"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/console.log" append="off"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <video>
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </video>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:51:09 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:51:09 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:51:09 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:51:09 compute-0 nova_compute[260603]: </domain>
Oct 02 08:51:09 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.932 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Preparing to wait for external event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.932 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.933 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.933 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.933 2 DEBUG nova.virt.libvirt.vif [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:51:04Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.934 2 DEBUG nova.network.os_vif_util [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.934 2 DEBUG nova.network.os_vif_util [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.934 2 DEBUG os_vif [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapceb50499-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapceb50499-7e, col_values=(('external_ids', {'iface-id': 'ceb50499-7e3c-4d47-a2dc-05ce86dbbde1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:51:fb', 'vm-uuid': '0239090c-f1eb-4b8b-8f45-94efee345fa5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:09 compute-0 NetworkManager[45129]: <info>  [1759395069.9412] manager: (tapceb50499-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:09 compute-0 nova_compute[260603]: 2025-10-02 08:51:09.952 2 INFO os_vif [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e')
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.018 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.019 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.019 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:80:51:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.020 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Using config drive
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.055 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 02 08:51:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3274241326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3117980269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 02 08:51:10 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.181 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] cloning vms/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk@1105335cf2ba49d4b4fa4d2098cd8e03 to images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.283 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] flattening images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:51:10 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.611 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Creating config drive at /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.621 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1gfcyxm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.785 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1gfcyxm" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.817 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.821 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:10 compute-0 nova_compute[260603]: 2025-10-02 08:51:10.950 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] removing snapshot(1105335cf2ba49d4b4fa4d2098cd8e03) on rbd image(d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.033 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.034 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Deleting local config drive /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config because it was imported into RBD.
Oct 02 08:51:11 compute-0 NetworkManager[45129]: <info>  [1759395071.0845] manager: (tapceb50499-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Oct 02 08:51:11 compute-0 kernel: tapceb50499-7e: entered promiscuous mode
Oct 02 08:51:11 compute-0 ovn_controller[152344]: 2025-10-02T08:51:11Z|01252|binding|INFO|Claiming lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for this chassis.
Oct 02 08:51:11 compute-0 ovn_controller[152344]: 2025-10-02T08:51:11Z|01253|binding|INFO|ceb50499-7e3c-4d47-a2dc-05ce86dbbde1: Claiming fa:16:3e:80:51:fb 10.100.0.11
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.095 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:51:fb 10.100.0.11'], port_security=['fa:16:3e:80:51:fb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0239090c-f1eb-4b8b-8f45-94efee345fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a398fde7-e9b6-47e4-afc2-221c0b15c74f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=633447ab-1fed-4ecc-895b-fd3d7334df6b, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.096 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 in datapath 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 bound to our chassis
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.097 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:11 compute-0 ovn_controller[152344]: 2025-10-02T08:51:11Z|01254|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 ovn-installed in OVS
Oct 02 08:51:11 compute-0 ovn_controller[152344]: 2025-10-02T08:51:11Z|01255|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 up in Southbound
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.108 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c25d67f7-f348-47fb-9c75-131397a45d27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.111 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap21ddf177-31 in ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.114 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap21ddf177-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.114 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09849c76-9fd8-4274-8eb7-8681865ed451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3574f12d-4c5d-4b2e-99c4-adc4a3d25680]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 systemd-udevd[384489]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:51:11 compute-0 systemd-machined[214636]: New machine qemu-151-instance-00000078.
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.126 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[028e075e-b183-4f28-9562-3fcf03489057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 02 08:51:11 compute-0 ceph-mon[74477]: pgmap v2174: 305 pgs: 305 active+clean; 167 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Oct 02 08:51:11 compute-0 ceph-mon[74477]: osdmap e274: 3 total, 3 up, 3 in
Oct 02 08:51:11 compute-0 NetworkManager[45129]: <info>  [1759395071.1360] device (tapceb50499-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:51:11 compute-0 NetworkManager[45129]: <info>  [1759395071.1381] device (tapceb50499-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:51:11 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000078.
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.142 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18e1b5b8-ec87-46f9-8bb3-dfe668c3d73e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 02 08:51:11 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.172 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] creating snapshot(snap) on rbd image(bfda7a79-c502-4c19-8d8e-da8dbbb22d04) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.186 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b6375ff4-adb8-49ee-b572-4883bfb7f482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 systemd-udevd[384494]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[889ea84e-f3c8-4760-b4ce-137b3935b7b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 NetworkManager[45129]: <info>  [1759395071.1936] manager: (tap21ddf177-30): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.229 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f795a6c5-192f-45b0-962c-55570d0515a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.233 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed054c3-5c20-4bcb-b34b-d3f69b10fe12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 NetworkManager[45129]: <info>  [1759395071.2607] device (tap21ddf177-30): carrier: link connected
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.267 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb6088a-98e5-4f86-82e6-1fbd43c23df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.288 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83737bc9-90dc-4a32-8a78-74007622de40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21ddf177-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:39:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594986, 'reachable_time': 28269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384540, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25258b77-2469-4977-a23b-834d1e6b86a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:390d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594986, 'tstamp': 594986}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384541, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.327 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a565b21b-621d-4e44-9de6-d81c3215a6e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21ddf177-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:39:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594986, 'reachable_time': 28269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384542, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.361 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[77ba12d5-28a3-4710-b38d-3645f0395913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.416 2 DEBUG nova.compute.manager [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.417 2 DEBUG oslo_concurrency.lockutils [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.417 2 DEBUG oslo_concurrency.lockutils [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.417 2 DEBUG oslo_concurrency.lockutils [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.417 2 DEBUG nova.compute.manager [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Processing event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.448 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79b32e08-6024-46a4-9525-2ac131a81509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.449 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ddf177-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.450 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.450 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21ddf177-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:11 compute-0 NetworkManager[45129]: <info>  [1759395071.4531] manager: (tap21ddf177-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Oct 02 08:51:11 compute-0 kernel: tap21ddf177-30: entered promiscuous mode
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.457 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21ddf177-30, col_values=(('external_ids', {'iface-id': 'da5470e2-37c6-4448-8dab-7b3e1d890a66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:11 compute-0 ovn_controller[152344]: 2025-10-02T08:51:11Z|01256|binding|INFO|Releasing lport da5470e2-37c6-4448-8dab-7b3e1d890a66 from this chassis (sb_readonly=0)
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.493 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.494 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f821c61-d88d-4849-976e-a3772f578c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.495 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:51:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.495 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'env', 'PROCESS_TAG=haproxy-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.544 2 DEBUG nova.network.neutron [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updated VIF entry in instance network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.544 2 DEBUG nova.network.neutron [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:51:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 167 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct 02 08:51:11 compute-0 nova_compute[260603]: 2025-10-02 08:51:11.568 2 DEBUG oslo_concurrency.lockutils [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:51:11 compute-0 podman[384589]: 2025-10-02 08:51:11.945981033 +0000 UTC m=+0.090323526 container create f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:51:11 compute-0 podman[384589]: 2025-10-02 08:51:11.909203216 +0000 UTC m=+0.053545789 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:51:12 compute-0 systemd[1]: Started libpod-conmon-f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3.scope.
Oct 02 08:51:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a077a86bcce7e5489f801b5d34f88a8eb9ddfc03b2537cc12437d7144a2267e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:12 compute-0 podman[384589]: 2025-10-02 08:51:12.06715043 +0000 UTC m=+0.211492903 container init f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:51:12 compute-0 podman[384589]: 2025-10-02 08:51:12.085653947 +0000 UTC m=+0.229996420 container start f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 08:51:12 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [NOTICE]   (384636) : New worker (384638) forked
Oct 02 08:51:12 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [NOTICE]   (384636) : Loading success.
Oct 02 08:51:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 02 08:51:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 02 08:51:12 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 02 08:51:12 compute-0 ceph-mon[74477]: osdmap e275: 3 total, 3 up, 3 in
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.419 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395072.4181633, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.419 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Started (Lifecycle Event)
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.422 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.426 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.431 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance spawned successfully.
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.432 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.453 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.464 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.464 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.464 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.465 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.465 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.466 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.480 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.480 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395072.418307, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.480 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Paused (Lifecycle Event)
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.509 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.512 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395072.4249125, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.513 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Resumed (Lifecycle Event)
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.529 2 INFO nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Took 8.11 seconds to spawn the instance on the hypervisor.
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.529 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.530 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.538 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.609 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.615 2 INFO nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Took 9.16 seconds to build instance.
Oct 02 08:51:12 compute-0 nova_compute[260603]: 2025-10-02 08:51:12.629 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:12 compute-0 sudo[384647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:12 compute-0 sudo[384647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:12 compute-0 sudo[384647]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:12 compute-0 sudo[384672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:51:12 compute-0 sudo[384672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:12 compute-0 sudo[384672]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:12 compute-0 sudo[384697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:12 compute-0 sudo[384697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:12 compute-0 sudo[384697]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:12 compute-0 sudo[384722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 08:51:12 compute-0 sudo[384722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:13 compute-0 ceph-mon[74477]: pgmap v2177: 305 pgs: 305 active+clean; 167 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct 02 08:51:13 compute-0 ceph-mon[74477]: osdmap e276: 3 total, 3 up, 3 in
Oct 02 08:51:13 compute-0 podman[384819]: 2025-10-02 08:51:13.440730403 +0000 UTC m=+0.068803666 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:51:13 compute-0 nova_compute[260603]: 2025-10-02 08:51:13.473 2 INFO nova.virt.libvirt.driver [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Snapshot image upload complete
Oct 02 08:51:13 compute-0 nova_compute[260603]: 2025-10-02 08:51:13.474 2 INFO nova.compute.manager [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 5.12 seconds to snapshot the instance on the hypervisor.
Oct 02 08:51:13 compute-0 nova_compute[260603]: 2025-10-02 08:51:13.491 2 DEBUG nova.compute.manager [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:13 compute-0 nova_compute[260603]: 2025-10-02 08:51:13.492 2 DEBUG oslo_concurrency.lockutils [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:13 compute-0 nova_compute[260603]: 2025-10-02 08:51:13.493 2 DEBUG oslo_concurrency.lockutils [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:13 compute-0 nova_compute[260603]: 2025-10-02 08:51:13.493 2 DEBUG oslo_concurrency.lockutils [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:13 compute-0 nova_compute[260603]: 2025-10-02 08:51:13.494 2 DEBUG nova.compute.manager [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:51:13 compute-0 nova_compute[260603]: 2025-10-02 08:51:13.494 2 WARNING nova.compute.manager [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state active and task_state None.
Oct 02 08:51:13 compute-0 podman[384819]: 2025-10-02 08:51:13.537892311 +0000 UTC m=+0.165965474 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 08:51:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 8.8 MiB/s wr, 239 op/s
Oct 02 08:51:14 compute-0 sudo[384722]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:51:14 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:51:14 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:14 compute-0 sudo[384977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:14 compute-0 sudo[384977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:14 compute-0 sudo[384977]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:14 compute-0 sudo[385002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:51:14 compute-0 sudo[385002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:14 compute-0 sudo[385002]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:14 compute-0 sudo[385027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:14 compute-0 sudo[385027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:14 compute-0 sudo[385027]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:14 compute-0 sudo[385052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:51:14 compute-0 sudo[385052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:14 compute-0 nova_compute[260603]: 2025-10-02 08:51:14.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:14 compute-0 nova_compute[260603]: 2025-10-02 08:51:14.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:15 compute-0 sudo[385052]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:15 compute-0 ceph-mon[74477]: pgmap v2179: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 8.8 MiB/s wr, 239 op/s
Oct 02 08:51:15 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:15 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 08:51:15 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 08:51:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:51:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:51:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:51:15 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:51:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:51:15 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:15 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0e50915f-49a1-4c61-82e6-cdd9804f7774 does not exist
Oct 02 08:51:15 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 22a74ab9-e60d-4604-b1c6-f4255d105b16 does not exist
Oct 02 08:51:15 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1afe87f1-1bdd-48db-86c4-65653b7e57d3 does not exist
Oct 02 08:51:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:51:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:51:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:51:15 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:51:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:51:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:51:15 compute-0 sudo[385110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:15 compute-0 sudo[385110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:15 compute-0 sudo[385110]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:15 compute-0 sudo[385147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:51:15 compute-0 sudo[385147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:15 compute-0 podman[385135]: 2025-10-02 08:51:15.346005437 +0000 UTC m=+0.057853084 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 08:51:15 compute-0 sudo[385147]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:15 compute-0 podman[385134]: 2025-10-02 08:51:15.373723331 +0000 UTC m=+0.086324442 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:51:15 compute-0 sudo[385203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:15 compute-0 sudo[385203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:15 compute-0 sudo[385203]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:15 compute-0 sudo[385231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:51:15 compute-0 sudo[385231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 7.8 MiB/s wr, 192 op/s
Oct 02 08:51:15 compute-0 nova_compute[260603]: 2025-10-02 08:51:15.772 2 DEBUG nova.compute.manager [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:15 compute-0 nova_compute[260603]: 2025-10-02 08:51:15.773 2 DEBUG nova.compute.manager [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing instance network info cache due to event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:51:15 compute-0 nova_compute[260603]: 2025-10-02 08:51:15.773 2 DEBUG oslo_concurrency.lockutils [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:51:15 compute-0 nova_compute[260603]: 2025-10-02 08:51:15.773 2 DEBUG oslo_concurrency.lockutils [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:51:15 compute-0 nova_compute[260603]: 2025-10-02 08:51:15.773 2 DEBUG nova.network.neutron [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:51:15 compute-0 podman[385296]: 2025-10-02 08:51:15.801450583 +0000 UTC m=+0.052080125 container create 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 08:51:15 compute-0 systemd[1]: Started libpod-conmon-13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35.scope.
Oct 02 08:51:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:51:15 compute-0 podman[385296]: 2025-10-02 08:51:15.782063428 +0000 UTC m=+0.032692990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:51:15 compute-0 podman[385296]: 2025-10-02 08:51:15.881546339 +0000 UTC m=+0.132175971 container init 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:51:15 compute-0 podman[385296]: 2025-10-02 08:51:15.893919615 +0000 UTC m=+0.144549157 container start 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:51:15 compute-0 podman[385296]: 2025-10-02 08:51:15.897137184 +0000 UTC m=+0.147766726 container attach 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 08:51:15 compute-0 gifted_cray[385311]: 167 167
Oct 02 08:51:15 compute-0 systemd[1]: libpod-13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35.scope: Deactivated successfully.
Oct 02 08:51:15 compute-0 conmon[385311]: conmon 13bf8f55f760cad54c00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35.scope/container/memory.events
Oct 02 08:51:15 compute-0 podman[385296]: 2025-10-02 08:51:15.899300412 +0000 UTC m=+0.149929954 container died 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 08:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-936ab3dc185968f5543489d1595bfa247c7688d24667f0a037bd2d0b33ea59b9-merged.mount: Deactivated successfully.
Oct 02 08:51:15 compute-0 podman[385296]: 2025-10-02 08:51:15.936018007 +0000 UTC m=+0.186647549 container remove 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 02 08:51:15 compute-0 systemd[1]: libpod-conmon-13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35.scope: Deactivated successfully.
Oct 02 08:51:16 compute-0 podman[385335]: 2025-10-02 08:51:16.116050178 +0000 UTC m=+0.041138153 container create 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:51:16 compute-0 systemd[1]: Started libpod-conmon-4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379.scope.
Oct 02 08:51:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:16 compute-0 podman[385335]: 2025-10-02 08:51:16.101336069 +0000 UTC m=+0.026424084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:51:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 08:51:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:51:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:51:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:51:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:51:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:51:16 compute-0 podman[385335]: 2025-10-02 08:51:16.225791398 +0000 UTC m=+0.150879473 container init 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 08:51:16 compute-0 podman[385335]: 2025-10-02 08:51:16.236834663 +0000 UTC m=+0.161922678 container start 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:51:16 compute-0 podman[385335]: 2025-10-02 08:51:16.252433219 +0000 UTC m=+0.177521244 container attach 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.153 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.155 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.212 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:51:17 compute-0 ceph-mon[74477]: pgmap v2180: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 7.8 MiB/s wr, 192 op/s
Oct 02 08:51:17 compute-0 wonderful_lewin[385352]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:51:17 compute-0 wonderful_lewin[385352]: --> relative data size: 1.0
Oct 02 08:51:17 compute-0 wonderful_lewin[385352]: --> All data devices are unavailable
Oct 02 08:51:17 compute-0 systemd[1]: libpod-4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379.scope: Deactivated successfully.
Oct 02 08:51:17 compute-0 podman[385335]: 2025-10-02 08:51:17.272188913 +0000 UTC m=+1.197276908 container died 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:51:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466-merged.mount: Deactivated successfully.
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.306 2 DEBUG nova.network.neutron [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updated VIF entry in instance network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.307 2 DEBUG nova.network.neutron [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.330 2 DEBUG oslo_concurrency.lockutils [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:51:17 compute-0 podman[385335]: 2025-10-02 08:51:17.336876249 +0000 UTC m=+1.261964264 container remove 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.336 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.337 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.349 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.349 2 INFO nova.compute.claims [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:51:17 compute-0 systemd[1]: libpod-conmon-4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379.scope: Deactivated successfully.
Oct 02 08:51:17 compute-0 sudo[385231]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:17 compute-0 sudo[385393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:17 compute-0 sudo[385393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:17 compute-0 sudo[385393]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.507 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:17 compute-0 sudo[385418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:51:17 compute-0 sudo[385418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 6.3 MiB/s wr, 239 op/s
Oct 02 08:51:17 compute-0 sudo[385418]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:17 compute-0 sudo[385444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:17 compute-0 sudo[385444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:17 compute-0 sudo[385444]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:17 compute-0 sudo[385469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:51:17 compute-0 sudo[385469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:51:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3651839947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.961 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.968 2 DEBUG nova.compute.provider_tree [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:51:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 02 08:51:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 02 08:51:17 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 02 08:51:17 compute-0 nova_compute[260603]: 2025-10-02 08:51:17.985 2 DEBUG nova.scheduler.client.report [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.013 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.014 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:51:18 compute-0 podman[385555]: 2025-10-02 08:51:18.052540586 +0000 UTC m=+0.036827490 container create 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.063 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.064 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.085 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:51:18 compute-0 systemd[1]: Started libpod-conmon-2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b.scope.
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.107 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:51:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:51:18 compute-0 podman[385555]: 2025-10-02 08:51:18.037274279 +0000 UTC m=+0.021561183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:51:18 compute-0 podman[385555]: 2025-10-02 08:51:18.143031436 +0000 UTC m=+0.127318350 container init 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 02 08:51:18 compute-0 podman[385555]: 2025-10-02 08:51:18.155261217 +0000 UTC m=+0.139548111 container start 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:51:18 compute-0 podman[385555]: 2025-10-02 08:51:18.15920787 +0000 UTC m=+0.143494784 container attach 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:51:18 compute-0 great_johnson[385571]: 167 167
Oct 02 08:51:18 compute-0 systemd[1]: libpod-2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b.scope: Deactivated successfully.
Oct 02 08:51:18 compute-0 podman[385555]: 2025-10-02 08:51:18.163304908 +0000 UTC m=+0.147591822 container died 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e4ceaadd6d083aa073e5162a1ec8a774980b2ba651d661c8829a34b9a73544b-merged.mount: Deactivated successfully.
Oct 02 08:51:18 compute-0 podman[385555]: 2025-10-02 08:51:18.216597148 +0000 UTC m=+0.200884072 container remove 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:51:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3651839947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:51:18 compute-0 ceph-mon[74477]: osdmap e277: 3 total, 3 up, 3 in
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.229 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.233 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.234 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Creating image(s)
Oct 02 08:51:18 compute-0 systemd[1]: libpod-conmon-2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b.scope: Deactivated successfully.
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.288 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.323 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.384 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.394 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "61b3d24d77efae9abe448d859237f603a5c1f35c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.395 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "61b3d24d77efae9abe448d859237f603a5c1f35c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.462 2 DEBUG nova.policy [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aceb8b0273154f1abe964d78a6261936', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:51:18 compute-0 podman[385648]: 2025-10-02 08:51:18.469528062 +0000 UTC m=+0.062149848 container create 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:51:18 compute-0 systemd[1]: Started libpod-conmon-4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5.scope.
Oct 02 08:51:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:51:18 compute-0 podman[385648]: 2025-10-02 08:51:18.450723336 +0000 UTC m=+0.043345152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:18 compute-0 podman[385648]: 2025-10-02 08:51:18.565915447 +0000 UTC m=+0.158537273 container init 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:51:18 compute-0 podman[385648]: 2025-10-02 08:51:18.572814961 +0000 UTC m=+0.165436767 container start 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:51:18 compute-0 podman[385648]: 2025-10-02 08:51:18.576710033 +0000 UTC m=+0.169331919 container attach 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.628 2 DEBUG nova.virt.libvirt.imagebackend [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.720 2 DEBUG nova.virt.libvirt.imagebackend [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.721 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] cloning images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04@snap to None/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.847 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "61b3d24d77efae9abe448d859237f603a5c1f35c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.964 2 DEBUG nova.objects.instance [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 062b2a3e-b612-42bf-b96c-fb9bdd9008ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.982 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.983 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Ensure instance console log exists: /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.983 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.984 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:18 compute-0 nova_compute[260603]: 2025-10-02 08:51:18.984 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:19 compute-0 ceph-mon[74477]: pgmap v2181: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 6.3 MiB/s wr, 239 op/s
Oct 02 08:51:19 compute-0 condescending_carson[385665]: {
Oct 02 08:51:19 compute-0 condescending_carson[385665]:     "0": [
Oct 02 08:51:19 compute-0 condescending_carson[385665]:         {
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "devices": [
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "/dev/loop3"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             ],
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_name": "ceph_lv0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_size": "21470642176",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "name": "ceph_lv0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "tags": {
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cluster_name": "ceph",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.crush_device_class": "",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.encrypted": "0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osd_id": "0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.type": "block",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.vdo": "0"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             },
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "type": "block",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "vg_name": "ceph_vg0"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:         }
Oct 02 08:51:19 compute-0 condescending_carson[385665]:     ],
Oct 02 08:51:19 compute-0 condescending_carson[385665]:     "1": [
Oct 02 08:51:19 compute-0 condescending_carson[385665]:         {
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "devices": [
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "/dev/loop4"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             ],
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_name": "ceph_lv1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_size": "21470642176",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "name": "ceph_lv1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "tags": {
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cluster_name": "ceph",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.crush_device_class": "",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.encrypted": "0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osd_id": "1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.type": "block",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.vdo": "0"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             },
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "type": "block",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "vg_name": "ceph_vg1"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:         }
Oct 02 08:51:19 compute-0 condescending_carson[385665]:     ],
Oct 02 08:51:19 compute-0 condescending_carson[385665]:     "2": [
Oct 02 08:51:19 compute-0 condescending_carson[385665]:         {
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "devices": [
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "/dev/loop5"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             ],
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_name": "ceph_lv2",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_size": "21470642176",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "name": "ceph_lv2",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "tags": {
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.cluster_name": "ceph",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.crush_device_class": "",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.encrypted": "0",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osd_id": "2",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.type": "block",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:                 "ceph.vdo": "0"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             },
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "type": "block",
Oct 02 08:51:19 compute-0 condescending_carson[385665]:             "vg_name": "ceph_vg2"
Oct 02 08:51:19 compute-0 condescending_carson[385665]:         }
Oct 02 08:51:19 compute-0 condescending_carson[385665]:     ]
Oct 02 08:51:19 compute-0 condescending_carson[385665]: }
Oct 02 08:51:19 compute-0 systemd[1]: libpod-4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5.scope: Deactivated successfully.
Oct 02 08:51:19 compute-0 podman[385648]: 2025-10-02 08:51:19.344542175 +0000 UTC m=+0.937163971 container died 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:51:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a-merged.mount: Deactivated successfully.
Oct 02 08:51:19 compute-0 podman[385648]: 2025-10-02 08:51:19.42266363 +0000 UTC m=+1.015285426 container remove 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 08:51:19 compute-0 systemd[1]: libpod-conmon-4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5.scope: Deactivated successfully.
Oct 02 08:51:19 compute-0 sudo[385469]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:19 compute-0 sudo[385810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:19 compute-0 sudo[385810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:19 compute-0 sudo[385810]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 246 op/s
Oct 02 08:51:19 compute-0 sudo[385835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:51:19 compute-0 sudo[385835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:19 compute-0 sudo[385835]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:19 compute-0 sudo[385860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:19 compute-0 sudo[385860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:19 compute-0 sudo[385860]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:19 compute-0 sudo[385885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:51:19 compute-0 sudo[385885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:19 compute-0 nova_compute[260603]: 2025-10-02 08:51:19.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:19 compute-0 nova_compute[260603]: 2025-10-02 08:51:19.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:20 compute-0 podman[385951]: 2025-10-02 08:51:20.102575462 +0000 UTC m=+0.041468223 container create a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 08:51:20 compute-0 systemd[1]: Started libpod-conmon-a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a.scope.
Oct 02 08:51:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:51:20 compute-0 podman[385951]: 2025-10-02 08:51:20.178355654 +0000 UTC m=+0.117248495 container init a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:51:20 compute-0 podman[385951]: 2025-10-02 08:51:20.08548609 +0000 UTC m=+0.024378871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:51:20 compute-0 podman[385951]: 2025-10-02 08:51:20.186610652 +0000 UTC m=+0.125503433 container start a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:51:20 compute-0 podman[385951]: 2025-10-02 08:51:20.190097559 +0000 UTC m=+0.128990420 container attach a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 08:51:20 compute-0 amazing_bardeen[385965]: 167 167
Oct 02 08:51:20 compute-0 systemd[1]: libpod-a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a.scope: Deactivated successfully.
Oct 02 08:51:20 compute-0 podman[385951]: 2025-10-02 08:51:20.19266393 +0000 UTC m=+0.131556691 container died a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-50073cabf32ad22f7a353d7d862624d82ad2ed468d8087112e9d10972707a87c-merged.mount: Deactivated successfully.
Oct 02 08:51:20 compute-0 podman[385951]: 2025-10-02 08:51:20.249183202 +0000 UTC m=+0.188076003 container remove a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 08:51:20 compute-0 ceph-mon[74477]: pgmap v2183: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 246 op/s
Oct 02 08:51:20 compute-0 systemd[1]: libpod-conmon-a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a.scope: Deactivated successfully.
Oct 02 08:51:20 compute-0 podman[385991]: 2025-10-02 08:51:20.489782861 +0000 UTC m=+0.053991924 container create 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:51:20 compute-0 systemd[1]: Started libpod-conmon-48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778.scope.
Oct 02 08:51:20 compute-0 podman[385991]: 2025-10-02 08:51:20.466547286 +0000 UTC m=+0.030756439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:51:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:20 compute-0 podman[385991]: 2025-10-02 08:51:20.591821081 +0000 UTC m=+0.156030164 container init 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 08:51:20 compute-0 podman[385991]: 2025-10-02 08:51:20.603163335 +0000 UTC m=+0.167372408 container start 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:51:20 compute-0 podman[385991]: 2025-10-02 08:51:20.607075397 +0000 UTC m=+0.171284480 container attach 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 08:51:21 compute-0 nova_compute[260603]: 2025-10-02 08:51:21.020 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Successfully created port: 06e58664-5a1a-4f08-9eb3-6a275e07a062 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]: {
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "osd_id": 2,
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "type": "bluestore"
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:     },
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "osd_id": 1,
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "type": "bluestore"
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:     },
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "osd_id": 0,
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:         "type": "bluestore"
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]:     }
Oct 02 08:51:21 compute-0 thirsty_merkle[386008]: }
Oct 02 08:51:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 5.0 MiB/s wr, 209 op/s
Oct 02 08:51:21 compute-0 systemd[1]: libpod-48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778.scope: Deactivated successfully.
Oct 02 08:51:21 compute-0 podman[385991]: 2025-10-02 08:51:21.580395303 +0000 UTC m=+1.144604386 container died 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a-merged.mount: Deactivated successfully.
Oct 02 08:51:21 compute-0 podman[385991]: 2025-10-02 08:51:21.686449829 +0000 UTC m=+1.250658902 container remove 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:51:21 compute-0 systemd[1]: libpod-conmon-48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778.scope: Deactivated successfully.
Oct 02 08:51:21 compute-0 sudo[385885]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:51:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:51:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b0895a0b-30d1-48e7-9971-9128fd013df3 does not exist
Oct 02 08:51:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev fe385605-b082-4473-add0-d028a94f29b6 does not exist
Oct 02 08:51:21 compute-0 sudo[386055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:51:21 compute-0 sudo[386055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:21 compute-0 sudo[386055]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:21 compute-0 sudo[386080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:51:21 compute-0 sudo[386080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:51:21 compute-0 sudo[386080]: pam_unix(sudo:session): session closed for user root
Oct 02 08:51:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:51:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641325275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:51:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:51:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641325275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:51:22 compute-0 nova_compute[260603]: 2025-10-02 08:51:22.271 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Successfully updated port: 06e58664-5a1a-4f08-9eb3-6a275e07a062 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:51:22 compute-0 nova_compute[260603]: 2025-10-02 08:51:22.289 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:51:22 compute-0 nova_compute[260603]: 2025-10-02 08:51:22.290 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquired lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:51:22 compute-0 nova_compute[260603]: 2025-10-02 08:51:22.290 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:51:22 compute-0 nova_compute[260603]: 2025-10-02 08:51:22.386 2 DEBUG nova.compute.manager [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:22 compute-0 nova_compute[260603]: 2025-10-02 08:51:22.387 2 DEBUG nova.compute.manager [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing instance network info cache due to event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:51:22 compute-0 nova_compute[260603]: 2025-10-02 08:51:22.387 2 DEBUG oslo_concurrency.lockutils [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:51:22 compute-0 nova_compute[260603]: 2025-10-02 08:51:22.504 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:51:22 compute-0 ceph-mon[74477]: pgmap v2184: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 5.0 MiB/s wr, 209 op/s
Oct 02 08:51:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:51:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3641325275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:51:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3641325275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:51:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:23 compute-0 podman[386105]: 2025-10-02 08:51:23.055104148 +0000 UTC m=+0.100364839 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 02 08:51:23 compute-0 podman[386106]: 2025-10-02 08:51:23.059310099 +0000 UTC m=+0.104337333 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.450 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.470 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Releasing lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.471 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance network_info: |[{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.471 2 DEBUG oslo_concurrency.lockutils [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.471 2 DEBUG nova.network.neutron [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.474 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start _get_guest_xml network_info=[{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:51:08Z,direct_url=<?>,disk_format='raw',id=bfda7a79-c502-4c19-8d8e-da8dbbb22d04,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-694582967',owner='bfcc44155f2d45ff9f66fe254a7b21c7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:51:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'bfda7a79-c502-4c19-8d8e-da8dbbb22d04'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.481 2 WARNING nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.487 2 DEBUG nova.virt.libvirt.host [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.488 2 DEBUG nova.virt.libvirt.host [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.492 2 DEBUG nova.virt.libvirt.host [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.493 2 DEBUG nova.virt.libvirt.host [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.493 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.493 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:51:08Z,direct_url=<?>,disk_format='raw',id=bfda7a79-c502-4c19-8d8e-da8dbbb22d04,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-694582967',owner='bfcc44155f2d45ff9f66fe254a7b21c7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:51:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.494 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.494 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.494 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.495 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.495 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.495 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.495 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.496 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.496 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.496 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.500 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 249 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 283 KiB/s wr, 118 op/s
Oct 02 08:51:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:51:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600227038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.948 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.985 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:23 compute-0 nova_compute[260603]: 2025-10-02 08:51:23.991 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:51:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948412629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.460 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.462 2 DEBUG nova.virt.libvirt.vif [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1642693838',display_name='tempest-TestSnapshotPattern-server-1642693838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1642693838',id=121,image_ref='bfda7a79-c502-4c19-8d8e-da8dbbb22d04',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-uis0qfoh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d5e3e825-fcee-4f1b-8c05-5a9ee07013d7',image_min_disk='1',image_min_ram='0',image_owner_id='bfcc44155f2d45ff9f66fe254a7b21c7',image_owner_project_name='tempest-TestSnapshotPattern-495107275',image_owner_user_name='tempest-TestSnapshotPattern-495107275-project-member',image_user_id='aceb8b0273154f1abe964d78a6261936',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:51:18Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=062b2a3e-b61
2-42bf-b96c-fb9bdd9008ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.462 2 DEBUG nova.network.os_vif_util [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.463 2 DEBUG nova.network.os_vif_util [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.464 2 DEBUG nova.objects.instance [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 062b2a3e-b612-42bf-b96c-fb9bdd9008ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.495 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <uuid>062b2a3e-b612-42bf-b96c-fb9bdd9008ee</uuid>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <name>instance-00000079</name>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <nova:name>tempest-TestSnapshotPattern-server-1642693838</nova:name>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:51:23</nova:creationTime>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <nova:user uuid="aceb8b0273154f1abe964d78a6261936">tempest-TestSnapshotPattern-495107275-project-member</nova:user>
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <nova:project uuid="bfcc44155f2d45ff9f66fe254a7b21c7">tempest-TestSnapshotPattern-495107275</nova:project>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="bfda7a79-c502-4c19-8d8e-da8dbbb22d04"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <nova:port uuid="06e58664-5a1a-4f08-9eb3-6a275e07a062">
Oct 02 08:51:24 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <system>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <entry name="serial">062b2a3e-b612-42bf-b96c-fb9bdd9008ee</entry>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <entry name="uuid">062b2a3e-b612-42bf-b96c-fb9bdd9008ee</entry>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </system>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <os>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   </os>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <features>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   </features>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk">
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       </source>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config">
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       </source>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:51:24 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:e1:f8:b9"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <target dev="tap06e58664-5a"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/console.log" append="off"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <video>
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </video>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:51:24 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:51:24 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:51:24 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:51:24 compute-0 nova_compute[260603]: </domain>
Oct 02 08:51:24 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.496 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Preparing to wait for external event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.497 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.497 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.498 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.499 2 DEBUG nova.virt.libvirt.vif [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1642693838',display_name='tempest-TestSnapshotPattern-server-1642693838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1642693838',id=121,image_ref='bfda7a79-c502-4c19-8d8e-da8dbbb22d04',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-uis0qfoh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d5e3e825-fcee-4f1b-8c05-5a9ee07013d7',image_min_disk='1',image_min_ram='0',image_owner_id='bfcc44155f2d45ff9f66fe254a7b21c7',image_owner_project_name='tempest-TestSnapshotPattern-495107275',image_owner_user_name='tempest-TestSnapshotPattern-495107275-project-member',image_user_id='aceb8b0273154f1abe964d78a6261936',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:51:18Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=062b2a3e-b612-42bf-b96c-fb9bdd9008ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.500 2 DEBUG nova.network.os_vif_util [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.501 2 DEBUG nova.network.os_vif_util [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.501 2 DEBUG os_vif [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.503 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06e58664-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06e58664-5a, col_values=(('external_ids', {'iface-id': '06e58664-5a1a-4f08-9eb3-6a275e07a062', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:f8:b9', 'vm-uuid': '062b2a3e-b612-42bf-b96c-fb9bdd9008ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:24 compute-0 NetworkManager[45129]: <info>  [1759395084.5096] manager: (tap06e58664-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.517 2 INFO os_vif [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a')
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.586 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.586 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.587 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No VIF found with MAC fa:16:3e:e1:f8:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.587 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Using config drive
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.607 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:24 compute-0 nova_compute[260603]: 2025-10-02 08:51:24.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:24 compute-0 ceph-mon[74477]: pgmap v2185: 305 pgs: 305 active+clean; 249 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 283 KiB/s wr, 118 op/s
Oct 02 08:51:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1600227038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1948412629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:25 compute-0 ovn_controller[152344]: 2025-10-02T08:51:25Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:51:fb 10.100.0.11
Oct 02 08:51:25 compute-0 ovn_controller[152344]: 2025-10-02T08:51:25Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:51:fb 10.100.0.11
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.547 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Creating config drive at /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.557 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1haa8op execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 249 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 283 KiB/s wr, 118 op/s
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.727 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1haa8op" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.756 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.760 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.812 2 DEBUG nova.network.neutron [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updated VIF entry in instance network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.813 2 DEBUG nova.network.neutron [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.840 2 DEBUG oslo_concurrency.lockutils [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.970 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:25 compute-0 nova_compute[260603]: 2025-10-02 08:51:25.972 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Deleting local config drive /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config because it was imported into RBD.
Oct 02 08:51:26 compute-0 kernel: tap06e58664-5a: entered promiscuous mode
Oct 02 08:51:26 compute-0 NetworkManager[45129]: <info>  [1759395086.0353] manager: (tap06e58664-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/499)
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:26 compute-0 ovn_controller[152344]: 2025-10-02T08:51:26Z|01257|binding|INFO|Claiming lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 for this chassis.
Oct 02 08:51:26 compute-0 ovn_controller[152344]: 2025-10-02T08:51:26Z|01258|binding|INFO|06e58664-5a1a-4f08-9eb3-6a275e07a062: Claiming fa:16:3e:e1:f8:b9 10.100.0.8
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.048 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:f8:b9 10.100.0.8'], port_security=['fa:16:3e:e1:f8:b9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '062b2a3e-b612-42bf-b96c-fb9bdd9008ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=06e58664-5a1a-4f08-9eb3-6a275e07a062) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.050 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 06e58664-5a1a-4f08-9eb3-6a275e07a062 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 bound to our chassis
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.053 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e8372a1-ae46-455d-aa74-54645f729e73
Oct 02 08:51:26 compute-0 ovn_controller[152344]: 2025-10-02T08:51:26Z|01259|binding|INFO|Setting lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 ovn-installed in OVS
Oct 02 08:51:26 compute-0 ovn_controller[152344]: 2025-10-02T08:51:26Z|01260|binding|INFO|Setting lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 up in Southbound
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.070 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6bddbc90-097b-4c3b-b21e-090b2d01582a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:26 compute-0 systemd-machined[214636]: New machine qemu-152-instance-00000079.
Oct 02 08:51:26 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000079.
Oct 02 08:51:26 compute-0 systemd-udevd[386283]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.106 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f9067118-80b2-4070-97e7-b10395b04f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.113 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[58d69353-66a0-4a65-afe0-d67b5dcf5fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:26 compute-0 NetworkManager[45129]: <info>  [1759395086.1169] device (tap06e58664-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:51:26 compute-0 NetworkManager[45129]: <info>  [1759395086.1183] device (tap06e58664-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.143 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[63c2711a-7897-4062-9f11-0fe01f97b0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.167 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a18c9a18-2d6d-429a-8ea2-2f1a32e31ce6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e8372a1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:60:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592471, 'reachable_time': 17032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386293, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62529704-a1fa-4f7d-ac17-5367e5bec152]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e8372a1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592482, 'tstamp': 592482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386294, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e8372a1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592486, 'tstamp': 592486}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386294, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.185 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8372a1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.222 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e8372a1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.222 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.222 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e8372a1-a0, col_values=(('external_ids', {'iface-id': '6972524a-7a8a-4c19-a841-08e51ccd9aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.223 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.660 2 DEBUG nova.compute.manager [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.661 2 DEBUG oslo_concurrency.lockutils [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.662 2 DEBUG oslo_concurrency.lockutils [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.662 2 DEBUG oslo_concurrency.lockutils [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.662 2 DEBUG nova.compute.manager [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Processing event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:51:26 compute-0 ceph-mon[74477]: pgmap v2186: 305 pgs: 305 active+clean; 249 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 283 KiB/s wr, 118 op/s
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.940 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395086.9399688, 062b2a3e-b612-42bf-b96c-fb9bdd9008ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.941 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] VM Started (Lifecycle Event)
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.943 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.948 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.952 2 INFO nova.virt.libvirt.driver [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance spawned successfully.
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.952 2 INFO nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 8.72 seconds to spawn the instance on the hypervisor.
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.953 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.963 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:26 compute-0 nova_compute[260603]: 2025-10-02 08:51:26.967 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.000 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.000 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395086.9409995, 062b2a3e-b612-42bf-b96c-fb9bdd9008ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.001 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] VM Paused (Lifecycle Event)
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.033 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.035 2 INFO nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 9.75 seconds to build instance.
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.039 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395086.9468055, 062b2a3e-b612-42bf-b96c-fb9bdd9008ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.039 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] VM Resumed (Lifecycle Event)
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.062 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.065 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:27 compute-0 nova_compute[260603]: 2025-10-02 08:51:27.067 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:51:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 269 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 741 KiB/s rd, 1.7 MiB/s wr, 105 op/s
Oct 02 08:51:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:51:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:51:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:51:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:51:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:51:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:51:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:51:28
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', 'vms', '.rgw.root', '.mgr', 'default.rgw.meta']
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:51:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:51:28 compute-0 nova_compute[260603]: 2025-10-02 08:51:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:28 compute-0 nova_compute[260603]: 2025-10-02 08:51:28.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:51:28 compute-0 nova_compute[260603]: 2025-10-02 08:51:28.776 2 DEBUG nova.compute.manager [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:28 compute-0 nova_compute[260603]: 2025-10-02 08:51:28.777 2 DEBUG oslo_concurrency.lockutils [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:28 compute-0 nova_compute[260603]: 2025-10-02 08:51:28.779 2 DEBUG oslo_concurrency.lockutils [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:28 compute-0 nova_compute[260603]: 2025-10-02 08:51:28.779 2 DEBUG oslo_concurrency.lockutils [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:28 compute-0 nova_compute[260603]: 2025-10-02 08:51:28.780 2 DEBUG nova.compute.manager [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] No waiting events found dispatching network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:51:28 compute-0 nova_compute[260603]: 2025-10-02 08:51:28.781 2 WARNING nova.compute.manager [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received unexpected event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 for instance with vm_state active and task_state None.
Oct 02 08:51:28 compute-0 ceph-mon[74477]: pgmap v2187: 305 pgs: 305 active+clean; 269 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 741 KiB/s rd, 1.7 MiB/s wr, 105 op/s
Oct 02 08:51:29 compute-0 nova_compute[260603]: 2025-10-02 08:51:29.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1006 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 02 08:51:29 compute-0 nova_compute[260603]: 2025-10-02 08:51:29.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:30 compute-0 ceph-mon[74477]: pgmap v2188: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1006 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 02 08:51:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 08:51:31 compute-0 nova_compute[260603]: 2025-10-02 08:51:31.563 2 INFO nova.compute.manager [None req-3360325b-e0d4-438a-b8b2-a4d1d31d78f9 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Get console output
Oct 02 08:51:31 compute-0 nova_compute[260603]: 2025-10-02 08:51:31.569 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:51:31 compute-0 nova_compute[260603]: 2025-10-02 08:51:31.908 2 DEBUG oslo_concurrency.lockutils [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:31 compute-0 nova_compute[260603]: 2025-10-02 08:51:31.909 2 DEBUG oslo_concurrency.lockutils [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:31 compute-0 nova_compute[260603]: 2025-10-02 08:51:31.909 2 DEBUG nova.compute.manager [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:31 compute-0 nova_compute[260603]: 2025-10-02 08:51:31.914 2 DEBUG nova.compute.manager [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 08:51:31 compute-0 nova_compute[260603]: 2025-10-02 08:51:31.915 2 DEBUG nova.objects.instance [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:31 compute-0 nova_compute[260603]: 2025-10-02 08:51:31.941 2 DEBUG nova.virt.libvirt.driver [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:51:32 compute-0 ceph-mon[74477]: pgmap v2189: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 08:51:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:33 compute-0 nova_compute[260603]: 2025-10-02 08:51:33.488 2 DEBUG nova.compute.manager [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:33 compute-0 nova_compute[260603]: 2025-10-02 08:51:33.488 2 DEBUG nova.compute.manager [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing instance network info cache due to event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:51:33 compute-0 nova_compute[260603]: 2025-10-02 08:51:33.488 2 DEBUG oslo_concurrency.lockutils [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:51:33 compute-0 nova_compute[260603]: 2025-10-02 08:51:33.488 2 DEBUG oslo_concurrency.lockutils [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:51:33 compute-0 nova_compute[260603]: 2025-10-02 08:51:33.489 2 DEBUG nova.network.neutron [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:51:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Oct 02 08:51:34 compute-0 kernel: tapceb50499-7e (unregistering): left promiscuous mode
Oct 02 08:51:34 compute-0 NetworkManager[45129]: <info>  [1759395094.2336] device (tapceb50499-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:34 compute-0 ovn_controller[152344]: 2025-10-02T08:51:34Z|01261|binding|INFO|Releasing lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 from this chassis (sb_readonly=0)
Oct 02 08:51:34 compute-0 ovn_controller[152344]: 2025-10-02T08:51:34Z|01262|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 down in Southbound
Oct 02 08:51:34 compute-0 ovn_controller[152344]: 2025-10-02T08:51:34Z|01263|binding|INFO|Removing iface tapceb50499-7e ovn-installed in OVS
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.252 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:51:fb 10.100.0.11'], port_security=['fa:16:3e:80:51:fb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0239090c-f1eb-4b8b-8f45-94efee345fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a398fde7-e9b6-47e4-afc2-221c0b15c74f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=633447ab-1fed-4ecc-895b-fd3d7334df6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.253 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 in datapath 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 unbound from our chassis
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.254 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.256 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4871c549-7a61-4e88-89fd-0bdeccff6119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.257 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 namespace which is not needed anymore
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:34 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct 02 08:51:34 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Consumed 13.173s CPU time.
Oct 02 08:51:34 compute-0 systemd-machined[214636]: Machine qemu-151-instance-00000078 terminated.
Oct 02 08:51:34 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [NOTICE]   (384636) : haproxy version is 2.8.14-c23fe91
Oct 02 08:51:34 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [NOTICE]   (384636) : path to executable is /usr/sbin/haproxy
Oct 02 08:51:34 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [WARNING]  (384636) : Exiting Master process...
Oct 02 08:51:34 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [ALERT]    (384636) : Current worker (384638) exited with code 143 (Terminated)
Oct 02 08:51:34 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [WARNING]  (384636) : All workers exited. Exiting... (0)
Oct 02 08:51:34 compute-0 systemd[1]: libpod-f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3.scope: Deactivated successfully.
Oct 02 08:51:34 compute-0 podman[386362]: 2025-10-02 08:51:34.418002522 +0000 UTC m=+0.052947270 container died f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3-userdata-shm.mount: Deactivated successfully.
Oct 02 08:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a077a86bcce7e5489f801b5d34f88a8eb9ddfc03b2537cc12437d7144a2267e-merged.mount: Deactivated successfully.
Oct 02 08:51:34 compute-0 podman[386362]: 2025-10-02 08:51:34.47664485 +0000 UTC m=+0.111589588 container cleanup f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:51:34 compute-0 systemd[1]: libpod-conmon-f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3.scope: Deactivated successfully.
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:34 compute-0 podman[386396]: 2025-10-02 08:51:34.597842688 +0000 UTC m=+0.059869437 container remove f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.604 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5710436-8545-4f5a-92c6-e2c5d8d92b1c]: (4, ('Thu Oct  2 08:51:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 (f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3)\nf261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3\nThu Oct  2 08:51:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 (f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3)\nf261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.607 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91970ce8-b63d-4295-8506-41c9dd6ef68b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.608 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ddf177-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:34 compute-0 kernel: tap21ddf177-30: left promiscuous mode
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.632 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa91ca3-4212-44b5-9ef9-dd0bcbc559cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.650 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0241a934-2018-45b5-9dcc-a041edd5b38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.651 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6403598-abf9-4e59-9ffb-06a6e2d3a9b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.667 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ea7ba9-c035-45b9-b3b0-73f99fed59c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594978, 'reachable_time': 35841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386415, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.670 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.670 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5c861e-d72a-455b-9629-7c7025d1870d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d21ddf177\x2d3d8e\x2d4ce2\x2d9cd9\x2dfa17adfabd73.mount: Deactivated successfully.
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.832 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.833 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:34 compute-0 ceph-mon[74477]: pgmap v2190: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.959 2 INFO nova.virt.libvirt.driver [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance shutdown successfully after 3 seconds.
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.972 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance destroyed successfully.
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.973 2 DEBUG nova.objects.instance [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:34 compute-0 nova_compute[260603]: 2025-10-02 08:51:34.988 2 DEBUG nova.compute.manager [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:35 compute-0 nova_compute[260603]: 2025-10-02 08:51:35.036 2 DEBUG oslo_concurrency.lockutils [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:35 compute-0 nova_compute[260603]: 2025-10-02 08:51:35.219 2 DEBUG nova.network.neutron [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updated VIF entry in instance network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:51:35 compute-0 nova_compute[260603]: 2025-10-02 08:51:35.220 2 DEBUG nova.network.neutron [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:51:35 compute-0 nova_compute[260603]: 2025-10-02 08:51:35.238 2 DEBUG oslo_concurrency.lockutils [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:51:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Oct 02 08:51:36 compute-0 ceph-mon[74477]: pgmap v2191: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Oct 02 08:51:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Oct 02 08:51:37 compute-0 nova_compute[260603]: 2025-10-02 08:51:37.758 2 DEBUG nova.compute.manager [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:37 compute-0 nova_compute[260603]: 2025-10-02 08:51:37.758 2 DEBUG oslo_concurrency.lockutils [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:37 compute-0 nova_compute[260603]: 2025-10-02 08:51:37.758 2 DEBUG oslo_concurrency.lockutils [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:37 compute-0 nova_compute[260603]: 2025-10-02 08:51:37.759 2 DEBUG oslo_concurrency.lockutils [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:37 compute-0 nova_compute[260603]: 2025-10-02 08:51:37.759 2 DEBUG nova.compute.manager [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:51:37 compute-0 nova_compute[260603]: 2025-10-02 08:51:37.759 2 WARNING nova.compute.manager [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state stopped and task_state None.
Oct 02 08:51:37 compute-0 nova_compute[260603]: 2025-10-02 08:51:37.885 2 INFO nova.compute.manager [None req-59e53893-93dd-4111-9be1-70be2ccc7669 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Get console output
Oct 02 08:51:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:38 compute-0 nova_compute[260603]: 2025-10-02 08:51:38.098 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:38 compute-0 nova_compute[260603]: 2025-10-02 08:51:38.124 2 DEBUG oslo_concurrency.lockutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:51:38 compute-0 nova_compute[260603]: 2025-10-02 08:51:38.124 2 DEBUG oslo_concurrency.lockutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:51:38 compute-0 nova_compute[260603]: 2025-10-02 08:51:38.124 2 DEBUG nova.network.neutron [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:51:38 compute-0 nova_compute[260603]: 2025-10-02 08:51:38.124 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'info_cache' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:38 compute-0 nova_compute[260603]: 2025-10-02 08:51:38.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015229338615940613 of space, bias 1.0, pg target 0.4568801584782184 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014239867202253044 of space, bias 1.0, pg target 0.42719601606759133 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:51:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:51:38 compute-0 ceph-mon[74477]: pgmap v2192: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Oct 02 08:51:39 compute-0 nova_compute[260603]: 2025-10-02 08:51:39.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 757 KiB/s wr, 100 op/s
Oct 02 08:51:39 compute-0 nova_compute[260603]: 2025-10-02 08:51:39.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:39 compute-0 nova_compute[260603]: 2025-10-02 08:51:39.855 2 DEBUG nova.compute.manager [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:39 compute-0 nova_compute[260603]: 2025-10-02 08:51:39.855 2 DEBUG oslo_concurrency.lockutils [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:39 compute-0 nova_compute[260603]: 2025-10-02 08:51:39.855 2 DEBUG oslo_concurrency.lockutils [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:39 compute-0 nova_compute[260603]: 2025-10-02 08:51:39.856 2 DEBUG oslo_concurrency.lockutils [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:39 compute-0 nova_compute[260603]: 2025-10-02 08:51:39.856 2 DEBUG nova.compute.manager [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:51:39 compute-0 nova_compute[260603]: 2025-10-02 08:51:39.856 2 WARNING nova.compute.manager [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state stopped and task_state powering-on.
Oct 02 08:51:40 compute-0 nova_compute[260603]: 2025-10-02 08:51:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:40 compute-0 ceph-mon[74477]: pgmap v2193: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 757 KiB/s wr, 100 op/s
Oct 02 08:51:40 compute-0 nova_compute[260603]: 2025-10-02 08:51:40.961 2 DEBUG nova.network.neutron [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:51:40 compute-0 nova_compute[260603]: 2025-10-02 08:51:40.986 2 DEBUG oslo_concurrency.lockutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.010 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance destroyed successfully.
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.011 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.030 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.047 2 DEBUG nova.virt.libvirt.vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:35Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.048 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.048 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.049 2 DEBUG os_vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapceb50499-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.055 2 INFO os_vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e')
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.061 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start _get_guest_xml network_info=[{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.064 2 WARNING nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.070 2 DEBUG nova.virt.libvirt.host [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.070 2 DEBUG nova.virt.libvirt.host [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.073 2 DEBUG nova.virt.libvirt.host [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.074 2 DEBUG nova.virt.libvirt.host [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.074 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.074 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.075 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.075 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.075 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.075 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.077 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.077 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.097 2 DEBUG oslo_concurrency.processutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:51:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104328999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.515 2 DEBUG oslo_concurrency.processutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.554 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.555 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.555 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:51:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 30 KiB/s wr, 61 op/s
Oct 02 08:51:41 compute-0 nova_compute[260603]: 2025-10-02 08:51:41.565 2 DEBUG oslo_concurrency.processutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:41 compute-0 ovn_controller[152344]: 2025-10-02T08:51:41Z|00135|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.8
Oct 02 08:51:41 compute-0 ovn_controller[152344]: 2025-10-02T08:51:41Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e1:f8:b9 10.100.0.8
Oct 02 08:51:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/104328999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:51:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3804192273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.021 2 DEBUG oslo_concurrency.processutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.022 2 DEBUG nova.virt.libvirt.vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:35Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.023 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.023 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.025 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.037 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <uuid>0239090c-f1eb-4b8b-8f45-94efee345fa5</uuid>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <name>instance-00000078</name>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1642420573</nova:name>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:51:41</nova:creationTime>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <nova:port uuid="ceb50499-7e3c-4d47-a2dc-05ce86dbbde1">
Oct 02 08:51:42 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <system>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <entry name="serial">0239090c-f1eb-4b8b-8f45-94efee345fa5</entry>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <entry name="uuid">0239090c-f1eb-4b8b-8f45-94efee345fa5</entry>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </system>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <os>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   </os>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <features>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   </features>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/0239090c-f1eb-4b8b-8f45-94efee345fa5_disk">
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       </source>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config">
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       </source>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:51:42 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:80:51:fb"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <target dev="tapceb50499-7e"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/console.log" append="off"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <video>
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </video>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:51:42 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:51:42 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:51:42 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:51:42 compute-0 nova_compute[260603]: </domain>
Oct 02 08:51:42 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.040 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.040 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.042 2 DEBUG nova.virt.libvirt.vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:35Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.042 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.043 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.044 2 DEBUG os_vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapceb50499-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapceb50499-7e, col_values=(('external_ids', {'iface-id': 'ceb50499-7e3c-4d47-a2dc-05ce86dbbde1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:51:fb', 'vm-uuid': '0239090c-f1eb-4b8b-8f45-94efee345fa5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:42 compute-0 NetworkManager[45129]: <info>  [1759395102.0556] manager: (tapceb50499-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.061 2 INFO os_vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e')
Oct 02 08:51:42 compute-0 kernel: tapceb50499-7e: entered promiscuous mode
Oct 02 08:51:42 compute-0 NetworkManager[45129]: <info>  [1759395102.1412] manager: (tapceb50499-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 ovn_controller[152344]: 2025-10-02T08:51:42Z|01264|binding|INFO|Claiming lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for this chassis.
Oct 02 08:51:42 compute-0 ovn_controller[152344]: 2025-10-02T08:51:42Z|01265|binding|INFO|ceb50499-7e3c-4d47-a2dc-05ce86dbbde1: Claiming fa:16:3e:80:51:fb 10.100.0.11
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.154 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:51:fb 10.100.0.11'], port_security=['fa:16:3e:80:51:fb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0239090c-f1eb-4b8b-8f45-94efee345fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a398fde7-e9b6-47e4-afc2-221c0b15c74f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=633447ab-1fed-4ecc-895b-fd3d7334df6b, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.155 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 in datapath 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 bound to our chassis
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.156 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct 02 08:51:42 compute-0 ovn_controller[152344]: 2025-10-02T08:51:42Z|01266|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 ovn-installed in OVS
Oct 02 08:51:42 compute-0 ovn_controller[152344]: 2025-10-02T08:51:42Z|01267|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 up in Southbound
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e745fc-fae5-46e8-8cbc-5b58fcec8336]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.174 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap21ddf177-31 in ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.178 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap21ddf177-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7e8652-85bc-4996-9187-2c8f6a929c4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.179 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[971b6ca9-3887-41cd-9431-a82d526d357f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 systemd-machined[214636]: New machine qemu-153-instance-00000078.
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.191 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b00c6ae6-082c-4c7c-a86a-a94329e50246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-00000078.
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.211 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d86e141b-e855-443c-9a03-c1422ba3b7ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 systemd-udevd[386501]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:51:42 compute-0 NetworkManager[45129]: <info>  [1759395102.2402] device (tapceb50499-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:51:42 compute-0 NetworkManager[45129]: <info>  [1759395102.2413] device (tapceb50499-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.249 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4f8ac3-ae1f-454f-8b5b-c52aa959fae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 systemd-udevd[386506]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:51:42 compute-0 NetworkManager[45129]: <info>  [1759395102.2570] manager: (tap21ddf177-30): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.256 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08c25fb5-0d63-4ffd-b53b-902285ab4c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.296 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbe0c0a-6b67-465f-99db-029aa00dcd23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.300 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1917d7a6-9ccb-4fd1-abfb-0d0e9ab3881a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 NetworkManager[45129]: <info>  [1759395102.3238] device (tap21ddf177-30): carrier: link connected
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.330 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[50f0e135-84b0-4562-a415-46a36d2925db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.350 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e10b6da6-c6f3-4c9f-ae3f-8c73be315fd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21ddf177-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:39:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598092, 'reachable_time': 17190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386530, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.367 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9e51be-9900-4f04-9e13-c613b1628107]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:390d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598092, 'tstamp': 598092}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386531, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84951224-8c46-462b-8234-ef8a113c0d6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21ddf177-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:39:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598092, 'reachable_time': 17190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386536, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.418 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2df0502-bbce-47e3-b808-cba11a4eb958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.483 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8102d05-2863-4120-a183-ae2abbeda929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.485 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ddf177-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.485 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.486 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21ddf177-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 NetworkManager[45129]: <info>  [1759395102.4886] manager: (tap21ddf177-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Oct 02 08:51:42 compute-0 kernel: tap21ddf177-30: entered promiscuous mode
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.494 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21ddf177-30, col_values=(('external_ids', {'iface-id': 'da5470e2-37c6-4448-8dab-7b3e1d890a66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:51:42 compute-0 ovn_controller[152344]: 2025-10-02T08:51:42Z|01268|binding|INFO|Releasing lport da5470e2-37c6-4448-8dab-7b3e1d890a66 from this chassis (sb_readonly=0)
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.499 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.500 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac4663b-d718-40ab-a8eb-19b041d8c54f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.501 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:51:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.502 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'env', 'PROCESS_TAG=haproxy-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.507 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.507 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.507 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.508 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.722 2 DEBUG nova.compute.manager [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.723 2 DEBUG oslo_concurrency.lockutils [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.723 2 DEBUG oslo_concurrency.lockutils [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.723 2 DEBUG oslo_concurrency.lockutils [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.724 2 DEBUG nova.compute.manager [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.724 2 WARNING nova.compute.manager [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state stopped and task_state powering-on.
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.933 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 0239090c-f1eb-4b8b-8f45-94efee345fa5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.934 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395102.9308743, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.934 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Resumed (Lifecycle Event)
Oct 02 08:51:42 compute-0 ceph-mon[74477]: pgmap v2194: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 30 KiB/s wr, 61 op/s
Oct 02 08:51:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3804192273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.945 2 DEBUG nova.compute.manager [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.949 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance rebooted successfully.
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.949 2 DEBUG nova.compute.manager [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.976 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:42 compute-0 nova_compute[260603]: 2025-10-02 08:51:42.980 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:51:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:42 compute-0 podman[386606]: 2025-10-02 08:51:42.900278882 +0000 UTC m=+0.031544282 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:51:42 compute-0 podman[386606]: 2025-10-02 08:51:42.995627375 +0000 UTC m=+0.126892725 container create c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 08:51:43 compute-0 nova_compute[260603]: 2025-10-02 08:51:43.012 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 08:51:43 compute-0 nova_compute[260603]: 2025-10-02 08:51:43.013 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395102.9449165, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:51:43 compute-0 nova_compute[260603]: 2025-10-02 08:51:43.014 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Started (Lifecycle Event)
Oct 02 08:51:43 compute-0 systemd[1]: Started libpod-conmon-c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5.scope.
Oct 02 08:51:43 compute-0 nova_compute[260603]: 2025-10-02 08:51:43.044 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:51:43 compute-0 nova_compute[260603]: 2025-10-02 08:51:43.047 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:51:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce46867541b1cf76ec1f6225ba651561c209bb33faf6a185de6c04d8cdfff8a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:51:43 compute-0 podman[386606]: 2025-10-02 08:51:43.068248466 +0000 UTC m=+0.199513836 container init c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 08:51:43 compute-0 podman[386606]: 2025-10-02 08:51:43.074003239 +0000 UTC m=+0.205268589 container start c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:51:43 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [NOTICE]   (386626) : New worker (386628) forked
Oct 02 08:51:43 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [NOTICE]   (386626) : Loading success.
Oct 02 08:51:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 294 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 529 KiB/s wr, 116 op/s
Oct 02 08:51:44 compute-0 nova_compute[260603]: 2025-10-02 08:51:44.826 2 DEBUG nova.compute.manager [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:51:44 compute-0 nova_compute[260603]: 2025-10-02 08:51:44.828 2 DEBUG oslo_concurrency.lockutils [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:44 compute-0 nova_compute[260603]: 2025-10-02 08:51:44.828 2 DEBUG oslo_concurrency.lockutils [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:44 compute-0 nova_compute[260603]: 2025-10-02 08:51:44.829 2 DEBUG oslo_concurrency.lockutils [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:44 compute-0 nova_compute[260603]: 2025-10-02 08:51:44.829 2 DEBUG nova.compute.manager [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:51:44 compute-0 nova_compute[260603]: 2025-10-02 08:51:44.830 2 WARNING nova.compute.manager [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state active and task_state None.
Oct 02 08:51:44 compute-0 nova_compute[260603]: 2025-10-02 08:51:44.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:44 compute-0 ceph-mon[74477]: pgmap v2195: 305 pgs: 305 active+clean; 294 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 529 KiB/s wr, 116 op/s
Oct 02 08:51:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 294 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 55 op/s
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.797 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.825 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.826 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.827 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.827 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.827 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.863 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.864 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.864 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.865 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:51:45 compute-0 nova_compute[260603]: 2025-10-02 08:51:45.865 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:45 compute-0 podman[386639]: 2025-10-02 08:51:45.989532201 +0000 UTC m=+0.053149186 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:51:46 compute-0 podman[386638]: 2025-10-02 08:51:46.015188265 +0000 UTC m=+0.086537165 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 08:51:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:51:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413806608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.329 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.396 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.396 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.399 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.399 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.402 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.403 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:51:46 compute-0 ovn_controller[152344]: 2025-10-02T08:51:46Z|00137|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.8
Oct 02 08:51:46 compute-0 ovn_controller[152344]: 2025-10-02T08:51:46Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e1:f8:b9 10.100.0.8
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.551 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3183MB free_disk=59.8912467956543GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 0239090c-f1eb-4b8b-8f45-94efee345fa5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 062b2a3e-b612-42bf-b96c-fb9bdd9008ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:51:46 compute-0 nova_compute[260603]: 2025-10-02 08:51:46.677 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:51:46 compute-0 ovn_controller[152344]: 2025-10-02T08:51:46Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:f8:b9 10.100.0.8
Oct 02 08:51:46 compute-0 ovn_controller[152344]: 2025-10-02T08:51:46Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:f8:b9 10.100.0.8
Oct 02 08:51:46 compute-0 ceph-mon[74477]: pgmap v2196: 305 pgs: 305 active+clean; 294 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 55 op/s
Oct 02 08:51:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2413806608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:51:47 compute-0 nova_compute[260603]: 2025-10-02 08:51:47.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:51:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011446166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:51:47 compute-0 nova_compute[260603]: 2025-10-02 08:51:47.093 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:51:47 compute-0 nova_compute[260603]: 2025-10-02 08:51:47.098 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:51:47 compute-0 nova_compute[260603]: 2025-10-02 08:51:47.114 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:51:47 compute-0 nova_compute[260603]: 2025-10-02 08:51:47.136 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:51:47 compute-0 nova_compute[260603]: 2025-10-02 08:51:47.136 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:51:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 546 KiB/s wr, 109 op/s
Oct 02 08:51:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1011446166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:51:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:48 compute-0 ceph-mon[74477]: pgmap v2197: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 546 KiB/s wr, 109 op/s
Oct 02 08:51:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 545 KiB/s wr, 124 op/s
Oct 02 08:51:49 compute-0 nova_compute[260603]: 2025-10-02 08:51:49.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:51 compute-0 ceph-mon[74477]: pgmap v2198: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 545 KiB/s wr, 124 op/s
Oct 02 08:51:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 545 KiB/s wr, 124 op/s
Oct 02 08:51:51 compute-0 nova_compute[260603]: 2025-10-02 08:51:51.828 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:51 compute-0 nova_compute[260603]: 2025-10-02 08:51:51.829 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:51:52 compute-0 nova_compute[260603]: 2025-10-02 08:51:52.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:53 compute-0 ceph-mon[74477]: pgmap v2199: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 545 KiB/s wr, 124 op/s
Oct 02 08:51:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 549 KiB/s wr, 124 op/s
Oct 02 08:51:54 compute-0 podman[386726]: 2025-10-02 08:51:53.999486846 +0000 UTC m=+0.067241013 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:51:54 compute-0 podman[386727]: 2025-10-02 08:51:54.044884905 +0000 UTC m=+0.096752728 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:51:54 compute-0 nova_compute[260603]: 2025-10-02 08:51:54.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:54 compute-0 ovn_controller[152344]: 2025-10-02T08:51:54Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:51:fb 10.100.0.11
Oct 02 08:51:55 compute-0 ceph-mon[74477]: pgmap v2200: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 549 KiB/s wr, 124 op/s
Oct 02 08:51:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 50 KiB/s wr, 69 op/s
Oct 02 08:51:57 compute-0 ceph-mon[74477]: pgmap v2201: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 50 KiB/s wr, 69 op/s
Oct 02 08:51:57 compute-0 nova_compute[260603]: 2025-10-02 08:51:57.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:51:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 60 KiB/s wr, 99 op/s
Oct 02 08:51:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:51:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:51:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:51:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:51:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:51:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:51:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:51:59 compute-0 ceph-mon[74477]: pgmap v2202: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 60 KiB/s wr, 99 op/s
Oct 02 08:51:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1021 KiB/s rd, 15 KiB/s wr, 59 op/s
Oct 02 08:51:59 compute-0 nova_compute[260603]: 2025-10-02 08:51:59.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:00 compute-0 nova_compute[260603]: 2025-10-02 08:52:00.157 2 INFO nova.compute.manager [None req-2cd302c7-df42-4ade-acca-e6e914f44ab9 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Get console output
Oct 02 08:52:00 compute-0 nova_compute[260603]: 2025-10-02 08:52:00.164 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:52:00 compute-0 nova_compute[260603]: 2025-10-02 08:52:00.907 2 DEBUG nova.compute.manager [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:00 compute-0 nova_compute[260603]: 2025-10-02 08:52:00.907 2 DEBUG nova.compute.manager [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing instance network info cache due to event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:52:00 compute-0 nova_compute[260603]: 2025-10-02 08:52:00.908 2 DEBUG oslo_concurrency.lockutils [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:52:00 compute-0 nova_compute[260603]: 2025-10-02 08:52:00.909 2 DEBUG oslo_concurrency.lockutils [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:52:00 compute-0 nova_compute[260603]: 2025-10-02 08:52:00.909 2 DEBUG nova.network.neutron [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.008 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.008 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.009 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.009 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.009 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.011 2 INFO nova.compute.manager [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Terminating instance
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.012 2 DEBUG nova.compute.manager [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:52:01 compute-0 kernel: tapceb50499-7e (unregistering): left promiscuous mode
Oct 02 08:52:01 compute-0 NetworkManager[45129]: <info>  [1759395121.0647] device (tapceb50499-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:52:01 compute-0 ceph-mon[74477]: pgmap v2203: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1021 KiB/s rd, 15 KiB/s wr, 59 op/s
Oct 02 08:52:01 compute-0 ovn_controller[152344]: 2025-10-02T08:52:01Z|01269|binding|INFO|Releasing lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 from this chassis (sb_readonly=0)
Oct 02 08:52:01 compute-0 ovn_controller[152344]: 2025-10-02T08:52:01Z|01270|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 down in Southbound
Oct 02 08:52:01 compute-0 ovn_controller[152344]: 2025-10-02T08:52:01Z|01271|binding|INFO|Removing iface tapceb50499-7e ovn-installed in OVS
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.093 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:51:fb 10.100.0.11'], port_security=['fa:16:3e:80:51:fb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0239090c-f1eb-4b8b-8f45-94efee345fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a398fde7-e9b6-47e4-afc2-221c0b15c74f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=633447ab-1fed-4ecc-895b-fd3d7334df6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.096 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 in datapath 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 unbound from our chassis
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.099 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.101 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[21f7503d-b315-42e0-b135-ccdb6d5a8d06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.101 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 namespace which is not needed anymore
Oct 02 08:52:01 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct 02 08:52:01 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000078.scope: Consumed 12.360s CPU time.
Oct 02 08:52:01 compute-0 systemd-machined[214636]: Machine qemu-153-instance-00000078 terminated.
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.261 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance destroyed successfully.
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.263 2 DEBUG nova.objects.instance [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:52:01 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [NOTICE]   (386626) : haproxy version is 2.8.14-c23fe91
Oct 02 08:52:01 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [NOTICE]   (386626) : path to executable is /usr/sbin/haproxy
Oct 02 08:52:01 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [WARNING]  (386626) : Exiting Master process...
Oct 02 08:52:01 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [ALERT]    (386626) : Current worker (386628) exited with code 143 (Terminated)
Oct 02 08:52:01 compute-0 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [WARNING]  (386626) : All workers exited. Exiting... (0)
Oct 02 08:52:01 compute-0 systemd[1]: libpod-c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5.scope: Deactivated successfully.
Oct 02 08:52:01 compute-0 podman[386790]: 2025-10-02 08:52:01.282444132 +0000 UTC m=+0.073211502 container died c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.282 2 DEBUG nova.virt.libvirt.vif [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:43Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.283 2 DEBUG nova.network.os_vif_util [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.284 2 DEBUG nova.network.os_vif_util [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.284 2 DEBUG os_vif [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.286 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapceb50499-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.297 2 INFO os_vif [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e')
Oct 02 08:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce46867541b1cf76ec1f6225ba651561c209bb33faf6a185de6c04d8cdfff8a2-merged.mount: Deactivated successfully.
Oct 02 08:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5-userdata-shm.mount: Deactivated successfully.
Oct 02 08:52:01 compute-0 podman[386790]: 2025-10-02 08:52:01.342050802 +0000 UTC m=+0.132818172 container cleanup c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 08:52:01 compute-0 systemd[1]: libpod-conmon-c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5.scope: Deactivated successfully.
Oct 02 08:52:01 compute-0 podman[386845]: 2025-10-02 08:52:01.429304898 +0000 UTC m=+0.058709822 container remove c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.438 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a36cae46-d40c-4af6-8d2c-85a6dd2356eb]: (4, ('Thu Oct  2 08:52:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 (c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5)\nc06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5\nThu Oct  2 08:52:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 (c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5)\nc06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.442 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[433f2d7a-4553-4c4a-ac3d-ba0ff1345d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.444 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ddf177-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:01 compute-0 kernel: tap21ddf177-30: left promiscuous mode
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.487 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0451e5a-ced4-430f-855b-f664cc3ef0b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.522 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c12f3edf-373a-4a16-b34c-6a3e86d6180f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcc2142-c8f0-435e-bcb8-07ed1cebcc54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.548 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[16da16db-ad5f-4a99-9dd0-354d2df51290]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598084, 'reachable_time': 15525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386861, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.550 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:52:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.550 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8251e9cf-af91-4934-8e3f-96b684939684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d21ddf177\x2d3d8e\x2d4ce2\x2d9cd9\x2dfa17adfabd73.mount: Deactivated successfully.
Oct 02 08:52:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 523 KiB/s rd, 15 KiB/s wr, 43 op/s
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.774 2 INFO nova.virt.libvirt.driver [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Deleting instance files /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5_del
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.774 2 INFO nova.virt.libvirt.driver [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Deletion of /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5_del complete
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.833 2 INFO nova.compute.manager [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.833 2 DEBUG oslo.service.loopingcall [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.834 2 DEBUG nova.compute.manager [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:52:01 compute-0 nova_compute[260603]: 2025-10-02 08:52:01.834 2 DEBUG nova.network.neutron [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.683 2 DEBUG nova.network.neutron [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updated VIF entry in instance network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.684 2 DEBUG nova.network.neutron [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.720 2 DEBUG oslo_concurrency.lockutils [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.743 2 DEBUG nova.network.neutron [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.763 2 INFO nova.compute.manager [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Took 0.93 seconds to deallocate network for instance.
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.828 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.829 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:02.889 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:02.892 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:52:02 compute-0 nova_compute[260603]: 2025-10-02 08:52:02.923 2 DEBUG oslo_concurrency.processutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.010 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.011 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.011 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.012 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.012 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.012 2 WARNING nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state deleted and task_state None.
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.012 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.012 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.013 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.013 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.013 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.013 2 WARNING nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state deleted and task_state None.
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.014 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-deleted-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:03 compute-0 ceph-mon[74477]: pgmap v2204: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 523 KiB/s rd, 15 KiB/s wr, 43 op/s
Oct 02 08:52:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:52:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262064444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.405 2 DEBUG oslo_concurrency.processutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.411 2 DEBUG nova.compute.provider_tree [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.428 2 DEBUG nova.scheduler.client.report [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.450 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.482 2 INFO nova.scheduler.client.report [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance 0239090c-f1eb-4b8b-8f45-94efee345fa5
Oct 02 08:52:03 compute-0 nova_compute[260603]: 2025-10-02 08:52:03.549 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 30 KiB/s wr, 73 op/s
Oct 02 08:52:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:03.895 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/262064444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:04 compute-0 nova_compute[260603]: 2025-10-02 08:52:04.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:05 compute-0 ceph-mon[74477]: pgmap v2205: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 30 KiB/s wr, 73 op/s
Oct 02 08:52:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 27 KiB/s wr, 72 op/s
Oct 02 08:52:06 compute-0 nova_compute[260603]: 2025-10-02 08:52:06.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:06 compute-0 nova_compute[260603]: 2025-10-02 08:52:06.425 2 DEBUG nova.compute.manager [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:06 compute-0 nova_compute[260603]: 2025-10-02 08:52:06.484 2 INFO nova.compute.manager [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] instance snapshotting
Oct 02 08:52:06 compute-0 nova_compute[260603]: 2025-10-02 08:52:06.753 2 INFO nova.virt.libvirt.driver [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Beginning live snapshot process
Oct 02 08:52:06 compute-0 nova_compute[260603]: 2025-10-02 08:52:06.898 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] creating snapshot(8863bfc66ade4ce6966d26e758a216a9) on rbd image(062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:52:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 02 08:52:07 compute-0 ceph-mon[74477]: pgmap v2206: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 27 KiB/s wr, 72 op/s
Oct 02 08:52:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 02 08:52:07 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 02 08:52:07 compute-0 nova_compute[260603]: 2025-10-02 08:52:07.188 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] cloning vms/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk@8863bfc66ade4ce6966d26e758a216a9 to images/f103e592-76b3-4821-8bf4-1ab4b6adc845 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:52:07 compute-0 ovn_controller[152344]: 2025-10-02T08:52:07Z|01272|binding|INFO|Releasing lport 6972524a-7a8a-4c19-a841-08e51ccd9aaa from this chassis (sb_readonly=0)
Oct 02 08:52:07 compute-0 nova_compute[260603]: 2025-10-02 08:52:07.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:07 compute-0 nova_compute[260603]: 2025-10-02 08:52:07.374 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] flattening images/f103e592-76b3-4821-8bf4-1ab4b6adc845 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:52:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 21 KiB/s wr, 65 op/s
Oct 02 08:52:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:08 compute-0 ceph-mon[74477]: osdmap e278: 3 total, 3 up, 3 in
Oct 02 08:52:08 compute-0 nova_compute[260603]: 2025-10-02 08:52:08.415 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] removing snapshot(8863bfc66ade4ce6966d26e758a216a9) on rbd image(062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:52:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 02 08:52:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 02 08:52:09 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 02 08:52:09 compute-0 ceph-mon[74477]: pgmap v2208: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 21 KiB/s wr, 65 op/s
Oct 02 08:52:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 237 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 02 08:52:09 compute-0 nova_compute[260603]: 2025-10-02 08:52:09.637 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] creating snapshot(snap) on rbd image(f103e592-76b3-4821-8bf4-1ab4b6adc845) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:52:09 compute-0 nova_compute[260603]: 2025-10-02 08:52:09.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 02 08:52:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 02 08:52:10 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 02 08:52:10 compute-0 ceph-mon[74477]: osdmap e279: 3 total, 3 up, 3 in
Oct 02 08:52:10 compute-0 ceph-mon[74477]: pgmap v2210: 305 pgs: 305 active+clean; 237 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 02 08:52:11 compute-0 ceph-mon[74477]: osdmap e280: 3 total, 3 up, 3 in
Oct 02 08:52:11 compute-0 nova_compute[260603]: 2025-10-02 08:52:11.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 237 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 55 op/s
Oct 02 08:52:12 compute-0 nova_compute[260603]: 2025-10-02 08:52:12.222 2 INFO nova.virt.libvirt.driver [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Snapshot image upload complete
Oct 02 08:52:12 compute-0 nova_compute[260603]: 2025-10-02 08:52:12.223 2 INFO nova.compute.manager [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 5.74 seconds to snapshot the instance on the hypervisor.
Oct 02 08:52:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 02 08:52:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 02 08:52:12 compute-0 ceph-mon[74477]: pgmap v2212: 305 pgs: 305 active+clean; 237 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 55 op/s
Oct 02 08:52:12 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 02 08:52:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:13 compute-0 ceph-mon[74477]: osdmap e281: 3 total, 3 up, 3 in
Oct 02 08:52:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 295 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 15 MiB/s wr, 191 op/s
Oct 02 08:52:14 compute-0 ceph-mon[74477]: pgmap v2214: 305 pgs: 305 active+clean; 295 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 15 MiB/s wr, 191 op/s
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.857 2 DEBUG nova.compute.manager [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.858 2 DEBUG nova.compute.manager [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing instance network info cache due to event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.859 2 DEBUG oslo_concurrency.lockutils [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.860 2 DEBUG oslo_concurrency.lockutils [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.860 2 DEBUG nova.network.neutron [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.866 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.867 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.868 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.869 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.869 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.871 2 INFO nova.compute.manager [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Terminating instance
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.876 2 DEBUG nova.compute.manager [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:52:14 compute-0 nova_compute[260603]: 2025-10-02 08:52:14.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:14 compute-0 kernel: tap06e58664-5a (unregistering): left promiscuous mode
Oct 02 08:52:14 compute-0 NetworkManager[45129]: <info>  [1759395134.9800] device (tap06e58664-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:52:15 compute-0 ovn_controller[152344]: 2025-10-02T08:52:15Z|01273|binding|INFO|Releasing lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 from this chassis (sb_readonly=0)
Oct 02 08:52:15 compute-0 ovn_controller[152344]: 2025-10-02T08:52:15Z|01274|binding|INFO|Setting lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 down in Southbound
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:15 compute-0 ovn_controller[152344]: 2025-10-02T08:52:15Z|01275|binding|INFO|Removing iface tap06e58664-5a ovn-installed in OVS
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.065 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:f8:b9 10.100.0.8'], port_security=['fa:16:3e:e1:f8:b9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '062b2a3e-b612-42bf-b96c-fb9bdd9008ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=06e58664-5a1a-4f08-9eb3-6a275e07a062) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.066 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 06e58664-5a1a-4f08-9eb3-6a275e07a062 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 unbound from our chassis
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.067 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e8372a1-ae46-455d-aa74-54645f729e73
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.094 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84d11dac-1cc4-4bd8-9916-aac416a70822]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:15 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 02 08:52:15 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Consumed 16.524s CPU time.
Oct 02 08:52:15 compute-0 systemd-machined[214636]: Machine qemu-152-instance-00000079 terminated.
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.137 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[112ccdca-a765-495d-b887-dde30612c583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.140 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d53b5ff-77f1-48b4-9d30-a87aad699436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.178 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca5a9e5-9d7a-4c9c-943f-0aa1b3dcdfcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.207 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d827ce56-232f-4ab6-8421-03f517b1485a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e8372a1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:60:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592471, 'reachable_time': 17032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387038, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.226 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f7297d-9c58-4062-93fd-823e66dbecdf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e8372a1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592482, 'tstamp': 592482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387039, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e8372a1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592486, 'tstamp': 592486}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387039, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.228 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8372a1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.239 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e8372a1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.239 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.241 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e8372a1-a0, col_values=(('external_ids', {'iface-id': '6972524a-7a8a-4c19-a841-08e51ccd9aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.242 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.329 2 INFO nova.virt.libvirt.driver [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance destroyed successfully.
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.329 2 DEBUG nova.objects.instance [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'resources' on Instance uuid 062b2a3e-b612-42bf-b96c-fb9bdd9008ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.343 2 DEBUG nova.virt.libvirt.vif [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1642693838',display_name='tempest-TestSnapshotPattern-server-1642693838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1642693838',id=121,image_ref='bfda7a79-c502-4c19-8d8e-da8dbbb22d04',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-uis0qfoh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d5e3e825-fcee-4f1b-8c05-5a9ee07013d7',image_min_disk='1',image_min_ram='0',image_owner_id='bfcc44155f2d45ff9f66fe254a7b21c7',image_owner_project_name='tempest-TestSnapshotPattern-495107275',image_owner_user_name='tempest-TestSnapshotPattern-495107275-project-member',image_user_id='aceb8b0273154f1abe964d78a6261936',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:52:12Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=062b2a3e-b612-42bf-b96c-fb9bdd9008ee,vcpu_model=<?>
,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.344 2 DEBUG nova.network.os_vif_util [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.345 2 DEBUG nova.network.os_vif_util [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.345 2 DEBUG os_vif [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06e58664-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.354 2 INFO os_vif [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a')
Oct 02 08:52:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 295 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 152 op/s
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.740 2 INFO nova.virt.libvirt.driver [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Deleting instance files /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_del
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.741 2 INFO nova.virt.libvirt.driver [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Deletion of /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_del complete
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.816 2 INFO nova.compute.manager [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.817 2 DEBUG oslo.service.loopingcall [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.817 2 DEBUG nova.compute.manager [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:52:15 compute-0 nova_compute[260603]: 2025-10-02 08:52:15.818 2 DEBUG nova.network.neutron [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:52:16 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.260 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395121.2589183, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:52:16 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.261 2 INFO nova.compute.manager [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Stopped (Lifecycle Event)
Oct 02 08:52:16 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.282 2 DEBUG nova.compute.manager [None req-d1312103-c3b5-4c25-9a20-185c8b39333f - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:16 compute-0 ceph-mon[74477]: pgmap v2215: 305 pgs: 305 active+clean; 295 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 152 op/s
Oct 02 08:52:16 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.833 2 DEBUG nova.network.neutron [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:16 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.849 2 INFO nova.compute.manager [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 1.03 seconds to deallocate network for instance.
Oct 02 08:52:16 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.905 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:16 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.906 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:16 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.933 2 DEBUG nova.compute.manager [req-cfc1e01c-d23b-43ea-8e6f-c7077be969fd req-a327c7b6-6bfd-4bc7-b439-05432cfb2013 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-vif-deleted-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:16.989 2 DEBUG oslo_concurrency.processutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:17 compute-0 podman[387072]: 2025-10-02 08:52:17.014795437 +0000 UTC m=+0.073825451 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 02 08:52:17 compute-0 podman[387071]: 2025-10-02 08:52:17.038709865 +0000 UTC m=+0.099456263 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.424 2 DEBUG nova.network.neutron [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updated VIF entry in instance network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.425 2 DEBUG nova.network.neutron [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:52:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1397368750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.445 2 DEBUG oslo_concurrency.lockutils [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.466 2 DEBUG oslo_concurrency.processutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.475 2 DEBUG nova.compute.provider_tree [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.491 2 DEBUG nova.scheduler.client.report [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.515 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.541 2 INFO nova.scheduler.client.report [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Deleted allocations for instance 062b2a3e-b612-42bf-b96c-fb9bdd9008ee
Oct 02 08:52:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 207 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 169 op/s
Oct 02 08:52:17 compute-0 nova_compute[260603]: 2025-10-02 08:52:17.598 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1397368750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 02 08:52:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 02 08:52:18 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 02 08:52:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 02 08:52:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 02 08:52:19 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 02 08:52:19 compute-0 ceph-mon[74477]: pgmap v2216: 305 pgs: 305 active+clean; 207 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 169 op/s
Oct 02 08:52:19 compute-0 ceph-mon[74477]: osdmap e282: 3 total, 3 up, 3 in
Oct 02 08:52:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 1.4 MiB/s wr, 85 op/s
Oct 02 08:52:19 compute-0 nova_compute[260603]: 2025-10-02 08:52:19.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:20 compute-0 ceph-mon[74477]: osdmap e283: 3 total, 3 up, 3 in
Oct 02 08:52:20 compute-0 nova_compute[260603]: 2025-10-02 08:52:20.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:21 compute-0 ceph-mon[74477]: pgmap v2219: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 1.4 MiB/s wr, 85 op/s
Oct 02 08:52:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.2 KiB/s wr, 67 op/s
Oct 02 08:52:22 compute-0 sudo[387137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:22 compute-0 sudo[387137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 sudo[387137]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:52:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3660420233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:52:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:52:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3660420233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.163 2 DEBUG nova.compute.manager [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.163 2 DEBUG nova.compute.manager [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing instance network info cache due to event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.164 2 DEBUG oslo_concurrency.lockutils [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.164 2 DEBUG oslo_concurrency.lockutils [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.164 2 DEBUG nova.network.neutron [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:52:22 compute-0 sudo[387162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:52:22 compute-0 sudo[387162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:22 compute-0 sudo[387162]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.230 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.231 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.231 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.231 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.231 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.233 2 INFO nova.compute.manager [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Terminating instance
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.234 2 DEBUG nova.compute.manager [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:52:22 compute-0 kernel: tap236df88e-d5 (unregistering): left promiscuous mode
Oct 02 08:52:22 compute-0 sudo[387187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:22 compute-0 NetworkManager[45129]: <info>  [1759395142.2979] device (tap236df88e-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:52:22 compute-0 sudo[387187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01276|binding|INFO|Releasing lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 from this chassis (sb_readonly=0)
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01277|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 down in Southbound
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01278|binding|INFO|Removing iface tap236df88e-d5 ovn-installed in OVS
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 sudo[387187]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.310 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:8a:1b 10.100.0.3'], port_security=['fa:16:3e:de:8a:1b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=236df88e-d54e-410f-9a5c-cf5fa95debd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.311 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 236df88e-d54e-410f-9a5c-cf5fa95debd1 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 unbound from our chassis
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.312 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e8372a1-ae46-455d-aa74-54645f729e73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.313 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94ddc038-1455-4404-a4e5-14d45f83d208]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.314 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 namespace which is not needed anymore
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 02 08:52:22 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Consumed 16.041s CPU time.
Oct 02 08:52:22 compute-0 systemd-machined[214636]: Machine qemu-150-instance-00000077 terminated.
Oct 02 08:52:22 compute-0 sudo[387216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:52:22 compute-0 sudo[387216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:22 compute-0 kernel: tap236df88e-d5: entered promiscuous mode
Oct 02 08:52:22 compute-0 NetworkManager[45129]: <info>  [1759395142.4589] manager: (tap236df88e-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/504)
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 kernel: tap236df88e-d5 (unregistering): left promiscuous mode
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01279|binding|INFO|Claiming lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 for this chassis.
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01280|binding|INFO|236df88e-d54e-410f-9a5c-cf5fa95debd1: Claiming fa:16:3e:de:8a:1b 10.100.0.3
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.520 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:8a:1b 10.100.0.3'], port_security=['fa:16:3e:de:8a:1b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=236df88e-d54e-410f-9a5c-cf5fa95debd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.532 2 INFO nova.virt.libvirt.driver [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance destroyed successfully.
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.532 2 DEBUG nova.objects.instance [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'resources' on Instance uuid d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01281|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 ovn-installed in OVS
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01282|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 up in Southbound
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01283|binding|INFO|Releasing lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 from this chassis (sb_readonly=1)
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01284|if_status|INFO|Dropped 1 log messages in last 128 seconds (most recently, 128 seconds ago) due to excessive rate
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01285|if_status|INFO|Not setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 down as sb is readonly
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01286|binding|INFO|Removing iface tap236df88e-d5 ovn-installed in OVS
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.551 2 DEBUG nova.virt.libvirt.vif [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:50:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1240552507',display_name='tempest-TestSnapshotPattern-server-1240552507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1240552507',id=119,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:50:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-s694dn34',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:13Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=d5e3e825-fcee-4f1b-8c05-5a9ee07013d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.552 2 DEBUG nova.network.os_vif_util [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:52:22 compute-0 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [NOTICE]   (383989) : haproxy version is 2.8.14-c23fe91
Oct 02 08:52:22 compute-0 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [NOTICE]   (383989) : path to executable is /usr/sbin/haproxy
Oct 02 08:52:22 compute-0 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [WARNING]  (383989) : Exiting Master process...
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.552 2 DEBUG nova.network.os_vif_util [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.554 2 DEBUG os_vif [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01287|binding|INFO|Releasing lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 from this chassis (sb_readonly=0)
Oct 02 08:52:22 compute-0 ovn_controller[152344]: 2025-10-02T08:52:22Z|01288|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 down in Southbound
Oct 02 08:52:22 compute-0 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [WARNING]  (383989) : Exiting Master process...
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap236df88e-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [ALERT]    (383989) : Current worker (383991) exited with code 143 (Terminated)
Oct 02 08:52:22 compute-0 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [WARNING]  (383989) : All workers exited. Exiting... (0)
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 systemd[1]: libpod-36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a.scope: Deactivated successfully.
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.564 2 INFO os_vif [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5')
Oct 02 08:52:22 compute-0 podman[387261]: 2025-10-02 08:52:22.567120528 +0000 UTC m=+0.113237521 container died 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.571 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:8a:1b 10.100.0.3'], port_security=['fa:16:3e:de:8a:1b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=236df88e-d54e-410f-9a5c-cf5fa95debd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a-userdata-shm.mount: Deactivated successfully.
Oct 02 08:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-55629b6f151fe9b01df698b94aebf7c6e06eef4cd3a7619588ea9563a64ff3ac-merged.mount: Deactivated successfully.
Oct 02 08:52:22 compute-0 podman[387261]: 2025-10-02 08:52:22.609970226 +0000 UTC m=+0.156087189 container cleanup 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:52:22 compute-0 systemd[1]: libpod-conmon-36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a.scope: Deactivated successfully.
Oct 02 08:52:22 compute-0 podman[387323]: 2025-10-02 08:52:22.687952138 +0000 UTC m=+0.055549522 container remove 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.695 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9700df3-476b-497e-b6c0-9965fd30ac39]: (4, ('Thu Oct  2 08:52:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 (36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a)\n36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a\nThu Oct  2 08:52:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 (36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a)\n36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.698 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce58611b-b355-4360-8c9b-cbbc77baa6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.699 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8372a1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 kernel: tap5e8372a1-a0: left promiscuous mode
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4a9372-37b8-424e-9fca-c6dd1d2a638f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d66db18-490e-4668-bdcf-5bd76e721395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.745 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09964408-5520-4604-8f61-09a1ee4ae89c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.760 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fb20268a-b66e-45f7-a3fd-6cf7be8c702b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592452, 'reachable_time': 36995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387345, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.762 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.763 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9485a7e9-39bc-448e-89a2-edeb5f746e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d5e8372a1\x2dae46\x2d455d\x2daa74\x2d54645f729e73.mount: Deactivated successfully.
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.764 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 236df88e-d54e-410f-9a5c-cf5fa95debd1 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 unbound from our chassis
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.766 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e8372a1-ae46-455d-aa74-54645f729e73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.767 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[845d22f3-17e8-4082-b1de-fee6603daeef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.768 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 236df88e-d54e-410f-9a5c-cf5fa95debd1 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 unbound from our chassis
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.770 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e8372a1-ae46-455d-aa74-54645f729e73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:52:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.770 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[67582e2b-2ee6-467d-8b8f-cc7b2842c32b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.924 2 INFO nova.virt.libvirt.driver [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Deleting instance files /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_del
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.925 2 INFO nova.virt.libvirt.driver [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Deletion of /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_del complete
Oct 02 08:52:22 compute-0 sudo[387216]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.971 2 INFO nova.compute.manager [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.973 2 DEBUG oslo.service.loopingcall [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.973 2 DEBUG nova.compute.manager [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:52:22 compute-0 nova_compute[260603]: 2025-10-02 08:52:22.974 2 DEBUG nova.network.neutron [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:52:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:52:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:52:23 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:52:23 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:52:23 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 64bfcaf4-9d8d-4204-bfcf-5746f5c73a13 does not exist
Oct 02 08:52:23 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 30de6dfa-4662-4299-bac8-5f9699482a6f does not exist
Oct 02 08:52:23 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5c5563d6-7e23-4baa-b9ce-e107d41d93e2 does not exist
Oct 02 08:52:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:52:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:52:23 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:52:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:52:23 compute-0 sudo[387362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:23 compute-0 sudo[387362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:23 compute-0 sudo[387362]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:23 compute-0 ceph-mon[74477]: pgmap v2220: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.2 KiB/s wr, 67 op/s
Oct 02 08:52:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3660420233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3660420233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:52:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:52:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:52:23 compute-0 sudo[387387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:52:23 compute-0 sudo[387387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:23 compute-0 sudo[387387]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:23 compute-0 sudo[387412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:23 compute-0 sudo[387412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:23 compute-0 sudo[387412]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:23 compute-0 sudo[387437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:52:23 compute-0 sudo[387437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 86 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 121 op/s
Oct 02 08:52:23 compute-0 podman[387502]: 2025-10-02 08:52:23.707595214 +0000 UTC m=+0.064993761 container create 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:52:23 compute-0 systemd[1]: Started libpod-conmon-3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40.scope.
Oct 02 08:52:23 compute-0 podman[387502]: 2025-10-02 08:52:23.674357161 +0000 UTC m=+0.031755768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:52:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:52:23 compute-0 podman[387502]: 2025-10-02 08:52:23.813270355 +0000 UTC m=+0.170668942 container init 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:52:23 compute-0 podman[387502]: 2025-10-02 08:52:23.823739766 +0000 UTC m=+0.181138313 container start 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 08:52:23 compute-0 podman[387502]: 2025-10-02 08:52:23.827990051 +0000 UTC m=+0.185388658 container attach 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:52:23 compute-0 sad_mendeleev[387518]: 167 167
Oct 02 08:52:23 compute-0 systemd[1]: libpod-3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40.scope: Deactivated successfully.
Oct 02 08:52:23 compute-0 conmon[387518]: conmon 3a80ef3f06d5a61122cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40.scope/container/memory.events
Oct 02 08:52:23 compute-0 podman[387502]: 2025-10-02 08:52:23.831807412 +0000 UTC m=+0.189205959 container died 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:52:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c247211939cfa4ff808d8aed7ad2a623e5fbdd17d1e7dad2b410887a50b8ec1e-merged.mount: Deactivated successfully.
Oct 02 08:52:23 compute-0 podman[387502]: 2025-10-02 08:52:23.887782667 +0000 UTC m=+0.245181184 container remove 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 08:52:23 compute-0 systemd[1]: libpod-conmon-3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40.scope: Deactivated successfully.
Oct 02 08:52:24 compute-0 podman[387544]: 2025-10-02 08:52:24.074134445 +0000 UTC m=+0.046418753 container create 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:52:24 compute-0 systemd[1]: Started libpod-conmon-8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d.scope.
Oct 02 08:52:24 compute-0 podman[387544]: 2025-10-02 08:52:24.055241666 +0000 UTC m=+0.027526024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:52:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:24 compute-0 podman[387560]: 2025-10-02 08:52:24.189854084 +0000 UTC m=+0.069187065 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 08:52:24 compute-0 podman[387544]: 2025-10-02 08:52:24.197898088 +0000 UTC m=+0.170182376 container init 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 08:52:24 compute-0 podman[387544]: 2025-10-02 08:52:24.208121053 +0000 UTC m=+0.180405321 container start 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 08:52:24 compute-0 podman[387544]: 2025-10-02 08:52:24.211174019 +0000 UTC m=+0.183458307 container attach 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 08:52:24 compute-0 podman[387558]: 2025-10-02 08:52:24.247177011 +0000 UTC m=+0.123487457 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 02 08:52:24 compute-0 nova_compute[260603]: 2025-10-02 08:52:24.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:25 compute-0 ceph-mon[74477]: pgmap v2221: 305 pgs: 305 active+clean; 86 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 121 op/s
Oct 02 08:52:25 compute-0 magical_ellis[387577]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:52:25 compute-0 magical_ellis[387577]: --> relative data size: 1.0
Oct 02 08:52:25 compute-0 magical_ellis[387577]: --> All data devices are unavailable
Oct 02 08:52:25 compute-0 systemd[1]: libpod-8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d.scope: Deactivated successfully.
Oct 02 08:52:25 compute-0 podman[387544]: 2025-10-02 08:52:25.395683653 +0000 UTC m=+1.367967941 container died 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:52:25 compute-0 systemd[1]: libpod-8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d.scope: Consumed 1.116s CPU time.
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.413 2 DEBUG nova.network.neutron [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b-merged.mount: Deactivated successfully.
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.436 2 INFO nova.compute.manager [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 2.46 seconds to deallocate network for instance.
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.460 2 DEBUG nova.network.neutron [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updated VIF entry in instance network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.460 2 DEBUG nova.network.neutron [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:25 compute-0 podman[387544]: 2025-10-02 08:52:25.466161167 +0000 UTC m=+1.438445445 container remove 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 08:52:25 compute-0 systemd[1]: libpod-conmon-8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d.scope: Deactivated successfully.
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.498 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.499 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.501 2 DEBUG oslo_concurrency.lockutils [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:52:25 compute-0 sudo[387437]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.520 2 DEBUG nova.compute.manager [req-5ad345f9-7856-422b-aa53-ea21d3a0d624 req-957e2f9f-f599-49f9-94af-f7b9fd2b30fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-vif-deleted-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.521 2 INFO nova.compute.manager [req-5ad345f9-7856-422b-aa53-ea21d3a0d624 req-957e2f9f-f599-49f9-94af-f7b9fd2b30fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Neutron deleted interface 236df88e-d54e-410f-9a5c-cf5fa95debd1; detaching it from the instance and deleting it from the info cache
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.521 2 DEBUG nova.network.neutron [req-5ad345f9-7856-422b-aa53-ea21d3a0d624 req-957e2f9f-f599-49f9-94af-f7b9fd2b30fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.561 2 DEBUG nova.compute.manager [req-5ad345f9-7856-422b-aa53-ea21d3a0d624 req-957e2f9f-f599-49f9-94af-f7b9fd2b30fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Detach interface failed, port_id=236df88e-d54e-410f-9a5c-cf5fa95debd1, reason: Instance d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 08:52:25 compute-0 nova_compute[260603]: 2025-10-02 08:52:25.581 2 DEBUG oslo_concurrency.processutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 86 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Oct 02 08:52:25 compute-0 sudo[387643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:25 compute-0 sudo[387643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:25 compute-0 sudo[387643]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:25 compute-0 sudo[387669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:52:25 compute-0 sudo[387669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:25 compute-0 sudo[387669]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:25 compute-0 sudo[387694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:25 compute-0 sudo[387694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:25 compute-0 sudo[387694]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:25 compute-0 sudo[387738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:52:25 compute-0 sudo[387738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:52:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3142570958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.043 2 DEBUG oslo_concurrency.processutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.052 2 DEBUG nova.compute.provider_tree [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.073 2 DEBUG nova.scheduler.client.report [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.096 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3142570958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.125 2 INFO nova.scheduler.client.report [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Deleted allocations for instance d5e3e825-fcee-4f1b-8c05-5a9ee07013d7
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.204 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:26 compute-0 podman[387809]: 2025-10-02 08:52:26.256938197 +0000 UTC m=+0.046386682 container create 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 08:52:26 compute-0 systemd[1]: Started libpod-conmon-6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd.scope.
Oct 02 08:52:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:52:26 compute-0 podman[387809]: 2025-10-02 08:52:26.331186851 +0000 UTC m=+0.120635336 container init 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:52:26 compute-0 podman[387809]: 2025-10-02 08:52:26.338028538 +0000 UTC m=+0.127477023 container start 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:52:26 compute-0 podman[387809]: 2025-10-02 08:52:26.243251883 +0000 UTC m=+0.032700388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:52:26 compute-0 podman[387809]: 2025-10-02 08:52:26.341917411 +0000 UTC m=+0.131365926 container attach 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 08:52:26 compute-0 stupefied_aryabhata[387825]: 167 167
Oct 02 08:52:26 compute-0 systemd[1]: libpod-6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd.scope: Deactivated successfully.
Oct 02 08:52:26 compute-0 conmon[387825]: conmon 6b8fc817dbccb640ca66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd.scope/container/memory.events
Oct 02 08:52:26 compute-0 podman[387809]: 2025-10-02 08:52:26.344531774 +0000 UTC m=+0.133980259 container died 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:52:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3473bd7bd6cc6ab123d10ada981f33d9377ed9eb389aa8d310801e3b6d7c2513-merged.mount: Deactivated successfully.
Oct 02 08:52:26 compute-0 podman[387809]: 2025-10-02 08:52:26.378123609 +0000 UTC m=+0.167572094 container remove 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:52:26 compute-0 systemd[1]: libpod-conmon-6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd.scope: Deactivated successfully.
Oct 02 08:52:26 compute-0 podman[387848]: 2025-10-02 08:52:26.534124975 +0000 UTC m=+0.048196759 container create 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 08:52:26 compute-0 systemd[1]: Started libpod-conmon-1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802.scope.
Oct 02 08:52:26 compute-0 podman[387848]: 2025-10-02 08:52:26.509398781 +0000 UTC m=+0.023470625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:52:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:26 compute-0 podman[387848]: 2025-10-02 08:52:26.639596908 +0000 UTC m=+0.153668662 container init 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:52:26 compute-0 podman[387848]: 2025-10-02 08:52:26.651396643 +0000 UTC m=+0.165468397 container start 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:52:26 compute-0 podman[387848]: 2025-10-02 08:52:26.654614515 +0000 UTC m=+0.168686269 container attach 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.940 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.942 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:26 compute-0 nova_compute[260603]: 2025-10-02 08:52:26.965 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.032 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.032 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.039 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.040 2 INFO nova.compute.claims [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.124 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:27 compute-0 ceph-mon[74477]: pgmap v2222: 305 pgs: 305 active+clean; 86 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Oct 02 08:52:27 compute-0 great_chatelet[387864]: {
Oct 02 08:52:27 compute-0 great_chatelet[387864]:     "0": [
Oct 02 08:52:27 compute-0 great_chatelet[387864]:         {
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "devices": [
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "/dev/loop3"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             ],
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_name": "ceph_lv0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_size": "21470642176",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "name": "ceph_lv0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "tags": {
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cluster_name": "ceph",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.crush_device_class": "",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.encrypted": "0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osd_id": "0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.type": "block",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.vdo": "0"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             },
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "type": "block",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "vg_name": "ceph_vg0"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:         }
Oct 02 08:52:27 compute-0 great_chatelet[387864]:     ],
Oct 02 08:52:27 compute-0 great_chatelet[387864]:     "1": [
Oct 02 08:52:27 compute-0 great_chatelet[387864]:         {
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "devices": [
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "/dev/loop4"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             ],
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_name": "ceph_lv1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_size": "21470642176",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "name": "ceph_lv1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "tags": {
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cluster_name": "ceph",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.crush_device_class": "",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.encrypted": "0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osd_id": "1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.type": "block",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.vdo": "0"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             },
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "type": "block",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "vg_name": "ceph_vg1"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:         }
Oct 02 08:52:27 compute-0 great_chatelet[387864]:     ],
Oct 02 08:52:27 compute-0 great_chatelet[387864]:     "2": [
Oct 02 08:52:27 compute-0 great_chatelet[387864]:         {
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "devices": [
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "/dev/loop5"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             ],
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_name": "ceph_lv2",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_size": "21470642176",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "name": "ceph_lv2",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "tags": {
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.cluster_name": "ceph",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.crush_device_class": "",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.encrypted": "0",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osd_id": "2",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.type": "block",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:                 "ceph.vdo": "0"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             },
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "type": "block",
Oct 02 08:52:27 compute-0 great_chatelet[387864]:             "vg_name": "ceph_vg2"
Oct 02 08:52:27 compute-0 great_chatelet[387864]:         }
Oct 02 08:52:27 compute-0 great_chatelet[387864]:     ]
Oct 02 08:52:27 compute-0 great_chatelet[387864]: }
Oct 02 08:52:27 compute-0 systemd[1]: libpod-1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802.scope: Deactivated successfully.
Oct 02 08:52:27 compute-0 podman[387848]: 2025-10-02 08:52:27.437623489 +0000 UTC m=+0.951695283 container died 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 08:52:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69-merged.mount: Deactivated successfully.
Oct 02 08:52:27 compute-0 podman[387848]: 2025-10-02 08:52:27.508406563 +0000 UTC m=+1.022478317 container remove 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:52:27 compute-0 systemd[1]: libpod-conmon-1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802.scope: Deactivated successfully.
Oct 02 08:52:27 compute-0 sudo[387738]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 KiB/s wr, 79 op/s
Oct 02 08:52:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:52:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589371837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:27 compute-0 sudo[387905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:27 compute-0 sudo[387905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.645 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:27 compute-0 sudo[387905]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.656 2 DEBUG nova.compute.provider_tree [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.675 2 DEBUG nova.scheduler.client.report [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.696 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.697 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:52:27 compute-0 sudo[387932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.746 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.747 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:52:27 compute-0 sudo[387932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:27 compute-0 sudo[387932]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.766 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.783 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:52:27 compute-0 sudo[387957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:27 compute-0 sudo[387957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:27 compute-0 sudo[387957]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.881 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.882 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.883 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Creating image(s)
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.915 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:52:27 compute-0 sudo[387982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:52:27 compute-0 sudo[387982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:52:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:52:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:52:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:52:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:52:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.945 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.971 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:52:27 compute-0 nova_compute[260603]: 2025-10-02 08:52:27.975 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 02 08:52:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 02 08:52:28 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:52:28
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.080 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.081 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.081 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.081 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.106 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.109 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/589371837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:28 compute-0 ceph-mon[74477]: osdmap e284: 3 total, 3 up, 3 in
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:52:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:52:28 compute-0 podman[388138]: 2025-10-02 08:52:28.403783989 +0000 UTC m=+0.054117796 container create 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.413 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:28 compute-0 systemd[1]: Started libpod-conmon-589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa.scope.
Oct 02 08:52:28 compute-0 podman[388138]: 2025-10-02 08:52:28.380116839 +0000 UTC m=+0.030450696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:52:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.484 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:52:28 compute-0 podman[388138]: 2025-10-02 08:52:28.500778355 +0000 UTC m=+0.151112192 container init 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 08:52:28 compute-0 podman[388138]: 2025-10-02 08:52:28.514207621 +0000 UTC m=+0.164541448 container start 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:28 compute-0 podman[388138]: 2025-10-02 08:52:28.519431776 +0000 UTC m=+0.169765633 container attach 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:52:28 compute-0 distracted_maxwell[388173]: 167 167
Oct 02 08:52:28 compute-0 systemd[1]: libpod-589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa.scope: Deactivated successfully.
Oct 02 08:52:28 compute-0 podman[388138]: 2025-10-02 08:52:28.524097844 +0000 UTC m=+0.174431651 container died 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:52:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2f5d90b66cdd8ffd3a66d722219e0ab2e616680456d4e85fb0ec9916fcc05e7-merged.mount: Deactivated successfully.
Oct 02 08:52:28 compute-0 podman[388138]: 2025-10-02 08:52:28.565710993 +0000 UTC m=+0.216044800 container remove 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:52:28 compute-0 systemd[1]: libpod-conmon-589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa.scope: Deactivated successfully.
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.586 2 DEBUG nova.objects.instance [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.610 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.610 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Ensure instance console log exists: /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.611 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.612 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.612 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:28 compute-0 nova_compute[260603]: 2025-10-02 08:52:28.656 2 DEBUG nova.policy [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:52:28 compute-0 podman[388251]: 2025-10-02 08:52:28.78673848 +0000 UTC m=+0.057399780 container create afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:52:28 compute-0 systemd[1]: Started libpod-conmon-afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de.scope.
Oct 02 08:52:28 compute-0 podman[388251]: 2025-10-02 08:52:28.759208188 +0000 UTC m=+0.029869548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:52:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:28 compute-0 podman[388251]: 2025-10-02 08:52:28.903256404 +0000 UTC m=+0.173917744 container init afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:52:28 compute-0 podman[388251]: 2025-10-02 08:52:28.913837909 +0000 UTC m=+0.184499179 container start afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:52:28 compute-0 podman[388251]: 2025-10-02 08:52:28.91858865 +0000 UTC m=+0.189249910 container attach afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:52:29 compute-0 ceph-mon[74477]: pgmap v2223: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 KiB/s wr, 79 op/s
Oct 02 08:52:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Oct 02 08:52:29 compute-0 nova_compute[260603]: 2025-10-02 08:52:29.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]: {
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "osd_id": 2,
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "type": "bluestore"
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:     },
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "osd_id": 1,
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "type": "bluestore"
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:     },
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "osd_id": 0,
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:         "type": "bluestore"
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]:     }
Oct 02 08:52:29 compute-0 wonderful_rubin[388268]: }
Oct 02 08:52:30 compute-0 systemd[1]: libpod-afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de.scope: Deactivated successfully.
Oct 02 08:52:30 compute-0 podman[388251]: 2025-10-02 08:52:30.013584005 +0000 UTC m=+1.284245305 container died afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:52:30 compute-0 systemd[1]: libpod-afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de.scope: Consumed 1.105s CPU time.
Oct 02 08:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22-merged.mount: Deactivated successfully.
Oct 02 08:52:30 compute-0 podman[388251]: 2025-10-02 08:52:30.086554309 +0000 UTC m=+1.357215599 container remove afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 08:52:30 compute-0 systemd[1]: libpod-conmon-afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de.scope: Deactivated successfully.
Oct 02 08:52:30 compute-0 sudo[387982]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:52:30 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:52:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:52:30 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:52:30 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b404ed55-95a0-433c-95d5-79d2cb4e5c3f does not exist
Oct 02 08:52:30 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 64904732-6785-4aac-bc9a-fa768ff05f5d does not exist
Oct 02 08:52:30 compute-0 sudo[388315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:52:30 compute-0 sudo[388315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:30 compute-0 sudo[388315]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:30 compute-0 sudo[388340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:52:30 compute-0 sudo[388340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:52:30 compute-0 sudo[388340]: pam_unix(sudo:session): session closed for user root
Oct 02 08:52:30 compute-0 nova_compute[260603]: 2025-10-02 08:52:30.328 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395135.326101, 062b2a3e-b612-42bf-b96c-fb9bdd9008ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:52:30 compute-0 nova_compute[260603]: 2025-10-02 08:52:30.329 2 INFO nova.compute.manager [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] VM Stopped (Lifecycle Event)
Oct 02 08:52:30 compute-0 nova_compute[260603]: 2025-10-02 08:52:30.361 2 DEBUG nova.compute.manager [None req-71e12c87-40bf-4889-9653-311681661fb8 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:30 compute-0 nova_compute[260603]: 2025-10-02 08:52:30.961 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Successfully created port: 4092976f-1133-4b84-91bd-87043169cb4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:52:31 compute-0 ceph-mon[74477]: pgmap v2225: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Oct 02 08:52:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:52:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:52:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.607 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Successfully updated port: 4092976f-1133-4b84-91bd-87043169cb4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.628 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.628 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.628 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.743 2 DEBUG nova.compute.manager [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-changed-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.743 2 DEBUG nova.compute.manager [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing instance network info cache due to event network-changed-4092976f-1133-4b84-91bd-87043169cb4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.744 2 DEBUG oslo_concurrency.lockutils [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:52:32 compute-0 nova_compute[260603]: 2025-10-02 08:52:32.817 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:52:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:33 compute-0 ceph-mon[74477]: pgmap v2226: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Oct 02 08:52:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.752 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.779 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.780 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance network_info: |[{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.781 2 DEBUG oslo_concurrency.lockutils [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.782 2 DEBUG nova.network.neutron [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.786 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start _get_guest_xml network_info=[{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.793 2 WARNING nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.800 2 DEBUG nova.virt.libvirt.host [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.800 2 DEBUG nova.virt.libvirt.host [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.806 2 DEBUG nova.virt.libvirt.host [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.806 2 DEBUG nova.virt.libvirt.host [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.807 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.807 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.808 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.808 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.808 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.808 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.809 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.809 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.809 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.809 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.810 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.810 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.812 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:34.833 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:34 compute-0 nova_compute[260603]: 2025-10-02 08:52:34.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:35 compute-0 ceph-mon[74477]: pgmap v2227: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 02 08:52:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:52:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3594699734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.317 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.351 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.355 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 02 08:52:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:52:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176357024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.824 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.825 2 DEBUG nova.virt.libvirt.vif [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1775312205',display_name='tempest-TestNetworkAdvancedServerOps-server-1775312205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1775312205',id=122,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAoXXSuQzVTrLfupmZ2yj1eQO64gMLoVS/w1fXN3YlLWUVSU8Ny9eVy5tfjHnq8vP2d+YlXvRC/+xl37WQ+3lkHjliAoJJEZ9n209ktTnU2lK2CJZClltvCqZ21bE87w==',key_name='tempest-TestNetworkAdvancedServerOps-1329763099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-gkp0e6eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:52:27Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=094b6b24-7323-4ad5-bd0d-e449c0c96f6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.826 2 DEBUG nova.network.os_vif_util [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.827 2 DEBUG nova.network.os_vif_util [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.828 2 DEBUG nova.objects.instance [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.860 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <uuid>094b6b24-7323-4ad5-bd0d-e449c0c96f6f</uuid>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <name>instance-0000007a</name>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1775312205</nova:name>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:52:34</nova:creationTime>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <nova:port uuid="4092976f-1133-4b84-91bd-87043169cb4f">
Oct 02 08:52:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <system>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <entry name="serial">094b6b24-7323-4ad5-bd0d-e449c0c96f6f</entry>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <entry name="uuid">094b6b24-7323-4ad5-bd0d-e449c0c96f6f</entry>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </system>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <os>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   </os>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <features>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   </features>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk">
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config">
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:52:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:9e:3a:a7"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <target dev="tap4092976f-11"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/console.log" append="off"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <video>
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </video>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:52:35 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:52:35 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:52:35 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:52:35 compute-0 nova_compute[260603]: </domain>
Oct 02 08:52:35 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.861 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Preparing to wait for external event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.862 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.862 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.862 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.863 2 DEBUG nova.virt.libvirt.vif [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1775312205',display_name='tempest-TestNetworkAdvancedServerOps-server-1775312205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1775312205',id=122,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAoXXSuQzVTrLfupmZ2yj1eQO64gMLoVS/w1fXN3YlLWUVSU8Ny9eVy5tfjHnq8vP2d+YlXvRC/+xl37WQ+3lkHjliAoJJEZ9n209ktTnU2lK2CJZClltvCqZ21bE87w==',key_name='tempest-TestNetworkAdvancedServerOps-1329763099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-gkp0e6eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:52:27Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=094b6b24-7323-4ad5-bd0d-e449c0c96f6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.863 2 DEBUG nova.network.os_vif_util [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.864 2 DEBUG nova.network.os_vif_util [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.865 2 DEBUG os_vif [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.866 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.866 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4092976f-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4092976f-11, col_values=(('external_ids', {'iface-id': '4092976f-1133-4b84-91bd-87043169cb4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:3a:a7', 'vm-uuid': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:35 compute-0 NetworkManager[45129]: <info>  [1759395155.9016] manager: (tap4092976f-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.910 2 INFO os_vif [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11')
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.973 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.976 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.976 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:9e:3a:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:52:35 compute-0 nova_compute[260603]: 2025-10-02 08:52:35.977 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Using config drive
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.007 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:52:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3594699734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:52:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1176357024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.629 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Creating config drive at /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.640 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3jxxvc9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.817 2 DEBUG nova.network.neutron [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updated VIF entry in instance network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.819 2 DEBUG nova.network.neutron [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.823 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3jxxvc9" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.863 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.868 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:36 compute-0 nova_compute[260603]: 2025-10-02 08:52:36.927 2 DEBUG oslo_concurrency.lockutils [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.060 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.062 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Deleting local config drive /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config because it was imported into RBD.
Oct 02 08:52:37 compute-0 kernel: tap4092976f-11: entered promiscuous mode
Oct 02 08:52:37 compute-0 NetworkManager[45129]: <info>  [1759395157.1397] manager: (tap4092976f-11): new Tun device (/org/freedesktop/NetworkManager/Devices/506)
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 ovn_controller[152344]: 2025-10-02T08:52:37Z|01289|binding|INFO|Claiming lport 4092976f-1133-4b84-91bd-87043169cb4f for this chassis.
Oct 02 08:52:37 compute-0 ovn_controller[152344]: 2025-10-02T08:52:37Z|01290|binding|INFO|4092976f-1133-4b84-91bd-87043169cb4f: Claiming fa:16:3e:9e:3a:a7 10.100.0.3
Oct 02 08:52:37 compute-0 ceph-mon[74477]: pgmap v2228: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.199 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:a7 10.100.0.3'], port_security=['fa:16:3e:9e:3a:a7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca65236d-124f-4d37-afbf-f114cc90e015', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c5f2f44-40d9-415b-8ae8-affca47a93a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a762f3-67f7-4fbc-8632-5b2810d06f8f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4092976f-1133-4b84-91bd-87043169cb4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.200 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4092976f-1133-4b84-91bd-87043169cb4f in datapath ca65236d-124f-4d37-afbf-f114cc90e015 bound to our chassis
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.203 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca65236d-124f-4d37-afbf-f114cc90e015
Oct 02 08:52:37 compute-0 systemd-udevd[388503]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:52:37 compute-0 systemd-machined[214636]: New machine qemu-154-instance-0000007a.
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.226 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[71f3c494-4566-4324-b22f-ab1235a76011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.227 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca65236d-11 in ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.230 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca65236d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.230 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4889272-2dcc-4a58-afff-24fd2ca269ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.232 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[56da630e-7980-4f9f-93be-e23ad7b24d17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.246 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1d6cbd-2a3e-451e-8a4e-29e8fa6612af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 NetworkManager[45129]: <info>  [1759395157.2551] device (tap4092976f-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:52:37 compute-0 NetworkManager[45129]: <info>  [1759395157.2558] device (tap4092976f-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:52:37 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007a.
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.276 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[abd50e35-1219-46ec-b0b9-dd32ff708a71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 ovn_controller[152344]: 2025-10-02T08:52:37Z|01291|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f ovn-installed in OVS
Oct 02 08:52:37 compute-0 ovn_controller[152344]: 2025-10-02T08:52:37Z|01292|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f up in Southbound
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.313 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6957a81-e3c7-4c05-8950-1fd091cf8fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.325 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2fd6f6-4c5f-4af3-8e71-da43813f04b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 NetworkManager[45129]: <info>  [1759395157.3267] manager: (tapca65236d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/507)
Oct 02 08:52:37 compute-0 systemd-udevd[388507]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.372 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[706fbf47-29d5-459e-a031-0ccc11dc3d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.377 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9a6eee3d-d89d-4a0a-92f4-a5707d57ba18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 NetworkManager[45129]: <info>  [1759395157.4063] device (tapca65236d-10): carrier: link connected
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.412 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c39960ed-2e5f-4f2f-817d-29b800570524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b642de-dde7-4382-b507-5c4a593cd61e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca65236d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:d7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603600, 'reachable_time': 44887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388535, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.464 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ddd31c-c875-488f-9ab3-53ff22eb4b71]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:d753'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603600, 'tstamp': 603600}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388536, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.489 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[884e8e58-f9cf-4856-8adc-82df75ecab3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca65236d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:d7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603600, 'reachable_time': 44887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388537, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.508 2 DEBUG nova.compute.manager [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.508 2 DEBUG oslo_concurrency.lockutils [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.508 2 DEBUG oslo_concurrency.lockutils [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.508 2 DEBUG oslo_concurrency.lockutils [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.509 2 DEBUG nova.compute.manager [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Processing event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.527 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48fadc4d-7f4e-4833-9f66-7691973a64e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.529 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395142.528547, d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.530 2 INFO nova.compute.manager [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] VM Stopped (Lifecycle Event)
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.553 2 DEBUG nova.compute.manager [None req-1ce90d65-4a7a-4aaa-8ff0-32558516d18e - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.601 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[88f87964-2e66-4d6e-8b62-5fb025790ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca65236d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.603 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.603 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca65236d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:37 compute-0 NetworkManager[45129]: <info>  [1759395157.6060] manager: (tapca65236d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 kernel: tapca65236d-10: entered promiscuous mode
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.611 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca65236d-10, col_values=(('external_ids', {'iface-id': '11510ab0-6469-4123-a903-78f5dba4ba14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:37 compute-0 ovn_controller[152344]: 2025-10-02T08:52:37Z|01293|binding|INFO|Releasing lport 11510ab0-6469-4123-a903-78f5dba4ba14 from this chassis (sb_readonly=0)
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 nova_compute[260603]: 2025-10-02 08:52:37.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.633 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.634 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6a908d15-e81c-4e6a-901a-3786ba2fcacf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.635 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-ca65236d-124f-4d37-afbf-f114cc90e015
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID ca65236d-124f-4d37-afbf-f114cc90e015
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:52:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.636 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'env', 'PROCESS_TAG=haproxy-ca65236d-124f-4d37-afbf-f114cc90e015', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca65236d-124f-4d37-afbf-f114cc90e015.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:52:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:38 compute-0 podman[388609]: 2025-10-02 08:52:38.038921965 +0000 UTC m=+0.076919720 container create 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:52:38 compute-0 systemd[1]: Started libpod-conmon-795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66.scope.
Oct 02 08:52:38 compute-0 podman[388609]: 2025-10-02 08:52:37.989687504 +0000 UTC m=+0.027685289 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:52:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bb29d2f3499731a67cce06b19a7eb0e193f73fce6d50e29c10b23f7511d4a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:52:38 compute-0 podman[388609]: 2025-10-02 08:52:38.141610011 +0000 UTC m=+0.179607836 container init 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:52:38 compute-0 podman[388609]: 2025-10-02 08:52:38.152044852 +0000 UTC m=+0.190042617 container start 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.166 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395158.1655385, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.166 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Started (Lifecycle Event)
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.168 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:52:38 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [NOTICE]   (388629) : New worker (388631) forked
Oct 02 08:52:38 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [NOTICE]   (388629) : Loading success.
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.189 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.198 2 INFO nova.virt.libvirt.driver [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance spawned successfully.
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.198 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.203 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.209 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.226 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.227 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.227 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.227 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.228 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.228 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.242 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.242 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395158.1680305, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.242 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Paused (Lifecycle Event)
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.272 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.274 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395158.1709986, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.274 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Resumed (Lifecycle Event)
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.302 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.304 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.314 2 INFO nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Took 10.43 seconds to spawn the instance on the hypervisor.
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.315 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.325 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.392 2 INFO nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Took 11.38 seconds to build instance.
Oct 02 08:52:38 compute-0 nova_compute[260603]: 2025-10-02 08:52:38.410 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:52:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:52:39 compute-0 ceph-mon[74477]: pgmap v2229: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Oct 02 08:52:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 02 08:52:39 compute-0 nova_compute[260603]: 2025-10-02 08:52:39.601 2 DEBUG nova.compute.manager [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:39 compute-0 nova_compute[260603]: 2025-10-02 08:52:39.602 2 DEBUG oslo_concurrency.lockutils [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:39 compute-0 nova_compute[260603]: 2025-10-02 08:52:39.602 2 DEBUG oslo_concurrency.lockutils [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:39 compute-0 nova_compute[260603]: 2025-10-02 08:52:39.603 2 DEBUG oslo_concurrency.lockutils [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:39 compute-0 nova_compute[260603]: 2025-10-02 08:52:39.603 2 DEBUG nova.compute.manager [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:52:39 compute-0 nova_compute[260603]: 2025-10-02 08:52:39.603 2 WARNING nova.compute.manager [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state active and task_state None.
Oct 02 08:52:39 compute-0 nova_compute[260603]: 2025-10-02 08:52:39.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:40 compute-0 nova_compute[260603]: 2025-10-02 08:52:40.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:40 compute-0 nova_compute[260603]: 2025-10-02 08:52:40.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:41 compute-0 ceph-mon[74477]: pgmap v2230: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 02 08:52:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 08:52:42 compute-0 nova_compute[260603]: 2025-10-02 08:52:42.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:42 compute-0 nova_compute[260603]: 2025-10-02 08:52:42.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:52:42 compute-0 nova_compute[260603]: 2025-10-02 08:52:42.557 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:52:42 compute-0 nova_compute[260603]: 2025-10-02 08:52:42.557 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:43 compute-0 ovn_controller[152344]: 2025-10-02T08:52:43Z|01294|binding|INFO|Releasing lport 11510ab0-6469-4123-a903-78f5dba4ba14 from this chassis (sb_readonly=0)
Oct 02 08:52:43 compute-0 NetworkManager[45129]: <info>  [1759395163.1269] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Oct 02 08:52:43 compute-0 NetworkManager[45129]: <info>  [1759395163.1289] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:43 compute-0 ovn_controller[152344]: 2025-10-02T08:52:43Z|01295|binding|INFO|Releasing lport 11510ab0-6469-4123-a903-78f5dba4ba14 from this chassis (sb_readonly=0)
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:43 compute-0 ceph-mon[74477]: pgmap v2231: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.544 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.545 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.732 2 DEBUG nova.compute.manager [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-changed-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.732 2 DEBUG nova.compute.manager [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing instance network info cache due to event network-changed-4092976f-1133-4b84-91bd-87043169cb4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.733 2 DEBUG oslo_concurrency.lockutils [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.733 2 DEBUG oslo_concurrency.lockutils [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:52:43 compute-0 nova_compute[260603]: 2025-10-02 08:52:43.733 2 DEBUG nova.network.neutron [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:52:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:52:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445214993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.020 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.109 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.110 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:52:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3445214993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.328 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.329 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.330 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.330 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.403 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 094b6b24-7323-4ad5-bd0d-e449c0c96f6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.404 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.404 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.442 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:52:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:52:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147402747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.879 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.888 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.916 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.977 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:52:44 compute-0 nova_compute[260603]: 2025-10-02 08:52:44.978 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:45 compute-0 ceph-mon[74477]: pgmap v2232: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:52:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2147402747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:52:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:52:45 compute-0 nova_compute[260603]: 2025-10-02 08:52:45.641 2 DEBUG nova.network.neutron [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updated VIF entry in instance network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:52:45 compute-0 nova_compute[260603]: 2025-10-02 08:52:45.642 2 DEBUG nova.network.neutron [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:52:45 compute-0 nova_compute[260603]: 2025-10-02 08:52:45.664 2 DEBUG oslo_concurrency.lockutils [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:52:45 compute-0 nova_compute[260603]: 2025-10-02 08:52:45.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:47 compute-0 ceph-mon[74477]: pgmap v2233: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:52:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:52:47 compute-0 nova_compute[260603]: 2025-10-02 08:52:47.974 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:47 compute-0 nova_compute[260603]: 2025-10-02 08:52:47.974 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:48 compute-0 podman[388687]: 2025-10-02 08:52:48.038510446 +0000 UTC m=+0.088415214 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:52:48 compute-0 podman[388686]: 2025-10-02 08:52:48.068020082 +0000 UTC m=+0.121304368 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:52:49 compute-0 ceph-mon[74477]: pgmap v2234: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:52:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct 02 08:52:49 compute-0 ovn_controller[152344]: 2025-10-02T08:52:49Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:3a:a7 10.100.0.3
Oct 02 08:52:49 compute-0 ovn_controller[152344]: 2025-10-02T08:52:49Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:3a:a7 10.100.0.3
Oct 02 08:52:49 compute-0 nova_compute[260603]: 2025-10-02 08:52:49.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:50 compute-0 ceph-mon[74477]: pgmap v2235: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct 02 08:52:50 compute-0 nova_compute[260603]: 2025-10-02 08:52:50.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:51 compute-0 nova_compute[260603]: 2025-10-02 08:52:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 02 08:52:52 compute-0 ceph-mon[74477]: pgmap v2236: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 02 08:52:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 08:52:54 compute-0 ceph-mon[74477]: pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 08:52:54 compute-0 nova_compute[260603]: 2025-10-02 08:52:54.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:55 compute-0 podman[388731]: 2025-10-02 08:52:55.011423741 +0000 UTC m=+0.070107273 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:52:55 compute-0 podman[388730]: 2025-10-02 08:52:55.017613918 +0000 UTC m=+0.078905022 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 08:52:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:52:55 compute-0 nova_compute[260603]: 2025-10-02 08:52:55.621 2 INFO nova.compute.manager [None req-30ea783e-e4b5-4579-9b92-151299b25866 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Get console output
Oct 02 08:52:55 compute-0 nova_compute[260603]: 2025-10-02 08:52:55.629 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:52:55 compute-0 nova_compute[260603]: 2025-10-02 08:52:55.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:56 compute-0 nova_compute[260603]: 2025-10-02 08:52:56.394 2 DEBUG nova.objects.instance [None req-876a88ad-a9f1-4b9c-ab37-67d5d6b67d8d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:52:56 compute-0 nova_compute[260603]: 2025-10-02 08:52:56.424 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395176.4235923, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:52:56 compute-0 nova_compute[260603]: 2025-10-02 08:52:56.424 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Paused (Lifecycle Event)
Oct 02 08:52:56 compute-0 nova_compute[260603]: 2025-10-02 08:52:56.445 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:56 compute-0 nova_compute[260603]: 2025-10-02 08:52:56.451 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:52:56 compute-0 nova_compute[260603]: 2025-10-02 08:52:56.473 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 08:52:56 compute-0 nova_compute[260603]: 2025-10-02 08:52:56.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:52:56 compute-0 ceph-mon[74477]: pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.073895) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177073951, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2127, "num_deletes": 255, "total_data_size": 3394690, "memory_usage": 3448808, "flush_reason": "Manual Compaction"}
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177101031, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3303781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45063, "largest_seqno": 47189, "table_properties": {"data_size": 3294116, "index_size": 6095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19987, "raw_average_key_size": 20, "raw_value_size": 3274734, "raw_average_value_size": 3351, "num_data_blocks": 269, "num_entries": 977, "num_filter_entries": 977, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394972, "oldest_key_time": 1759394972, "file_creation_time": 1759395177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 27181 microseconds, and 9229 cpu microseconds.
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.101081) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3303781 bytes OK
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.101103) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.102453) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.102805) EVENT_LOG_v1 {"time_micros": 1759395177102799, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.102826) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3385702, prev total WAL file size 3385702, number of live WAL files 2.
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.104380) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3226KB)], [104(8495KB)]
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177104449, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12003531, "oldest_snapshot_seqno": -1}
Oct 02 08:52:57 compute-0 kernel: tap4092976f-11 (unregistering): left promiscuous mode
Oct 02 08:52:57 compute-0 NetworkManager[45129]: <info>  [1759395177.1506] device (tap4092976f-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:52:57 compute-0 ovn_controller[152344]: 2025-10-02T08:52:57Z|01296|binding|INFO|Releasing lport 4092976f-1133-4b84-91bd-87043169cb4f from this chassis (sb_readonly=0)
Oct 02 08:52:57 compute-0 ovn_controller[152344]: 2025-10-02T08:52:57Z|01297|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f down in Southbound
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:57 compute-0 ovn_controller[152344]: 2025-10-02T08:52:57Z|01298|binding|INFO|Removing iface tap4092976f-11 ovn-installed in OVS
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.171 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:a7 10.100.0.3'], port_security=['fa:16:3e:9e:3a:a7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca65236d-124f-4d37-afbf-f114cc90e015', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c5f2f44-40d9-415b-8ae8-affca47a93a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a762f3-67f7-4fbc-8632-5b2810d06f8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4092976f-1133-4b84-91bd-87043169cb4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.175 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4092976f-1133-4b84-91bd-87043169cb4f in datapath ca65236d-124f-4d37-afbf-f114cc90e015 unbound from our chassis
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.177 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca65236d-124f-4d37-afbf-f114cc90e015, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5458460e-45fe-45e2-a829-de5fb9fcc205]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.179 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 namespace which is not needed anymore
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7021 keys, 10324194 bytes, temperature: kUnknown
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177183075, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10324194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10275896, "index_size": 29626, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 180990, "raw_average_key_size": 25, "raw_value_size": 10149031, "raw_average_value_size": 1445, "num_data_blocks": 1167, "num_entries": 7021, "num_filter_entries": 7021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.183293) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10324194 bytes
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.184294) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.5 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7545, records dropped: 524 output_compression: NoCompression
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.184310) EVENT_LOG_v1 {"time_micros": 1759395177184303, "job": 62, "event": "compaction_finished", "compaction_time_micros": 78705, "compaction_time_cpu_micros": 42361, "output_level": 6, "num_output_files": 1, "total_output_size": 10324194, "num_input_records": 7545, "num_output_records": 7021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177185024, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177186800, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.104249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:52:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:57 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct 02 08:52:57 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Consumed 13.146s CPU time.
Oct 02 08:52:57 compute-0 systemd-machined[214636]: Machine qemu-154-instance-0000007a terminated.
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.296 2 DEBUG nova.compute.manager [None req-876a88ad-a9f1-4b9c-ab37-67d5d6b67d8d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:52:57 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [NOTICE]   (388629) : haproxy version is 2.8.14-c23fe91
Oct 02 08:52:57 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [NOTICE]   (388629) : path to executable is /usr/sbin/haproxy
Oct 02 08:52:57 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [WARNING]  (388629) : Exiting Master process...
Oct 02 08:52:57 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [WARNING]  (388629) : Exiting Master process...
Oct 02 08:52:57 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [ALERT]    (388629) : Current worker (388631) exited with code 143 (Terminated)
Oct 02 08:52:57 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [WARNING]  (388629) : All workers exited. Exiting... (0)
Oct 02 08:52:57 compute-0 systemd[1]: libpod-795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66.scope: Deactivated successfully.
Oct 02 08:52:57 compute-0 podman[388800]: 2025-10-02 08:52:57.350035304 +0000 UTC m=+0.058755254 container died 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-88bb29d2f3499731a67cce06b19a7eb0e193f73fce6d50e29c10b23f7511d4a2-merged.mount: Deactivated successfully.
Oct 02 08:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66-userdata-shm.mount: Deactivated successfully.
Oct 02 08:52:57 compute-0 podman[388800]: 2025-10-02 08:52:57.389191135 +0000 UTC m=+0.097911085 container cleanup 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:52:57 compute-0 systemd[1]: libpod-conmon-795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66.scope: Deactivated successfully.
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.446 2 DEBUG nova.compute.manager [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.446 2 DEBUG oslo_concurrency.lockutils [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.447 2 DEBUG oslo_concurrency.lockutils [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.447 2 DEBUG oslo_concurrency.lockutils [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.447 2 DEBUG nova.compute.manager [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.447 2 WARNING nova.compute.manager [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state suspended and task_state None.
Oct 02 08:52:57 compute-0 podman[388832]: 2025-10-02 08:52:57.457636686 +0000 UTC m=+0.048238701 container remove 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.467 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0443b151-90bc-4fff-bd8b-96240b919648]: (4, ('Thu Oct  2 08:52:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 (795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66)\n795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66\nThu Oct  2 08:52:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 (795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66)\n795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.468 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[80e71a20-ec3a-43fc-9ae0-14fd4e8d8622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.469 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca65236d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:57 compute-0 kernel: tapca65236d-10: left promiscuous mode
Oct 02 08:52:57 compute-0 nova_compute[260603]: 2025-10-02 08:52:57.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.525 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8420fbae-3a1a-47dd-b8ea-aa62b7b4cd43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.566 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ad2838-3e47-4e46-b090-39a9b688aa4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.567 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6ec4ec-07d0-43aa-ad89-4bff334649ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.589 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c1fafd-6b1b-4c09-8908-5b48791daf2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603591, 'reachable_time': 32463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388851, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.593 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:52:57 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.594 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6cd7b5-8d49-46f6-9f07-ef4691cbc976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:52:57 compute-0 systemd[1]: run-netns-ovnmeta\x2dca65236d\x2d124f\x2d4d37\x2dafbf\x2df114cc90e015.mount: Deactivated successfully.
Oct 02 08:52:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:52:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:52:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:52:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:52:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:52:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:52:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:52:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:52:59 compute-0 ceph-mon[74477]: pgmap v2239: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:52:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 08:52:59 compute-0 nova_compute[260603]: 2025-10-02 08:52:59.619 2 DEBUG nova.compute.manager [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:52:59 compute-0 nova_compute[260603]: 2025-10-02 08:52:59.620 2 DEBUG oslo_concurrency.lockutils [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:52:59 compute-0 nova_compute[260603]: 2025-10-02 08:52:59.621 2 DEBUG oslo_concurrency.lockutils [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:52:59 compute-0 nova_compute[260603]: 2025-10-02 08:52:59.621 2 DEBUG oslo_concurrency.lockutils [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:52:59 compute-0 nova_compute[260603]: 2025-10-02 08:52:59.622 2 DEBUG nova.compute.manager [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:52:59 compute-0 nova_compute[260603]: 2025-10-02 08:52:59.622 2 WARNING nova.compute.manager [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state suspended and task_state None.
Oct 02 08:52:59 compute-0 nova_compute[260603]: 2025-10-02 08:52:59.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:00 compute-0 nova_compute[260603]: 2025-10-02 08:53:00.182 2 INFO nova.compute.manager [None req-8174d3f2-90e6-4535-b808-136a879bd745 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Get console output
Oct 02 08:53:00 compute-0 nova_compute[260603]: 2025-10-02 08:53:00.769 2 INFO nova.compute.manager [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Resuming
Oct 02 08:53:00 compute-0 nova_compute[260603]: 2025-10-02 08:53:00.770 2 DEBUG nova.objects.instance [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:53:00 compute-0 nova_compute[260603]: 2025-10-02 08:53:00.807 2 DEBUG oslo_concurrency.lockutils [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:53:00 compute-0 nova_compute[260603]: 2025-10-02 08:53:00.807 2 DEBUG oslo_concurrency.lockutils [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:53:00 compute-0 nova_compute[260603]: 2025-10-02 08:53:00.808 2 DEBUG nova.network.neutron [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:53:00 compute-0 nova_compute[260603]: 2025-10-02 08:53:00.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:01 compute-0 ceph-mon[74477]: pgmap v2240: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 08:53:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:53:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:03 compute-0 ceph-mon[74477]: pgmap v2241: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.353 2 DEBUG nova.network.neutron [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.381 2 DEBUG oslo_concurrency.lockutils [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.389 2 DEBUG nova.virt.libvirt.vif [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1775312205',display_name='tempest-TestNetworkAdvancedServerOps-server-1775312205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1775312205',id=122,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAoXXSuQzVTrLfupmZ2yj1eQO64gMLoVS/w1fXN3YlLWUVSU8Ny9eVy5tfjHnq8vP2d+YlXvRC/+xl37WQ+3lkHjliAoJJEZ9n209ktTnU2lK2CJZClltvCqZ21bE87w==',key_name='tempest-TestNetworkAdvancedServerOps-1329763099',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:52:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-gkp0e6eh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:52:57Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=094b6b24-7323-4ad5-bd0d-e449c0c96f6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.389 2 DEBUG nova.network.os_vif_util [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.391 2 DEBUG nova.network.os_vif_util [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.391 2 DEBUG os_vif [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4092976f-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4092976f-11, col_values=(('external_ids', {'iface-id': '4092976f-1133-4b84-91bd-87043169cb4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:3a:a7', 'vm-uuid': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.398 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.398 2 INFO os_vif [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11')
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.433 2 DEBUG nova.objects.instance [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'numa_topology' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:53:03 compute-0 NetworkManager[45129]: <info>  [1759395183.5265] manager: (tap4092976f-11): new Tun device (/org/freedesktop/NetworkManager/Devices/511)
Oct 02 08:53:03 compute-0 kernel: tap4092976f-11: entered promiscuous mode
Oct 02 08:53:03 compute-0 ovn_controller[152344]: 2025-10-02T08:53:03Z|01299|binding|INFO|Claiming lport 4092976f-1133-4b84-91bd-87043169cb4f for this chassis.
Oct 02 08:53:03 compute-0 ovn_controller[152344]: 2025-10-02T08:53:03Z|01300|binding|INFO|4092976f-1133-4b84-91bd-87043169cb4f: Claiming fa:16:3e:9e:3a:a7 10.100.0.3
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.589 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:a7 10.100.0.3'], port_security=['fa:16:3e:9e:3a:a7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca65236d-124f-4d37-afbf-f114cc90e015', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1c5f2f44-40d9-415b-8ae8-affca47a93a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a762f3-67f7-4fbc-8632-5b2810d06f8f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4092976f-1133-4b84-91bd-87043169cb4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.591 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4092976f-1133-4b84-91bd-87043169cb4f in datapath ca65236d-124f-4d37-afbf-f114cc90e015 bound to our chassis
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.594 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca65236d-124f-4d37-afbf-f114cc90e015
Oct 02 08:53:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:53:03 compute-0 systemd-udevd[388866]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:53:03 compute-0 ovn_controller[152344]: 2025-10-02T08:53:03Z|01301|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f ovn-installed in OVS
Oct 02 08:53:03 compute-0 ovn_controller[152344]: 2025-10-02T08:53:03Z|01302|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f up in Southbound
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.608 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[178e4c72-ab00-4063-ae07-2364d7f59030]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.609 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca65236d-11 in ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.612 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca65236d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.612 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3caa3884-d460-4805-8bc3-6446e72b84d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fe698746-bd45-4b4c-889b-1e386621c106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 NetworkManager[45129]: <info>  [1759395183.6263] device (tap4092976f-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:53:03 compute-0 NetworkManager[45129]: <info>  [1759395183.6270] device (tap4092976f-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.625 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e548fdcf-b162-4f39-a09c-4fc4b22da48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 systemd-machined[214636]: New machine qemu-155-instance-0000007a.
Oct 02 08:53:03 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007a.
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.656 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f703766a-68bf-4627-8d5d-f90e038a0435]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.688 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f192fe1a-8257-4f77-88d7-4e6869bea14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.693 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f0fe75-0dc5-4619-98eb-3d7b5229a870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 NetworkManager[45129]: <info>  [1759395183.6968] manager: (tapca65236d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/512)
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.748 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d80a886-459b-4c1d-aa89-646a131fd7ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.750 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2eb8a4-2d7f-4560-ade9-ff70e5661d01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 NetworkManager[45129]: <info>  [1759395183.7883] device (tapca65236d-10): carrier: link connected
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.798 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[768426df-fa41-4916-8e03-42eea1fb4547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.824 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb5e013-531d-4fbe-b0e3-c0aa4a227772]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca65236d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:d7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606239, 'reachable_time': 32654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388899, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.829 2 DEBUG nova.compute.manager [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.829 2 DEBUG oslo_concurrency.lockutils [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.829 2 DEBUG oslo_concurrency.lockutils [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.830 2 DEBUG oslo_concurrency.lockutils [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.830 2 DEBUG nova.compute.manager [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.830 2 WARNING nova.compute.manager [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state suspended and task_state resuming.
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.848 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec84c17-142f-4a4b-becc-012747714903]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:d753'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606239, 'tstamp': 606239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388900, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0d9f01-c4f5-42ca-978c-58fed357e588]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca65236d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:d7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606239, 'reachable_time': 32654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388901, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.922 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08288e86-8698-4448-b146-625472b2bbf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:03 compute-0 nova_compute[260603]: 2025-10-02 08:53:03.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.960 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.019 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d79401-bb7a-431b-adad-83bb4400935d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.020 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca65236d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.021 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.021 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca65236d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:04 compute-0 nova_compute[260603]: 2025-10-02 08:53:04.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:04 compute-0 kernel: tapca65236d-10: entered promiscuous mode
Oct 02 08:53:04 compute-0 NetworkManager[45129]: <info>  [1759395184.0259] manager: (tapca65236d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.030 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca65236d-10, col_values=(('external_ids', {'iface-id': '11510ab0-6469-4123-a903-78f5dba4ba14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:04 compute-0 ovn_controller[152344]: 2025-10-02T08:53:04Z|01303|binding|INFO|Releasing lport 11510ab0-6469-4123-a903-78f5dba4ba14 from this chassis (sb_readonly=0)
Oct 02 08:53:04 compute-0 nova_compute[260603]: 2025-10-02 08:53:04.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:04 compute-0 nova_compute[260603]: 2025-10-02 08:53:04.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:04 compute-0 nova_compute[260603]: 2025-10-02 08:53:04.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.062 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.063 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ea6073-b09e-41fa-97d9-082b1d4362b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.063 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-ca65236d-124f-4d37-afbf-f114cc90e015
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID ca65236d-124f-4d37-afbf-f114cc90e015
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.064 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'env', 'PROCESS_TAG=haproxy-ca65236d-124f-4d37-afbf-f114cc90e015', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca65236d-124f-4d37-afbf-f114cc90e015.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:53:04 compute-0 podman[388970]: 2025-10-02 08:53:04.475010891 +0000 UTC m=+0.066940253 container create 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 08:53:04 compute-0 podman[388970]: 2025-10-02 08:53:04.437724488 +0000 UTC m=+0.029653880 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:53:04 compute-0 systemd[1]: Started libpod-conmon-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425.scope.
Oct 02 08:53:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3af0a887f56125e18a5dff2994a58aeee1c818086f552ccc3946b82615b0ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:04 compute-0 podman[388970]: 2025-10-02 08:53:04.598934979 +0000 UTC m=+0.190864361 container init 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:53:04 compute-0 podman[388970]: 2025-10-02 08:53:04.611963413 +0000 UTC m=+0.203892795 container start 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:53:04 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [NOTICE]   (388994) : New worker (388996) forked
Oct 02 08:53:04 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [NOTICE]   (388994) : Loading success.
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.667 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:53:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.669 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:04 compute-0 nova_compute[260603]: 2025-10-02 08:53:04.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:05 compute-0 ceph-mon[74477]: pgmap v2242: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.118 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 094b6b24-7323-4ad5-bd0d-e449c0c96f6f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.118 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395185.1176782, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.119 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Started (Lifecycle Event)
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.138 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.148 2 DEBUG nova.compute.manager [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.149 2 DEBUG nova.objects.instance [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.153 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.169 2 INFO nova.virt.libvirt.driver [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance running successfully.
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.171 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.171 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395185.1230593, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:53:05 compute-0 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.172 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Resumed (Lifecycle Event)
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.175 2 DEBUG nova.virt.libvirt.guest [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.176 2 DEBUG nova.compute.manager [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.444 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.449 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:53:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.937 2 DEBUG nova.compute.manager [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.937 2 DEBUG oslo_concurrency.lockutils [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.938 2 DEBUG oslo_concurrency.lockutils [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.938 2 DEBUG oslo_concurrency.lockutils [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.938 2 DEBUG nova.compute.manager [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:53:05 compute-0 nova_compute[260603]: 2025-10-02 08:53:05.939 2 WARNING nova.compute.manager [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state active and task_state None.
Oct 02 08:53:07 compute-0 ceph-mon[74477]: pgmap v2243: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 08:53:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 08:53:07 compute-0 nova_compute[260603]: 2025-10-02 08:53:07.929 2 INFO nova.compute.manager [None req-265eb2d1-7b16-48a8-86c9-64fbc1af29f9 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Get console output
Oct 02 08:53:07 compute-0 nova_compute[260603]: 2025-10-02 08:53:07.934 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:53:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:09 compute-0 ceph-mon[74477]: pgmap v2244: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 08:53:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 0 B/s wr, 5 op/s
Oct 02 08:53:09 compute-0 nova_compute[260603]: 2025-10-02 08:53:09.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.690 2 DEBUG nova.compute.manager [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-changed-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.691 2 DEBUG nova.compute.manager [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing instance network info cache due to event network-changed-4092976f-1133-4b84-91bd-87043169cb4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.691 2 DEBUG oslo_concurrency.lockutils [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.692 2 DEBUG oslo_concurrency.lockutils [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.692 2 DEBUG nova.network.neutron [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.798 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.799 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.799 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.800 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.800 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.802 2 INFO nova.compute.manager [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Terminating instance
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.804 2 DEBUG nova.compute.manager [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:53:10 compute-0 kernel: tap4092976f-11 (unregistering): left promiscuous mode
Oct 02 08:53:10 compute-0 NetworkManager[45129]: <info>  [1759395190.8801] device (tap4092976f-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:10 compute-0 ovn_controller[152344]: 2025-10-02T08:53:10Z|01304|binding|INFO|Releasing lport 4092976f-1133-4b84-91bd-87043169cb4f from this chassis (sb_readonly=0)
Oct 02 08:53:10 compute-0 ovn_controller[152344]: 2025-10-02T08:53:10Z|01305|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f down in Southbound
Oct 02 08:53:10 compute-0 ovn_controller[152344]: 2025-10-02T08:53:10Z|01306|binding|INFO|Removing iface tap4092976f-11 ovn-installed in OVS
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.899 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:a7 10.100.0.3'], port_security=['fa:16:3e:9e:3a:a7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca65236d-124f-4d37-afbf-f114cc90e015', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1c5f2f44-40d9-415b-8ae8-affca47a93a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a762f3-67f7-4fbc-8632-5b2810d06f8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4092976f-1133-4b84-91bd-87043169cb4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:53:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.901 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4092976f-1133-4b84-91bd-87043169cb4f in datapath ca65236d-124f-4d37-afbf-f114cc90e015 unbound from our chassis
Oct 02 08:53:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.904 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca65236d-124f-4d37-afbf-f114cc90e015, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:53:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.905 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9ce422-50cf-4413-96d1-b58488628c13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:10 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.906 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 namespace which is not needed anymore
Oct 02 08:53:10 compute-0 nova_compute[260603]: 2025-10-02 08:53:10.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:10 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct 02 08:53:10 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Consumed 1.619s CPU time.
Oct 02 08:53:10 compute-0 systemd-machined[214636]: Machine qemu-155-instance-0000007a terminated.
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.083 2 INFO nova.virt.libvirt.driver [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance destroyed successfully.
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.084 2 DEBUG nova.objects.instance [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.104 2 DEBUG nova.virt.libvirt.vif [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1775312205',display_name='tempest-TestNetworkAdvancedServerOps-server-1775312205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1775312205',id=122,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAoXXSuQzVTrLfupmZ2yj1eQO64gMLoVS/w1fXN3YlLWUVSU8Ny9eVy5tfjHnq8vP2d+YlXvRC/+xl37WQ+3lkHjliAoJJEZ9n209ktTnU2lK2CJZClltvCqZ21bE87w==',key_name='tempest-TestNetworkAdvancedServerOps-1329763099',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:52:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-gkp0e6eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:53:05Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=094b6b24-7323-4ad5-bd0d-e449c0c96f6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.105 2 DEBUG nova.network.os_vif_util [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.106 2 DEBUG nova.network.os_vif_util [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.106 2 DEBUG os_vif [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4092976f-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:11 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [NOTICE]   (388994) : haproxy version is 2.8.14-c23fe91
Oct 02 08:53:11 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [NOTICE]   (388994) : path to executable is /usr/sbin/haproxy
Oct 02 08:53:11 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [WARNING]  (388994) : Exiting Master process...
Oct 02 08:53:11 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [WARNING]  (388994) : Exiting Master process...
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:11 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [ALERT]    (388994) : Current worker (388996) exited with code 143 (Terminated)
Oct 02 08:53:11 compute-0 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [WARNING]  (388994) : All workers exited. Exiting... (0)
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:11 compute-0 systemd[1]: libpod-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425.scope: Deactivated successfully.
Oct 02 08:53:11 compute-0 conmon[388990]: conmon 33fc396ec372afccd690 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425.scope/container/memory.events
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.116 2 INFO os_vif [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11')
Oct 02 08:53:11 compute-0 podman[389030]: 2025-10-02 08:53:11.120247126 +0000 UTC m=+0.051457452 container died 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425-userdata-shm.mount: Deactivated successfully.
Oct 02 08:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-df3af0a887f56125e18a5dff2994a58aeee1c818086f552ccc3946b82615b0ba-merged.mount: Deactivated successfully.
Oct 02 08:53:11 compute-0 podman[389030]: 2025-10-02 08:53:11.162787615 +0000 UTC m=+0.093997941 container cleanup 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:53:11 compute-0 systemd[1]: libpod-conmon-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425.scope: Deactivated successfully.
Oct 02 08:53:11 compute-0 ceph-mon[74477]: pgmap v2245: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 0 B/s wr, 5 op/s
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.198 2 DEBUG nova.compute.manager [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.198 2 DEBUG oslo_concurrency.lockutils [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.198 2 DEBUG oslo_concurrency.lockutils [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.199 2 DEBUG oslo_concurrency.lockutils [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.199 2 DEBUG nova.compute.manager [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.199 2 DEBUG nova.compute.manager [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:53:11 compute-0 podman[389084]: 2025-10-02 08:53:11.243345779 +0000 UTC m=+0.046856967 container remove 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.254 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cd49cc-905d-4ccb-b34e-ccc7c999690f]: (4, ('Thu Oct  2 08:53:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 (33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425)\n33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425\nThu Oct  2 08:53:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 (33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425)\n33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.257 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3321fb90-3df5-4514-8b22-731f4c46bf24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.258 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca65236d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:11 compute-0 kernel: tapca65236d-10: left promiscuous mode
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.282 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab24571c-8310-455d-9447-0699180bdc21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.314 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c19a0b-541c-4e62-a4e0-b5af7a9be127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.316 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cbf18a-c337-4cf5-a38c-46343c3a23f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.350 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9baa68c9-13ba-46e7-8013-65b2176dd32e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606228, 'reachable_time': 44539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389099, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:11 compute-0 systemd[1]: run-netns-ovnmeta\x2dca65236d\x2d124f\x2d4d37\x2dafbf\x2df114cc90e015.mount: Deactivated successfully.
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.354 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:53:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.354 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[da7aa489-1667-4fb6-a278-14289b222ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.620 2 INFO nova.virt.libvirt.driver [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Deleting instance files /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f_del
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.621 2 INFO nova.virt.libvirt.driver [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Deletion of /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f_del complete
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.672 2 INFO nova.compute.manager [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.673 2 DEBUG oslo.service.loopingcall [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.673 2 DEBUG nova.compute.manager [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:53:11 compute-0 nova_compute[260603]: 2025-10-02 08:53:11.674 2 DEBUG nova.network.neutron [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:53:12 compute-0 nova_compute[260603]: 2025-10-02 08:53:12.937 2 DEBUG nova.network.neutron [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:53:12 compute-0 nova_compute[260603]: 2025-10-02 08:53:12.956 2 INFO nova.compute.manager [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Took 1.28 seconds to deallocate network for instance.
Oct 02 08:53:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.009 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.010 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.072 2 DEBUG oslo_concurrency.processutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:13 compute-0 ceph-mon[74477]: pgmap v2246: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.294 2 DEBUG nova.compute.manager [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.295 2 DEBUG oslo_concurrency.lockutils [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.295 2 DEBUG oslo_concurrency.lockutils [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.296 2 DEBUG oslo_concurrency.lockutils [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.296 2 DEBUG nova.compute.manager [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.296 2 WARNING nova.compute.manager [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state deleted and task_state None.
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.297 2 DEBUG nova.compute.manager [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-deleted-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.411 2 DEBUG nova.network.neutron [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updated VIF entry in instance network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.412 2 DEBUG nova.network.neutron [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.438 2 DEBUG oslo_concurrency.lockutils [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:53:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:53:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901527271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.549 2 DEBUG oslo_concurrency.processutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.556 2 DEBUG nova.compute.provider_tree [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.570 2 DEBUG nova.scheduler.client.report [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.593 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.618 2 INFO nova.scheduler.client.report [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance 094b6b24-7323-4ad5-bd0d-e449c0c96f6f
Oct 02 08:53:13 compute-0 nova_compute[260603]: 2025-10-02 08:53:13.680 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3901527271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:14 compute-0 nova_compute[260603]: 2025-10-02 08:53:14.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:15 compute-0 ceph-mon[74477]: pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct 02 08:53:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct 02 08:53:16 compute-0 nova_compute[260603]: 2025-10-02 08:53:16.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:17 compute-0 ceph-mon[74477]: pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct 02 08:53:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct 02 08:53:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:19 compute-0 podman[389125]: 2025-10-02 08:53:19.04284505 +0000 UTC m=+0.091918525 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:53:19 compute-0 nova_compute[260603]: 2025-10-02 08:53:19.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:19 compute-0 podman[389124]: 2025-10-02 08:53:19.131433269 +0000 UTC m=+0.183079325 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:53:19 compute-0 nova_compute[260603]: 2025-10-02 08:53:19.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:19 compute-0 ceph-mon[74477]: pgmap v2249: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct 02 08:53:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct 02 08:53:19 compute-0 sshd-session[389166]: Invalid user admin from 139.19.117.131 port 58434
Oct 02 08:53:19 compute-0 sshd-session[389166]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Oct 02 08:53:19 compute-0 nova_compute[260603]: 2025-10-02 08:53:19.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:21 compute-0 nova_compute[260603]: 2025-10-02 08:53:21.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:21 compute-0 ceph-mon[74477]: pgmap v2250: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct 02 08:53:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 08:53:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:53:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3417358275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:53:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:53:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3417358275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:53:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3417358275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:53:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3417358275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:53:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.018508) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203018583, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 452, "num_deletes": 255, "total_data_size": 380591, "memory_usage": 390808, "flush_reason": "Manual Compaction"}
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203024100, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 377416, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47190, "largest_seqno": 47641, "table_properties": {"data_size": 374779, "index_size": 673, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6019, "raw_average_key_size": 18, "raw_value_size": 369629, "raw_average_value_size": 1106, "num_data_blocks": 31, "num_entries": 334, "num_filter_entries": 334, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395178, "oldest_key_time": 1759395178, "file_creation_time": 1759395203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 5657 microseconds, and 2796 cpu microseconds.
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.024176) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 377416 bytes OK
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.024199) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.025636) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.025656) EVENT_LOG_v1 {"time_micros": 1759395203025649, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.025677) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 377844, prev total WAL file size 377844, number of live WAL files 2.
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.026281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373533' seq:72057594037927935, type:22 .. '6C6F676D0032303034' seq:0, type:0; will stop at (end)
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(368KB)], [107(10082KB)]
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203026323, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10701610, "oldest_snapshot_seqno": -1}
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6837 keys, 10583870 bytes, temperature: kUnknown
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203111278, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10583870, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10535905, "index_size": 29741, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178083, "raw_average_key_size": 26, "raw_value_size": 10411423, "raw_average_value_size": 1522, "num_data_blocks": 1169, "num_entries": 6837, "num_filter_entries": 6837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.111582) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10583870 bytes
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.113381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.8 rd, 124.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(56.4) write-amplify(28.0) OK, records in: 7355, records dropped: 518 output_compression: NoCompression
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.113411) EVENT_LOG_v1 {"time_micros": 1759395203113397, "job": 64, "event": "compaction_finished", "compaction_time_micros": 85043, "compaction_time_cpu_micros": 49435, "output_level": 6, "num_output_files": 1, "total_output_size": 10583870, "num_input_records": 7355, "num_output_records": 6837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203113723, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203117438, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.026182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:53:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:53:23 compute-0 ceph-mon[74477]: pgmap v2251: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 08:53:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 08:53:24 compute-0 ceph-mon[74477]: pgmap v2252: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 08:53:24 compute-0 nova_compute[260603]: 2025-10-02 08:53:24.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:26 compute-0 podman[389169]: 2025-10-02 08:53:25.999739828 +0000 UTC m=+0.062805232 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid)
Oct 02 08:53:26 compute-0 podman[389168]: 2025-10-02 08:53:26.036654728 +0000 UTC m=+0.091841682 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 08:53:26 compute-0 nova_compute[260603]: 2025-10-02 08:53:26.082 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395191.0809329, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:53:26 compute-0 nova_compute[260603]: 2025-10-02 08:53:26.082 2 INFO nova.compute.manager [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Stopped (Lifecycle Event)
Oct 02 08:53:26 compute-0 nova_compute[260603]: 2025-10-02 08:53:26.103 2 DEBUG nova.compute.manager [None req-6d4ea745-a3d2-4f3e-8f5e-a40031873548 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:53:26 compute-0 nova_compute[260603]: 2025-10-02 08:53:26.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:26 compute-0 ceph-mon[74477]: pgmap v2253: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:53:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:53:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:53:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:53:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:53:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:53:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:53:28
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', 'backups']
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:53:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:53:28 compute-0 nova_compute[260603]: 2025-10-02 08:53:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:28 compute-0 nova_compute[260603]: 2025-10-02 08:53:28.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:53:28 compute-0 ceph-mon[74477]: pgmap v2254: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:29 compute-0 sshd-session[389166]: Connection closed by invalid user admin 139.19.117.131 port 58434 [preauth]
Oct 02 08:53:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:29 compute-0 nova_compute[260603]: 2025-10-02 08:53:29.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:30 compute-0 sudo[389205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:30 compute-0 sudo[389205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:30 compute-0 sudo[389205]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:30 compute-0 sudo[389230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:53:30 compute-0 sudo[389230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:30 compute-0 sudo[389230]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:30 compute-0 sudo[389255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:30 compute-0 sudo[389255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:30 compute-0 sudo[389255]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:30 compute-0 ceph-mon[74477]: pgmap v2255: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:30 compute-0 sudo[389280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:53:30 compute-0 sudo[389280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:31 compute-0 nova_compute[260603]: 2025-10-02 08:53:31.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:31 compute-0 sudo[389280]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:53:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:53:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:53:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:53:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:53:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:53:31 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev cfcb97f9-09e3-45b3-a7aa-bb10ac5581ab does not exist
Oct 02 08:53:31 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 24da39d4-8461-495f-a8fa-d36de2fb2eb1 does not exist
Oct 02 08:53:31 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0c12582d-9e08-4949-aff8-28849b32dd68 does not exist
Oct 02 08:53:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:53:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:53:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:53:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:53:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:53:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:53:31 compute-0 sudo[389336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:31 compute-0 sudo[389336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:31 compute-0 sudo[389336]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:31 compute-0 sudo[389361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:53:31 compute-0 sudo[389361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:31 compute-0 sudo[389361]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:31 compute-0 sudo[389386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:31 compute-0 sudo[389386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:31 compute-0 sudo[389386]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:53:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:53:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:53:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:53:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:53:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:53:31 compute-0 sudo[389411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:53:31 compute-0 sudo[389411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:32 compute-0 podman[389478]: 2025-10-02 08:53:32.117309595 +0000 UTC m=+0.053930630 container create 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:53:32 compute-0 systemd[1]: Started libpod-conmon-81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274.scope.
Oct 02 08:53:32 compute-0 podman[389478]: 2025-10-02 08:53:32.101395751 +0000 UTC m=+0.038016806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:53:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:53:32 compute-0 podman[389478]: 2025-10-02 08:53:32.222999796 +0000 UTC m=+0.159620861 container init 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:53:32 compute-0 podman[389478]: 2025-10-02 08:53:32.236884206 +0000 UTC m=+0.173505281 container start 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:53:32 compute-0 podman[389478]: 2025-10-02 08:53:32.241393419 +0000 UTC m=+0.178014494 container attach 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:53:32 compute-0 nervous_neumann[389494]: 167 167
Oct 02 08:53:32 compute-0 systemd[1]: libpod-81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274.scope: Deactivated successfully.
Oct 02 08:53:32 compute-0 conmon[389494]: conmon 81e5ace31b6a22b287fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274.scope/container/memory.events
Oct 02 08:53:32 compute-0 podman[389478]: 2025-10-02 08:53:32.249352551 +0000 UTC m=+0.185973626 container died 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:53:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-64c60e31b2732dfdb292db9cf1a968bcb0ab1a62a53d7ce5b9e77fd2c8536df3-merged.mount: Deactivated successfully.
Oct 02 08:53:32 compute-0 podman[389478]: 2025-10-02 08:53:32.305242194 +0000 UTC m=+0.241863279 container remove 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 08:53:32 compute-0 systemd[1]: libpod-conmon-81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274.scope: Deactivated successfully.
Oct 02 08:53:32 compute-0 podman[389519]: 2025-10-02 08:53:32.53820626 +0000 UTC m=+0.067299495 container create 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:53:32 compute-0 systemd[1]: Started libpod-conmon-11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf.scope.
Oct 02 08:53:32 compute-0 podman[389519]: 2025-10-02 08:53:32.513960821 +0000 UTC m=+0.043054136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:53:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:32 compute-0 podman[389519]: 2025-10-02 08:53:32.656094477 +0000 UTC m=+0.185187772 container init 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 08:53:32 compute-0 podman[389519]: 2025-10-02 08:53:32.666855097 +0000 UTC m=+0.195948352 container start 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 08:53:32 compute-0 podman[389519]: 2025-10-02 08:53:32.671406662 +0000 UTC m=+0.200499917 container attach 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 08:53:32 compute-0 ceph-mon[74477]: pgmap v2256: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:33 compute-0 interesting_poincare[389536]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:53:33 compute-0 interesting_poincare[389536]: --> relative data size: 1.0
Oct 02 08:53:33 compute-0 interesting_poincare[389536]: --> All data devices are unavailable
Oct 02 08:53:33 compute-0 systemd[1]: libpod-11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf.scope: Deactivated successfully.
Oct 02 08:53:33 compute-0 podman[389519]: 2025-10-02 08:53:33.95228806 +0000 UTC m=+1.481381315 container died 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 08:53:33 compute-0 systemd[1]: libpod-11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf.scope: Consumed 1.229s CPU time.
Oct 02 08:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4-merged.mount: Deactivated successfully.
Oct 02 08:53:34 compute-0 podman[389519]: 2025-10-02 08:53:34.01094563 +0000 UTC m=+1.540038865 container remove 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 08:53:34 compute-0 systemd[1]: libpod-conmon-11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf.scope: Deactivated successfully.
Oct 02 08:53:34 compute-0 sudo[389411]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:34 compute-0 sudo[389579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:34 compute-0 sudo[389579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:34 compute-0 sudo[389579]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:34 compute-0 sudo[389604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:53:34 compute-0 sudo[389604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:34 compute-0 sudo[389604]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:34 compute-0 sudo[389629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:34 compute-0 sudo[389629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:34 compute-0 sudo[389629]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:34 compute-0 sudo[389654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:53:34 compute-0 sudo[389654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:34 compute-0 ceph-mon[74477]: pgmap v2257: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:34 compute-0 podman[389720]: 2025-10-02 08:53:34.918805372 +0000 UTC m=+0.074876805 container create dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:53:34 compute-0 systemd[1]: Started libpod-conmon-dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63.scope.
Oct 02 08:53:34 compute-0 podman[389720]: 2025-10-02 08:53:34.887606923 +0000 UTC m=+0.043678396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:53:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:53:34 compute-0 nova_compute[260603]: 2025-10-02 08:53:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:35 compute-0 podman[389720]: 2025-10-02 08:53:35.00862169 +0000 UTC m=+0.164693153 container init dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:53:35 compute-0 podman[389720]: 2025-10-02 08:53:35.018116141 +0000 UTC m=+0.174187544 container start dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:53:35 compute-0 podman[389720]: 2025-10-02 08:53:35.021546409 +0000 UTC m=+0.177617912 container attach dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 02 08:53:35 compute-0 goofy_driscoll[389736]: 167 167
Oct 02 08:53:35 compute-0 systemd[1]: libpod-dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63.scope: Deactivated successfully.
Oct 02 08:53:35 compute-0 podman[389720]: 2025-10-02 08:53:35.026147906 +0000 UTC m=+0.182219339 container died dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 08:53:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-02e7f122343f2fb14c2db925f5321338bb245e4c972523f47b273646f491d546-merged.mount: Deactivated successfully.
Oct 02 08:53:35 compute-0 podman[389720]: 2025-10-02 08:53:35.075928204 +0000 UTC m=+0.231999637 container remove dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:53:35 compute-0 systemd[1]: libpod-conmon-dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63.scope: Deactivated successfully.
Oct 02 08:53:35 compute-0 podman[389762]: 2025-10-02 08:53:35.324336279 +0000 UTC m=+0.064204356 container create bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 08:53:35 compute-0 systemd[1]: Started libpod-conmon-bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0.scope.
Oct 02 08:53:35 compute-0 podman[389762]: 2025-10-02 08:53:35.296299351 +0000 UTC m=+0.036167468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:53:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:35 compute-0 podman[389762]: 2025-10-02 08:53:35.438259371 +0000 UTC m=+0.178127448 container init bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 08:53:35 compute-0 podman[389762]: 2025-10-02 08:53:35.453045579 +0000 UTC m=+0.192913656 container start bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 08:53:35 compute-0 podman[389762]: 2025-10-02 08:53:35.457589304 +0000 UTC m=+0.197457441 container attach bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:53:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:36 compute-0 nova_compute[260603]: 2025-10-02 08:53:36.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]: {
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:     "0": [
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:         {
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "devices": [
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "/dev/loop3"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             ],
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_name": "ceph_lv0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_size": "21470642176",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "name": "ceph_lv0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "tags": {
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cluster_name": "ceph",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.crush_device_class": "",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.encrypted": "0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osd_id": "0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.type": "block",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.vdo": "0"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             },
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "type": "block",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "vg_name": "ceph_vg0"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:         }
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:     ],
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:     "1": [
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:         {
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "devices": [
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "/dev/loop4"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             ],
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_name": "ceph_lv1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_size": "21470642176",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "name": "ceph_lv1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "tags": {
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cluster_name": "ceph",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.crush_device_class": "",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.encrypted": "0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osd_id": "1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.type": "block",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.vdo": "0"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             },
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "type": "block",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "vg_name": "ceph_vg1"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:         }
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:     ],
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:     "2": [
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:         {
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "devices": [
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "/dev/loop5"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             ],
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_name": "ceph_lv2",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_size": "21470642176",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "name": "ceph_lv2",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "tags": {
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.cluster_name": "ceph",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.crush_device_class": "",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.encrypted": "0",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osd_id": "2",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.type": "block",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:                 "ceph.vdo": "0"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             },
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "type": "block",
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:             "vg_name": "ceph_vg2"
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:         }
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]:     ]
Oct 02 08:53:36 compute-0 inspiring_taussig[389779]: }
Oct 02 08:53:36 compute-0 systemd[1]: libpod-bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0.scope: Deactivated successfully.
Oct 02 08:53:36 compute-0 podman[389762]: 2025-10-02 08:53:36.194366212 +0000 UTC m=+0.934234249 container died bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a-merged.mount: Deactivated successfully.
Oct 02 08:53:36 compute-0 podman[389762]: 2025-10-02 08:53:36.253165346 +0000 UTC m=+0.993033383 container remove bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:53:36 compute-0 systemd[1]: libpod-conmon-bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0.scope: Deactivated successfully.
Oct 02 08:53:36 compute-0 sudo[389654]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:36 compute-0 sudo[389800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:36 compute-0 sudo[389800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:36 compute-0 sudo[389800]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:36 compute-0 sudo[389825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:53:36 compute-0 sudo[389825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:36 compute-0 sudo[389825]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:36 compute-0 sudo[389850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:36 compute-0 sudo[389850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:36 compute-0 sudo[389850]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:36 compute-0 sudo[389875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:53:36 compute-0 sudo[389875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:36 compute-0 ceph-mon[74477]: pgmap v2258: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:37 compute-0 podman[389941]: 2025-10-02 08:53:37.039719982 +0000 UTC m=+0.069906307 container create a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 02 08:53:37 compute-0 systemd[1]: Started libpod-conmon-a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e.scope.
Oct 02 08:53:37 compute-0 podman[389941]: 2025-10-02 08:53:37.010372672 +0000 UTC m=+0.040559037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:53:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:53:37 compute-0 podman[389941]: 2025-10-02 08:53:37.158811958 +0000 UTC m=+0.188998293 container init a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 08:53:37 compute-0 podman[389941]: 2025-10-02 08:53:37.169993203 +0000 UTC m=+0.200179518 container start a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:53:37 compute-0 podman[389941]: 2025-10-02 08:53:37.174386092 +0000 UTC m=+0.204572407 container attach a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 08:53:37 compute-0 gifted_mahavira[389957]: 167 167
Oct 02 08:53:37 compute-0 systemd[1]: libpod-a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e.scope: Deactivated successfully.
Oct 02 08:53:37 compute-0 podman[389941]: 2025-10-02 08:53:37.178232654 +0000 UTC m=+0.208418979 container died a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:53:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c60578550ff433f79edca02bf1f05e3806d3db3ec6fc3de3c699f386c2d7f2c5-merged.mount: Deactivated successfully.
Oct 02 08:53:37 compute-0 podman[389941]: 2025-10-02 08:53:37.232917328 +0000 UTC m=+0.263103643 container remove a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:53:37 compute-0 systemd[1]: libpod-conmon-a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e.scope: Deactivated successfully.
Oct 02 08:53:37 compute-0 podman[389980]: 2025-10-02 08:53:37.496918227 +0000 UTC m=+0.075571606 container create 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 08:53:37 compute-0 systemd[1]: Started libpod-conmon-014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100.scope.
Oct 02 08:53:37 compute-0 podman[389980]: 2025-10-02 08:53:37.466545955 +0000 UTC m=+0.045199374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:53:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:37 compute-0 podman[389980]: 2025-10-02 08:53:37.627256529 +0000 UTC m=+0.205909898 container init 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 08:53:37 compute-0 podman[389980]: 2025-10-02 08:53:37.640542641 +0000 UTC m=+0.219196020 container start 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:53:37 compute-0 podman[389980]: 2025-10-02 08:53:37.645038633 +0000 UTC m=+0.223692052 container attach 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 08:53:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:38 compute-0 stupefied_golick[389997]: {
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "osd_id": 2,
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "type": "bluestore"
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:     },
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "osd_id": 1,
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "type": "bluestore"
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:     },
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "osd_id": 0,
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:         "type": "bluestore"
Oct 02 08:53:38 compute-0 stupefied_golick[389997]:     }
Oct 02 08:53:38 compute-0 stupefied_golick[389997]: }
Oct 02 08:53:38 compute-0 systemd[1]: libpod-014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100.scope: Deactivated successfully.
Oct 02 08:53:38 compute-0 systemd[1]: libpod-014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100.scope: Consumed 1.089s CPU time.
Oct 02 08:53:38 compute-0 ceph-mon[74477]: pgmap v2259: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:38 compute-0 podman[390031]: 2025-10-02 08:53:38.770416821 +0000 UTC m=+0.024983113 container died 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 08:53:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab-merged.mount: Deactivated successfully.
Oct 02 08:53:38 compute-0 podman[390031]: 2025-10-02 08:53:38.863123621 +0000 UTC m=+0.117689873 container remove 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:53:38 compute-0 systemd[1]: libpod-conmon-014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100.scope: Deactivated successfully.
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:53:38 compute-0 sudo[389875]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:53:38 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:53:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:53:38 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 42c27c1c-c518-44f0-b24d-18a086314123 does not exist
Oct 02 08:53:38 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4c18ad71-7e6c-46fd-937a-d1832f4ad690 does not exist
Oct 02 08:53:39 compute-0 sudo[390046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:53:39 compute-0 sudo[390046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:39 compute-0 sudo[390046]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:39 compute-0 sudo[390071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:53:39 compute-0 sudo[390071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:53:39 compute-0 sudo[390071]: pam_unix(sudo:session): session closed for user root
Oct 02 08:53:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:39 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:53:39 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:53:40 compute-0 nova_compute[260603]: 2025-10-02 08:53:40.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:40 compute-0 ceph-mon[74477]: pgmap v2260: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.008 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.008 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.089 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.245 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.245 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.256 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.257 2 INFO nova.compute.claims [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.471 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:53:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348612134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:41 compute-0 nova_compute[260603]: 2025-10-02 08:53:41.998 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.003 2 DEBUG nova.compute.provider_tree [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.016 2 DEBUG nova.scheduler.client.report [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.051 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.052 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.185 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.186 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.212 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.235 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.431 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.433 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.434 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Creating image(s)
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.466 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.500 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.533 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.538 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.581 2 DEBUG nova.policy [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.615 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.615 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.616 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.616 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.639 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.643 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ce9a5c17-646f-4ba2-a974-90e4b864872e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:42 compute-0 nova_compute[260603]: 2025-10-02 08:53:42.947 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ce9a5c17-646f-4ba2-a974-90e4b864872e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:42 compute-0 ceph-mon[74477]: pgmap v2261: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:53:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3348612134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.049 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.133 2 DEBUG nova.objects.instance [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid ce9a5c17-646f-4ba2-a974-90e4b864872e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.165 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.166 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Ensure instance console log exists: /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.166 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.166 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.167 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:53:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 54 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 655 KiB/s wr, 23 op/s
Oct 02 08:53:43 compute-0 nova_compute[260603]: 2025-10-02 08:53:43.661 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Successfully created port: 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.418 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Successfully updated port: 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.432 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.432 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.432 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.500 2 DEBUG nova.compute.manager [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.500 2 DEBUG nova.compute.manager [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing instance network info cache due to event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.500 2 DEBUG oslo_concurrency.lockutils [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:44 compute-0 nova_compute[260603]: 2025-10-02 08:53:44.576 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:53:44 compute-0 ceph-mon[74477]: pgmap v2262: 305 pgs: 305 active+clean; 54 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 655 KiB/s wr, 23 op/s
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.282 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.304 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.305 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance network_info: |[{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.305 2 DEBUG oslo_concurrency.lockutils [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.306 2 DEBUG nova.network.neutron [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.308 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start _get_guest_xml network_info=[{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.314 2 WARNING nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.320 2 DEBUG nova.virt.libvirt.host [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.321 2 DEBUG nova.virt.libvirt.host [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.330 2 DEBUG nova.virt.libvirt.host [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.331 2 DEBUG nova.virt.libvirt.host [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.332 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.333 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.334 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.334 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.335 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.335 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.335 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.336 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.336 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.336 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.337 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.337 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.342 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.543 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.543 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 54 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 655 KiB/s wr, 23 op/s
Oct 02 08:53:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:53:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292277673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.793 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.823 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:45 compute-0 nova_compute[260603]: 2025-10-02 08:53:45.827 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:53:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099875373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4292277673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.034 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.235 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.237 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.98080825805664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.237 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.237 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:53:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1070282520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.271 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.272 2 DEBUG nova.virt.libvirt.vif [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1285499332',display_name='tempest-TestNetworkBasicOps-server-1285499332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1285499332',id=123,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAz8IMZ7Jf5vn2RyFjWWITiqepr2aLtS8oy68xeil523gnhXxTAuBsHLCjnOZ53PJL0p7gfBo3vI6+ggEPUbO2Sg7r3zAGgPHkKwwrkmc/GeJgKH1QAb2g1ODGvS4w3VLw==',key_name='tempest-TestNetworkBasicOps-2122334356',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5j8y63ir',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:53:42Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=ce9a5c17-646f-4ba2-a974-90e4b864872e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.273 2 DEBUG nova.network.os_vif_util [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.274 2 DEBUG nova.network.os_vif_util [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.275 2 DEBUG nova.objects.instance [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce9a5c17-646f-4ba2-a974-90e4b864872e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.306 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <uuid>ce9a5c17-646f-4ba2-a974-90e4b864872e</uuid>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <name>instance-0000007b</name>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-1285499332</nova:name>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:53:45</nova:creationTime>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <nova:port uuid="4fd5381b-e8ba-485f-9cb6-692a37b716a1">
Oct 02 08:53:46 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <system>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <entry name="serial">ce9a5c17-646f-4ba2-a974-90e4b864872e</entry>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <entry name="uuid">ce9a5c17-646f-4ba2-a974-90e4b864872e</entry>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </system>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <os>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   </os>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <features>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   </features>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ce9a5c17-646f-4ba2-a974-90e4b864872e_disk">
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       </source>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config">
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       </source>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:53:46 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:04:0b:fe"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <target dev="tap4fd5381b-e8"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/console.log" append="off"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <video>
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </video>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:53:46 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:53:46 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:53:46 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:53:46 compute-0 nova_compute[260603]: </domain>
Oct 02 08:53:46 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.308 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Preparing to wait for external event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.308 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.308 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.308 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.309 2 DEBUG nova.virt.libvirt.vif [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1285499332',display_name='tempest-TestNetworkBasicOps-server-1285499332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1285499332',id=123,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAz8IMZ7Jf5vn2RyFjWWITiqepr2aLtS8oy68xeil523gnhXxTAuBsHLCjnOZ53PJL0p7gfBo3vI6+ggEPUbO2Sg7r3zAGgPHkKwwrkmc/GeJgKH1QAb2g1ODGvS4w3VLw==',key_name='tempest-TestNetworkBasicOps-2122334356',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5j8y63ir',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:53:42Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=ce9a5c17-646f-4ba2-a974-90e4b864872e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.310 2 DEBUG nova.network.os_vif_util [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.310 2 DEBUG nova.network.os_vif_util [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.311 2 DEBUG os_vif [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd5381b-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fd5381b-e8, col_values=(('external_ids', {'iface-id': '4fd5381b-e8ba-485f-9cb6-692a37b716a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:0b:fe', 'vm-uuid': 'ce9a5c17-646f-4ba2-a974-90e4b864872e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:46 compute-0 NetworkManager[45129]: <info>  [1759395226.3472] manager: (tap4fd5381b-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/514)
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.353 2 INFO os_vif [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8')
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.412 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.412 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.413 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:04:0b:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.413 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Using config drive
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.435 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.461 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ce9a5c17-646f-4ba2-a974-90e4b864872e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.461 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.462 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.596 2 DEBUG nova.network.neutron [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updated VIF entry in instance network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.597 2 DEBUG nova.network.neutron [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.602 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.659 2 DEBUG oslo_concurrency.lockutils [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.844 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Creating config drive at /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config
Oct 02 08:53:46 compute-0 nova_compute[260603]: 2025-10-02 08:53:46.850 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbjpw9e8i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:47 compute-0 ceph-mon[74477]: pgmap v2263: 305 pgs: 305 active+clean; 54 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 655 KiB/s wr, 23 op/s
Oct 02 08:53:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1099875373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1070282520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.015 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbjpw9e8i" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.056 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.061 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:53:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3013958535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.119 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.129 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.147 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.169 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.170 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.275 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.275 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Deleting local config drive /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config because it was imported into RBD.
Oct 02 08:53:47 compute-0 kernel: tap4fd5381b-e8: entered promiscuous mode
Oct 02 08:53:47 compute-0 NetworkManager[45129]: <info>  [1759395227.3536] manager: (tap4fd5381b-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:47 compute-0 ovn_controller[152344]: 2025-10-02T08:53:47Z|01307|binding|INFO|Claiming lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 for this chassis.
Oct 02 08:53:47 compute-0 ovn_controller[152344]: 2025-10-02T08:53:47Z|01308|binding|INFO|4fd5381b-e8ba-485f-9cb6-692a37b716a1: Claiming fa:16:3e:04:0b:fe 10.100.0.10
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.373 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:0b:fe 10.100.0.10'], port_security=['fa:16:3e:04:0b:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ce9a5c17-646f-4ba2-a974-90e4b864872e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fe3b6c1-e179-4b79-8e83-9d4b983838b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=300ebc04-b3b0-45d9-94e7-4b6ae68b1331, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4fd5381b-e8ba-485f-9cb6-692a37b716a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.374 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 in datapath 7511e8c2-7c26-4eea-b465-32e904aba1a9 bound to our chassis
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.376 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7511e8c2-7c26-4eea-b465-32e904aba1a9
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.397 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f40d67-da28-4845-a828-6a8d9a261f29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.398 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7511e8c2-71 in ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:53:47 compute-0 systemd-udevd[390464]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.401 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7511e8c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.402 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1c34c4fe-0165-4697-a123-2736b541d4d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.403 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[056faaba-dd58-4a0b-ba5c-a1c072f3b2f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 systemd-machined[214636]: New machine qemu-156-instance-0000007b.
Oct 02 08:53:47 compute-0 NetworkManager[45129]: <info>  [1759395227.4146] device (tap4fd5381b-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:53:47 compute-0 NetworkManager[45129]: <info>  [1759395227.4162] device (tap4fd5381b-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.421 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[3221b740-bbaa-4001-9cd7-6367a5a39604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007b.
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.455 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d70c249-6867-4a2b-84b4-75bdada2844e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_controller[152344]: 2025-10-02T08:53:47Z|01309|binding|INFO|Setting lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 ovn-installed in OVS
Oct 02 08:53:47 compute-0 ovn_controller[152344]: 2025-10-02T08:53:47Z|01310|binding|INFO|Setting lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 up in Southbound
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.487 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6126e26a-568a-48d5-8f23-b417d6c74a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 NetworkManager[45129]: <info>  [1759395227.5071] manager: (tap7511e8c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/516)
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.506 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3c27d6-e298-4d99-9d95-f43e4b9e4809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.540 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b31e849-23e1-4f8a-bde7-9976e6a20905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.542 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e4346170-91de-4c9e-89af-9752bbbde23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 NetworkManager[45129]: <info>  [1759395227.5680] device (tap7511e8c2-70): carrier: link connected
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.576 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[21d8cf89-59c7-4dbe-857d-8b85609a9163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.594 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[831f4731-8ceb-4ee7-81f5-29c5cce72bb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7511e8c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:ab:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610617, 'reachable_time': 28651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390496, 'error': None, 'target': 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.610 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[323430ba-7861-426b-a7fc-82a2c26f099c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:ab2e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610617, 'tstamp': 610617}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390497, 'error': None, 'target': 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.626 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e417a2-7189-4148-8453-68cd1b81e389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7511e8c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:ab:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610617, 'reachable_time': 28651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390498, 'error': None, 'target': 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.655 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af1995fd-ac24-4e9f-9dbf-0490599d0b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.702 2 DEBUG nova.compute.manager [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.703 2 DEBUG oslo_concurrency.lockutils [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.703 2 DEBUG oslo_concurrency.lockutils [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.703 2 DEBUG oslo_concurrency.lockutils [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.703 2 DEBUG nova.compute.manager [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Processing event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.708 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a73fd3f3-b2fe-4839-9422-ec5cbfa8ef8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.709 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7511e8c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.709 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7511e8c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:47 compute-0 NetworkManager[45129]: <info>  [1759395227.7122] manager: (tap7511e8c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Oct 02 08:53:47 compute-0 kernel: tap7511e8c2-70: entered promiscuous mode
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.716 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7511e8c2-70, col_values=(('external_ids', {'iface-id': 'f80df61f-a234-49f9-816c-2f176ad94b02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:47 compute-0 ovn_controller[152344]: 2025-10-02T08:53:47Z|01311|binding|INFO|Releasing lport f80df61f-a234-49f9-816c-2f176ad94b02 from this chassis (sb_readonly=0)
Oct 02 08:53:47 compute-0 nova_compute[260603]: 2025-10-02 08:53:47.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.730 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7511e8c2-7c26-4eea-b465-32e904aba1a9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7511e8c2-7c26-4eea-b465-32e904aba1a9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.734 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[41c9c83a-9913-4177-aa55-ff37d442e728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.734 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-7511e8c2-7c26-4eea-b465-32e904aba1a9
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/7511e8c2-7c26-4eea-b465-32e904aba1a9.pid.haproxy
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 7511e8c2-7c26-4eea-b465-32e904aba1a9
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:53:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.735 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'env', 'PROCESS_TAG=haproxy-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7511e8c2-7c26-4eea-b465-32e904aba1a9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:53:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3013958535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:48 compute-0 podman[390530]: 2025-10-02 08:53:48.109396581 +0000 UTC m=+0.055624604 container create bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:53:48 compute-0 systemd[1]: Started libpod-conmon-bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a.scope.
Oct 02 08:53:48 compute-0 podman[390530]: 2025-10-02 08:53:48.076121797 +0000 UTC m=+0.022349810 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:53:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841e1ce90c14f958dd89444a0f9f00fd4fb528b405b275066734c073b28f4568/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:53:48 compute-0 podman[390530]: 2025-10-02 08:53:48.192182096 +0000 UTC m=+0.138410159 container init bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:53:48 compute-0 podman[390530]: 2025-10-02 08:53:48.197777123 +0000 UTC m=+0.144005156 container start bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 08:53:48 compute-0 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [NOTICE]   (390549) : New worker (390551) forked
Oct 02 08:53:48 compute-0 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [NOTICE]   (390549) : Loading success.
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.847 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395228.8468075, ce9a5c17-646f-4ba2-a974-90e4b864872e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.848 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] VM Started (Lifecycle Event)
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.851 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.854 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.858 2 INFO nova.virt.libvirt.driver [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance spawned successfully.
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.858 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.875 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.885 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.888 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.889 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.889 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.889 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.890 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.890 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.914 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.914 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395228.8471572, ce9a5c17-646f-4ba2-a974-90e4b864872e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.915 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] VM Paused (Lifecycle Event)
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.946 2 INFO nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Took 6.51 seconds to spawn the instance on the hypervisor.
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.946 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.948 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.954 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395228.854169, ce9a5c17-646f-4ba2-a974-90e4b864872e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:53:48 compute-0 nova_compute[260603]: 2025-10-02 08:53:48.954 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] VM Resumed (Lifecycle Event)
Oct 02 08:53:49 compute-0 nova_compute[260603]: 2025-10-02 08:53:49.001 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:53:49 compute-0 nova_compute[260603]: 2025-10-02 08:53:49.004 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:53:49 compute-0 nova_compute[260603]: 2025-10-02 08:53:49.023 2 INFO nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Took 7.82 seconds to build instance.
Oct 02 08:53:49 compute-0 ceph-mon[74477]: pgmap v2264: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 08:53:49 compute-0 nova_compute[260603]: 2025-10-02 08:53:49.042 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:50 compute-0 podman[390603]: 2025-10-02 08:53:50.043490339 +0000 UTC m=+0.090390017 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.065 2 DEBUG nova.compute.manager [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.066 2 DEBUG oslo_concurrency.lockutils [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.066 2 DEBUG oslo_concurrency.lockutils [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.066 2 DEBUG oslo_concurrency.lockutils [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.067 2 DEBUG nova.compute.manager [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] No waiting events found dispatching network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.067 2 WARNING nova.compute.manager [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received unexpected event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 for instance with vm_state active and task_state None.
Oct 02 08:53:50 compute-0 podman[390602]: 2025-10-02 08:53:50.078792668 +0000 UTC m=+0.125736367 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.165 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:50 compute-0 nova_compute[260603]: 2025-10-02 08:53:50.166 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:51 compute-0 ceph-mon[74477]: pgmap v2265: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:51 compute-0 NetworkManager[45129]: <info>  [1759395231.3279] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Oct 02 08:53:51 compute-0 NetworkManager[45129]: <info>  [1759395231.3287] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.345 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.345 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.362 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:51 compute-0 ovn_controller[152344]: 2025-10-02T08:53:51Z|01312|binding|INFO|Releasing lport f80df61f-a234-49f9-816c-2f176ad94b02 from this chassis (sb_readonly=0)
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.441 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.442 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.450 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.451 2 INFO nova.compute.claims [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.588 2 DEBUG nova.compute.manager [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.588 2 DEBUG nova.compute.manager [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing instance network info cache due to event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.589 2 DEBUG oslo_concurrency.lockutils [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.589 2 DEBUG oslo_concurrency.lockutils [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.590 2 DEBUG nova.network.neutron [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:53:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 08:53:51 compute-0 nova_compute[260603]: 2025-10-02 08:53:51.638 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:53:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083111441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.127 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.137 2 DEBUG nova.compute.provider_tree [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.183 2 DEBUG nova.scheduler.client.report [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:53:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4083111441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.232 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.233 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.300 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.300 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.337 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.372 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.501 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.502 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.502 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating image(s)
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.520 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.543 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.561 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.564 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.606 2 DEBUG nova.policy [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bd8daf03c9d144d7a60fe3f81abdfbb4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '21220064aba34c77a8af713fad28c08b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.611 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.648 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.649 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.650 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.650 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.670 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.673 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.920 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:52 compute-0 nova_compute[260603]: 2025-10-02 08:53:52.969 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] resizing rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:53:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:53 compute-0 nova_compute[260603]: 2025-10-02 08:53:53.063 2 DEBUG nova.objects.instance [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'migration_context' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:53:53 compute-0 nova_compute[260603]: 2025-10-02 08:53:53.126 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:53:53 compute-0 nova_compute[260603]: 2025-10-02 08:53:53.127 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Ensure instance console log exists: /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:53:53 compute-0 nova_compute[260603]: 2025-10-02 08:53:53.128 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:53 compute-0 nova_compute[260603]: 2025-10-02 08:53:53.129 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:53 compute-0 nova_compute[260603]: 2025-10-02 08:53:53.129 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:53 compute-0 ceph-mon[74477]: pgmap v2266: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 08:53:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 111 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Oct 02 08:53:55 compute-0 nova_compute[260603]: 2025-10-02 08:53:55.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:55 compute-0 nova_compute[260603]: 2025-10-02 08:53:55.187 2 DEBUG nova.network.neutron [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updated VIF entry in instance network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:53:55 compute-0 nova_compute[260603]: 2025-10-02 08:53:55.188 2 DEBUG nova.network.neutron [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:53:55 compute-0 ceph-mon[74477]: pgmap v2267: 305 pgs: 305 active+clean; 111 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Oct 02 08:53:55 compute-0 nova_compute[260603]: 2025-10-02 08:53:55.293 2 DEBUG oslo_concurrency.lockutils [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:53:55 compute-0 nova_compute[260603]: 2025-10-02 08:53:55.316 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Successfully created port: 6c4955ab-011a-4e64-9871-c01b4818d740 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:53:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 111 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 78 op/s
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.138 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Successfully updated port: 6c4955ab-011a-4e64-9871-c01b4818d740 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.220 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.221 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.221 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.286 2 DEBUG nova.compute.manager [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.287 2 DEBUG nova.compute.manager [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.287 2 DEBUG oslo_concurrency.lockutils [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.396 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 08:53:56 compute-0 nova_compute[260603]: 2025-10-02 08:53:56.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:53:57 compute-0 podman[390835]: 2025-10-02 08:53:57.008163993 +0000 UTC m=+0.069715832 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 02 08:53:57 compute-0 podman[390834]: 2025-10-02 08:53:57.011875161 +0000 UTC m=+0.075904368 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 08:53:57 compute-0 ceph-mon[74477]: pgmap v2268: 305 pgs: 305 active+clean; 111 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 78 op/s
Oct 02 08:53:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 104 op/s
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.824 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.850 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.851 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance network_info: |[{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.852 2 DEBUG oslo_concurrency.lockutils [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.853 2 DEBUG nova.network.neutron [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.859 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start _get_guest_xml network_info=[{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.867 2 WARNING nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.880 2 DEBUG nova.virt.libvirt.host [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.882 2 DEBUG nova.virt.libvirt.host [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.888 2 DEBUG nova.virt.libvirt.host [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.889 2 DEBUG nova.virt.libvirt.host [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.890 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.891 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.892 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.893 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.893 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.894 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.895 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.895 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.896 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.897 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.897 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.898 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:53:57 compute-0 nova_compute[260603]: 2025-10-02 08:53:57.905 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:53:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:53:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:53:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:53:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:53:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:53:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:53:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:53:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021823407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.383 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.413 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.418 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:53:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1682623345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.938 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.940 2 DEBUG nova.virt.libvirt.vif [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzsQ/6b3tjUGybuPyR4SVVOx0MdNuIeiSvtBwT6p2uLvkn4tE1YW+Uq6JJ1YItW22s6E/D+cvCAL9Uj4JCY+wAR12IR6juEk+h0nnTQwb0m9GcGi0Su/6v36I6xXc+YZw==',key_name='tempest-TestShelveInstance-203364505',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:53:52Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.941 2 DEBUG nova.network.os_vif_util [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.942 2 DEBUG nova.network.os_vif_util [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.943 2 DEBUG nova.objects.instance [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'pci_devices' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.964 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <uuid>29e25f63-82b1-45cf-8916-41d9acc44ac9</uuid>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <name>instance-0000007c</name>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <nova:name>tempest-TestShelveInstance-server-199210370</nova:name>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:53:57</nova:creationTime>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <nova:user uuid="bd8daf03c9d144d7a60fe3f81abdfbb4">tempest-TestShelveInstance-783621896-project-member</nova:user>
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <nova:project uuid="21220064aba34c77a8af713fad28c08b">tempest-TestShelveInstance-783621896</nova:project>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <nova:port uuid="6c4955ab-011a-4e64-9871-c01b4818d740">
Oct 02 08:53:58 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <system>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <entry name="serial">29e25f63-82b1-45cf-8916-41d9acc44ac9</entry>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <entry name="uuid">29e25f63-82b1-45cf-8916-41d9acc44ac9</entry>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </system>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <os>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   </os>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <features>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   </features>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk">
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       </source>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config">
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       </source>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:53:58 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:ec:f3:67"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <target dev="tap6c4955ab-01"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/console.log" append="off"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <video>
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </video>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:53:58 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:53:58 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:53:58 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:53:58 compute-0 nova_compute[260603]: </domain>
Oct 02 08:53:58 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.966 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Preparing to wait for external event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.966 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.967 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.967 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.968 2 DEBUG nova.virt.libvirt.vif [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzsQ/6b3tjUGybuPyR4SVVOx0MdNuIeiSvtBwT6p2uLvkn4tE1YW+Uq6JJ1YItW22s6E/D+cvCAL9Uj4JCY+wAR12IR6juEk+h0nnTQwb0m9GcGi0Su/6v36I6xXc+YZw==',key_name='tempest-TestShelveInstance-203364505',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:53:52Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.969 2 DEBUG nova.network.os_vif_util [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.970 2 DEBUG nova.network.os_vif_util [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.970 2 DEBUG os_vif [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.976 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c4955ab-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c4955ab-01, col_values=(('external_ids', {'iface-id': '6c4955ab-011a-4e64-9871-c01b4818d740', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:f3:67', 'vm-uuid': '29e25f63-82b1-45cf-8916-41d9acc44ac9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:58 compute-0 NetworkManager[45129]: <info>  [1759395238.9794] manager: (tap6c4955ab-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/520)
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:53:58 compute-0 nova_compute[260603]: 2025-10-02 08:53:58.989 2 INFO os_vif [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01')
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.051 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.051 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.052 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No VIF found with MAC fa:16:3e:ec:f3:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.052 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Using config drive
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.086 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:59 compute-0 ceph-mon[74477]: pgmap v2269: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 104 op/s
Oct 02 08:53:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1021823407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:53:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1682623345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:53:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.692 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating config drive at /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.697 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvn_ke8og execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.846 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvn_ke8og" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.885 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.889 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.928 2 DEBUG nova.network.neutron [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.929 2 DEBUG nova.network.neutron [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:53:59 compute-0 nova_compute[260603]: 2025-10-02 08:53:59.949 2 DEBUG oslo_concurrency.lockutils [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.081 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.082 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deleting local config drive /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config because it was imported into RBD.
Oct 02 08:54:00 compute-0 kernel: tap6c4955ab-01: entered promiscuous mode
Oct 02 08:54:00 compute-0 NetworkManager[45129]: <info>  [1759395240.1356] manager: (tap6c4955ab-01): new Tun device (/org/freedesktop/NetworkManager/Devices/521)
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:00 compute-0 ovn_controller[152344]: 2025-10-02T08:54:00Z|01313|binding|INFO|Claiming lport 6c4955ab-011a-4e64-9871-c01b4818d740 for this chassis.
Oct 02 08:54:00 compute-0 ovn_controller[152344]: 2025-10-02T08:54:00Z|01314|binding|INFO|6c4955ab-011a-4e64-9871-c01b4818d740: Claiming fa:16:3e:ec:f3:67 10.100.0.4
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.153 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.155 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 bound to our chassis
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.157 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct 02 08:54:00 compute-0 ovn_controller[152344]: 2025-10-02T08:54:00Z|01315|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 ovn-installed in OVS
Oct 02 08:54:00 compute-0 ovn_controller[152344]: 2025-10-02T08:54:00Z|01316|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 up in Southbound
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.171 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e657e71f-c279-492f-b718-29b1edf30eb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.171 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c4ca2f6-c1 in ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:54:00 compute-0 systemd-udevd[391008]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.173 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c4ca2f6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6731f28b-3b29-4391-8dbe-e67b59cdc9d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05086e11-f4d0-4516-9076-a49c75e76950]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 systemd-machined[214636]: New machine qemu-157-instance-0000007c.
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.192 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b4c5bb-9310-427d-b97c-2ad9127f392c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_controller[152344]: 2025-10-02T08:54:00Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:0b:fe 10.100.0.10
Oct 02 08:54:00 compute-0 ovn_controller[152344]: 2025-10-02T08:54:00Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:0b:fe 10.100.0.10
Oct 02 08:54:00 compute-0 NetworkManager[45129]: <info>  [1759395240.1945] device (tap6c4955ab-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:54:00 compute-0 NetworkManager[45129]: <info>  [1759395240.1956] device (tap6c4955ab-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:54:00 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007c.
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.217 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1600d5-47e2-48cf-bd5c-b2393e4b0484]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.247 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d2fb02-11dc-4e9c-b1f2-305076620626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.251 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8991a5-3f2f-4a6f-a9cd-3c3bb14c931d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 NetworkManager[45129]: <info>  [1759395240.2524] manager: (tap5c4ca2f6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/522)
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.293 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfab5c0-ef9c-4b36-ae5b-45ec353b4133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d532a3fd-5ebc-4d8b-9b85-cc6cf9792ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 NetworkManager[45129]: <info>  [1759395240.3276] device (tap5c4ca2f6-c0): carrier: link connected
Oct 02 08:54:00 compute-0 ceph-mon[74477]: pgmap v2270: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.334 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e1653457-ef77-4e25-b1d8-9650baf0414e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fb587f9d-6614-4200-8269-84b443e61e38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c4ca2f6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:2b:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611893, 'reachable_time': 43974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391041, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[364e7e6c-15e1-42a7-a178-5049000e4eca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:2b16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611893, 'tstamp': 611893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391042, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.392 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e037b50-5d24-4436-9810-517012393857]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c4ca2f6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:2b:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611893, 'reachable_time': 43974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391043, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.423 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c941b092-3a26-4806-bbe2-9a89d5d51173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.493 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[322a4831-3c85-494c-80ea-9564b0ec8551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.494 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c4ca2f6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.495 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.495 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c4ca2f6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:00 compute-0 NetworkManager[45129]: <info>  [1759395240.4982] manager: (tap5c4ca2f6-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:00 compute-0 kernel: tap5c4ca2f6-c0: entered promiscuous mode
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.502 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c4ca2f6-c0, col_values=(('external_ids', {'iface-id': '2b6a47ce-d685-4242-a053-f63d07d5d559'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:00 compute-0 ovn_controller[152344]: 2025-10-02T08:54:00Z|01317|binding|INFO|Releasing lport 2b6a47ce-d685-4242-a053-f63d07d5d559 from this chassis (sb_readonly=0)
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.518 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.519 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7cda5733-dea1-4b98-829d-d3aa5b2ca75b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.520 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:54:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.522 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'env', 'PROCESS_TAG=haproxy-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c4ca2f6-ca60-420d-aded-392a44195bf1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.640 2 DEBUG nova.compute.manager [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.641 2 DEBUG oslo_concurrency.lockutils [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.641 2 DEBUG oslo_concurrency.lockutils [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.642 2 DEBUG oslo_concurrency.lockutils [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.642 2 DEBUG nova.compute.manager [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Processing event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:54:00 compute-0 podman[391116]: 2025-10-02 08:54:00.958156303 +0000 UTC m=+0.071927032 container create 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.978 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395240.9774995, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.978 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Started (Lifecycle Event)
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.982 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.985 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.989 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance spawned successfully.
Oct 02 08:54:00 compute-0 nova_compute[260603]: 2025-10-02 08:54:00.990 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:54:01 compute-0 systemd[1]: Started libpod-conmon-64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54.scope.
Oct 02 08:54:01 compute-0 podman[391116]: 2025-10-02 08:54:00.921397737 +0000 UTC m=+0.035168586 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.014 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.020 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.039 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.040 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.041 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.041 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.042 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.042 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.054 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.054 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395240.9776886, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.055 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Paused (Lifecycle Event)
Oct 02 08:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d580230bf0e4e9fc32b97474e3cee5deeb211aa3deb462ec3ffd85a1c2b12f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:01 compute-0 podman[391116]: 2025-10-02 08:54:01.081821752 +0000 UTC m=+0.195592491 container init 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:54:01 compute-0 podman[391116]: 2025-10-02 08:54:01.090169877 +0000 UTC m=+0.203940596 container start 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:54:01 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [NOTICE]   (391135) : New worker (391137) forked
Oct 02 08:54:01 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [NOTICE]   (391135) : Loading success.
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.138 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.144 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395240.9849544, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.144 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Resumed (Lifecycle Event)
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.171 2 INFO nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Took 8.67 seconds to spawn the instance on the hypervisor.
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.171 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.176 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.185 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.238 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.254 2 INFO nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Took 9.84 seconds to build instance.
Oct 02 08:54:01 compute-0 nova_compute[260603]: 2025-10-02 08:54:01.272 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 02 08:54:02 compute-0 ceph-mon[74477]: pgmap v2271: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 02 08:54:02 compute-0 nova_compute[260603]: 2025-10-02 08:54:02.761 2 DEBUG nova.compute.manager [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:02 compute-0 nova_compute[260603]: 2025-10-02 08:54:02.762 2 DEBUG oslo_concurrency.lockutils [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:02 compute-0 nova_compute[260603]: 2025-10-02 08:54:02.762 2 DEBUG oslo_concurrency.lockutils [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:02 compute-0 nova_compute[260603]: 2025-10-02 08:54:02.763 2 DEBUG oslo_concurrency.lockutils [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:02 compute-0 nova_compute[260603]: 2025-10-02 08:54:02.763 2 DEBUG nova.compute.manager [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:02 compute-0 nova_compute[260603]: 2025-10-02 08:54:02.764 2 WARNING nova.compute.manager [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state None.
Oct 02 08:54:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Oct 02 08:54:03 compute-0 nova_compute[260603]: 2025-10-02 08:54:03.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:04.199 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:54:04 compute-0 nova_compute[260603]: 2025-10-02 08:54:04.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:04.203 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:54:04 compute-0 ceph-mon[74477]: pgmap v2272: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Oct 02 08:54:04 compute-0 nova_compute[260603]: 2025-10-02 08:54:04.944 2 DEBUG nova.compute.manager [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:04 compute-0 nova_compute[260603]: 2025-10-02 08:54:04.945 2 DEBUG nova.compute.manager [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:54:04 compute-0 nova_compute[260603]: 2025-10-02 08:54:04.945 2 DEBUG oslo_concurrency.lockutils [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:54:04 compute-0 nova_compute[260603]: 2025-10-02 08:54:04.946 2 DEBUG oslo_concurrency.lockutils [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:54:04 compute-0 nova_compute[260603]: 2025-10-02 08:54:04.946 2 DEBUG nova.network.neutron [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:54:05 compute-0 nova_compute[260603]: 2025-10-02 08:54:05.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Oct 02 08:54:06 compute-0 nova_compute[260603]: 2025-10-02 08:54:06.456 2 DEBUG nova.network.neutron [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:54:06 compute-0 nova_compute[260603]: 2025-10-02 08:54:06.457 2 DEBUG nova.network.neutron [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:54:06 compute-0 nova_compute[260603]: 2025-10-02 08:54:06.486 2 DEBUG oslo_concurrency.lockutils [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:06 compute-0 ceph-mon[74477]: pgmap v2273: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Oct 02 08:54:07 compute-0 nova_compute[260603]: 2025-10-02 08:54:07.407 2 INFO nova.compute.manager [None req-4a1a1ce2-f670-4033-9885-36ba87b3f749 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Get console output
Oct 02 08:54:07 compute-0 nova_compute[260603]: 2025-10-02 08:54:07.414 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:54:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct 02 08:54:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:08 compute-0 ceph-mon[74477]: pgmap v2274: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct 02 08:54:08 compute-0 nova_compute[260603]: 2025-10-02 08:54:08.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 08:54:10 compute-0 nova_compute[260603]: 2025-10-02 08:54:10.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:10 compute-0 ceph-mon[74477]: pgmap v2275: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 08:54:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 08:54:11 compute-0 sshd-session[391146]: banner exchange: Connection from 195.178.110.160 port 25798: invalid format
Oct 02 08:54:11 compute-0 sshd-session[391147]: banner exchange: Connection from 195.178.110.160 port 25800: invalid format
Oct 02 08:54:12 compute-0 ceph-mon[74477]: pgmap v2276: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 08:54:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 170 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Oct 02 08:54:13 compute-0 nova_compute[260603]: 2025-10-02 08:54:13.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:14 compute-0 ovn_controller[152344]: 2025-10-02T08:54:14Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:f3:67 10.100.0.4
Oct 02 08:54:14 compute-0 ovn_controller[152344]: 2025-10-02T08:54:14Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:f3:67 10.100.0.4
Oct 02 08:54:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:14.206 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:14 compute-0 ceph-mon[74477]: pgmap v2277: 305 pgs: 305 active+clean; 170 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Oct 02 08:54:15 compute-0 nova_compute[260603]: 2025-10-02 08:54:15.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 170 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 360 KiB/s wr, 14 op/s
Oct 02 08:54:16 compute-0 ceph-mon[74477]: pgmap v2278: 305 pgs: 305 active+clean; 170 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 360 KiB/s wr, 14 op/s
Oct 02 08:54:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 199 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 08:54:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:18 compute-0 ceph-mon[74477]: pgmap v2279: 305 pgs: 305 active+clean; 199 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 08:54:19 compute-0 nova_compute[260603]: 2025-10-02 08:54:19.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:54:20 compute-0 nova_compute[260603]: 2025-10-02 08:54:20.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:20 compute-0 ceph-mon[74477]: pgmap v2280: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:54:21 compute-0 podman[391149]: 2025-10-02 08:54:21.021546401 +0000 UTC m=+0.078620283 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:54:21 compute-0 podman[391148]: 2025-10-02 08:54:21.053577487 +0000 UTC m=+0.123644401 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:54:21 compute-0 nova_compute[260603]: 2025-10-02 08:54:21.130 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:21 compute-0 nova_compute[260603]: 2025-10-02 08:54:21.130 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:21 compute-0 nova_compute[260603]: 2025-10-02 08:54:21.131 2 INFO nova.compute.manager [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Shelving
Oct 02 08:54:21 compute-0 nova_compute[260603]: 2025-10-02 08:54:21.163 2 DEBUG nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 08:54:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:54:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:54:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705302853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:54:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:54:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705302853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:54:22 compute-0 ceph-mon[74477]: pgmap v2281: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:54:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3705302853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:54:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3705302853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:54:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:23 compute-0 kernel: tap6c4955ab-01 (unregistering): left promiscuous mode
Oct 02 08:54:23 compute-0 NetworkManager[45129]: <info>  [1759395263.4760] device (tap6c4955ab-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01318|binding|INFO|Releasing lport 6c4955ab-011a-4e64-9871-c01b4818d740 from this chassis (sb_readonly=0)
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01319|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 down in Southbound
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01320|binding|INFO|Removing iface tap6c4955ab-01 ovn-installed in OVS
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.497 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.499 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 unbound from our chassis
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.500 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c4ca2f6-ca60-420d-aded-392a44195bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.502 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30352437-283f-4dba-9659-5d94b270839d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.502 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 namespace which is not needed anymore
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct 02 08:54:23 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Consumed 14.436s CPU time.
Oct 02 08:54:23 compute-0 systemd-machined[214636]: Machine qemu-157-instance-0000007c terminated.
Oct 02 08:54:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 08:54:23 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [NOTICE]   (391135) : haproxy version is 2.8.14-c23fe91
Oct 02 08:54:23 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [NOTICE]   (391135) : path to executable is /usr/sbin/haproxy
Oct 02 08:54:23 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [WARNING]  (391135) : Exiting Master process...
Oct 02 08:54:23 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [ALERT]    (391135) : Current worker (391137) exited with code 143 (Terminated)
Oct 02 08:54:23 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [WARNING]  (391135) : All workers exited. Exiting... (0)
Oct 02 08:54:23 compute-0 systemd[1]: libpod-64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54.scope: Deactivated successfully.
Oct 02 08:54:23 compute-0 podman[391216]: 2025-10-02 08:54:23.661223879 +0000 UTC m=+0.048304832 container died 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54-userdata-shm.mount: Deactivated successfully.
Oct 02 08:54:23 compute-0 kernel: tap6c4955ab-01: entered promiscuous mode
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01321|binding|INFO|Claiming lport 6c4955ab-011a-4e64-9871-c01b4818d740 for this chassis.
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01322|binding|INFO|6c4955ab-011a-4e64-9871-c01b4818d740: Claiming fa:16:3e:ec:f3:67 10.100.0.4
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 kernel: tap6c4955ab-01 (unregistering): left promiscuous mode
Oct 02 08:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9d580230bf0e4e9fc32b97474e3cee5deeb211aa3deb462ec3ffd85a1c2b12f-merged.mount: Deactivated successfully.
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.724 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:54:23 compute-0 podman[391216]: 2025-10-02 08:54:23.734199102 +0000 UTC m=+0.121280075 container cleanup 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01323|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 ovn-installed in OVS
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01324|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 up in Southbound
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01325|binding|INFO|Releasing lport 6c4955ab-011a-4e64-9871-c01b4818d740 from this chassis (sb_readonly=1)
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01326|if_status|INFO|Dropped 4 log messages in last 121 seconds (most recently, 121 seconds ago) due to excessive rate
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01327|if_status|INFO|Not setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 down as sb is readonly
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01328|binding|INFO|Removing iface tap6c4955ab-01 ovn-installed in OVS
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01329|binding|INFO|Releasing lport 6c4955ab-011a-4e64-9871-c01b4818d740 from this chassis (sb_readonly=0)
Oct 02 08:54:23 compute-0 ovn_controller[152344]: 2025-10-02T08:54:23Z|01330|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 down in Southbound
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.762 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:54:23 compute-0 systemd[1]: libpod-conmon-64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54.scope: Deactivated successfully.
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 podman[391255]: 2025-10-02 08:54:23.829345099 +0000 UTC m=+0.054981004 container remove 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.841 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91239f45-e29d-42eb-8a45-1f3335c8e293]: (4, ('Thu Oct  2 08:54:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 (64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54)\n64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54\nThu Oct  2 08:54:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 (64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54)\n64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.843 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[229d2e25-da79-4a0d-b315-6fd298ff8ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.845 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c4ca2f6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 kernel: tap5c4ca2f6-c0: left promiscuous mode
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.872 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[827c7ba4-b455-4710-b4c6-44846e9c999c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.881 2 DEBUG nova.compute.manager [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.883 2 DEBUG oslo_concurrency.lockutils [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.883 2 DEBUG oslo_concurrency.lockutils [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.883 2 DEBUG oslo_concurrency.lockutils [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.884 2 DEBUG nova.compute.manager [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.884 2 WARNING nova.compute.manager [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving.
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.896 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[71af15d8-a5c1-4618-8b24-f47a52fb67a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.897 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c92a63-660c-49a3-897f-8cbbd735ace1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.909 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.910 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.917 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f75c9e2e-11c9-441d-9952-bd000d9ed810]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611884, 'reachable_time': 41284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391273, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c4ca2f6\x2dca60\x2d420d\x2daded\x2d392a44195bf1.mount: Deactivated successfully.
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.921 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.921 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e6537c59-4a96-4355-be51-b910336d01f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.922 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 unbound from our chassis
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.924 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c4ca2f6-ca60-420d-aded-392a44195bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.925 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e99bcd85-c67f-4fea-95b1-27fe61363bdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.925 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 unbound from our chassis
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.927 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c4ca2f6-ca60-420d-aded-392a44195bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:54:23 compute-0 nova_compute[260603]: 2025-10-02 08:54:23.928 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:54:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.928 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86a7cb29-3ba1-4d8a-bd51-d61289e4e42a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.007 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.008 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.020 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.020 2 INFO nova.compute.claims [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.183 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance shutdown successfully after 3 seconds.
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.190 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance destroyed successfully.
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.190 2 DEBUG nova.objects.instance [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'numa_topology' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.208 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.598 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Beginning cold snapshot process
Oct 02 08:54:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:54:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469341986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.697 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.705 2 DEBUG nova.compute.provider_tree [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:54:24 compute-0 ceph-mon[74477]: pgmap v2282: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct 02 08:54:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3469341986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.788 2 DEBUG nova.scheduler.client.report [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.803 2 DEBUG nova.virt.libvirt.imagebackend [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.814 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.816 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.870 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.872 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.897 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:54:24 compute-0 nova_compute[260603]: 2025-10-02 08:54:24.917 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.016 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.018 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.018 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Creating image(s)
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.050 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.086 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.122 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.132 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.198 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] creating snapshot(46bbf4a8538e4f82909a89b0898209ee) on rbd image(29e25f63-82b1-45cf-8916-41d9acc44ac9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.247 2 DEBUG nova.policy [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.257 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.259 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.260 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.260 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.286 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.290 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.562 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.631 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:54:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.738 2 DEBUG nova.objects.instance [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e566f75-d068-4a59-bf53-0a71ad8c5e45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.760 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.761 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Ensure instance console log exists: /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.761 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.762 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.762 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 02 08:54:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 02 08:54:25 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.857 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] cloning vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk@46bbf4a8538e4f82909a89b0898209ee to images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:54:25 compute-0 nova_compute[260603]: 2025-10-02 08:54:25.972 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] flattening images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.040 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.040 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.041 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.041 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.042 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.043 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.044 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.044 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.044 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.045 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.045 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.045 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.046 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.046 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.046 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.047 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.047 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.047 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.048 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.048 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.048 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.049 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.049 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.050 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.050 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.050 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.051 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.051 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.052 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.052 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.358 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] removing snapshot(46bbf4a8538e4f82909a89b0898209ee) on rbd image(29e25f63-82b1-45cf-8916-41d9acc44ac9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 08:54:26 compute-0 ceph-mon[74477]: pgmap v2283: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 02 08:54:26 compute-0 ceph-mon[74477]: osdmap e285: 3 total, 3 up, 3 in
Oct 02 08:54:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 02 08:54:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 02 08:54:26 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 02 08:54:26 compute-0 nova_compute[260603]: 2025-10-02 08:54:26.850 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] creating snapshot(snap) on rbd image(fb02f3f9-de2f-45c4-91bc-9579a3c7316c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 08:54:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 287 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.5 MiB/s wr, 102 op/s
Oct 02 08:54:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 02 08:54:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 02 08:54:27 compute-0 ceph-mon[74477]: osdmap e286: 3 total, 3 up, 3 in
Oct 02 08:54:27 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 02 08:54:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:54:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:54:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:54:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:54:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:54:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:54:28 compute-0 podman[391603]: 2025-10-02 08:54:28.007027456 +0000 UTC m=+0.079935205 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:54:28
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.log', 'volumes', '.mgr']
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:54:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:28 compute-0 podman[391604]: 2025-10-02 08:54:28.040363723 +0000 UTC m=+0.100882729 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:54:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:54:28 compute-0 nova_compute[260603]: 2025-10-02 08:54:28.640 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Successfully created port: 0114a231-1abb-410d-8d5f-29713ba6ca28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:54:28 compute-0 ceph-mon[74477]: pgmap v2286: 305 pgs: 305 active+clean; 287 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.5 MiB/s wr, 102 op/s
Oct 02 08:54:28 compute-0 ceph-mon[74477]: osdmap e287: 3 total, 3 up, 3 in
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.330 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Snapshot image upload complete
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.331 2 DEBUG nova.compute.manager [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.383 2 INFO nova.compute.manager [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Shelve offloading
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.394 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance destroyed successfully.
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.395 2 DEBUG nova.compute.manager [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.398 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.399 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.399 2 DEBUG nova.network.neutron [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.552 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:29 compute-0 nova_compute[260603]: 2025-10-02 08:54:29.552 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:54:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 200 op/s
Oct 02 08:54:30 compute-0 nova_compute[260603]: 2025-10-02 08:54:30.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:30 compute-0 nova_compute[260603]: 2025-10-02 08:54:30.547 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Successfully updated port: 0114a231-1abb-410d-8d5f-29713ba6ca28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:54:30 compute-0 nova_compute[260603]: 2025-10-02 08:54:30.565 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:54:30 compute-0 nova_compute[260603]: 2025-10-02 08:54:30.565 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:54:30 compute-0 nova_compute[260603]: 2025-10-02 08:54:30.565 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:54:30 compute-0 ceph-mon[74477]: pgmap v2288: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 200 op/s
Oct 02 08:54:31 compute-0 nova_compute[260603]: 2025-10-02 08:54:31.553 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:54:31 compute-0 nova_compute[260603]: 2025-10-02 08:54:31.607 2 DEBUG nova.compute.manager [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-changed-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:31 compute-0 nova_compute[260603]: 2025-10-02 08:54:31.608 2 DEBUG nova.compute.manager [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Refreshing instance network info cache due to event network-changed-0114a231-1abb-410d-8d5f-29713ba6ca28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:54:31 compute-0 nova_compute[260603]: 2025-10-02 08:54:31.608 2 DEBUG oslo_concurrency.lockutils [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:54:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 200 op/s
Oct 02 08:54:32 compute-0 ceph-mon[74477]: pgmap v2289: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 200 op/s
Oct 02 08:54:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 02 08:54:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 02 08:54:33 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 02 08:54:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 10 MiB/s wr, 200 op/s
Oct 02 08:54:34 compute-0 ceph-mon[74477]: osdmap e288: 3 total, 3 up, 3 in
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.539 2 DEBUG nova.network.neutron [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.541 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Updating instance_info_cache with network_info: [{"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.568 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.573 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.574 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance network_info: |[{"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.575 2 DEBUG oslo_concurrency.lockutils [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.576 2 DEBUG nova.network.neutron [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Refreshing network info cache for port 0114a231-1abb-410d-8d5f-29713ba6ca28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.582 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start _get_guest_xml network_info=[{"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.589 2 WARNING nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.600 2 DEBUG nova.virt.libvirt.host [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.601 2 DEBUG nova.virt.libvirt.host [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.613 2 DEBUG nova.virt.libvirt.host [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.614 2 DEBUG nova.virt.libvirt.host [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.614 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.615 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.616 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.616 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.617 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.617 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.618 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.618 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.619 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.619 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.619 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.620 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:54:34 compute-0 nova_compute[260603]: 2025-10-02 08:54:34.625 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:34.835 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:34.836 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:34.837 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:35 compute-0 ceph-mon[74477]: pgmap v2291: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 10 MiB/s wr, 200 op/s
Oct 02 08:54:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:54:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914923318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.091 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.115 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.121 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:54:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/205786419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.560 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.561 2 DEBUG nova.virt.libvirt.vif [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:54:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1505974006',display_name='tempest-TestNetworkBasicOps-server-1505974006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1505974006',id=125,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOBsPfGQIKXrU0ixL8zdO1Id1043n5Wya2XViCYfbc8+hdkDDiOW4w1UIZN3cliMo4JpT768RX8ZTyGW3MECALsE1YSIbBikkYwsG7A75q2etUDqr7ZHw9V2TylyqodwPA==',key_name='tempest-TestNetworkBasicOps-1332028269',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-q4dl0p75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:54:24Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=1e566f75-d068-4a59-bf53-0a71ad8c5e45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.562 2 DEBUG nova.network.os_vif_util [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.563 2 DEBUG nova.network.os_vif_util [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.564 2 DEBUG nova.objects.instance [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e566f75-d068-4a59-bf53-0a71ad8c5e45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.588 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <uuid>1e566f75-d068-4a59-bf53-0a71ad8c5e45</uuid>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <name>instance-0000007d</name>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-1505974006</nova:name>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:54:34</nova:creationTime>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <nova:port uuid="0114a231-1abb-410d-8d5f-29713ba6ca28">
Oct 02 08:54:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <system>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <entry name="serial">1e566f75-d068-4a59-bf53-0a71ad8c5e45</entry>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <entry name="uuid">1e566f75-d068-4a59-bf53-0a71ad8c5e45</entry>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </system>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <os>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   </os>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <features>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   </features>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk">
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config">
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       </source>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:54:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:33:dd:de"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <target dev="tap0114a231-1a"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/console.log" append="off"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <video>
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </video>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:54:35 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:54:35 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:54:35 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:54:35 compute-0 nova_compute[260603]: </domain>
Oct 02 08:54:35 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.590 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Preparing to wait for external event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.591 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.591 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.592 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.593 2 DEBUG nova.virt.libvirt.vif [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:54:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1505974006',display_name='tempest-TestNetworkBasicOps-server-1505974006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1505974006',id=125,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOBsPfGQIKXrU0ixL8zdO1Id1043n5Wya2XViCYfbc8+hdkDDiOW4w1UIZN3cliMo4JpT768RX8ZTyGW3MECALsE1YSIbBikkYwsG7A75q2etUDqr7ZHw9V2TylyqodwPA==',key_name='tempest-TestNetworkBasicOps-1332028269',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-q4dl0p75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:54:24Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=1e566f75-d068-4a59-bf53-0a71ad8c5e45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.593 2 DEBUG nova.network.os_vif_util [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.594 2 DEBUG nova.network.os_vif_util [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.595 2 DEBUG os_vif [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0114a231-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.602 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0114a231-1a, col_values=(('external_ids', {'iface-id': '0114a231-1abb-410d-8d5f-29713ba6ca28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:dd:de', 'vm-uuid': '1e566f75-d068-4a59-bf53-0a71ad8c5e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:35 compute-0 NetworkManager[45129]: <info>  [1759395275.6054] manager: (tap0114a231-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.613 2 INFO os_vif [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a')
Oct 02 08:54:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.684 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.685 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.686 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:33:dd:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.687 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Using config drive
Oct 02 08:54:35 compute-0 nova_compute[260603]: 2025-10-02 08:54:35.712 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3914923318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:54:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/205786419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:54:36 compute-0 nova_compute[260603]: 2025-10-02 08:54:36.529 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Creating config drive at /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config
Oct 02 08:54:36 compute-0 nova_compute[260603]: 2025-10-02 08:54:36.538 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5dkdczu5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:36 compute-0 nova_compute[260603]: 2025-10-02 08:54:36.708 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5dkdczu5" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:36 compute-0 nova_compute[260603]: 2025-10-02 08:54:36.750 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:36 compute-0 nova_compute[260603]: 2025-10-02 08:54:36.755 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:36 compute-0 nova_compute[260603]: 2025-10-02 08:54:36.974 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:36 compute-0 nova_compute[260603]: 2025-10-02 08:54:36.976 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Deleting local config drive /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config because it was imported into RBD.
Oct 02 08:54:37 compute-0 kernel: tap0114a231-1a: entered promiscuous mode
Oct 02 08:54:37 compute-0 NetworkManager[45129]: <info>  [1759395277.0582] manager: (tap0114a231-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:37 compute-0 ovn_controller[152344]: 2025-10-02T08:54:37Z|01331|binding|INFO|Claiming lport 0114a231-1abb-410d-8d5f-29713ba6ca28 for this chassis.
Oct 02 08:54:37 compute-0 ovn_controller[152344]: 2025-10-02T08:54:37Z|01332|binding|INFO|0114a231-1abb-410d-8d5f-29713ba6ca28: Claiming fa:16:3e:33:dd:de 10.100.0.26
Oct 02 08:54:37 compute-0 ceph-mon[74477]: pgmap v2292: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.081 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:dd:de 10.100.0.26'], port_security=['fa:16:3e:33:dd:de 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '1e566f75-d068-4a59-bf53-0a71ad8c5e45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bb814552-631a-471e-89ab-a7fd4a866ac7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999292f0-e1e9-4e7e-82ce-9e16433e3e2c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0114a231-1abb-410d-8d5f-29713ba6ca28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.083 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0114a231-1abb-410d-8d5f-29713ba6ca28 in datapath fcd51a14-855e-486f-92b0-d9c9ee06ef45 bound to our chassis
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.085 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fcd51a14-855e-486f-92b0-d9c9ee06ef45
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.105 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f34809e6-e797-4d54-8b65-787f20b21ff9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.106 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfcd51a14-81 in ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:54:37 compute-0 systemd-udevd[391784]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.113 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfcd51a14-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.113 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ae56a8a2-6d6f-4102-a73e-8379fe809199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[236af463-50af-4bd5-b87a-0fe32db74f49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 systemd-machined[214636]: New machine qemu-158-instance-0000007d.
Oct 02 08:54:37 compute-0 NetworkManager[45129]: <info>  [1759395277.1354] device (tap0114a231-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:54:37 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-0000007d.
Oct 02 08:54:37 compute-0 NetworkManager[45129]: <info>  [1759395277.1376] device (tap0114a231-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.139 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d803898c-8752-4353-b66d-c2163a46ffd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_controller[152344]: 2025-10-02T08:54:37Z|01333|binding|INFO|Setting lport 0114a231-1abb-410d-8d5f-29713ba6ca28 ovn-installed in OVS
Oct 02 08:54:37 compute-0 ovn_controller[152344]: 2025-10-02T08:54:37Z|01334|binding|INFO|Setting lport 0114a231-1abb-410d-8d5f-29713ba6ca28 up in Southbound
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.152 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance destroyed successfully.
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.157 2 DEBUG nova.objects.instance [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'resources' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.166 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a378cd-7dac-412a-982c-e08a60617f12]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.181 2 DEBUG nova.virt.libvirt.vif [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzsQ/6b3tjUGybuPyR4SVVOx0MdNuIeiSvtBwT6p2uLvkn4tE1YW+Uq6JJ1YItW22s6E/D+cvCAL9Uj4JCY+wAR12IR6juEk+h0nnTQwb0m9GcGi0Su/6v36I6xXc+YZw==',key_name='tempest-TestShelveInstance-203364505',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member',shelved_at='2025-10-02T08:54:29.331405',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fb02f3f9-de2f-45c4-91bc-9579a3c7316c'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:54:24Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.182 2 DEBUG nova.network.os_vif_util [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.183 2 DEBUG nova.network.os_vif_util [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.184 2 DEBUG os_vif [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c4955ab-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.200 2 INFO os_vif [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01')
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.226 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3875ef-9c41-452b-ae7a-0fc114bc1b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.234 2 DEBUG nova.compute.manager [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.235 2 DEBUG nova.compute.manager [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.235 2 DEBUG oslo_concurrency.lockutils [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.237 2 DEBUG oslo_concurrency.lockutils [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.237 2 DEBUG nova.network.neutron [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.244 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7aca103e-3cea-4a69-8f67-b6fe655f9fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 NetworkManager[45129]: <info>  [1759395277.2467] manager: (tapfcd51a14-80): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Oct 02 08:54:37 compute-0 systemd-udevd[391787]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.295 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6be222d3-e4af-4c57-baea-7d82e2eeb50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.300 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1a315321-459b-43c0-9fce-831e37c98d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 NetworkManager[45129]: <info>  [1759395277.3309] device (tapfcd51a14-80): carrier: link connected
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.336 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9c0e18-d02d-4f66-b368-ceae2c48f236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.364 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[905a2b9f-2523-453a-85f7-ae2504a18aee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfcd51a14-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 377], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615593, 'reachable_time': 35538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391837, 'error': None, 'target': 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.385 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df144f46-ba78-42c6-88b5-1328c3c85535]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:4cb2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 615593, 'tstamp': 615593}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391838, 'error': None, 'target': 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ea3fe8-4951-4e87-b96e-521d203f70da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfcd51a14-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 377], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615593, 'reachable_time': 35538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391839, 'error': None, 'target': 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.425 2 DEBUG nova.network.neutron [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Updated VIF entry in instance network info cache for port 0114a231-1abb-410d-8d5f-29713ba6ca28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.426 2 DEBUG nova.network.neutron [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Updating instance_info_cache with network_info: [{"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.453 2 DEBUG oslo_concurrency.lockutils [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.474 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2bda1-6343-44e3-9dcf-69d6ec65ca6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.577 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[245ec138-c1ef-4b25-8e4c-5b4165f18593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.579 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcd51a14-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.580 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.580 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfcd51a14-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:37 compute-0 kernel: tapfcd51a14-80: entered promiscuous mode
Oct 02 08:54:37 compute-0 NetworkManager[45129]: <info>  [1759395277.5851] manager: (tapfcd51a14-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.588 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfcd51a14-80, col_values=(('external_ids', {'iface-id': 'ed953b14-70f0-43c8-ae8c-e9063bcdd7db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:37 compute-0 ovn_controller[152344]: 2025-10-02T08:54:37Z|01335|binding|INFO|Releasing lport ed953b14-70f0-43c8-ae8c-e9063bcdd7db from this chassis (sb_readonly=0)
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.624 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fcd51a14-855e-486f-92b0-d9c9ee06ef45.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fcd51a14-855e-486f-92b0-d9c9ee06ef45.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.626 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15f6414f-7e62-43fd-8c9a-e065afa8aa8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.627 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-fcd51a14-855e-486f-92b0-d9c9ee06ef45
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/fcd51a14-855e-486f-92b0-d9c9ee06ef45.pid.haproxy
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID fcd51a14-855e-486f-92b0-d9c9ee06ef45
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:54:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.627 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'env', 'PROCESS_TAG=haproxy-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fcd51a14-855e-486f-92b0-d9c9ee06ef45.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:54:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 67 op/s
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.676 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deleting instance files /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9_del
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.678 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deletion of /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9_del complete
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.840 2 INFO nova.scheduler.client.report [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Deleted allocations for instance 29e25f63-82b1-45cf-8916-41d9acc44ac9
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.896 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.898 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:37 compute-0 nova_compute[260603]: 2025-10-02 08:54:37.982 2 DEBUG oslo_concurrency.processutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:38 compute-0 podman[391914]: 2025-10-02 08:54:38.021953622 +0000 UTC m=+0.056136590 container create 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 08:54:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:38 compute-0 systemd[1]: Started libpod-conmon-42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403.scope.
Oct 02 08:54:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:38 compute-0 podman[391914]: 2025-10-02 08:54:37.993679406 +0000 UTC m=+0.027862384 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19e482ac1bfb83fa0f30e489e044044eba2a82cab9b325cf7843b422f0931f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:38 compute-0 podman[391914]: 2025-10-02 08:54:38.105133909 +0000 UTC m=+0.139316897 container init 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:54:38 compute-0 podman[391914]: 2025-10-02 08:54:38.110452748 +0000 UTC m=+0.144635706 container start 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:54:38 compute-0 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [NOTICE]   (391934) : New worker (391954) forked
Oct 02 08:54:38 compute-0 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [NOTICE]   (391934) : Loading success.
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.223 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395278.222527, 1e566f75-d068-4a59-bf53-0a71ad8c5e45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.224 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] VM Started (Lifecycle Event)
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.248 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.251 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395278.2226515, 1e566f75-d068-4a59-bf53-0a71ad8c5e45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.252 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] VM Paused (Lifecycle Event)
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.275 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.279 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.302 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:54:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:54:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401185990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.442 2 DEBUG oslo_concurrency.processutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.449 2 DEBUG nova.compute.provider_tree [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.470 2 DEBUG nova.scheduler.client.report [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.499 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.563 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 17.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.730 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395263.7298799, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.731 2 INFO nova.compute.manager [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Stopped (Lifecycle Event)
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.759 2 DEBUG nova.compute.manager [None req-44bec396-a4eb-41aa-a179-00adb4d6af62 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018669595909031713 of space, bias 1.0, pg target 0.5600878772709514 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014249405808426125 of space, bias 1.0, pg target 0.42748217425278373 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:54:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.943 2 DEBUG nova.network.neutron [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.944 2 DEBUG nova.network.neutron [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": null, "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap6c4955ab-01", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:54:38 compute-0 nova_compute[260603]: 2025-10-02 08:54:38.970 2 DEBUG oslo_concurrency.lockutils [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:39 compute-0 ceph-mon[74477]: pgmap v2293: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 67 op/s
Oct 02 08:54:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3401185990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:39 compute-0 sudo[391965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:39 compute-0 sudo[391965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:39 compute-0 sudo[391965]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.346 2 DEBUG nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.347 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.348 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.349 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.349 2 DEBUG nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Processing event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.350 2 DEBUG nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.351 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.351 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.352 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.352 2 DEBUG nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] No waiting events found dispatching network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.353 2 WARNING nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received unexpected event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 for instance with vm_state building and task_state spawning.
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.354 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.358 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395279.3581648, 1e566f75-d068-4a59-bf53-0a71ad8c5e45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.358 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] VM Resumed (Lifecycle Event)
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.361 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:54:39 compute-0 sudo[391990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.366 2 INFO nova.virt.libvirt.driver [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance spawned successfully.
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.367 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:54:39 compute-0 sudo[391990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:39 compute-0 sudo[391990]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.386 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.396 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.402 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.403 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.404 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.405 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.406 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.407 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:54:39 compute-0 sudo[392015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.455 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:54:39 compute-0 sudo[392015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:39 compute-0 sudo[392015]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.495 2 INFO nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Took 14.48 seconds to spawn the instance on the hypervisor.
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.496 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:39 compute-0 sudo[392040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:54:39 compute-0 sudo[392040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.576 2 INFO nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Took 15.61 seconds to build instance.
Oct 02 08:54:39 compute-0 nova_compute[260603]: 2025-10-02 08:54:39.595 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 298 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct 02 08:54:40 compute-0 nova_compute[260603]: 2025-10-02 08:54:40.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:40 compute-0 sudo[392040]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:54:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:54:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:54:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:54:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:54:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:54:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0e67ec3b-ffdc-46ce-afff-85c14a9cccca does not exist
Oct 02 08:54:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:54:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:54:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0443375a-8f02-43a1-b6a6-2e154b0d269b does not exist
Oct 02 08:54:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4af6aecd-ed27-4f06-afe8-0d09ae598972 does not exist
Oct 02 08:54:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:54:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:54:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:54:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:54:40 compute-0 sudo[392094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:40 compute-0 sudo[392094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:40 compute-0 sudo[392094]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:40 compute-0 sudo[392119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:54:40 compute-0 sudo[392119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:40 compute-0 sudo[392119]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:40 compute-0 sudo[392144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:40 compute-0 sudo[392144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:40 compute-0 sudo[392144]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:40 compute-0 sudo[392169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:54:40 compute-0 sudo[392169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:40 compute-0 podman[392234]: 2025-10-02 08:54:40.787730617 +0000 UTC m=+0.073411928 container create a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 08:54:40 compute-0 systemd[1]: Started libpod-conmon-a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20.scope.
Oct 02 08:54:40 compute-0 podman[392234]: 2025-10-02 08:54:40.758191731 +0000 UTC m=+0.043873132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:54:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:40 compute-0 podman[392234]: 2025-10-02 08:54:40.882333347 +0000 UTC m=+0.168014698 container init a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 08:54:40 compute-0 podman[392234]: 2025-10-02 08:54:40.888483031 +0000 UTC m=+0.174164382 container start a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 08:54:40 compute-0 podman[392234]: 2025-10-02 08:54:40.892038695 +0000 UTC m=+0.177720086 container attach a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:54:40 compute-0 unruffled_goldberg[392250]: 167 167
Oct 02 08:54:40 compute-0 systemd[1]: libpod-a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20.scope: Deactivated successfully.
Oct 02 08:54:40 compute-0 podman[392234]: 2025-10-02 08:54:40.899214502 +0000 UTC m=+0.184895853 container died a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e926b3787a29684bbb8b93386e1aad3a1986634cd30f435aab4fec66881ba274-merged.mount: Deactivated successfully.
Oct 02 08:54:40 compute-0 podman[392234]: 2025-10-02 08:54:40.956403065 +0000 UTC m=+0.242084416 container remove a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:54:40 compute-0 systemd[1]: libpod-conmon-a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20.scope: Deactivated successfully.
Oct 02 08:54:41 compute-0 ceph-mon[74477]: pgmap v2294: 305 pgs: 305 active+clean; 298 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct 02 08:54:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:54:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:54:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:54:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:54:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:54:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:54:41 compute-0 podman[392274]: 2025-10-02 08:54:41.248593028 +0000 UTC m=+0.081366261 container create 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 08:54:41 compute-0 systemd[1]: Started libpod-conmon-0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756.scope.
Oct 02 08:54:41 compute-0 podman[392274]: 2025-10-02 08:54:41.208115055 +0000 UTC m=+0.040888368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:54:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:41 compute-0 podman[392274]: 2025-10-02 08:54:41.347719801 +0000 UTC m=+0.180493114 container init 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 08:54:41 compute-0 podman[392274]: 2025-10-02 08:54:41.355322422 +0000 UTC m=+0.188095675 container start 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:54:41 compute-0 podman[392274]: 2025-10-02 08:54:41.3587192 +0000 UTC m=+0.191492463 container attach 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:54:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 298 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct 02 08:54:42 compute-0 nova_compute[260603]: 2025-10-02 08:54:42.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:42 compute-0 suspicious_golick[392290]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:54:42 compute-0 suspicious_golick[392290]: --> relative data size: 1.0
Oct 02 08:54:42 compute-0 suspicious_golick[392290]: --> All data devices are unavailable
Oct 02 08:54:42 compute-0 systemd[1]: libpod-0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756.scope: Deactivated successfully.
Oct 02 08:54:42 compute-0 podman[392274]: 2025-10-02 08:54:42.509199174 +0000 UTC m=+1.341972437 container died 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 08:54:42 compute-0 systemd[1]: libpod-0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756.scope: Consumed 1.070s CPU time.
Oct 02 08:54:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270-merged.mount: Deactivated successfully.
Oct 02 08:54:42 compute-0 podman[392274]: 2025-10-02 08:54:42.582483847 +0000 UTC m=+1.415257070 container remove 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:54:42 compute-0 systemd[1]: libpod-conmon-0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756.scope: Deactivated successfully.
Oct 02 08:54:42 compute-0 sudo[392169]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:42 compute-0 sudo[392331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:42 compute-0 sudo[392331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:42 compute-0 sudo[392331]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:42 compute-0 sudo[392356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:54:42 compute-0 sudo[392356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:42 compute-0 sudo[392356]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:42 compute-0 sudo[392381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:42 compute-0 sudo[392381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:42 compute-0 sudo[392381]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:42 compute-0 sudo[392406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:54:42 compute-0 sudo[392406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.044572) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283044604, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 965, "num_deletes": 251, "total_data_size": 1272138, "memory_usage": 1294584, "flush_reason": "Manual Compaction"}
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283052214, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 812145, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47642, "largest_seqno": 48606, "table_properties": {"data_size": 808200, "index_size": 1597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10571, "raw_average_key_size": 20, "raw_value_size": 799624, "raw_average_value_size": 1586, "num_data_blocks": 71, "num_entries": 504, "num_filter_entries": 504, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395203, "oldest_key_time": 1759395203, "file_creation_time": 1759395283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 7706 microseconds, and 3624 cpu microseconds.
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.052273) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 812145 bytes OK
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.052298) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.053802) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.053828) EVENT_LOG_v1 {"time_micros": 1759395283053819, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.053851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1267484, prev total WAL file size 1267484, number of live WAL files 2.
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.054605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303033' seq:0, type:0; will stop at (end)
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(793KB)], [110(10MB)]
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283054627, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11396015, "oldest_snapshot_seqno": -1}
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6857 keys, 8538723 bytes, temperature: kUnknown
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283102954, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8538723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8494333, "index_size": 26166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178739, "raw_average_key_size": 26, "raw_value_size": 8373118, "raw_average_value_size": 1221, "num_data_blocks": 1022, "num_entries": 6857, "num_filter_entries": 6857, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:54:43 compute-0 ceph-mon[74477]: pgmap v2295: 305 pgs: 305 active+clean; 298 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.103313) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8538723 bytes
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.106132) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.3 rd, 176.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(24.5) write-amplify(10.5) OK, records in: 7341, records dropped: 484 output_compression: NoCompression
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.106165) EVENT_LOG_v1 {"time_micros": 1759395283106151, "job": 66, "event": "compaction_finished", "compaction_time_micros": 48439, "compaction_time_cpu_micros": 20038, "output_level": 6, "num_output_files": 1, "total_output_size": 8538723, "num_input_records": 7341, "num_output_records": 6857, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283106602, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283110398, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.054540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:54:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:54:43 compute-0 podman[392471]: 2025-10-02 08:54:43.362804626 +0000 UTC m=+0.053491797 container create 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:54:43 compute-0 systemd[1]: Started libpod-conmon-7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216.scope.
Oct 02 08:54:43 compute-0 podman[392471]: 2025-10-02 08:54:43.340618153 +0000 UTC m=+0.031305364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:54:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:43 compute-0 podman[392471]: 2025-10-02 08:54:43.474352033 +0000 UTC m=+0.165039294 container init 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 08:54:43 compute-0 podman[392471]: 2025-10-02 08:54:43.481618353 +0000 UTC m=+0.172305524 container start 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 08:54:43 compute-0 vigilant_heisenberg[392486]: 167 167
Oct 02 08:54:43 compute-0 systemd[1]: libpod-7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216.scope: Deactivated successfully.
Oct 02 08:54:43 compute-0 podman[392471]: 2025-10-02 08:54:43.489185422 +0000 UTC m=+0.179872673 container attach 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:54:43 compute-0 podman[392471]: 2025-10-02 08:54:43.490209175 +0000 UTC m=+0.180896376 container died 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:54:43 compute-0 nova_compute[260603]: 2025-10-02 08:54:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:43 compute-0 nova_compute[260603]: 2025-10-02 08:54:43.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:54:43 compute-0 nova_compute[260603]: 2025-10-02 08:54:43.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba94bb24485ccd622723c680953348650b65bb3baa90b2becaf4f8257c15c213-merged.mount: Deactivated successfully.
Oct 02 08:54:43 compute-0 podman[392471]: 2025-10-02 08:54:43.543202985 +0000 UTC m=+0.233890146 container remove 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 02 08:54:43 compute-0 systemd[1]: libpod-conmon-7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216.scope: Deactivated successfully.
Oct 02 08:54:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 114 op/s
Oct 02 08:54:43 compute-0 nova_compute[260603]: 2025-10-02 08:54:43.757 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:54:43 compute-0 nova_compute[260603]: 2025-10-02 08:54:43.757 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:54:43 compute-0 nova_compute[260603]: 2025-10-02 08:54:43.758 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:54:43 compute-0 nova_compute[260603]: 2025-10-02 08:54:43.758 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce9a5c17-646f-4ba2-a974-90e4b864872e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:43 compute-0 podman[392510]: 2025-10-02 08:54:43.771358398 +0000 UTC m=+0.079920254 container create 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:54:43 compute-0 systemd[1]: Started libpod-conmon-16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956.scope.
Oct 02 08:54:43 compute-0 podman[392510]: 2025-10-02 08:54:43.738662261 +0000 UTC m=+0.047224137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:54:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:43 compute-0 podman[392510]: 2025-10-02 08:54:43.896352231 +0000 UTC m=+0.204914097 container init 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:54:43 compute-0 podman[392510]: 2025-10-02 08:54:43.910097547 +0000 UTC m=+0.218659423 container start 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 08:54:43 compute-0 podman[392510]: 2025-10-02 08:54:43.914197937 +0000 UTC m=+0.222759893 container attach 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.179 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.180 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.180 2 INFO nova.compute.manager [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Unshelving
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.306 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.307 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.313 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'pci_requests' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.326 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'numa_topology' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.370 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.370 2 INFO nova.compute.claims [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.501 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.533 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.534 2 DEBUG nova.compute.provider_tree [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.557 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.595 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 08:54:44 compute-0 nova_compute[260603]: 2025-10-02 08:54:44.686 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]: {
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:     "0": [
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:         {
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "devices": [
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "/dev/loop3"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             ],
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_name": "ceph_lv0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_size": "21470642176",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "name": "ceph_lv0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "tags": {
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cluster_name": "ceph",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.crush_device_class": "",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.encrypted": "0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osd_id": "0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.type": "block",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.vdo": "0"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             },
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "type": "block",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "vg_name": "ceph_vg0"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:         }
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:     ],
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:     "1": [
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:         {
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "devices": [
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "/dev/loop4"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             ],
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_name": "ceph_lv1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_size": "21470642176",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "name": "ceph_lv1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "tags": {
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cluster_name": "ceph",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.crush_device_class": "",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.encrypted": "0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osd_id": "1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.type": "block",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.vdo": "0"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             },
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "type": "block",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "vg_name": "ceph_vg1"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:         }
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:     ],
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:     "2": [
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:         {
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "devices": [
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "/dev/loop5"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             ],
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_name": "ceph_lv2",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_size": "21470642176",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "name": "ceph_lv2",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "tags": {
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.cluster_name": "ceph",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.crush_device_class": "",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.encrypted": "0",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osd_id": "2",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.type": "block",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:                 "ceph.vdo": "0"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             },
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "type": "block",
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:             "vg_name": "ceph_vg2"
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:         }
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]:     ]
Oct 02 08:54:44 compute-0 reverent_proskuriakova[392527]: }
Oct 02 08:54:44 compute-0 systemd[1]: libpod-16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956.scope: Deactivated successfully.
Oct 02 08:54:44 compute-0 podman[392510]: 2025-10-02 08:54:44.757844104 +0000 UTC m=+1.066406000 container died 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 08:54:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18-merged.mount: Deactivated successfully.
Oct 02 08:54:44 compute-0 podman[392510]: 2025-10-02 08:54:44.835775144 +0000 UTC m=+1.144336990 container remove 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 08:54:44 compute-0 systemd[1]: libpod-conmon-16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956.scope: Deactivated successfully.
Oct 02 08:54:44 compute-0 sudo[392406]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:44 compute-0 sudo[392570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:44 compute-0 sudo[392570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:44 compute-0 sudo[392570]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:45 compute-0 sudo[392595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:54:45 compute-0 sudo[392595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:45 compute-0 sudo[392595]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:45 compute-0 ceph-mon[74477]: pgmap v2296: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 114 op/s
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:54:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/345732408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:45 compute-0 sudo[392620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:45 compute-0 sudo[392620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:45 compute-0 sudo[392620]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.191 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.203 2 DEBUG nova.compute.provider_tree [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.227 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.258 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:45 compute-0 sudo[392647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:54:45 compute-0 sudo[392647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.498 2 INFO nova.network.neutron [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating port 6c4955ab-011a-4e64-9871-c01b4818d740 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 08:54:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 08:54:45 compute-0 podman[392712]: 2025-10-02 08:54:45.695612844 +0000 UTC m=+0.043333605 container create 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:54:45 compute-0 systemd[1]: Started libpod-conmon-7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6.scope.
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.736 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.754 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.755 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.755 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:45 compute-0 nova_compute[260603]: 2025-10-02 08:54:45.756 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:45 compute-0 podman[392712]: 2025-10-02 08:54:45.675405153 +0000 UTC m=+0.023125914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:54:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:45 compute-0 podman[392712]: 2025-10-02 08:54:45.789907683 +0000 UTC m=+0.137628454 container init 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 08:54:45 compute-0 podman[392712]: 2025-10-02 08:54:45.796007287 +0000 UTC m=+0.143728008 container start 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 08:54:45 compute-0 podman[392712]: 2025-10-02 08:54:45.79928252 +0000 UTC m=+0.147003291 container attach 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:54:45 compute-0 stupefied_swartz[392729]: 167 167
Oct 02 08:54:45 compute-0 systemd[1]: libpod-7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6.scope: Deactivated successfully.
Oct 02 08:54:45 compute-0 podman[392712]: 2025-10-02 08:54:45.801419389 +0000 UTC m=+0.149140110 container died 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 08:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b73da2309655c36b3cc5791c153228a05faf88e28c7ef51943e3584f76ea1a4-merged.mount: Deactivated successfully.
Oct 02 08:54:45 compute-0 podman[392712]: 2025-10-02 08:54:45.837655457 +0000 UTC m=+0.185376188 container remove 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 08:54:45 compute-0 systemd[1]: libpod-conmon-7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6.scope: Deactivated successfully.
Oct 02 08:54:46 compute-0 podman[392754]: 2025-10-02 08:54:46.054903675 +0000 UTC m=+0.071055804 container create 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:54:46 compute-0 systemd[1]: Started libpod-conmon-3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1.scope.
Oct 02 08:54:46 compute-0 podman[392754]: 2025-10-02 08:54:46.021888018 +0000 UTC m=+0.038040207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:54:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/345732408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:46 compute-0 podman[392754]: 2025-10-02 08:54:46.180449905 +0000 UTC m=+0.196602094 container init 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:54:46 compute-0 podman[392754]: 2025-10-02 08:54:46.194382226 +0000 UTC m=+0.210534355 container start 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 08:54:46 compute-0 podman[392754]: 2025-10-02 08:54:46.198472507 +0000 UTC m=+0.214624696 container attach 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:54:46 compute-0 nova_compute[260603]: 2025-10-02 08:54:46.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:46 compute-0 nova_compute[260603]: 2025-10-02 08:54:46.732 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:54:46 compute-0 nova_compute[260603]: 2025-10-02 08:54:46.733 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:54:46 compute-0 nova_compute[260603]: 2025-10-02 08:54:46.734 2 DEBUG nova.network.neutron [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:54:46 compute-0 nova_compute[260603]: 2025-10-02 08:54:46.844 2 DEBUG nova.compute.manager [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:46 compute-0 nova_compute[260603]: 2025-10-02 08:54:46.846 2 DEBUG nova.compute.manager [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:54:46 compute-0 nova_compute[260603]: 2025-10-02 08:54:46.846 2 DEBUG oslo_concurrency.lockutils [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:54:47 compute-0 ceph-mon[74477]: pgmap v2297: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 08:54:47 compute-0 nova_compute[260603]: 2025-10-02 08:54:47.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:47 compute-0 goofy_keller[392771]: {
Oct 02 08:54:47 compute-0 goofy_keller[392771]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "osd_id": 2,
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "type": "bluestore"
Oct 02 08:54:47 compute-0 goofy_keller[392771]:     },
Oct 02 08:54:47 compute-0 goofy_keller[392771]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "osd_id": 1,
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "type": "bluestore"
Oct 02 08:54:47 compute-0 goofy_keller[392771]:     },
Oct 02 08:54:47 compute-0 goofy_keller[392771]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "osd_id": 0,
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:54:47 compute-0 goofy_keller[392771]:         "type": "bluestore"
Oct 02 08:54:47 compute-0 goofy_keller[392771]:     }
Oct 02 08:54:47 compute-0 goofy_keller[392771]: }
Oct 02 08:54:47 compute-0 podman[392754]: 2025-10-02 08:54:47.275464911 +0000 UTC m=+1.291617000 container died 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:54:47 compute-0 systemd[1]: libpod-3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1.scope: Deactivated successfully.
Oct 02 08:54:47 compute-0 systemd[1]: libpod-3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1.scope: Consumed 1.083s CPU time.
Oct 02 08:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a-merged.mount: Deactivated successfully.
Oct 02 08:54:47 compute-0 podman[392754]: 2025-10-02 08:54:47.358880035 +0000 UTC m=+1.375032134 container remove 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 08:54:47 compute-0 systemd[1]: libpod-conmon-3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1.scope: Deactivated successfully.
Oct 02 08:54:47 compute-0 sudo[392647]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:54:47 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:54:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:54:47 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:54:47 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9dc656b6-7f9c-4bf4-958b-0043e896a332 does not exist
Oct 02 08:54:47 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2b2ba02a-b577-46f8-abfe-91425eea59bb does not exist
Oct 02 08:54:47 compute-0 sudo[392817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:54:47 compute-0 sudo[392817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:47 compute-0 nova_compute[260603]: 2025-10-02 08:54:47.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:47 compute-0 sudo[392817]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:47 compute-0 nova_compute[260603]: 2025-10-02 08:54:47.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:47 compute-0 nova_compute[260603]: 2025-10-02 08:54:47.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:47 compute-0 nova_compute[260603]: 2025-10-02 08:54:47.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:47 compute-0 nova_compute[260603]: 2025-10-02 08:54:47.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:54:47 compute-0 nova_compute[260603]: 2025-10-02 08:54:47.552 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:47 compute-0 sudo[392842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:54:47 compute-0 sudo[392842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:54:47 compute-0 sudo[392842]: pam_unix(sudo:session): session closed for user root
Oct 02 08:54:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 08:54:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:54:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361077569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.005 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.145 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.146 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.155 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.155 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:54:48 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:54:48 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:54:48 compute-0 ceph-mon[74477]: pgmap v2298: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 08:54:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1361077569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.519 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.521 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3362MB free_disk=59.92179870605469GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.616 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ce9a5c17-646f-4ba2-a974-90e4b864872e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.616 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 1e566f75-d068-4a59-bf53-0a71ad8c5e45 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.617 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 29e25f63-82b1-45cf-8916-41d9acc44ac9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.617 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.617 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.734 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:48 compute-0 nova_compute[260603]: 2025-10-02 08:54:48.978 2 DEBUG nova.network.neutron [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.011 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.015 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.016 2 INFO nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating image(s)
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.049 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.055 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.058 2 DEBUG oslo_concurrency.lockutils [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.059 2 DEBUG nova.network.neutron [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.106 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.138 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.143 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "01a08b635df1a702c6eaac83ad3113a321b2a912" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.144 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "01a08b635df1a702c6eaac83ad3113a321b2a912" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:54:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015154286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.319 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.327 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.349 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.373 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.373 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.437 2 DEBUG nova.virt.libvirt.imagebackend [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 08:54:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3015154286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.495 2 DEBUG nova.virt.libvirt.imagebackend [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.496 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] cloning images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c@snap to None/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.625 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "01a08b635df1a702c6eaac83ad3113a321b2a912" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.812 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'migration_context' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:49 compute-0 nova_compute[260603]: 2025-10-02 08:54:49.921 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] flattening vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.363 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Image rbd:vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.364 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.364 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Ensure instance console log exists: /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.365 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.365 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.366 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.370 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start _get_guest_xml network_info=[{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:54:21Z,direct_url=<?>,disk_format='raw',id=fb02f3f9-de2f-45c4-91bc-9579a3c7316c,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-199210370-shelved',owner='21220064aba34c77a8af713fad28c08b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:54:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.375 2 WARNING nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.382 2 DEBUG nova.virt.libvirt.host [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.383 2 DEBUG nova.virt.libvirt.host [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.386 2 DEBUG nova.virt.libvirt.host [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.387 2 DEBUG nova.virt.libvirt.host [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.387 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.388 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:54:21Z,direct_url=<?>,disk_format='raw',id=fb02f3f9-de2f-45c4-91bc-9579a3c7316c,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-199210370-shelved',owner='21220064aba34c77a8af713fad28c08b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:54:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.389 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.389 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.389 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.390 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.390 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.391 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.392 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.392 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.393 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.394 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.394 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.423 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:50 compute-0 ceph-mon[74477]: pgmap v2299: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.917 2 DEBUG nova.network.neutron [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.918 2 DEBUG nova.network.neutron [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:54:50 compute-0 nova_compute[260603]: 2025-10-02 08:54:50.933 2 DEBUG oslo_concurrency.lockutils [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:54:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:54:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788032437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.011 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.036 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.040 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.369 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.371 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1788032437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:54:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:54:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113836995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.556 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.558 2 DEBUG nova.virt.libvirt.vif [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='fb02f3f9-de2f-45c4-91bc-9579a3c7316c',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-203364505',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video
_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member',shelved_at='2025-10-02T08:54:29.331405',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fb02f3f9-de2f-45c4-91bc-9579a3c7316c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:54:44Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.559 2 DEBUG nova.network.os_vif_util [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.560 2 DEBUG nova.network.os_vif_util [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.562 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'pci_devices' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.587 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <uuid>29e25f63-82b1-45cf-8916-41d9acc44ac9</uuid>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <name>instance-0000007c</name>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <nova:name>tempest-TestShelveInstance-server-199210370</nova:name>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:54:50</nova:creationTime>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <nova:user uuid="bd8daf03c9d144d7a60fe3f81abdfbb4">tempest-TestShelveInstance-783621896-project-member</nova:user>
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <nova:project uuid="21220064aba34c77a8af713fad28c08b">tempest-TestShelveInstance-783621896</nova:project>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="fb02f3f9-de2f-45c4-91bc-9579a3c7316c"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <nova:port uuid="6c4955ab-011a-4e64-9871-c01b4818d740">
Oct 02 08:54:51 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <system>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <entry name="serial">29e25f63-82b1-45cf-8916-41d9acc44ac9</entry>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <entry name="uuid">29e25f63-82b1-45cf-8916-41d9acc44ac9</entry>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </system>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <os>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   </os>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <features>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   </features>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk">
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       </source>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config">
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       </source>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:54:51 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:ec:f3:67"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <target dev="tap6c4955ab-01"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/console.log" append="off"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <video>
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </video>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <input type="keyboard" bus="usb"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:54:51 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:54:51 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:54:51 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:54:51 compute-0 nova_compute[260603]: </domain>
Oct 02 08:54:51 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.589 2 DEBUG nova.compute.manager [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Preparing to wait for external event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.589 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.590 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.590 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.590 2 DEBUG nova.virt.libvirt.vif [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='fb02f3f9-de2f-45c4-91bc-9579a3c7316c',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-203364505',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member',shelved_at='2025-10-02T08:54:29.331405',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fb02f3f9-de2f-45c4-91bc-9579a3c7316c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:54:44Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.591 2 DEBUG nova.network.os_vif_util [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.591 2 DEBUG nova.network.os_vif_util [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.592 2 DEBUG os_vif [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c4955ab-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c4955ab-01, col_values=(('external_ids', {'iface-id': '6c4955ab-011a-4e64-9871-c01b4818d740', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:f3:67', 'vm-uuid': '29e25f63-82b1-45cf-8916-41d9acc44ac9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:51 compute-0 NetworkManager[45129]: <info>  [1759395291.6018] manager: (tap6c4955ab-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.610 2 INFO os_vif [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01')
Oct 02 08:54:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 93 op/s
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.668 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.669 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.669 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No VIF found with MAC fa:16:3e:ec:f3:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.670 2 INFO nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Using config drive
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.695 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.721 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'ec2_ids' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:51 compute-0 nova_compute[260603]: 2025-10-02 08:54:51.780 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'keypairs' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:54:52 compute-0 podman[393209]: 2025-10-02 08:54:52.052396276 +0000 UTC m=+0.109199413 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:54:52 compute-0 podman[393208]: 2025-10-02 08:54:52.071281925 +0000 UTC m=+0.128852896 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.250 2 INFO nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating config drive at /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.260 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpef0_4ucw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.443 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpef0_4ucw" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4113836995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:54:52 compute-0 ceph-mon[74477]: pgmap v2300: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 93 op/s
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.482 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.487 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.535 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:52 compute-0 ovn_controller[152344]: 2025-10-02T08:54:52Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:dd:de 10.100.0.26
Oct 02 08:54:52 compute-0 ovn_controller[152344]: 2025-10-02T08:54:52Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:dd:de 10.100.0.26
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.712 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.713 2 INFO nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deleting local config drive /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config because it was imported into RBD.
Oct 02 08:54:52 compute-0 kernel: tap6c4955ab-01: entered promiscuous mode
Oct 02 08:54:52 compute-0 NetworkManager[45129]: <info>  [1759395292.7743] manager: (tap6c4955ab-01): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Oct 02 08:54:52 compute-0 ovn_controller[152344]: 2025-10-02T08:54:52Z|01336|binding|INFO|Claiming lport 6c4955ab-011a-4e64-9871-c01b4818d740 for this chassis.
Oct 02 08:54:52 compute-0 ovn_controller[152344]: 2025-10-02T08:54:52Z|01337|binding|INFO|6c4955ab-011a-4e64-9871-c01b4818d740: Claiming fa:16:3e:ec:f3:67 10.100.0.4
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.786 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.787 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 bound to our chassis
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.789 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct 02 08:54:52 compute-0 ovn_controller[152344]: 2025-10-02T08:54:52Z|01338|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 ovn-installed in OVS
Oct 02 08:54:52 compute-0 ovn_controller[152344]: 2025-10-02T08:54:52Z|01339|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 up in Southbound
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:52 compute-0 nova_compute[260603]: 2025-10-02 08:54:52.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.806 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5543afc9-e562-4de1-86ff-7161ffcfcdd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.807 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c4ca2f6-c1 in ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.811 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c4ca2f6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.812 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9abfc91f-fe65-492c-aa08-48e8cef15a36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.814 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a71c7a-6652-4be5-a6f8-11e1df43a9e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 systemd-udevd[393306]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.826 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1717ae-3961-422b-9688-82a48d928a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 systemd-machined[214636]: New machine qemu-159-instance-0000007c.
Oct 02 08:54:52 compute-0 NetworkManager[45129]: <info>  [1759395292.8411] device (tap6c4955ab-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:54:52 compute-0 NetworkManager[45129]: <info>  [1759395292.8427] device (tap6c4955ab-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:54:52 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-0000007c.
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.858 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f706e4f8-58ab-4520-9528-8009e26d6468]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.892 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[46f39986-9b36-4339-b1ca-b0e0c9d9141a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 systemd-udevd[393311]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:54:52 compute-0 NetworkManager[45129]: <info>  [1759395292.9056] manager: (tap5c4ca2f6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/530)
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.901 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60f16b78-6e0a-4220-8491-13d1a79e89ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.936 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb47d600-6196-4da6-bb8b-1fbde9f473e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.939 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cb72fb93-c5a6-4728-8ae4-a9d8098c7780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:52 compute-0 NetworkManager[45129]: <info>  [1759395292.9729] device (tap5c4ca2f6-c0): carrier: link connected
Oct 02 08:54:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.979 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c9925ce2-8e77-4c1a-9fb0-5decb2eb09c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.001 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6efae54a-42b8-43cd-a20f-ba2c26558f6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c4ca2f6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:2b:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617157, 'reachable_time': 33543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393339, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.019 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7884f34-ce6a-424b-9443-fcbe7a80dc5a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:2b16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617157, 'tstamp': 617157}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393340, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.034 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[482c369b-6e44-47d2-aea3-084220701c70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c4ca2f6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:2b:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617157, 'reachable_time': 33543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393341, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.074 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd81147-8ac4-49a4-8c69-25919856b567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.152 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c47a3f1b-ea6e-4e2b-9243-201e0a574a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.154 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c4ca2f6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.155 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.155 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c4ca2f6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:53 compute-0 NetworkManager[45129]: <info>  [1759395293.1586] manager: (tap5c4ca2f6-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Oct 02 08:54:53 compute-0 kernel: tap5c4ca2f6-c0: entered promiscuous mode
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.162 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c4ca2f6-c0, col_values=(('external_ids', {'iface-id': '2b6a47ce-d685-4242-a053-f63d07d5d559'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:54:53 compute-0 ovn_controller[152344]: 2025-10-02T08:54:53Z|01340|binding|INFO|Releasing lport 2b6a47ce-d685-4242-a053-f63d07d5d559 from this chassis (sb_readonly=0)
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.185 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b43abed4-907b-4eb9-998a-288e98cb412d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.189 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:54:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.190 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'env', 'PROCESS_TAG=haproxy-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c4ca2f6-ca60-420d-aded-392a44195bf1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:54:53 compute-0 podman[393415]: 2025-10-02 08:54:53.624341532 +0000 UTC m=+0.076113715 container create 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:54:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 358 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.0 MiB/s wr, 240 op/s
Oct 02 08:54:53 compute-0 podman[393415]: 2025-10-02 08:54:53.591549962 +0000 UTC m=+0.043322145 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:54:53 compute-0 systemd[1]: Started libpod-conmon-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293.scope.
Oct 02 08:54:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5854661b36c6863990b03e1ef3fd12b61c77135bce82b5bcc5ab4d576bfc16fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:54:53 compute-0 podman[393415]: 2025-10-02 08:54:53.72364807 +0000 UTC m=+0.175420263 container init 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:54:53 compute-0 podman[393415]: 2025-10-02 08:54:53.729503345 +0000 UTC m=+0.181275498 container start 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.757 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395293.7565522, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.758 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Started (Lifecycle Event)
Oct 02 08:54:53 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [NOTICE]   (393434) : New worker (393436) forked
Oct 02 08:54:53 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [NOTICE]   (393434) : Loading success.
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.797 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.801 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395293.7576892, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.801 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Paused (Lifecycle Event)
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.840 2 DEBUG nova.compute.manager [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.841 2 DEBUG oslo_concurrency.lockutils [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.841 2 DEBUG oslo_concurrency.lockutils [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.842 2 DEBUG oslo_concurrency.lockutils [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.842 2 DEBUG nova.compute.manager [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Processing event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.844 2 DEBUG nova.compute.manager [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.845 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.849 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.853 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395293.8488777, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.854 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Resumed (Lifecycle Event)
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.858 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance spawned successfully.
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.875 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.879 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:54:53 compute-0 nova_compute[260603]: 2025-10-02 08:54:53.900 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:54:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 02 08:54:54 compute-0 ceph-mon[74477]: pgmap v2301: 305 pgs: 305 active+clean; 358 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.0 MiB/s wr, 240 op/s
Oct 02 08:54:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 02 08:54:54 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.175 2 DEBUG nova.compute.manager [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.272 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 358 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 175 op/s
Oct 02 08:54:55 compute-0 ceph-mon[74477]: osdmap e289: 3 total, 3 up, 3 in
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.973 2 DEBUG nova.compute.manager [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.973 2 DEBUG oslo_concurrency.lockutils [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.974 2 DEBUG oslo_concurrency.lockutils [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.975 2 DEBUG oslo_concurrency.lockutils [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.976 2 DEBUG nova.compute.manager [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:54:55 compute-0 nova_compute[260603]: 2025-10-02 08:54:55.976 2 WARNING nova.compute.manager [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state None.
Oct 02 08:54:56 compute-0 nova_compute[260603]: 2025-10-02 08:54:56.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:54:56 compute-0 ceph-mon[74477]: pgmap v2303: 305 pgs: 305 active+clean; 358 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 175 op/s
Oct 02 08:54:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 307 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.2 MiB/s wr, 238 op/s
Oct 02 08:54:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:54:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:54:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:54:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:54:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:54:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:54:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:54:58 compute-0 nova_compute[260603]: 2025-10-02 08:54:58.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:54:58 compute-0 ceph-mon[74477]: pgmap v2304: 305 pgs: 305 active+clean; 307 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.2 MiB/s wr, 238 op/s
Oct 02 08:54:59 compute-0 podman[393446]: 2025-10-02 08:54:59.057380067 +0000 UTC m=+0.104772702 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:54:59 compute-0 podman[393445]: 2025-10-02 08:54:59.06188723 +0000 UTC m=+0.112481607 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 08:54:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.3 MiB/s wr, 287 op/s
Oct 02 08:55:00 compute-0 nova_compute[260603]: 2025-10-02 08:55:00.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:00 compute-0 ceph-mon[74477]: pgmap v2305: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.3 MiB/s wr, 287 op/s
Oct 02 08:55:01 compute-0 anacron[188284]: Job `cron.weekly' started
Oct 02 08:55:01 compute-0 anacron[188284]: Job `cron.weekly' terminated
Oct 02 08:55:01 compute-0 nova_compute[260603]: 2025-10-02 08:55:01.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.2 MiB/s wr, 287 op/s
Oct 02 08:55:02 compute-0 ceph-mon[74477]: pgmap v2306: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.2 MiB/s wr, 287 op/s
Oct 02 08:55:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 02 08:55:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 02 08:55:03 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 02 08:55:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 30 KiB/s wr, 127 op/s
Oct 02 08:55:04 compute-0 ceph-mon[74477]: osdmap e290: 3 total, 3 up, 3 in
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.610 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.611 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.612 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.613 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.613 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.616 2 INFO nova.compute.manager [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Terminating instance
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.618 2 DEBUG nova.compute.manager [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:55:04 compute-0 kernel: tap0114a231-1a (unregistering): left promiscuous mode
Oct 02 08:55:04 compute-0 NetworkManager[45129]: <info>  [1759395304.6713] device (tap0114a231-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:55:04 compute-0 ovn_controller[152344]: 2025-10-02T08:55:04Z|01341|binding|INFO|Releasing lport 0114a231-1abb-410d-8d5f-29713ba6ca28 from this chassis (sb_readonly=0)
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:04 compute-0 ovn_controller[152344]: 2025-10-02T08:55:04Z|01342|binding|INFO|Setting lport 0114a231-1abb-410d-8d5f-29713ba6ca28 down in Southbound
Oct 02 08:55:04 compute-0 ovn_controller[152344]: 2025-10-02T08:55:04Z|01343|binding|INFO|Removing iface tap0114a231-1a ovn-installed in OVS
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.695 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:dd:de 10.100.0.26'], port_security=['fa:16:3e:33:dd:de 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '1e566f75-d068-4a59-bf53-0a71ad8c5e45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bb814552-631a-471e-89ab-a7fd4a866ac7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999292f0-e1e9-4e7e-82ce-9e16433e3e2c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0114a231-1abb-410d-8d5f-29713ba6ca28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:55:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.697 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0114a231-1abb-410d-8d5f-29713ba6ca28 in datapath fcd51a14-855e-486f-92b0-d9c9ee06ef45 unbound from our chassis
Oct 02 08:55:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.699 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fcd51a14-855e-486f-92b0-d9c9ee06ef45, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:55:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a433e1b0-b857-42ac-b7f3-acf43c43a31c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.701 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 namespace which is not needed anymore
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:04 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct 02 08:55:04 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Consumed 13.141s CPU time.
Oct 02 08:55:04 compute-0 systemd-machined[214636]: Machine qemu-158-instance-0000007d terminated.
Oct 02 08:55:04 compute-0 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [NOTICE]   (391934) : haproxy version is 2.8.14-c23fe91
Oct 02 08:55:04 compute-0 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [NOTICE]   (391934) : path to executable is /usr/sbin/haproxy
Oct 02 08:55:04 compute-0 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [WARNING]  (391934) : Exiting Master process...
Oct 02 08:55:04 compute-0 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [WARNING]  (391934) : Exiting Master process...
Oct 02 08:55:04 compute-0 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [ALERT]    (391934) : Current worker (391954) exited with code 143 (Terminated)
Oct 02 08:55:04 compute-0 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [WARNING]  (391934) : All workers exited. Exiting... (0)
Oct 02 08:55:04 compute-0 systemd[1]: libpod-42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403.scope: Deactivated successfully.
Oct 02 08:55:04 compute-0 podman[393511]: 2025-10-02 08:55:04.868758457 +0000 UTC m=+0.053127245 container died 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.872 2 INFO nova.virt.libvirt.driver [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance destroyed successfully.
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.873 2 DEBUG nova.objects.instance [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 1e566f75-d068-4a59-bf53-0a71ad8c5e45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.889 2 DEBUG nova.virt.libvirt.vif [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:54:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1505974006',display_name='tempest-TestNetworkBasicOps-server-1505974006',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1505974006',id=125,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOBsPfGQIKXrU0ixL8zdO1Id1043n5Wya2XViCYfbc8+hdkDDiOW4w1UIZN3cliMo4JpT768RX8ZTyGW3MECALsE1YSIbBikkYwsG7A75q2etUDqr7ZHw9V2TylyqodwPA==',key_name='tempest-TestNetworkBasicOps-1332028269',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-q4dl0p75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:54:39Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=1e566f75-d068-4a59-bf53-0a71ad8c5e45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.890 2 DEBUG nova.network.os_vif_util [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.890 2 DEBUG nova.network.os_vif_util [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.891 2 DEBUG os_vif [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0114a231-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.899 2 INFO os_vif [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a')
Oct 02 08:55:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403-userdata-shm.mount: Deactivated successfully.
Oct 02 08:55:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d19e482ac1bfb83fa0f30e489e044044eba2a82cab9b325cf7843b422f0931f0-merged.mount: Deactivated successfully.
Oct 02 08:55:04 compute-0 podman[393511]: 2025-10-02 08:55:04.919005131 +0000 UTC m=+0.103373909 container cleanup 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 08:55:04 compute-0 systemd[1]: libpod-conmon-42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403.scope: Deactivated successfully.
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.950 2 DEBUG nova.compute.manager [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-unplugged-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.950 2 DEBUG oslo_concurrency.lockutils [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.951 2 DEBUG oslo_concurrency.lockutils [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.951 2 DEBUG oslo_concurrency.lockutils [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.951 2 DEBUG nova.compute.manager [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] No waiting events found dispatching network-vif-unplugged-0114a231-1abb-410d-8d5f-29713ba6ca28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:55:04 compute-0 nova_compute[260603]: 2025-10-02 08:55:04.951 2 DEBUG nova.compute.manager [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-unplugged-0114a231-1abb-410d-8d5f-29713ba6ca28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:55:05 compute-0 podman[393571]: 2025-10-02 08:55:05.008113105 +0000 UTC m=+0.053805846 container remove 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.015 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc06c8d-7eae-4b23-a3b2-a7698553da40]: (4, ('Thu Oct  2 08:55:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 (42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403)\n42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403\nThu Oct  2 08:55:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 (42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403)\n42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.018 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e362efd-adc3-4db1-8457-a99afe449827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.019 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcd51a14-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:05 compute-0 kernel: tapfcd51a14-80: left promiscuous mode
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.046 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4234c23e-86c1-49e0-9e1b-3b87ef238e88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:05 compute-0 ceph-mon[74477]: pgmap v2308: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 30 KiB/s wr, 127 op/s
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.077 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df7eaccb-451c-484e-ab80-0541bffdc64a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.078 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[404d8e38-002e-49ab-9f47-ce5b16f50361]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.101 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1006602f-e9cf-4191-96b0-6e52ef0c903d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615581, 'reachable_time': 41535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393586, 'error': None, 'target': 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:05 compute-0 systemd[1]: run-netns-ovnmeta\x2dfcd51a14\x2d855e\x2d486f\x2d92b0\x2dd9c9ee06ef45.mount: Deactivated successfully.
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.105 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:55:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.105 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a7551fbb-b1cc-4899-97fe-48eb899608db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.234 2 INFO nova.virt.libvirt.driver [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Deleting instance files /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45_del
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.235 2 INFO nova.virt.libvirt.driver [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Deletion of /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45_del complete
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.295 2 INFO nova.compute.manager [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Took 0.68 seconds to destroy the instance on the hypervisor.
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.295 2 DEBUG oslo.service.loopingcall [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.296 2 DEBUG nova.compute.manager [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:55:05 compute-0 nova_compute[260603]: 2025-10-02 08:55:05.296 2 DEBUG nova.network.neutron [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:55:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 113 op/s
Oct 02 08:55:06 compute-0 nova_compute[260603]: 2025-10-02 08:55:06.542 2 DEBUG nova.network.neutron [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:55:06 compute-0 nova_compute[260603]: 2025-10-02 08:55:06.560 2 INFO nova.compute.manager [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Took 1.26 seconds to deallocate network for instance.
Oct 02 08:55:06 compute-0 nova_compute[260603]: 2025-10-02 08:55:06.603 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:06 compute-0 nova_compute[260603]: 2025-10-02 08:55:06.604 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:06 compute-0 nova_compute[260603]: 2025-10-02 08:55:06.682 2 DEBUG oslo_concurrency.processutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.051 2 DEBUG nova.compute.manager [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.052 2 DEBUG oslo_concurrency.lockutils [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.052 2 DEBUG oslo_concurrency.lockutils [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.053 2 DEBUG oslo_concurrency.lockutils [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.053 2 DEBUG nova.compute.manager [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] No waiting events found dispatching network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.054 2 WARNING nova.compute.manager [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received unexpected event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 for instance with vm_state deleted and task_state None.
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.054 2 DEBUG nova.compute.manager [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-deleted-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:07 compute-0 ceph-mon[74477]: pgmap v2309: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 113 op/s
Oct 02 08:55:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:55:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3590211806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.122 2 DEBUG oslo_concurrency.processutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.129 2 DEBUG nova.compute.provider_tree [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.151 2 DEBUG nova.scheduler.client.report [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.176 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.214 2 INFO nova.scheduler.client.report [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 1e566f75-d068-4a59-bf53-0a71ad8c5e45
Oct 02 08:55:07 compute-0 nova_compute[260603]: 2025-10-02 08:55:07.305 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:07 compute-0 ovn_controller[152344]: 2025-10-02T08:55:07Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:f3:67 10.100.0.4
Oct 02 08:55:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 227 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 41 KiB/s wr, 105 op/s
Oct 02 08:55:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3590211806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:09 compute-0 ceph-mon[74477]: pgmap v2310: 305 pgs: 305 active+clean; 227 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 41 KiB/s wr, 105 op/s
Oct 02 08:55:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 200 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 33 KiB/s wr, 89 op/s
Oct 02 08:55:09 compute-0 nova_compute[260603]: 2025-10-02 08:55:09.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:10 compute-0 nova_compute[260603]: 2025-10-02 08:55:10.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:10 compute-0 ovn_controller[152344]: 2025-10-02T08:55:10Z|01344|binding|INFO|Releasing lport 2b6a47ce-d685-4242-a053-f63d07d5d559 from this chassis (sb_readonly=0)
Oct 02 08:55:10 compute-0 ovn_controller[152344]: 2025-10-02T08:55:10Z|01345|binding|INFO|Releasing lport f80df61f-a234-49f9-816c-2f176ad94b02 from this chassis (sb_readonly=0)
Oct 02 08:55:10 compute-0 nova_compute[260603]: 2025-10-02 08:55:10.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:11 compute-0 ceph-mon[74477]: pgmap v2311: 305 pgs: 305 active+clean; 200 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 33 KiB/s wr, 89 op/s
Oct 02 08:55:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 200 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 33 KiB/s wr, 89 op/s
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.715 2 DEBUG nova.compute.manager [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.715 2 DEBUG nova.compute.manager [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing instance network info cache due to event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.716 2 DEBUG oslo_concurrency.lockutils [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.716 2 DEBUG oslo_concurrency.lockutils [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.716 2 DEBUG nova.network.neutron [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.791 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.792 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.792 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.793 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.793 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.795 2 INFO nova.compute.manager [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Terminating instance
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.797 2 DEBUG nova.compute.manager [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:55:12 compute-0 kernel: tap4fd5381b-e8 (unregistering): left promiscuous mode
Oct 02 08:55:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.847 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:55:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.848 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:55:12 compute-0 NetworkManager[45129]: <info>  [1759395312.8898] device (tap4fd5381b-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:55:12 compute-0 ovn_controller[152344]: 2025-10-02T08:55:12Z|01346|binding|INFO|Releasing lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 from this chassis (sb_readonly=0)
Oct 02 08:55:12 compute-0 ovn_controller[152344]: 2025-10-02T08:55:12Z|01347|binding|INFO|Setting lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 down in Southbound
Oct 02 08:55:12 compute-0 ovn_controller[152344]: 2025-10-02T08:55:12Z|01348|binding|INFO|Removing iface tap4fd5381b-e8 ovn-installed in OVS
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.901 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:0b:fe 10.100.0.10'], port_security=['fa:16:3e:04:0b:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ce9a5c17-646f-4ba2-a974-90e4b864872e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fe3b6c1-e179-4b79-8e83-9d4b983838b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=300ebc04-b3b0-45d9-94e7-4b6ae68b1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4fd5381b-e8ba-485f-9cb6-692a37b716a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:55:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.902 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 in datapath 7511e8c2-7c26-4eea-b465-32e904aba1a9 unbound from our chassis
Oct 02 08:55:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.904 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7511e8c2-7c26-4eea-b465-32e904aba1a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:55:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.905 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc87fb6-03b3-4096-8c32-22ce9810a658]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.905 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 namespace which is not needed anymore
Oct 02 08:55:12 compute-0 nova_compute[260603]: 2025-10-02 08:55:12.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:12 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 02 08:55:12 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Consumed 15.993s CPU time.
Oct 02 08:55:12 compute-0 systemd-machined[214636]: Machine qemu-156-instance-0000007b terminated.
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.039 2 INFO nova.virt.libvirt.driver [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance destroyed successfully.
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.040 2 DEBUG nova.objects.instance [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid ce9a5c17-646f-4ba2-a974-90e4b864872e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:55:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.052 2 DEBUG nova.virt.libvirt.vif [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1285499332',display_name='tempest-TestNetworkBasicOps-server-1285499332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1285499332',id=123,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAz8IMZ7Jf5vn2RyFjWWITiqepr2aLtS8oy68xeil523gnhXxTAuBsHLCjnOZ53PJL0p7gfBo3vI6+ggEPUbO2Sg7r3zAGgPHkKwwrkmc/GeJgKH1QAb2g1ODGvS4w3VLw==',key_name='tempest-TestNetworkBasicOps-2122334356',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:53:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5j8y63ir',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:53:49Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=ce9a5c17-646f-4ba2-a974-90e4b864872e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.053 2 DEBUG nova.network.os_vif_util [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.055 2 DEBUG nova.network.os_vif_util [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:55:13 compute-0 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [NOTICE]   (390549) : haproxy version is 2.8.14-c23fe91
Oct 02 08:55:13 compute-0 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [NOTICE]   (390549) : path to executable is /usr/sbin/haproxy
Oct 02 08:55:13 compute-0 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [WARNING]  (390549) : Exiting Master process...
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.056 2 DEBUG os_vif [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:55:13 compute-0 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [ALERT]    (390549) : Current worker (390551) exited with code 143 (Terminated)
Oct 02 08:55:13 compute-0 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [WARNING]  (390549) : All workers exited. Exiting... (0)
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.060 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd5381b-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:13 compute-0 systemd[1]: libpod-bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a.scope: Deactivated successfully.
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:55:13 compute-0 podman[393634]: 2025-10-02 08:55:13.068941234 +0000 UTC m=+0.057465733 container died bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.067 2 INFO os_vif [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8')
Oct 02 08:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-841e1ce90c14f958dd89444a0f9f00fd4fb528b405b275066734c073b28f4568-merged.mount: Deactivated successfully.
Oct 02 08:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a-userdata-shm.mount: Deactivated successfully.
Oct 02 08:55:13 compute-0 podman[393634]: 2025-10-02 08:55:13.103641523 +0000 UTC m=+0.092165952 container cleanup bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:55:13 compute-0 ceph-mon[74477]: pgmap v2312: 305 pgs: 305 active+clean; 200 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 33 KiB/s wr, 89 op/s
Oct 02 08:55:13 compute-0 systemd[1]: libpod-conmon-bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a.scope: Deactivated successfully.
Oct 02 08:55:13 compute-0 podman[393689]: 2025-10-02 08:55:13.184988342 +0000 UTC m=+0.048416306 container remove bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.193 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[49162d11-84cc-4951-ac85-4c967289f6c5]: (4, ('Thu Oct  2 08:55:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 (bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a)\nbd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a\nThu Oct  2 08:55:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 (bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a)\nbd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.196 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae4b5ad-71f2-4443-b7e5-aa463d4ddcfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.197 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7511e8c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:13 compute-0 kernel: tap7511e8c2-70: left promiscuous mode
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.237 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2275ece-f5f1-4544-b3bf-f34be25cfc1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.267 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[675827a4-12e4-4a88-8364-d6de7e6def18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.269 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[634ffde8-2ac0-4415-a52b-f29b905bd2a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.290 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6de0d24-26cd-4459-8bb6-91ffae66b4dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610608, 'reachable_time': 43695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393706, 'error': None, 'target': 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d7511e8c2\x2d7c26\x2d4eea\x2db465\x2d32e904aba1a9.mount: Deactivated successfully.
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.293 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:55:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.293 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[30bf4fd1-0ccc-4fe7-9671-7f3f829054ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.496 2 INFO nova.virt.libvirt.driver [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Deleting instance files /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e_del
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.498 2 INFO nova.virt.libvirt.driver [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Deletion of /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e_del complete
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.582 2 INFO nova.compute.manager [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.583 2 DEBUG oslo.service.loopingcall [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.583 2 DEBUG nova.compute.manager [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:55:13 compute-0 nova_compute[260603]: 2025-10-02 08:55:13.584 2 DEBUG nova.network.neutron [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:55:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 189 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 41 KiB/s wr, 96 op/s
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.613 2 DEBUG nova.compute.manager [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-unplugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.614 2 DEBUG oslo_concurrency.lockutils [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.615 2 DEBUG oslo_concurrency.lockutils [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.615 2 DEBUG oslo_concurrency.lockutils [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.616 2 DEBUG nova.compute.manager [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] No waiting events found dispatching network-vif-unplugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.616 2 DEBUG nova.compute.manager [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-unplugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.867 2 DEBUG nova.compute.manager [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.868 2 DEBUG nova.compute.manager [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.868 2 DEBUG oslo_concurrency.lockutils [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.869 2 DEBUG oslo_concurrency.lockutils [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:55:14 compute-0 nova_compute[260603]: 2025-10-02 08:55:14.869 2 DEBUG nova.network.neutron [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.021 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.022 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.022 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.023 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.023 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.025 2 INFO nova.compute.manager [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Terminating instance
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.027 2 DEBUG nova.compute.manager [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:55:15 compute-0 kernel: tap6c4955ab-01 (unregistering): left promiscuous mode
Oct 02 08:55:15 compute-0 NetworkManager[45129]: <info>  [1759395315.0824] device (tap6c4955ab-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 ovn_controller[152344]: 2025-10-02T08:55:15Z|01349|binding|INFO|Releasing lport 6c4955ab-011a-4e64-9871-c01b4818d740 from this chassis (sb_readonly=0)
Oct 02 08:55:15 compute-0 ovn_controller[152344]: 2025-10-02T08:55:15Z|01350|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 down in Southbound
Oct 02 08:55:15 compute-0 ovn_controller[152344]: 2025-10-02T08:55:15Z|01351|binding|INFO|Removing iface tap6c4955ab-01 ovn-installed in OVS
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 ceph-mon[74477]: pgmap v2313: 305 pgs: 305 active+clean; 189 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 41 KiB/s wr, 96 op/s
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.150 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.152 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 unbound from our chassis
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.154 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c4ca2f6-ca60-420d-aded-392a44195bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.157 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99bb3090-4a84-4ffd-a563-a2a0d63bcdca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.157 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 namespace which is not needed anymore
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct 02 08:55:15 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007c.scope: Consumed 14.362s CPU time.
Oct 02 08:55:15 compute-0 systemd-machined[214636]: Machine qemu-159-instance-0000007c terminated.
Oct 02 08:55:15 compute-0 NetworkManager[45129]: <info>  [1759395315.2530] manager: (tap6c4955ab-01): new Tun device (/org/freedesktop/NetworkManager/Devices/532)
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.272 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance destroyed successfully.
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.273 2 DEBUG nova.objects.instance [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'resources' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.289 2 DEBUG nova.virt.libvirt.vif [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzsQ/6b3tjUGybuPyR4SVVOx0MdNuIeiSvtBwT6p2uLvkn4tE1YW+Uq6JJ1YItW22s6E/D+cvCAL9Uj4JCY+wAR12IR6juEk+h0nnTQwb0m9GcGi0Su/6v36I6xXc+YZw==',key_name='tempest-TestShelveInstance-203364505',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:54:55Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.290 2 DEBUG nova.network.os_vif_util [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.291 2 DEBUG nova.network.os_vif_util [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.292 2 DEBUG os_vif [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c4955ab-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.303 2 INFO os_vif [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01')
Oct 02 08:55:15 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [NOTICE]   (393434) : haproxy version is 2.8.14-c23fe91
Oct 02 08:55:15 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [NOTICE]   (393434) : path to executable is /usr/sbin/haproxy
Oct 02 08:55:15 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [WARNING]  (393434) : Exiting Master process...
Oct 02 08:55:15 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [ALERT]    (393434) : Current worker (393436) exited with code 143 (Terminated)
Oct 02 08:55:15 compute-0 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [WARNING]  (393434) : All workers exited. Exiting... (0)
Oct 02 08:55:15 compute-0 systemd[1]: libpod-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293.scope: Deactivated successfully.
Oct 02 08:55:15 compute-0 conmon[393430]: conmon 910f5d907573da932248 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293.scope/container/memory.events
Oct 02 08:55:15 compute-0 podman[393741]: 2025-10-02 08:55:15.381661276 +0000 UTC m=+0.060848371 container died 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5854661b36c6863990b03e1ef3fd12b61c77135bce82b5bcc5ab4d576bfc16fd-merged.mount: Deactivated successfully.
Oct 02 08:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293-userdata-shm.mount: Deactivated successfully.
Oct 02 08:55:15 compute-0 podman[393741]: 2025-10-02 08:55:15.42344246 +0000 UTC m=+0.102629515 container cleanup 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 08:55:15 compute-0 systemd[1]: libpod-conmon-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293.scope: Deactivated successfully.
Oct 02 08:55:15 compute-0 podman[393784]: 2025-10-02 08:55:15.514182117 +0000 UTC m=+0.058160605 container remove 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.522 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07d6dd22-60d5-4293-944d-e716e04a785b]: (4, ('Thu Oct  2 08:55:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 (910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293)\n910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293\nThu Oct  2 08:55:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 (910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293)\n910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.524 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86852060-489a-49a9-89ad-219d24994888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.525 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c4ca2f6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 kernel: tap5c4ca2f6-c0: left promiscuous mode
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7559249a-350d-4044-898a-85326fecb459]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.573 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[97b232c4-6738-416f-9076-1e81d5d34c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.574 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fa53bf-d420-4b34-9844-9e18505a56e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.591 2 DEBUG nova.network.neutron [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.593 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cae825a5-080d-44d8-86ed-0c2c312a7d36]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617148, 'reachable_time': 43694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393804, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.595 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:55:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.595 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0a386ac7-29a3-4e71-9ec9-7d70129e73ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c4ca2f6\x2dca60\x2d420d\x2daded\x2d392a44195bf1.mount: Deactivated successfully.
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.610 2 INFO nova.compute.manager [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Took 2.03 seconds to deallocate network for instance.
Oct 02 08:55:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 189 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 28 KiB/s wr, 84 op/s
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.659 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.660 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.732 2 INFO nova.virt.libvirt.driver [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deleting instance files /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9_del
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.733 2 INFO nova.virt.libvirt.driver [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deletion of /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9_del complete
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.739 2 DEBUG oslo_concurrency.processutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.808 2 INFO nova.compute.manager [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.809 2 DEBUG oslo.service.loopingcall [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.810 2 DEBUG nova.compute.manager [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.811 2 DEBUG nova.network.neutron [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.851 2 DEBUG nova.network.neutron [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updated VIF entry in instance network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.853 2 DEBUG nova.network.neutron [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:55:15 compute-0 nova_compute[260603]: 2025-10-02 08:55:15.883 2 DEBUG oslo_concurrency.lockutils [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:55:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:55:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884811579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.184 2 DEBUG oslo_concurrency.processutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.192 2 DEBUG nova.compute.provider_tree [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.221 2 DEBUG nova.scheduler.client.report [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.262 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.294 2 INFO nova.scheduler.client.report [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance ce9a5c17-646f-4ba2-a974-90e4b864872e
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.382 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.781 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.782 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.783 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.783 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.783 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] No waiting events found dispatching network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.784 2 WARNING nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received unexpected event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 for instance with vm_state deleted and task_state None.
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.784 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-deleted-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.785 2 INFO nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Neutron deleted interface 4fd5381b-e8ba-485f-9cb6-692a37b716a1; detaching it from the instance and deleting it from the info cache
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.785 2 DEBUG nova.network.neutron [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.792 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Detach interface failed, port_id=4fd5381b-e8ba-485f-9cb6-692a37b716a1, reason: Instance ce9a5c17-646f-4ba2-a974-90e4b864872e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.793 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.793 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.794 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.794 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.794 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.795 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.795 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.796 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.796 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.797 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.797 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.797 2 WARNING nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state deleting.
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.929 2 DEBUG nova.network.neutron [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:55:16 compute-0 nova_compute[260603]: 2025-10-02 08:55:16.930 2 DEBUG nova.network.neutron [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.013 2 DEBUG oslo_concurrency.lockutils [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.137 2 DEBUG nova.network.neutron [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:55:17 compute-0 ceph-mon[74477]: pgmap v2314: 305 pgs: 305 active+clean; 189 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 28 KiB/s wr, 84 op/s
Oct 02 08:55:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/884811579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.231 2 INFO nova.compute.manager [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Took 1.42 seconds to deallocate network for instance.
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.291 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.292 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.344 2 DEBUG oslo_concurrency.processutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 87 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 30 KiB/s wr, 117 op/s
Oct 02 08:55:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:55:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172470615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.840 2 DEBUG oslo_concurrency.processutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.847 2 DEBUG nova.compute.provider_tree [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.873 2 DEBUG nova.scheduler.client.report [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.893 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:17 compute-0 nova_compute[260603]: 2025-10-02 08:55:17.927 2 INFO nova.scheduler.client.report [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Deleted allocations for instance 29e25f63-82b1-45cf-8916-41d9acc44ac9
Oct 02 08:55:18 compute-0 nova_compute[260603]: 2025-10-02 08:55:18.004 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4172470615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:18 compute-0 nova_compute[260603]: 2025-10-02 08:55:18.884 2 DEBUG nova.compute.manager [req-fd8eb654-13ef-4e45-bd07-eac7fc215986 req-0433cea9-8bdc-4eef-8cf8-361fb0e69815 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-deleted-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:19 compute-0 ceph-mon[74477]: pgmap v2315: 305 pgs: 305 active+clean; 87 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 30 KiB/s wr, 117 op/s
Oct 02 08:55:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 17 KiB/s wr, 89 op/s
Oct 02 08:55:19 compute-0 nova_compute[260603]: 2025-10-02 08:55:19.868 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395304.867757, 1e566f75-d068-4a59-bf53-0a71ad8c5e45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:55:19 compute-0 nova_compute[260603]: 2025-10-02 08:55:19.869 2 INFO nova.compute.manager [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] VM Stopped (Lifecycle Event)
Oct 02 08:55:19 compute-0 nova_compute[260603]: 2025-10-02 08:55:19.999 2 DEBUG nova.compute.manager [None req-f7af6bc1-22cf-4440-8eb9-aeec870fcfca - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:55:20 compute-0 nova_compute[260603]: 2025-10-02 08:55:20.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:20 compute-0 nova_compute[260603]: 2025-10-02 08:55:20.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:21 compute-0 ceph-mon[74477]: pgmap v2316: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 17 KiB/s wr, 89 op/s
Oct 02 08:55:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 02 08:55:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:55:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3779683840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:55:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:55:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3779683840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:55:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3779683840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:55:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3779683840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:55:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:22.849 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:22 compute-0 nova_compute[260603]: 2025-10-02 08:55:22.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:23 compute-0 podman[393850]: 2025-10-02 08:55:23.030433078 +0000 UTC m=+0.069736893 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 08:55:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:23 compute-0 podman[393849]: 2025-10-02 08:55:23.072289405 +0000 UTC m=+0.123011831 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 08:55:23 compute-0 nova_compute[260603]: 2025-10-02 08:55:23.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:23 compute-0 ceph-mon[74477]: pgmap v2317: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 02 08:55:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 02 08:55:25 compute-0 nova_compute[260603]: 2025-10-02 08:55:25.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:25 compute-0 ceph-mon[74477]: pgmap v2318: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 02 08:55:25 compute-0 nova_compute[260603]: 2025-10-02 08:55:25.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 KiB/s wr, 50 op/s
Oct 02 08:55:27 compute-0 ceph-mon[74477]: pgmap v2319: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 KiB/s wr, 50 op/s
Oct 02 08:55:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 KiB/s wr, 50 op/s
Oct 02 08:55:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:55:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:55:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:55:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:55:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:55:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:55:28
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes']
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:55:28 compute-0 nova_compute[260603]: 2025-10-02 08:55:28.037 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395313.0369906, ce9a5c17-646f-4ba2-a974-90e4b864872e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:55:28 compute-0 nova_compute[260603]: 2025-10-02 08:55:28.038 2 INFO nova.compute.manager [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] VM Stopped (Lifecycle Event)
Oct 02 08:55:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:28 compute-0 nova_compute[260603]: 2025-10-02 08:55:28.085 2 DEBUG nova.compute.manager [None req-adfaf4c5-3baa-41cf-a7cb-6df1a8020d8f - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:55:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:55:29 compute-0 ceph-mon[74477]: pgmap v2320: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 KiB/s wr, 50 op/s
Oct 02 08:55:29 compute-0 nova_compute[260603]: 2025-10-02 08:55:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:29 compute-0 nova_compute[260603]: 2025-10-02 08:55:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:55:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 511 B/s wr, 17 op/s
Oct 02 08:55:30 compute-0 podman[393895]: 2025-10-02 08:55:30.033534099 +0000 UTC m=+0.094848087 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd)
Oct 02 08:55:30 compute-0 podman[393896]: 2025-10-02 08:55:30.034550461 +0000 UTC m=+0.090926463 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:55:30 compute-0 nova_compute[260603]: 2025-10-02 08:55:30.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:30 compute-0 nova_compute[260603]: 2025-10-02 08:55:30.268 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395315.2672634, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:55:30 compute-0 nova_compute[260603]: 2025-10-02 08:55:30.269 2 INFO nova.compute.manager [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Stopped (Lifecycle Event)
Oct 02 08:55:30 compute-0 nova_compute[260603]: 2025-10-02 08:55:30.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:30 compute-0 nova_compute[260603]: 2025-10-02 08:55:30.303 2 DEBUG nova.compute.manager [None req-ce346288-d7df-4611-a3c2-4a2e62c655f3 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:55:31 compute-0 ceph-mon[74477]: pgmap v2321: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 511 B/s wr, 17 op/s
Oct 02 08:55:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:33 compute-0 ceph-mon[74477]: pgmap v2322: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:34.837 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:34.837 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:35 compute-0 nova_compute[260603]: 2025-10-02 08:55:35.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:35 compute-0 ceph-mon[74477]: pgmap v2323: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:35 compute-0 nova_compute[260603]: 2025-10-02 08:55:35.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:37 compute-0 ceph-mon[74477]: pgmap v2324: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:55:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:55:39 compute-0 ceph-mon[74477]: pgmap v2325: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:40 compute-0 nova_compute[260603]: 2025-10-02 08:55:40.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:40 compute-0 nova_compute[260603]: 2025-10-02 08:55:40.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:41 compute-0 ceph-mon[74477]: pgmap v2326: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:42 compute-0 ceph-mon[74477]: pgmap v2327: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:43 compute-0 nova_compute[260603]: 2025-10-02 08:55:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:43 compute-0 nova_compute[260603]: 2025-10-02 08:55:43.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:55:43 compute-0 nova_compute[260603]: 2025-10-02 08:55:43.553 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:55:43 compute-0 nova_compute[260603]: 2025-10-02 08:55:43.554 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:44 compute-0 ceph-mon[74477]: pgmap v2328: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.007 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.007 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.033 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.127 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.127 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.140 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.140 2 INFO nova.compute.claims [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.254 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:55:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521662997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.690 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.696 2 DEBUG nova.compute.provider_tree [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.717 2 DEBUG nova.scheduler.client.report [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.737 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.738 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:55:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1521662997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.795 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.796 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.821 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.851 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.959 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.961 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.961 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Creating image(s)
Oct 02 08:55:45 compute-0 nova_compute[260603]: 2025-10-02 08:55:45.981 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.000 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.019 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.022 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.069 2 DEBUG nova.policy [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.106 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.106 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.107 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.107 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.126 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.129 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.497 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.536 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.583 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.683 2 DEBUG nova.objects.instance [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.703 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.704 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Ensure instance console log exists: /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.704 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.705 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:46 compute-0 nova_compute[260603]: 2025-10-02 08:55:46.705 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:46 compute-0 ceph-mon[74477]: pgmap v2329: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:55:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 71 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Oct 02 08:55:47 compute-0 sudo[394125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:47 compute-0 sudo[394125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:47 compute-0 sudo[394125]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:47 compute-0 sudo[394150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:55:47 compute-0 sudo[394150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:47 compute-0 sudo[394150]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:47 compute-0 sudo[394175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:47 compute-0 sudo[394175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:47 compute-0 sudo[394175]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:47 compute-0 sudo[394200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:55:47 compute-0 sudo[394200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:48 compute-0 sudo[394200]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:48 compute-0 nova_compute[260603]: 2025-10-02 08:55:48.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:48 compute-0 nova_compute[260603]: 2025-10-02 08:55:48.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:48 compute-0 nova_compute[260603]: 2025-10-02 08:55:48.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:48 compute-0 nova_compute[260603]: 2025-10-02 08:55:48.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:48 compute-0 nova_compute[260603]: 2025-10-02 08:55:48.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:48 compute-0 nova_compute[260603]: 2025-10-02 08:55:48.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:55:48 compute-0 nova_compute[260603]: 2025-10-02 08:55:48.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:55:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:55:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:55:48 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:55:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:55:48 compute-0 nova_compute[260603]: 2025-10-02 08:55:48.603 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Successfully created port: 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:55:48 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:55:48 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 274c8f6e-9512-4411-9c06-27b20753faf2 does not exist
Oct 02 08:55:48 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5fab23ab-a79d-42d0-aeb4-363191fea15e does not exist
Oct 02 08:55:48 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 14e78a10-6323-4558-96cc-b950cf3d72a0 does not exist
Oct 02 08:55:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:55:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:55:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:55:48 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:55:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:55:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:55:48 compute-0 sudo[394257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:48 compute-0 sudo[394257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:48 compute-0 sudo[394257]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:48 compute-0 sudo[394282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:55:48 compute-0 sudo[394282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:48 compute-0 sudo[394282]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:48 compute-0 ceph-mon[74477]: pgmap v2330: 305 pgs: 305 active+clean; 71 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Oct 02 08:55:48 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:55:48 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:55:48 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:55:48 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:55:48 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:55:48 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:55:48 compute-0 sudo[394326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:48 compute-0 sudo[394326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:48 compute-0 sudo[394326]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:48 compute-0 sudo[394351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:55:48 compute-0 sudo[394351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:55:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922536804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.028 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.171 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.172 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3656MB free_disk=59.97496795654297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.172 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.172 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:49 compute-0 podman[394420]: 2025-10-02 08:55:49.228433924 +0000 UTC m=+0.035229848 container create fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:55:49 compute-0 systemd[1]: Started libpod-conmon-fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20.scope.
Oct 02 08:55:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.300 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.301 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.302 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:55:49 compute-0 podman[394420]: 2025-10-02 08:55:49.211794997 +0000 UTC m=+0.018590921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:55:49 compute-0 podman[394420]: 2025-10-02 08:55:49.314056409 +0000 UTC m=+0.120852363 container init fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:55:49 compute-0 podman[394420]: 2025-10-02 08:55:49.319889803 +0000 UTC m=+0.126685717 container start fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 08:55:49 compute-0 podman[394420]: 2025-10-02 08:55:49.323088165 +0000 UTC m=+0.129884079 container attach fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 08:55:49 compute-0 fervent_edison[394436]: 167 167
Oct 02 08:55:49 compute-0 systemd[1]: libpod-fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20.scope: Deactivated successfully.
Oct 02 08:55:49 compute-0 podman[394420]: 2025-10-02 08:55:49.327578357 +0000 UTC m=+0.134374271 container died fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.350 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e710210b2cfc285f19e80578c3d3328ea7dd995630914b528d3754cbf237296-merged.mount: Deactivated successfully.
Oct 02 08:55:49 compute-0 podman[394420]: 2025-10-02 08:55:49.368799844 +0000 UTC m=+0.175595758 container remove fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:55:49 compute-0 systemd[1]: libpod-conmon-fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20.scope: Deactivated successfully.
Oct 02 08:55:49 compute-0 podman[394460]: 2025-10-02 08:55:49.535724416 +0000 UTC m=+0.038701357 container create 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 08:55:49 compute-0 systemd[1]: Started libpod-conmon-106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb.scope.
Oct 02 08:55:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:49 compute-0 podman[394460]: 2025-10-02 08:55:49.520665379 +0000 UTC m=+0.023642340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:49 compute-0 podman[394460]: 2025-10-02 08:55:49.632112812 +0000 UTC m=+0.135089753 container init 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:55:49 compute-0 podman[394460]: 2025-10-02 08:55:49.640777937 +0000 UTC m=+0.143754868 container start 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:55:49 compute-0 podman[394460]: 2025-10-02 08:55:49.643921176 +0000 UTC m=+0.146898117 container attach 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:55:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:55:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3922536804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:55:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111831245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.834 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.840 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.871 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.939 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:55:49 compute-0 nova_compute[260603]: 2025-10-02 08:55:49.940 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.237 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Successfully updated port: 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.257 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.258 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.258 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.423 2 DEBUG nova.compute.manager [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.426 2 DEBUG nova.compute.manager [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing instance network info cache due to event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.426 2 DEBUG oslo_concurrency.lockutils [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.522 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:55:50 compute-0 cool_matsumoto[394497]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:55:50 compute-0 cool_matsumoto[394497]: --> relative data size: 1.0
Oct 02 08:55:50 compute-0 cool_matsumoto[394497]: --> All data devices are unavailable
Oct 02 08:55:50 compute-0 systemd[1]: libpod-106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb.scope: Deactivated successfully.
Oct 02 08:55:50 compute-0 podman[394460]: 2025-10-02 08:55:50.747354459 +0000 UTC m=+1.250331410 container died 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 08:55:50 compute-0 systemd[1]: libpod-106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb.scope: Consumed 1.051s CPU time.
Oct 02 08:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed-merged.mount: Deactivated successfully.
Oct 02 08:55:50 compute-0 ceph-mon[74477]: pgmap v2331: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:55:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2111831245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:55:50 compute-0 podman[394460]: 2025-10-02 08:55:50.816833321 +0000 UTC m=+1.319810302 container remove 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 08:55:50 compute-0 systemd[1]: libpod-conmon-106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb.scope: Deactivated successfully.
Oct 02 08:55:50 compute-0 sudo[394351]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.939 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:50 compute-0 nova_compute[260603]: 2025-10-02 08:55:50.940 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:50 compute-0 sudo[394540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:50 compute-0 sudo[394540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:50 compute-0 sudo[394540]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:51 compute-0 sudo[394565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:55:51 compute-0 sudo[394565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:51 compute-0 sudo[394565]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:51 compute-0 sudo[394590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:51 compute-0 sudo[394590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:51 compute-0 sudo[394590]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:51 compute-0 sudo[394615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:55:51 compute-0 sudo[394615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:51 compute-0 podman[394682]: 2025-10-02 08:55:51.62108873 +0000 UTC m=+0.063690600 container create 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:55:51 compute-0 systemd[1]: Started libpod-conmon-3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7.scope.
Oct 02 08:55:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:55:51 compute-0 podman[394682]: 2025-10-02 08:55:51.595478227 +0000 UTC m=+0.038080147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:55:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:55:51 compute-0 podman[394682]: 2025-10-02 08:55:51.709247155 +0000 UTC m=+0.151849035 container init 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:55:51 compute-0 podman[394682]: 2025-10-02 08:55:51.717102534 +0000 UTC m=+0.159704404 container start 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:55:51 compute-0 podman[394682]: 2025-10-02 08:55:51.720988227 +0000 UTC m=+0.163590137 container attach 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:55:51 compute-0 musing_maxwell[394698]: 167 167
Oct 02 08:55:51 compute-0 systemd[1]: libpod-3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7.scope: Deactivated successfully.
Oct 02 08:55:51 compute-0 podman[394682]: 2025-10-02 08:55:51.723111954 +0000 UTC m=+0.165713824 container died 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2da188fa719b46c77b29ee2fcd9655993fde59171af5f3294bcbbee9e1917e8f-merged.mount: Deactivated successfully.
Oct 02 08:55:51 compute-0 podman[394682]: 2025-10-02 08:55:51.772964844 +0000 UTC m=+0.215566684 container remove 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:55:51 compute-0 systemd[1]: libpod-conmon-3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7.scope: Deactivated successfully.
Oct 02 08:55:52 compute-0 podman[394722]: 2025-10-02 08:55:52.005982232 +0000 UTC m=+0.053982362 container create ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:55:52 compute-0 systemd[1]: Started libpod-conmon-ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9.scope.
Oct 02 08:55:52 compute-0 podman[394722]: 2025-10-02 08:55:51.979381799 +0000 UTC m=+0.027381999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:55:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:52 compute-0 podman[394722]: 2025-10-02 08:55:52.108582365 +0000 UTC m=+0.156582575 container init ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:55:52 compute-0 podman[394722]: 2025-10-02 08:55:52.123063113 +0000 UTC m=+0.171063233 container start ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 08:55:52 compute-0 podman[394722]: 2025-10-02 08:55:52.127171664 +0000 UTC m=+0.175171874 container attach ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.662 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.685 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.686 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance network_info: |[{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.687 2 DEBUG oslo_concurrency.lockutils [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.687 2 DEBUG nova.network.neutron [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.691 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start _get_guest_xml network_info=[{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.696 2 WARNING nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.704 2 DEBUG nova.virt.libvirt.host [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.704 2 DEBUG nova.virt.libvirt.host [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.708 2 DEBUG nova.virt.libvirt.host [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.709 2 DEBUG nova.virt.libvirt.host [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.709 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.709 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.710 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.710 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.711 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.711 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.711 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.711 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.712 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.712 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.712 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.713 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:55:52 compute-0 nova_compute[260603]: 2025-10-02 08:55:52.716 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:52 compute-0 ceph-mon[74477]: pgmap v2332: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]: {
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:     "0": [
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:         {
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "devices": [
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "/dev/loop3"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             ],
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_name": "ceph_lv0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_size": "21470642176",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "name": "ceph_lv0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "tags": {
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cluster_name": "ceph",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.crush_device_class": "",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.encrypted": "0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osd_id": "0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.type": "block",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.vdo": "0"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             },
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "type": "block",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "vg_name": "ceph_vg0"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:         }
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:     ],
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:     "1": [
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:         {
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "devices": [
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "/dev/loop4"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             ],
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_name": "ceph_lv1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_size": "21470642176",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "name": "ceph_lv1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "tags": {
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cluster_name": "ceph",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.crush_device_class": "",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.encrypted": "0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osd_id": "1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.type": "block",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.vdo": "0"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             },
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "type": "block",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "vg_name": "ceph_vg1"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:         }
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:     ],
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:     "2": [
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:         {
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "devices": [
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "/dev/loop5"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             ],
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_name": "ceph_lv2",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_size": "21470642176",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "name": "ceph_lv2",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "tags": {
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.cluster_name": "ceph",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.crush_device_class": "",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.encrypted": "0",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osd_id": "2",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.type": "block",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:                 "ceph.vdo": "0"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             },
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "type": "block",
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:             "vg_name": "ceph_vg2"
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:         }
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]:     ]
Oct 02 08:55:52 compute-0 elegant_hofstadter[394739]: }
Oct 02 08:55:52 compute-0 systemd[1]: libpod-ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9.scope: Deactivated successfully.
Oct 02 08:55:52 compute-0 podman[394722]: 2025-10-02 08:55:52.948065959 +0000 UTC m=+0.996066099 container died ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:55:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3-merged.mount: Deactivated successfully.
Oct 02 08:55:53 compute-0 podman[394722]: 2025-10-02 08:55:53.035684097 +0000 UTC m=+1.083684217 container remove ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:55:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:53 compute-0 systemd[1]: libpod-conmon-ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9.scope: Deactivated successfully.
Oct 02 08:55:53 compute-0 sudo[394615]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:53 compute-0 podman[394782]: 2025-10-02 08:55:53.172303249 +0000 UTC m=+0.075687692 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:55:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:55:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257783275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:55:53 compute-0 sudo[394783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:53 compute-0 sudo[394783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:53 compute-0 sudo[394783]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.198 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.229 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.236 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:53 compute-0 sudo[394835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:55:53 compute-0 sudo[394835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:53 compute-0 podman[394826]: 2025-10-02 08:55:53.300506143 +0000 UTC m=+0.105503996 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:55:53 compute-0 sudo[394835]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:53 compute-0 sudo[394899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:53 compute-0 sudo[394899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:53 compute-0 sudo[394899]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:53 compute-0 sudo[394943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:55:53 compute-0 sudo[394943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:55:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:55:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395293971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.700 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.703 2 DEBUG nova.virt.libvirt.vif [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:55:45Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.703 2 DEBUG nova.network.os_vif_util [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.704 2 DEBUG nova.network.os_vif_util [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.706 2 DEBUG nova.objects.instance [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.728 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <name>instance-0000007e</name>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:55:52</nova:creationTime>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct 02 08:55:53 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <system>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <entry name="serial">d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <entry name="uuid">d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </system>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <os>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   </os>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <features>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   </features>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk">
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       </source>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config">
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       </source>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:55:53 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:00:df:98"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <target dev="tap3e0191b3-54"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log" append="off"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <video>
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </video>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:55:53 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:55:53 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:55:53 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:55:53 compute-0 nova_compute[260603]: </domain>
Oct 02 08:55:53 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.729 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Preparing to wait for external event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.729 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.729 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.730 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.730 2 DEBUG nova.virt.libvirt.vif [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:55:45Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.731 2 DEBUG nova.network.os_vif_util [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.731 2 DEBUG nova.network.os_vif_util [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.732 2 DEBUG os_vif [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e0191b3-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e0191b3-54, col_values=(('external_ids', {'iface-id': '3e0191b3-5405-4fe1-ba86-56a0b092a5d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:df:98', 'vm-uuid': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:53 compute-0 NetworkManager[45129]: <info>  [1759395353.7405] manager: (tap3e0191b3-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/533)
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.752 2 INFO os_vif [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54')
Oct 02 08:55:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1257783275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:55:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/395293971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.813 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.813 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.813 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:00:df:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.814 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Using config drive
Oct 02 08:55:53 compute-0 nova_compute[260603]: 2025-10-02 08:55:53.837 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:55:53 compute-0 podman[395012]: 2025-10-02 08:55:53.863036227 +0000 UTC m=+0.041159736 container create f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:55:53 compute-0 systemd[1]: Started libpod-conmon-f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e.scope.
Oct 02 08:55:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:55:53 compute-0 podman[395012]: 2025-10-02 08:55:53.844984884 +0000 UTC m=+0.023108383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:55:53 compute-0 podman[395012]: 2025-10-02 08:55:53.953342499 +0000 UTC m=+0.131466028 container init f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:55:53 compute-0 podman[395012]: 2025-10-02 08:55:53.959711681 +0000 UTC m=+0.137835150 container start f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 08:55:53 compute-0 podman[395012]: 2025-10-02 08:55:53.962677836 +0000 UTC m=+0.140801355 container attach f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:55:53 compute-0 elated_cori[395046]: 167 167
Oct 02 08:55:53 compute-0 systemd[1]: libpod-f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e.scope: Deactivated successfully.
Oct 02 08:55:53 compute-0 podman[395012]: 2025-10-02 08:55:53.966170176 +0000 UTC m=+0.144293685 container died f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 08:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fbaeda1f040c2b8946a1ede03f073c08d56074ecc78ee4236a227b43cce4924-merged.mount: Deactivated successfully.
Oct 02 08:55:54 compute-0 podman[395012]: 2025-10-02 08:55:54.016202972 +0000 UTC m=+0.194326451 container remove f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 08:55:54 compute-0 systemd[1]: libpod-conmon-f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e.scope: Deactivated successfully.
Oct 02 08:55:54 compute-0 podman[395072]: 2025-10-02 08:55:54.240182293 +0000 UTC m=+0.060455608 container create b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:55:54 compute-0 systemd[1]: Started libpod-conmon-b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148.scope.
Oct 02 08:55:54 compute-0 podman[395072]: 2025-10-02 08:55:54.212078812 +0000 UTC m=+0.032352167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:55:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:54 compute-0 podman[395072]: 2025-10-02 08:55:54.3580485 +0000 UTC m=+0.178321855 container init b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 08:55:54 compute-0 podman[395072]: 2025-10-02 08:55:54.379494159 +0000 UTC m=+0.199767464 container start b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:55:54 compute-0 podman[395072]: 2025-10-02 08:55:54.384019083 +0000 UTC m=+0.204292448 container attach b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.459 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Creating config drive at /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.463 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejvrc2v6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.604 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejvrc2v6" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.644 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.649 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.723 2 DEBUG nova.network.neutron [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updated VIF entry in instance network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.724 2 DEBUG nova.network.neutron [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.749 2 DEBUG oslo_concurrency.lockutils [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.823 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.824 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Deleting local config drive /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config because it was imported into RBD.
Oct 02 08:55:54 compute-0 ceph-mon[74477]: pgmap v2333: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:55:54 compute-0 kernel: tap3e0191b3-54: entered promiscuous mode
Oct 02 08:55:54 compute-0 NetworkManager[45129]: <info>  [1759395354.9045] manager: (tap3e0191b3-54): new Tun device (/org/freedesktop/NetworkManager/Devices/534)
Oct 02 08:55:54 compute-0 ovn_controller[152344]: 2025-10-02T08:55:54Z|01352|binding|INFO|Claiming lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 for this chassis.
Oct 02 08:55:54 compute-0 ovn_controller[152344]: 2025-10-02T08:55:54Z|01353|binding|INFO|3e0191b3-5405-4fe1-ba86-56a0b092a5d6: Claiming fa:16:3e:00:df:98 10.100.0.3
Oct 02 08:55:54 compute-0 nova_compute[260603]: 2025-10-02 08:55:54.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:54 compute-0 systemd-udevd[395143]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.932 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:df:98 10.100.0.3'], port_security=['fa:16:3e:00:df:98 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ca3ac45-6851-48dd-917a-457549b659ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f31abdd-db09-4203-b1b9-483c21cac467', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95f926d6-852b-4212-98b8-ed01a74b48f6, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3e0191b3-5405-4fe1-ba86-56a0b092a5d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.936 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 in datapath 0ca3ac45-6851-48dd-917a-457549b659ba bound to our chassis
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.939 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ca3ac45-6851-48dd-917a-457549b659ba
Oct 02 08:55:54 compute-0 NetworkManager[45129]: <info>  [1759395354.9422] device (tap3e0191b3-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:55:54 compute-0 NetworkManager[45129]: <info>  [1759395354.9442] device (tap3e0191b3-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.955 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37c6442a-7756-4ac5-af7f-0d452231d8d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.956 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ca3ac45-61 in ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:55:54 compute-0 systemd-machined[214636]: New machine qemu-160-instance-0000007e.
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.961 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ca3ac45-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37c5c069-e105-4516-ba39-aee476719630]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.964 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d82de28-a18a-4178-b9ca-9df753eaaeca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:54 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-0000007e.
Oct 02 08:55:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.978 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ff275f94-c11c-42ad-a539-80dd8e9a55ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.006 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd153e0b-69d9-48d0-8132-89bf182e6fa0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:55 compute-0 ovn_controller[152344]: 2025-10-02T08:55:55Z|01354|binding|INFO|Setting lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 ovn-installed in OVS
Oct 02 08:55:55 compute-0 ovn_controller[152344]: 2025-10-02T08:55:55Z|01355|binding|INFO|Setting lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 up in Southbound
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.039 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a3570d-2866-440e-b051-7cd1ba600606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.043 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[69fca0da-4ecc-4d6c-be95-aa3b2248560e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 NetworkManager[45129]: <info>  [1759395355.0448] manager: (tap0ca3ac45-60): new Veth device (/org/freedesktop/NetworkManager/Devices/535)
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.091 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0ebb7a-4fc6-4731-919a-4ec857b48475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.094 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fed9d15b-567f-4c08-9d1b-b2ef933f7c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 NetworkManager[45129]: <info>  [1759395355.1244] device (tap0ca3ac45-60): carrier: link connected
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.132 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8127cde4-5ded-47f7-894a-29ab361f2d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.161 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff9dd76-6aca-470f-a6f9-0768efc88a94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ca3ac45-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:46:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623372, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395184, 'error': None, 'target': 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc46e523-f7ec-410d-9f40-cb8e50fa1641]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:46db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623372, 'tstamp': 623372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395188, 'error': None, 'target': 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.205 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d95a3a-328a-4ad2-8dd3-4b41dc5f1a9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ca3ac45-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:46:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623372, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395189, 'error': None, 'target': 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.244 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4a482d-3edd-4606-b814-831535d0b16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.302 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[45bf6096-c237-4fb4-a2b8-cf5a02949076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.304 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ca3ac45-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.304 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.304 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ca3ac45-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:55 compute-0 NetworkManager[45129]: <info>  [1759395355.3068] manager: (tap0ca3ac45-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Oct 02 08:55:55 compute-0 kernel: tap0ca3ac45-60: entered promiscuous mode
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ca3ac45-60, col_values=(('external_ids', {'iface-id': 'ab6e362e-db4a-479c-a0fc-ea7c3db44980'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:55 compute-0 ovn_controller[152344]: 2025-10-02T08:55:55Z|01356|binding|INFO|Releasing lport ab6e362e-db4a-479c-a0fc-ea7c3db44980 from this chassis (sb_readonly=0)
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.325 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ca3ac45-6851-48dd-917a-457549b659ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ca3ac45-6851-48dd-917a-457549b659ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.326 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5719efbb-8a0d-40ea-9422-85d0e2338757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.328 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-0ca3ac45-6851-48dd-917a-457549b659ba
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/0ca3ac45-6851-48dd-917a-457549b659ba.pid.haproxy
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 0ca3ac45-6851-48dd-917a-457549b659ba
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:55:55 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.328 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'env', 'PROCESS_TAG=haproxy-0ca3ac45-6851-48dd-917a-457549b659ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ca3ac45-6851-48dd-917a-457549b659ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]: {
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "osd_id": 2,
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "type": "bluestore"
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:     },
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "osd_id": 1,
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "type": "bluestore"
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:     },
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "osd_id": 0,
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:         "type": "bluestore"
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]:     }
Oct 02 08:55:55 compute-0 trusting_hofstadter[395088]: }
Oct 02 08:55:55 compute-0 systemd[1]: libpod-b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148.scope: Deactivated successfully.
Oct 02 08:55:55 compute-0 podman[395072]: 2025-10-02 08:55:55.423174497 +0000 UTC m=+1.243447792 container died b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6-merged.mount: Deactivated successfully.
Oct 02 08:55:55 compute-0 podman[395072]: 2025-10-02 08:55:55.491577456 +0000 UTC m=+1.311850741 container remove b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:55:55 compute-0 systemd[1]: libpod-conmon-b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148.scope: Deactivated successfully.
Oct 02 08:55:55 compute-0 sudo[394943]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.521 2 DEBUG nova.compute.manager [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.522 2 DEBUG oslo_concurrency.lockutils [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.522 2 DEBUG oslo_concurrency.lockutils [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.522 2 DEBUG oslo_concurrency.lockutils [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:55 compute-0 nova_compute[260603]: 2025-10-02 08:55:55.523 2 DEBUG nova.compute.manager [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Processing event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:55:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:55:55 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:55:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:55:55 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:55:55 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 90a8f4db-9711-433e-b33c-0453efe3fc1f does not exist
Oct 02 08:55:55 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 636d9398-06f4-4e55-8eb2-e123640ca08f does not exist
Oct 02 08:55:55 compute-0 sudo[395231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:55:55 compute-0 sudo[395231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:55 compute-0 sudo[395231]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:55 compute-0 sudo[395259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:55:55 compute-0 sudo[395259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:55:55 compute-0 sudo[395259]: pam_unix(sudo:session): session closed for user root
Oct 02 08:55:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:55:55 compute-0 podman[395301]: 2025-10-02 08:55:55.737588995 +0000 UTC m=+0.046712152 container create 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 08:55:55 compute-0 systemd[1]: Started libpod-conmon-117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60.scope.
Oct 02 08:55:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931c6303f091c199c5789a98492e9e4f99efc35fbf41e4ee1e289eeeb8eb185b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:55:55 compute-0 podman[395301]: 2025-10-02 08:55:55.712774878 +0000 UTC m=+0.021898085 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:55:55 compute-0 podman[395301]: 2025-10-02 08:55:55.817345913 +0000 UTC m=+0.126469090 container init 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:55:55 compute-0 podman[395301]: 2025-10-02 08:55:55.823597311 +0000 UTC m=+0.132720478 container start 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:55:55 compute-0 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [NOTICE]   (395321) : New worker (395323) forked
Oct 02 08:55:55 compute-0 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [NOTICE]   (395321) : Loading success.
Oct 02 08:55:56 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:55:56 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:55:56 compute-0 ceph-mon[74477]: pgmap v2334: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.031 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395357.0304892, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.032 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] VM Started (Lifecycle Event)
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.035 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.040 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.045 2 INFO nova.virt.libvirt.driver [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance spawned successfully.
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.046 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.061 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.071 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.077 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.077 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.078 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.079 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.080 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.081 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.113 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.114 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395357.0369425, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.115 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] VM Paused (Lifecycle Event)
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.150 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.155 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395357.0395665, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.156 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] VM Resumed (Lifecycle Event)
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.160 2 INFO nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Took 11.20 seconds to spawn the instance on the hypervisor.
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.161 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.177 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.182 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.208 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.235 2 INFO nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Took 12.15 seconds to build instance.
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.260 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.631 2 DEBUG nova.compute.manager [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.632 2 DEBUG oslo_concurrency.lockutils [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.633 2 DEBUG oslo_concurrency.lockutils [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.633 2 DEBUG oslo_concurrency.lockutils [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.634 2 DEBUG nova.compute.manager [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:55:57 compute-0 nova_compute[260603]: 2025-10-02 08:55:57.635 2 WARNING nova.compute.manager [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 for instance with vm_state active and task_state None.
Oct 02 08:55:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 08:55:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:55:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:55:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:55:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:55:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:55:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:55:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:55:58 compute-0 nova_compute[260603]: 2025-10-02 08:55:58.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:55:58 compute-0 ceph-mon[74477]: pgmap v2335: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 08:55:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 665 KiB/s wr, 40 op/s
Oct 02 08:56:00 compute-0 nova_compute[260603]: 2025-10-02 08:56:00.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:00 compute-0 ceph-mon[74477]: pgmap v2336: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 665 KiB/s wr, 40 op/s
Oct 02 08:56:01 compute-0 podman[395374]: 2025-10-02 08:56:01.035080204 +0000 UTC m=+0.088620010 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:56:01 compute-0 podman[395375]: 2025-10-02 08:56:01.05103803 +0000 UTC m=+0.099093333 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:56:01 compute-0 NetworkManager[45129]: <info>  [1759395361.3169] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Oct 02 08:56:01 compute-0 NetworkManager[45129]: <info>  [1759395361.3190] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Oct 02 08:56:01 compute-0 ovn_controller[152344]: 2025-10-02T08:56:01Z|01357|binding|INFO|Releasing lport ab6e362e-db4a-479c-a0fc-ea7c3db44980 from this chassis (sb_readonly=0)
Oct 02 08:56:01 compute-0 nova_compute[260603]: 2025-10-02 08:56:01.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:01 compute-0 ovn_controller[152344]: 2025-10-02T08:56:01Z|01358|binding|INFO|Releasing lport ab6e362e-db4a-479c-a0fc-ea7c3db44980 from this chassis (sb_readonly=0)
Oct 02 08:56:01 compute-0 nova_compute[260603]: 2025-10-02 08:56:01.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:01 compute-0 nova_compute[260603]: 2025-10-02 08:56:01.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 12 KiB/s wr, 27 op/s
Oct 02 08:56:01 compute-0 nova_compute[260603]: 2025-10-02 08:56:01.921 2 DEBUG nova.compute.manager [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:01 compute-0 nova_compute[260603]: 2025-10-02 08:56:01.922 2 DEBUG nova.compute.manager [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing instance network info cache due to event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:56:01 compute-0 nova_compute[260603]: 2025-10-02 08:56:01.922 2 DEBUG oslo_concurrency.lockutils [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:56:01 compute-0 nova_compute[260603]: 2025-10-02 08:56:01.922 2 DEBUG oslo_concurrency.lockutils [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:56:01 compute-0 nova_compute[260603]: 2025-10-02 08:56:01.922 2 DEBUG nova.network.neutron [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:56:02 compute-0 ceph-mon[74477]: pgmap v2337: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 12 KiB/s wr, 27 op/s
Oct 02 08:56:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:03 compute-0 nova_compute[260603]: 2025-10-02 08:56:03.158 2 DEBUG nova.network.neutron [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updated VIF entry in instance network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:56:03 compute-0 nova_compute[260603]: 2025-10-02 08:56:03.159 2 DEBUG nova.network.neutron [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:56:03 compute-0 nova_compute[260603]: 2025-10-02 08:56:03.186 2 DEBUG oslo_concurrency.lockutils [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:56:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:56:03 compute-0 nova_compute[260603]: 2025-10-02 08:56:03.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:04 compute-0 ceph-mon[74477]: pgmap v2338: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:56:05 compute-0 nova_compute[260603]: 2025-10-02 08:56:05.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:56:06 compute-0 ceph-mon[74477]: pgmap v2339: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:56:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 02 08:56:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:08 compute-0 nova_compute[260603]: 2025-10-02 08:56:08.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:08 compute-0 ceph-mon[74477]: pgmap v2340: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 02 08:56:08 compute-0 ovn_controller[152344]: 2025-10-02T08:56:08Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:df:98 10.100.0.3
Oct 02 08:56:08 compute-0 ovn_controller[152344]: 2025-10-02T08:56:08Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:df:98 10.100.0.3
Oct 02 08:56:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 94 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 713 KiB/s wr, 84 op/s
Oct 02 08:56:10 compute-0 nova_compute[260603]: 2025-10-02 08:56:10.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:10 compute-0 ceph-mon[74477]: pgmap v2341: 305 pgs: 305 active+clean; 94 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 713 KiB/s wr, 84 op/s
Oct 02 08:56:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 94 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 700 KiB/s wr, 65 op/s
Oct 02 08:56:12 compute-0 ceph-mon[74477]: pgmap v2342: 305 pgs: 305 active+clean; 94 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 700 KiB/s wr, 65 op/s
Oct 02 08:56:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 02 08:56:13 compute-0 nova_compute[260603]: 2025-10-02 08:56:13.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:14 compute-0 ceph-mon[74477]: pgmap v2343: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 02 08:56:14 compute-0 nova_compute[260603]: 2025-10-02 08:56:14.867 2 INFO nova.compute.manager [None req-d81b26de-9148-4b0a-94ff-e98fe0edaa4b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Get console output
Oct 02 08:56:14 compute-0 nova_compute[260603]: 2025-10-02 08:56:14.876 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:56:15 compute-0 nova_compute[260603]: 2025-10-02 08:56:15.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:56:16 compute-0 ceph-mon[74477]: pgmap v2344: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:56:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:56:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:18 compute-0 nova_compute[260603]: 2025-10-02 08:56:18.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:18 compute-0 ceph-mon[74477]: pgmap v2345: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:56:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:56:20 compute-0 nova_compute[260603]: 2025-10-02 08:56:20.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:20 compute-0 ceph-mon[74477]: pgmap v2346: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:56:21 compute-0 nova_compute[260603]: 2025-10-02 08:56:21.392 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:21 compute-0 nova_compute[260603]: 2025-10-02 08:56:21.393 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:21 compute-0 nova_compute[260603]: 2025-10-02 08:56:21.393 2 DEBUG nova.objects.instance [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'flavor' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:56:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Oct 02 08:56:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:56:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2871296160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:56:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:56:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2871296160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:56:22 compute-0 ceph-mon[74477]: pgmap v2347: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Oct 02 08:56:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2871296160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:56:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2871296160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:56:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:23 compute-0 nova_compute[260603]: 2025-10-02 08:56:23.557 2 DEBUG nova.objects.instance [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_requests' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:56:23 compute-0 nova_compute[260603]: 2025-10-02 08:56:23.579 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:56:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Oct 02 08:56:23 compute-0 nova_compute[260603]: 2025-10-02 08:56:23.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:24 compute-0 podman[395418]: 2025-10-02 08:56:24.026658477 +0000 UTC m=+0.079112609 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 08:56:24 compute-0 podman[395417]: 2025-10-02 08:56:24.043658565 +0000 UTC m=+0.111944619 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:56:24 compute-0 nova_compute[260603]: 2025-10-02 08:56:24.481 2 DEBUG nova.policy [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:56:24 compute-0 ceph-mon[74477]: pgmap v2348: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Oct 02 08:56:25 compute-0 nova_compute[260603]: 2025-10-02 08:56:25.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:25.376 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:56:25 compute-0 nova_compute[260603]: 2025-10-02 08:56:25.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:25.380 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:56:25 compute-0 nova_compute[260603]: 2025-10-02 08:56:25.450 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Successfully created port: 3d7e333e-5ee7-4373-b49b-3f78fb696f43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:56:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 02 08:56:26 compute-0 nova_compute[260603]: 2025-10-02 08:56:26.795 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Successfully updated port: 3d7e333e-5ee7-4373-b49b-3f78fb696f43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:56:26 compute-0 nova_compute[260603]: 2025-10-02 08:56:26.813 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:56:26 compute-0 nova_compute[260603]: 2025-10-02 08:56:26.814 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:56:26 compute-0 nova_compute[260603]: 2025-10-02 08:56:26.814 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:56:26 compute-0 ceph-mon[74477]: pgmap v2349: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 02 08:56:26 compute-0 nova_compute[260603]: 2025-10-02 08:56:26.902 2 DEBUG nova.compute.manager [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-changed-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:26 compute-0 nova_compute[260603]: 2025-10-02 08:56:26.902 2 DEBUG nova.compute.manager [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing instance network info cache due to event network-changed-3d7e333e-5ee7-4373-b49b-3f78fb696f43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:56:26 compute-0 nova_compute[260603]: 2025-10-02 08:56:26.903 2 DEBUG oslo_concurrency.lockutils [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:56:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:27.382 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Oct 02 08:56:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:56:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:56:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:56:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:56:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:56:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:56:28
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta', 'volumes']
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:56:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:56:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:56:28 compute-0 nova_compute[260603]: 2025-10-02 08:56:28.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:28 compute-0 ceph-mon[74477]: pgmap v2350: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Oct 02 08:56:29 compute-0 nova_compute[260603]: 2025-10-02 08:56:29.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:29 compute-0 nova_compute[260603]: 2025-10-02 08:56:29.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:56:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 4.0 KiB/s wr, 0 op/s
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.918469) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395389918526, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1168, "num_deletes": 252, "total_data_size": 1673677, "memory_usage": 1700816, "flush_reason": "Manual Compaction"}
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395389931256, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1645795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48607, "largest_seqno": 49774, "table_properties": {"data_size": 1640239, "index_size": 2951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12147, "raw_average_key_size": 19, "raw_value_size": 1628911, "raw_average_value_size": 2674, "num_data_blocks": 132, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395283, "oldest_key_time": 1759395283, "file_creation_time": 1759395389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 12851 microseconds, and 8045 cpu microseconds.
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.931315) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1645795 bytes OK
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.931352) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.932971) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.932994) EVENT_LOG_v1 {"time_micros": 1759395389932986, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.933018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1668321, prev total WAL file size 1668321, number of live WAL files 2.
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.934079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1607KB)], [113(8338KB)]
Oct 02 08:56:29 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395389934118, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10184518, "oldest_snapshot_seqno": -1}
Oct 02 08:56:29 compute-0 nova_compute[260603]: 2025-10-02 08:56:29.972 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6946 keys, 8467232 bytes, temperature: kUnknown
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395390000192, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8467232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8422302, "index_size": 26450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 181257, "raw_average_key_size": 26, "raw_value_size": 8299604, "raw_average_value_size": 1194, "num_data_blocks": 1028, "num_entries": 6946, "num_filter_entries": 6946, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.000552) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8467232 bytes
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.002390) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.9 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.3) write-amplify(5.1) OK, records in: 7466, records dropped: 520 output_compression: NoCompression
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.002422) EVENT_LOG_v1 {"time_micros": 1759395390002407, "job": 68, "event": "compaction_finished", "compaction_time_micros": 66194, "compaction_time_cpu_micros": 40117, "output_level": 6, "num_output_files": 1, "total_output_size": 8467232, "num_input_records": 7466, "num_output_records": 6946, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395390003118, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395390006205, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.933969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.006 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:56:30 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.007 2 DEBUG oslo_concurrency.lockutils [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.008 2 DEBUG nova.network.neutron [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing network info cache for port 3d7e333e-5ee7-4373-b49b-3f78fb696f43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.013 2 DEBUG nova.virt.libvirt.vif [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.014 2 DEBUG nova.network.os_vif_util [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.015 2 DEBUG nova.network.os_vif_util [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.016 2 DEBUG os_vif [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d7e333e-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d7e333e-5e, col_values=(('external_ids', {'iface-id': '3d7e333e-5ee7-4373-b49b-3f78fb696f43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:a9:81', 'vm-uuid': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:30 compute-0 NetworkManager[45129]: <info>  [1759395390.0291] manager: (tap3d7e333e-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/539)
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.052 2 INFO os_vif [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e')
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.053 2 DEBUG nova.virt.libvirt.vif [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.054 2 DEBUG nova.network.os_vif_util [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.055 2 DEBUG nova.network.os_vif_util [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.059 2 DEBUG nova.virt.libvirt.guest [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:26:a9:81"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <target dev="tap3d7e333e-5e"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]: </interface>
Oct 02 08:56:30 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:56:30 compute-0 NetworkManager[45129]: <info>  [1759395390.0767] manager: (tap3d7e333e-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/540)
Oct 02 08:56:30 compute-0 kernel: tap3d7e333e-5e: entered promiscuous mode
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 ovn_controller[152344]: 2025-10-02T08:56:30Z|01359|binding|INFO|Claiming lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 for this chassis.
Oct 02 08:56:30 compute-0 ovn_controller[152344]: 2025-10-02T08:56:30Z|01360|binding|INFO|3d7e333e-5ee7-4373-b49b-3f78fb696f43: Claiming fa:16:3e:26:a9:81 10.100.0.27
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.093 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a9:81 10.100.0.27'], port_security=['fa:16:3e:26:a9:81 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-582d804a-4891-4cd3-8540-2cfd9f71444e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=614405e1-3a58-497d-8a87-2cf68394c005, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3d7e333e-5ee7-4373-b49b-3f78fb696f43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.094 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3d7e333e-5ee7-4373-b49b-3f78fb696f43 in datapath 582d804a-4891-4cd3-8540-2cfd9f71444e bound to our chassis
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.095 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 582d804a-4891-4cd3-8540-2cfd9f71444e
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.117 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb869a4-916c-44f7-ac70-74dc9e6f782b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.118 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap582d804a-41 in ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.121 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap582d804a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.121 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a03dcd5-e17c-4cac-8940-cf79f1f81001]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.122 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[337010d2-b881-45d5-b7d6-ef6207ba7224]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 systemd-udevd[395469]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.143 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddfabdb-9df2-448d-aa5e-02c023d86ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_controller[152344]: 2025-10-02T08:56:30Z|01361|binding|INFO|Setting lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 ovn-installed in OVS
Oct 02 08:56:30 compute-0 ovn_controller[152344]: 2025-10-02T08:56:30Z|01362|binding|INFO|Setting lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 up in Southbound
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 NetworkManager[45129]: <info>  [1759395390.1643] device (tap3d7e333e-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:56:30 compute-0 NetworkManager[45129]: <info>  [1759395390.1655] device (tap3d7e333e-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.179 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[772249ba-afb3-453c-b113-e10762d60713]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.213 2 DEBUG nova.virt.libvirt.driver [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.214 2 DEBUG nova.virt.libvirt.driver [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.214 2 DEBUG nova.virt.libvirt.driver [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:00:df:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.214 2 DEBUG nova.virt.libvirt.driver [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:26:a9:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.233 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[79a97f6d-5e64-4196-8596-55f2807394ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 NetworkManager[45129]: <info>  [1759395390.2425] manager: (tap582d804a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/541)
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.241 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22851b80-c9bd-4b30-aeb5-5d62f64a017d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 systemd-udevd[395472]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.249 2 DEBUG nova.virt.libvirt.guest [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:56:30</nova:creationTime>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct 02 08:56:30 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     <nova:port uuid="3d7e333e-5ee7-4373-b49b-3f78fb696f43">
Oct 02 08:56:30 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 02 08:56:30 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:30 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:56:30 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:56:30 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.284 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.292 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d71fb43f-a508-4e0b-a3bd-61e05a572fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.295 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cc7725-1754-4713-9966-4c8cc3ca288d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 NetworkManager[45129]: <info>  [1759395390.3295] device (tap582d804a-40): carrier: link connected
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.339 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b9fecf-db22-4284-b617-a7f7e7e23ff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.361 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9e4051-9dbe-4af3-8408-a1769ffb8178]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap582d804a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:8e:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626893, 'reachable_time': 37342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395494, 'error': None, 'target': 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.386 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6da29dda-ef93-42a1-b69f-5f0b75941cb9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:8e6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626893, 'tstamp': 626893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395495, 'error': None, 'target': 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.415 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42b68cb1-a750-4c48-9d4a-6261066e2952]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap582d804a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:8e:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626893, 'reachable_time': 37342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395496, 'error': None, 'target': 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.462 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e93f1a93-acc3-4763-b71f-2d8e1ca8f8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.563 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e00541e8-d59f-4e44-bcdd-a5f073979548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap582d804a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.567 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap582d804a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 kernel: tap582d804a-40: entered promiscuous mode
Oct 02 08:56:30 compute-0 NetworkManager[45129]: <info>  [1759395390.5722] manager: (tap582d804a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.574 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap582d804a-40, col_values=(('external_ids', {'iface-id': 'fd77efd3-8b3c-4ce5-bc3b-650b9665197c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 ovn_controller[152344]: 2025-10-02T08:56:30Z|01363|binding|INFO|Releasing lport fd77efd3-8b3c-4ce5-bc3b-650b9665197c from this chassis (sb_readonly=0)
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.597 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/582d804a-4891-4cd3-8540-2cfd9f71444e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/582d804a-4891-4cd3-8540-2cfd9f71444e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.598 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f563d364-0c79-4856-993a-e6599982711b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.599 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-582d804a-4891-4cd3-8540-2cfd9f71444e
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/582d804a-4891-4cd3-8540-2cfd9f71444e.pid.haproxy
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 582d804a-4891-4cd3-8540-2cfd9f71444e
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:56:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.600 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'env', 'PROCESS_TAG=haproxy-582d804a-4891-4cd3-8540-2cfd9f71444e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/582d804a-4891-4cd3-8540-2cfd9f71444e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.783 2 DEBUG nova.compute.manager [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.784 2 DEBUG oslo_concurrency.lockutils [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.784 2 DEBUG oslo_concurrency.lockutils [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.784 2 DEBUG oslo_concurrency.lockutils [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.785 2 DEBUG nova.compute.manager [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:56:30 compute-0 nova_compute[260603]: 2025-10-02 08:56:30.785 2 WARNING nova.compute.manager [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 for instance with vm_state active and task_state None.
Oct 02 08:56:30 compute-0 ceph-mon[74477]: pgmap v2351: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 4.0 KiB/s wr, 0 op/s
Oct 02 08:56:31 compute-0 podman[395528]: 2025-10-02 08:56:31.071730082 +0000 UTC m=+0.079720198 container create 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 08:56:31 compute-0 podman[395528]: 2025-10-02 08:56:31.041277186 +0000 UTC m=+0.049267342 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:56:31 compute-0 systemd[1]: Started libpod-conmon-47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5.scope.
Oct 02 08:56:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9dfa101d84a32dcff657b7d40a07ab3dd078bf7004b1c5d61a78dcd0c932fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:56:31 compute-0 podman[395528]: 2025-10-02 08:56:31.2072933 +0000 UTC m=+0.215283446 container init 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:56:31 compute-0 podman[395528]: 2025-10-02 08:56:31.214662763 +0000 UTC m=+0.222652879 container start 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:56:31 compute-0 podman[395544]: 2025-10-02 08:56:31.246856263 +0000 UTC m=+0.091965106 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:56:31 compute-0 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [NOTICE]   (395578) : New worker (395585) forked
Oct 02 08:56:31 compute-0 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [NOTICE]   (395578) : Loading success.
Oct 02 08:56:31 compute-0 podman[395541]: 2025-10-02 08:56:31.268885402 +0000 UTC m=+0.118636742 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.503 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-3d7e333e-5ee7-4373-b49b-3f78fb696f43" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.504 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-3d7e333e-5ee7-4373-b49b-3f78fb696f43" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.518 2 DEBUG nova.objects.instance [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'flavor' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.540 2 DEBUG nova.virt.libvirt.vif [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.540 2 DEBUG nova.network.os_vif_util [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.541 2 DEBUG nova.network.os_vif_util [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.546 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.548 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.550 2 DEBUG nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Attempting to detach device tap3d7e333e-5e from instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.551 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:26:a9:81"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <target dev="tap3d7e333e-5e"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]: </interface>
Oct 02 08:56:31 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.557 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.560 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface>not found in domain: <domain type='kvm' id='160'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <name>instance-0000007e</name>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:56:30</nova:creationTime>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:port uuid="3d7e333e-5ee7-4373-b49b-3f78fb696f43">
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:56:31 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <system>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='serial'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='uuid'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </system>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <os>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </os>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <features>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </features>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk' index='2'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </source>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config' index='1'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </source>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:00:df:98'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target dev='tap3e0191b3-54'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:26:a9:81'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target dev='tap3d7e333e-5e'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='net1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </target>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </console>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <video>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </video>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c271,c793</label>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c271,c793</imagelabel>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:56:31 compute-0 nova_compute[260603]: </domain>
Oct 02 08:56:31 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.563 2 INFO nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully detached device tap3d7e333e-5e from instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 from the persistent domain config.
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.564 2 DEBUG nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] (1/8): Attempting to detach device tap3d7e333e-5e with device alias net1 from instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.564 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] detach device xml: <interface type="ethernet">
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:26:a9:81"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <target dev="tap3d7e333e-5e"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]: </interface>
Oct 02 08:56:31 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 08:56:31 compute-0 kernel: tap3d7e333e-5e (unregistering): left promiscuous mode
Oct 02 08:56:31 compute-0 NetworkManager[45129]: <info>  [1759395391.6721] device (tap3d7e333e-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:31 compute-0 ovn_controller[152344]: 2025-10-02T08:56:31Z|01364|binding|INFO|Releasing lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 from this chassis (sb_readonly=0)
Oct 02 08:56:31 compute-0 ovn_controller[152344]: 2025-10-02T08:56:31Z|01365|binding|INFO|Setting lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 down in Southbound
Oct 02 08:56:31 compute-0 ovn_controller[152344]: 2025-10-02T08:56:31Z|01366|binding|INFO|Removing iface tap3d7e333e-5e ovn-installed in OVS
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.694 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759395391.6931865, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.697 2 DEBUG nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Start waiting for the detach event from libvirt for device tap3d7e333e-5e with device alias net1 for instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.698 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:56:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.700 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a9:81 10.100.0.27'], port_security=['fa:16:3e:26:a9:81 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-582d804a-4891-4cd3-8540-2cfd9f71444e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=614405e1-3a58-497d-8a87-2cf68394c005, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3d7e333e-5ee7-4373-b49b-3f78fb696f43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:56:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.705 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3d7e333e-5ee7-4373-b49b-3f78fb696f43 in datapath 582d804a-4891-4cd3-8540-2cfd9f71444e unbound from our chassis
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.705 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface>not found in domain: <domain type='kvm' id='160'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <name>instance-0000007e</name>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:56:30</nova:creationTime>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:port uuid="3d7e333e-5ee7-4373-b49b-3f78fb696f43">
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:56:31 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <system>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='serial'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='uuid'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </system>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <os>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </os>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <features>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </features>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk' index='2'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </source>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config' index='1'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </source>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:00:df:98'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target dev='tap3e0191b3-54'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       </target>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </console>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <video>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </video>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c271,c793</label>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c271,c793</imagelabel>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:56:31 compute-0 nova_compute[260603]: </domain>
Oct 02 08:56:31 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.705 2 INFO nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully detached device tap3d7e333e-5e from instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 from the live domain config.
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.707 2 DEBUG nova.virt.libvirt.vif [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.707 2 DEBUG nova.network.os_vif_util [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:56:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.707 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 582d804a-4891-4cd3-8540-2cfd9f71444e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.708 2 DEBUG nova.network.os_vif_util [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.709 2 DEBUG os_vif [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:56:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.709 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[16c9af7a-777d-400c-82c2-600ecbb866c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d7e333e-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.710 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e namespace which is not needed anymore
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.722 2 INFO os_vif [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e')
Oct 02 08:56:31 compute-0 nova_compute[260603]: 2025-10-02 08:56:31.723 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:56:31</nova:creationTime>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct 02 08:56:31 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:56:31 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:31 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:56:31 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:56:31 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:56:31 compute-0 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [NOTICE]   (395578) : haproxy version is 2.8.14-c23fe91
Oct 02 08:56:31 compute-0 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [NOTICE]   (395578) : path to executable is /usr/sbin/haproxy
Oct 02 08:56:31 compute-0 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [WARNING]  (395578) : Exiting Master process...
Oct 02 08:56:31 compute-0 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [ALERT]    (395578) : Current worker (395585) exited with code 143 (Terminated)
Oct 02 08:56:31 compute-0 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [WARNING]  (395578) : All workers exited. Exiting... (0)
Oct 02 08:56:31 compute-0 systemd[1]: libpod-47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5.scope: Deactivated successfully.
Oct 02 08:56:31 compute-0 podman[395615]: 2025-10-02 08:56:31.908067596 +0000 UTC m=+0.070012480 container died 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5-userdata-shm.mount: Deactivated successfully.
Oct 02 08:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c9dfa101d84a32dcff657b7d40a07ab3dd078bf7004b1c5d61a78dcd0c932fc-merged.mount: Deactivated successfully.
Oct 02 08:56:31 compute-0 podman[395615]: 2025-10-02 08:56:31.973179961 +0000 UTC m=+0.135124785 container cleanup 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 08:56:31 compute-0 systemd[1]: libpod-conmon-47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5.scope: Deactivated successfully.
Oct 02 08:56:32 compute-0 podman[395645]: 2025-10-02 08:56:32.076777605 +0000 UTC m=+0.063614268 container remove 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.083 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7bdcbe-f569-4839-9a75-53fc4ee57439]: (4, ('Thu Oct  2 08:56:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e (47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5)\n47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5\nThu Oct  2 08:56:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e (47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5)\n47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.085 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3314cb5e-0c4d-425c-8540-261463de944a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.087 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap582d804a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:32 compute-0 kernel: tap582d804a-40: left promiscuous mode
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.107 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[896028fe-5e32-4cfe-b75a-15a7b30e5ff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.136 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99bbda72-4621-41f0-881c-9ab2c40af85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e72ddf7e-eb3c-4e97-9f1e-c0e31aa0dd89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.155 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b67af64a-d9d3-4c1a-bbda-cc688c8fa8cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626882, 'reachable_time': 16983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395660, 'error': None, 'target': 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.158 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:56:32 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.158 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[dbad63d6-d65e-4eb3-94f8-8a72ea0f0843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d582d804a\x2d4891\x2d4cd3\x2d8540\x2d2cfd9f71444e.mount: Deactivated successfully.
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.400 2 DEBUG nova.network.neutron [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updated VIF entry in instance network info cache for port 3d7e333e-5ee7-4373-b49b-3f78fb696f43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.401 2 DEBUG nova.network.neutron [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.420 2 DEBUG oslo_concurrency.lockutils [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.445 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.445 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.446 2 DEBUG nova.network.neutron [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.483 2 DEBUG nova.compute.manager [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-deleted-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.483 2 INFO nova.compute.manager [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Neutron deleted interface 3d7e333e-5ee7-4373-b49b-3f78fb696f43; detaching it from the instance and deleting it from the info cache
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.484 2 DEBUG nova.network.neutron [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.506 2 DEBUG nova.objects.instance [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'system_metadata' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.532 2 DEBUG nova.objects.instance [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'flavor' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.553 2 DEBUG nova.virt.libvirt.vif [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.554 2 DEBUG nova.network.os_vif_util [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.554 2 DEBUG nova.network.os_vif_util [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.558 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.562 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface>not found in domain: <domain type='kvm' id='160'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <name>instance-0000007e</name>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:56:31</nova:creationTime>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:56:32 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <system>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='serial'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='uuid'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </system>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <os>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </os>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <features>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </features>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk' index='2'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </source>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config' index='1'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </source>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:00:df:98'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target dev='tap3e0191b3-54'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </target>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </console>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <video>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </video>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c271,c793</label>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c271,c793</imagelabel>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:56:32 compute-0 nova_compute[260603]: </domain>
Oct 02 08:56:32 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.562 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.567 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface>not found in domain: <domain type='kvm' id='160'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <name>instance-0000007e</name>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:56:31</nova:creationTime>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:56:32 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <resource>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </resource>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <system>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='serial'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='uuid'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </system>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <os>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </os>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <features>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </features>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk' index='2'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </source>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config' index='1'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </source>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </controller>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:00:df:98'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target dev='tap3e0191b3-54'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       </target>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </console>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </input>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </graphics>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <video>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </video>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c271,c793</label>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c271,c793</imagelabel>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 08:56:32 compute-0 nova_compute[260603]: </domain>
Oct 02 08:56:32 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.568 2 WARNING nova.virt.libvirt.driver [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Detaching interface fa:16:3e:26:a9:81 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap3d7e333e-5e' not found.
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.568 2 DEBUG nova.virt.libvirt.vif [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.569 2 DEBUG nova.network.os_vif_util [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.569 2 DEBUG nova.network.os_vif_util [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.570 2 DEBUG os_vif [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d7e333e-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.575 2 INFO os_vif [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e')
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.575 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:56:32</nova:creationTime>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct 02 08:56:32 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 08:56:32 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:56:32 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:56:32 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:56:32 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.871 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.871 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.872 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.872 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.872 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.872 2 WARNING nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 for instance with vm_state active and task_state None.
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-unplugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-unplugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.874 2 WARNING nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-unplugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 for instance with vm_state active and task_state None.
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.874 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.874 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.874 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.875 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.875 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:56:32 compute-0 nova_compute[260603]: 2025-10-02 08:56:32.875 2 WARNING nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 for instance with vm_state active and task_state None.
Oct 02 08:56:32 compute-0 ceph-mon[74477]: pgmap v2352: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 02 08:56:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Oct 02 08:56:33 compute-0 nova_compute[260603]: 2025-10-02 08:56:33.801 2 INFO nova.network.neutron [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Port 3d7e333e-5ee7-4373-b49b-3f78fb696f43 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 08:56:33 compute-0 nova_compute[260603]: 2025-10-02 08:56:33.801 2 DEBUG nova.network.neutron [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:56:33 compute-0 nova_compute[260603]: 2025-10-02 08:56:33.835 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:56:33 compute-0 nova_compute[260603]: 2025-10-02 08:56:33.855 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-3d7e333e-5ee7-4373-b49b-3f78fb696f43" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:33 compute-0 ovn_controller[152344]: 2025-10-02T08:56:33Z|01367|binding|INFO|Releasing lport ab6e362e-db4a-479c-a0fc-ea7c3db44980 from this chassis (sb_readonly=0)
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.741 2 DEBUG nova.compute.manager [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.741 2 DEBUG nova.compute.manager [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing instance network info cache due to event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.742 2 DEBUG oslo_concurrency.lockutils [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.742 2 DEBUG oslo_concurrency.lockutils [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.743 2 DEBUG nova.network.neutron [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:56:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:34.837 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.875 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.876 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.876 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.877 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.877 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.879 2 INFO nova.compute.manager [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Terminating instance
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.881 2 DEBUG nova.compute.manager [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:56:34 compute-0 ceph-mon[74477]: pgmap v2353: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Oct 02 08:56:34 compute-0 kernel: tap3e0191b3-54 (unregistering): left promiscuous mode
Oct 02 08:56:34 compute-0 NetworkManager[45129]: <info>  [1759395394.9731] device (tap3e0191b3-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:34 compute-0 nova_compute[260603]: 2025-10-02 08:56:34.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:34 compute-0 ovn_controller[152344]: 2025-10-02T08:56:34Z|01368|binding|INFO|Releasing lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 from this chassis (sb_readonly=0)
Oct 02 08:56:34 compute-0 ovn_controller[152344]: 2025-10-02T08:56:34Z|01369|binding|INFO|Setting lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 down in Southbound
Oct 02 08:56:34 compute-0 ovn_controller[152344]: 2025-10-02T08:56:34Z|01370|binding|INFO|Removing iface tap3e0191b3-54 ovn-installed in OVS
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:35 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 02 08:56:35 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007e.scope: Consumed 15.290s CPU time.
Oct 02 08:56:35 compute-0 systemd-machined[214636]: Machine qemu-160-instance-0000007e terminated.
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.071 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:df:98 10.100.0.3'], port_security=['fa:16:3e:00:df:98 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ca3ac45-6851-48dd-917a-457549b659ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f31abdd-db09-4203-b1b9-483c21cac467', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95f926d6-852b-4212-98b8-ed01a74b48f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3e0191b3-5405-4fe1-ba86-56a0b092a5d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.073 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 in datapath 0ca3ac45-6851-48dd-917a-457549b659ba unbound from our chassis
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.075 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ca3ac45-6851-48dd-917a-457549b659ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.076 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6efac0-f3df-46fe-839c-58d715bf3c59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.077 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba namespace which is not needed anymore
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.152 2 INFO nova.virt.libvirt.driver [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance destroyed successfully.
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.153 2 DEBUG nova.objects.instance [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.169 2 DEBUG nova.virt.libvirt.vif [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.170 2 DEBUG nova.network.os_vif_util [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.171 2 DEBUG nova.network.os_vif_util [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.172 2 DEBUG os_vif [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e0191b3-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.179 2 INFO os_vif [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54')
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:35 compute-0 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [NOTICE]   (395321) : haproxy version is 2.8.14-c23fe91
Oct 02 08:56:35 compute-0 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [NOTICE]   (395321) : path to executable is /usr/sbin/haproxy
Oct 02 08:56:35 compute-0 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [WARNING]  (395321) : Exiting Master process...
Oct 02 08:56:35 compute-0 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [ALERT]    (395321) : Current worker (395323) exited with code 143 (Terminated)
Oct 02 08:56:35 compute-0 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [WARNING]  (395321) : All workers exited. Exiting... (0)
Oct 02 08:56:35 compute-0 systemd[1]: libpod-117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60.scope: Deactivated successfully.
Oct 02 08:56:35 compute-0 podman[395711]: 2025-10-02 08:56:35.267548223 +0000 UTC m=+0.052056241 container died 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:56:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60-userdata-shm.mount: Deactivated successfully.
Oct 02 08:56:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-931c6303f091c199c5789a98492e9e4f99efc35fbf41e4ee1e289eeeb8eb185b-merged.mount: Deactivated successfully.
Oct 02 08:56:35 compute-0 podman[395711]: 2025-10-02 08:56:35.312887651 +0000 UTC m=+0.097395679 container cleanup 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 08:56:35 compute-0 systemd[1]: libpod-conmon-117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60.scope: Deactivated successfully.
Oct 02 08:56:35 compute-0 podman[395744]: 2025-10-02 08:56:35.405047082 +0000 UTC m=+0.063390250 container remove 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.413 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c61ffbb5-6a6b-45a0-8565-b6e341812377]: (4, ('Thu Oct  2 08:56:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba (117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60)\n117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60\nThu Oct  2 08:56:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba (117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60)\n117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.417 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4621ec30-7c7a-4a90-94dd-203c50871686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.418 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ca3ac45-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:35 compute-0 kernel: tap0ca3ac45-60: left promiscuous mode
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.453 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9dd8d3-67d4-42c2-a729-78ad46f9a152]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.482 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[216a8a4d-c66c-43d6-b54b-ae5972a58eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86a113c8-1c28-4959-bdca-0fe582ac4804]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7540c1-0193-416a-a753-5f67a4597255]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623363, 'reachable_time': 35620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395760, 'error': None, 'target': 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.516 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:56:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.516 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[cac5e413-b714-4791-a0ce-64de4113d8ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:56:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d0ca3ac45\x2d6851\x2d48dd\x2d917a\x2d457549b659ba.mount: Deactivated successfully.
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.596 2 INFO nova.virt.libvirt.driver [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Deleting instance files /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_del
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.596 2 INFO nova.virt.libvirt.driver [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Deletion of /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_del complete
Oct 02 08:56:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1023 B/s wr, 0 op/s
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.799 2 INFO nova.compute.manager [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.801 2 DEBUG oslo.service.loopingcall [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.801 2 DEBUG nova.compute.manager [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:56:35 compute-0 nova_compute[260603]: 2025-10-02 08:56:35.802 2 DEBUG nova.network.neutron [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:56:36 compute-0 sshd-session[395762]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 08:56:36 compute-0 sshd-session[395762]: Connection reset by 45.140.17.97 port 19936
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.573 2 DEBUG nova.network.neutron [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updated VIF entry in instance network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.574 2 DEBUG nova.network.neutron [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.633 2 DEBUG oslo_concurrency.lockutils [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.683 2 DEBUG nova.network.neutron [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.746 2 INFO nova.compute.manager [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Took 0.94 seconds to deallocate network for instance.
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.822 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.823 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.853 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-unplugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.854 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.854 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.855 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.855 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-unplugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.855 2 WARNING nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-unplugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 for instance with vm_state deleted and task_state None.
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.856 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.856 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.857 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.857 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.857 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.857 2 WARNING nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 for instance with vm_state deleted and task_state None.
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.858 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-deleted-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:56:36 compute-0 nova_compute[260603]: 2025-10-02 08:56:36.884 2 DEBUG oslo_concurrency.processutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:56:36 compute-0 ceph-mon[74477]: pgmap v2354: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1023 B/s wr, 0 op/s
Oct 02 08:56:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:56:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557312375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:56:37 compute-0 nova_compute[260603]: 2025-10-02 08:56:37.321 2 DEBUG oslo_concurrency.processutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:56:37 compute-0 nova_compute[260603]: 2025-10-02 08:56:37.331 2 DEBUG nova.compute.provider_tree [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:56:37 compute-0 nova_compute[260603]: 2025-10-02 08:56:37.353 2 DEBUG nova.scheduler.client.report [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:56:37 compute-0 nova_compute[260603]: 2025-10-02 08:56:37.388 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:37 compute-0 nova_compute[260603]: 2025-10-02 08:56:37.430 2 INFO nova.scheduler.client.report [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13
Oct 02 08:56:37 compute-0 nova_compute[260603]: 2025-10-02 08:56:37.501 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 66 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Oct 02 08:56:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1557312375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:56:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002623116697597187 of space, bias 1.0, pg target 0.0786935009279156 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:56:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:56:38 compute-0 ceph-mon[74477]: pgmap v2355: 305 pgs: 305 active+clean; 66 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Oct 02 08:56:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 08:56:40 compute-0 nova_compute[260603]: 2025-10-02 08:56:40.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:40 compute-0 nova_compute[260603]: 2025-10-02 08:56:40.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:40 compute-0 nova_compute[260603]: 2025-10-02 08:56:40.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:40 compute-0 nova_compute[260603]: 2025-10-02 08:56:40.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:40 compute-0 ceph-mon[74477]: pgmap v2356: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 08:56:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:56:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1458 writes, 6553 keys, 1458 commit groups, 1.0 writes per commit group, ingest: 9.16 MB, 0.02 MB/s
                                           Interval WAL: 1458 writes, 1458 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    109.7      0.55              0.23        34    0.016       0      0       0.0       0.0
                                             L6      1/0    8.07 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5    165.9    138.9      1.94              0.93        33    0.059    193K    18K       0.0       0.0
                                            Sum      1/0    8.07 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.5    129.2    132.5      2.49              1.16        67    0.037    193K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.1    124.9    127.2      0.42              0.22        10    0.042     36K   2569       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    165.9    138.9      1.94              0.93        33    0.059    193K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    110.7      0.54              0.23        33    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.059, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.32 GB write, 0.08 MB/s write, 0.31 GB read, 0.08 MB/s read, 2.5 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 36.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000285 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2348,34.79 MB,11.4427%) FilterBlock(68,549.05 KB,0.176375%) IndexBlock(68,930.86 KB,0.299027%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 08:56:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 08:56:42 compute-0 ceph-mon[74477]: pgmap v2357: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 08:56:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:43 compute-0 nova_compute[260603]: 2025-10-02 08:56:43.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:43 compute-0 nova_compute[260603]: 2025-10-02 08:56:43.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:56:43 compute-0 nova_compute[260603]: 2025-10-02 08:56:43.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:56:43 compute-0 nova_compute[260603]: 2025-10-02 08:56:43.535 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:56:43 compute-0 nova_compute[260603]: 2025-10-02 08:56:43.536 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 08:56:45 compute-0 ceph-mon[74477]: pgmap v2358: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 02 08:56:45 compute-0 nova_compute[260603]: 2025-10-02 08:56:45.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:45 compute-0 nova_compute[260603]: 2025-10-02 08:56:45.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:56:47 compute-0 ceph-mon[74477]: pgmap v2359: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:56:47 compute-0 nova_compute[260603]: 2025-10-02 08:56:47.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:56:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:49 compute-0 ceph-mon[74477]: pgmap v2360: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:56:49 compute-0 nova_compute[260603]: 2025-10-02 08:56:49.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:49 compute-0 nova_compute[260603]: 2025-10-02 08:56:49.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:49 compute-0 nova_compute[260603]: 2025-10-02 08:56:49.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:49 compute-0 nova_compute[260603]: 2025-10-02 08:56:49.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:49 compute-0 nova_compute[260603]: 2025-10-02 08:56:49.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:56:49 compute-0 nova_compute[260603]: 2025-10-02 08:56:49.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:56:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 8 op/s
Oct 02 08:56:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:56:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/123398550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.073 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.151 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395395.1505415, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.152 2 INFO nova.compute.manager [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] VM Stopped (Lifecycle Event)
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.175 2 DEBUG nova.compute.manager [None req-ad7ce0f9-e662-4aa1-b6cd-dcb8f72785a9 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.384 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.386 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3693MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.387 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.388 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.490 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.490 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:56:50 compute-0 nova_compute[260603]: 2025-10-02 08:56:50.516 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:56:51 compute-0 ceph-mon[74477]: pgmap v2361: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 8 op/s
Oct 02 08:56:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/123398550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:56:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:56:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738804684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:56:51 compute-0 nova_compute[260603]: 2025-10-02 08:56:51.086 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:56:51 compute-0 nova_compute[260603]: 2025-10-02 08:56:51.096 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:56:51 compute-0 nova_compute[260603]: 2025-10-02 08:56:51.122 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:56:51 compute-0 nova_compute[260603]: 2025-10-02 08:56:51.161 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:56:51 compute-0 nova_compute[260603]: 2025-10-02 08:56:51.162 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/738804684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:56:52 compute-0 nova_compute[260603]: 2025-10-02 08:56:52.164 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:52 compute-0 nova_compute[260603]: 2025-10-02 08:56:52.165 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:52 compute-0 nova_compute[260603]: 2025-10-02 08:56:52.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:53 compute-0 ceph-mon[74477]: pgmap v2362: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:55 compute-0 ceph-mon[74477]: pgmap v2363: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:55 compute-0 podman[395833]: 2025-10-02 08:56:55.063939958 +0000 UTC m=+0.112752897 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 08:56:55 compute-0 podman[395832]: 2025-10-02 08:56:55.097277345 +0000 UTC m=+0.152686722 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:56:55 compute-0 nova_compute[260603]: 2025-10-02 08:56:55.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:55 compute-0 nova_compute[260603]: 2025-10-02 08:56:55.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:56:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:55 compute-0 sudo[395877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:56:55 compute-0 sudo[395877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:55 compute-0 sudo[395877]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:55 compute-0 sudo[395902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:56:55 compute-0 sudo[395902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:55 compute-0 sudo[395902]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:55 compute-0 sudo[395927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:56:55 compute-0 sudo[395927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:55 compute-0 sudo[395927]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:56 compute-0 sudo[395952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:56:56 compute-0 sudo[395952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:56 compute-0 nova_compute[260603]: 2025-10-02 08:56:56.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:56 compute-0 sudo[395952]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:56:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:56:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:56:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:56:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:56:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:56:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8443ee5c-ef44-463d-9d4e-05023784aa43 does not exist
Oct 02 08:56:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a5afbcef-4129-4616-8141-bdabc7eb12eb does not exist
Oct 02 08:56:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9b68f3fd-6814-4ec4-8764-3e35dd67e807 does not exist
Oct 02 08:56:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:56:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:56:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:56:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:56:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:56:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:56:56 compute-0 sudo[396008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:56:56 compute-0 sudo[396008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:56 compute-0 sudo[396008]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:56 compute-0 sudo[396033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:56:56 compute-0 sudo[396033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:56 compute-0 sudo[396033]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:57 compute-0 sudo[396058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:56:57 compute-0 sudo[396058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:57 compute-0 sudo[396058]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:57 compute-0 ceph-mon[74477]: pgmap v2364: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:56:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:56:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:56:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:56:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:56:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:56:57 compute-0 sudo[396083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:56:57 compute-0 sudo[396083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:57 compute-0 nova_compute[260603]: 2025-10-02 08:56:57.420 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:57 compute-0 nova_compute[260603]: 2025-10-02 08:56:57.420 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:57 compute-0 nova_compute[260603]: 2025-10-02 08:56:57.449 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:56:57 compute-0 podman[396148]: 2025-10-02 08:56:57.521353047 +0000 UTC m=+0.052368562 container create 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 08:56:57 compute-0 nova_compute[260603]: 2025-10-02 08:56:57.547 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:57 compute-0 nova_compute[260603]: 2025-10-02 08:56:57.549 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:57 compute-0 nova_compute[260603]: 2025-10-02 08:56:57.559 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:56:57 compute-0 nova_compute[260603]: 2025-10-02 08:56:57.559 2 INFO nova.compute.claims [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:56:57 compute-0 systemd[1]: Started libpod-conmon-36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f.scope.
Oct 02 08:56:57 compute-0 podman[396148]: 2025-10-02 08:56:57.496590311 +0000 UTC m=+0.027605796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:56:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:56:57 compute-0 podman[396148]: 2025-10-02 08:56:57.648607681 +0000 UTC m=+0.179623186 container init 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 08:56:57 compute-0 podman[396148]: 2025-10-02 08:56:57.660973674 +0000 UTC m=+0.191989189 container start 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:56:57 compute-0 podman[396148]: 2025-10-02 08:56:57.664444424 +0000 UTC m=+0.195459939 container attach 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 08:56:57 compute-0 peaceful_swirles[396164]: 167 167
Oct 02 08:56:57 compute-0 systemd[1]: libpod-36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f.scope: Deactivated successfully.
Oct 02 08:56:57 compute-0 conmon[396164]: conmon 36670a450c502fe0509c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f.scope/container/memory.events
Oct 02 08:56:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:57 compute-0 podman[396169]: 2025-10-02 08:56:57.735206846 +0000 UTC m=+0.041699102 container died 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:56:57 compute-0 nova_compute[260603]: 2025-10-02 08:56:57.738 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:56:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd3b859adf3aa941c183d995a77f083c383538817fdb0ed9c52ff96281600239-merged.mount: Deactivated successfully.
Oct 02 08:56:57 compute-0 podman[396169]: 2025-10-02 08:56:57.779702187 +0000 UTC m=+0.086194453 container remove 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:56:57 compute-0 systemd[1]: libpod-conmon-36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f.scope: Deactivated successfully.
Oct 02 08:56:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:56:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:56:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:56:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:56:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:56:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:56:58 compute-0 podman[396211]: 2025-10-02 08:56:58.052547037 +0000 UTC m=+0.065728844 container create 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:56:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:56:58 compute-0 systemd[1]: Started libpod-conmon-9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5.scope.
Oct 02 08:56:58 compute-0 podman[396211]: 2025-10-02 08:56:58.032105889 +0000 UTC m=+0.045287726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:56:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:56:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:56:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637743910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:56:58 compute-0 podman[396211]: 2025-10-02 08:56:58.216090903 +0000 UTC m=+0.229272790 container init 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 08:56:58 compute-0 podman[396211]: 2025-10-02 08:56:58.225959445 +0000 UTC m=+0.239141252 container start 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 08:56:58 compute-0 podman[396211]: 2025-10-02 08:56:58.229710544 +0000 UTC m=+0.242892441 container attach 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.232 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.242 2 DEBUG nova.compute.provider_tree [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.260 2 DEBUG nova.scheduler.client.report [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.293 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.294 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.371 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.372 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.401 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.422 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.545 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.547 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.548 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Creating image(s)
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.583 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.621 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.658 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.663 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.724 2 DEBUG nova.policy [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.773 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.775 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.775 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.776 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.801 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:56:58 compute-0 nova_compute[260603]: 2025-10-02 08:56:58.806 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:56:59 compute-0 ceph-mon[74477]: pgmap v2365: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2637743910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.136 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.240 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:56:59 compute-0 relaxed_solomon[396227]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:56:59 compute-0 relaxed_solomon[396227]: --> relative data size: 1.0
Oct 02 08:56:59 compute-0 relaxed_solomon[396227]: --> All data devices are unavailable
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.362 2 DEBUG nova.objects.instance [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:56:59 compute-0 systemd[1]: libpod-9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5.scope: Deactivated successfully.
Oct 02 08:56:59 compute-0 podman[396211]: 2025-10-02 08:56:59.38373033 +0000 UTC m=+1.396912167 container died 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.384 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:56:59 compute-0 systemd[1]: libpod-9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5.scope: Consumed 1.072s CPU time.
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.384 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Ensure instance console log exists: /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.385 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.385 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.385 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:56:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9-merged.mount: Deactivated successfully.
Oct 02 08:56:59 compute-0 podman[396211]: 2025-10-02 08:56:59.449604039 +0000 UTC m=+1.462785836 container remove 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:56:59 compute-0 systemd[1]: libpod-conmon-9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5.scope: Deactivated successfully.
Oct 02 08:56:59 compute-0 sudo[396083]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:56:59 compute-0 sudo[396439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:56:59 compute-0 sudo[396439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:59 compute-0 sudo[396439]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:59 compute-0 sudo[396464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:56:59 compute-0 sudo[396464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:59 compute-0 sudo[396464]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:56:59 compute-0 nova_compute[260603]: 2025-10-02 08:56:59.720 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Successfully created port: d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:56:59 compute-0 sudo[396489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:56:59 compute-0 sudo[396489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:56:59 compute-0 sudo[396489]: pam_unix(sudo:session): session closed for user root
Oct 02 08:56:59 compute-0 sudo[396514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:56:59 compute-0 sudo[396514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:00 compute-0 podman[396581]: 2025-10-02 08:57:00.342081304 +0000 UTC m=+0.050659818 container create 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:57:00 compute-0 systemd[1]: Started libpod-conmon-0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4.scope.
Oct 02 08:57:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:57:00 compute-0 podman[396581]: 2025-10-02 08:57:00.321314054 +0000 UTC m=+0.029892648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:57:00 compute-0 podman[396581]: 2025-10-02 08:57:00.430207817 +0000 UTC m=+0.138786331 container init 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:57:00 compute-0 podman[396581]: 2025-10-02 08:57:00.441904308 +0000 UTC m=+0.150482822 container start 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:57:00 compute-0 podman[396581]: 2025-10-02 08:57:00.44543986 +0000 UTC m=+0.154018374 container attach 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 08:57:00 compute-0 suspicious_pasteur[396599]: 167 167
Oct 02 08:57:00 compute-0 systemd[1]: libpod-0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4.scope: Deactivated successfully.
Oct 02 08:57:00 compute-0 podman[396581]: 2025-10-02 08:57:00.449304343 +0000 UTC m=+0.157882867 container died 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:57:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-aca76938ff4b65222f9daeaeb875b5820ae3c1be1bf7be924ce40d5c18999d45-merged.mount: Deactivated successfully.
Oct 02 08:57:00 compute-0 podman[396581]: 2025-10-02 08:57:00.488191575 +0000 UTC m=+0.196770109 container remove 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 08:57:00 compute-0 systemd[1]: libpod-conmon-0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4.scope: Deactivated successfully.
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.684 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Successfully updated port: d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:57:00 compute-0 podman[396623]: 2025-10-02 08:57:00.698427661 +0000 UTC m=+0.043545642 container create 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.704 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.705 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.705 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:57:00 compute-0 systemd[1]: Started libpod-conmon-9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5.scope.
Oct 02 08:57:00 compute-0 podman[396623]: 2025-10-02 08:57:00.677826227 +0000 UTC m=+0.022944208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:57:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.809 2 DEBUG nova.compute.manager [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.809 2 DEBUG nova.compute.manager [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing instance network info cache due to event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.810 2 DEBUG oslo_concurrency.lockutils [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:57:00 compute-0 podman[396623]: 2025-10-02 08:57:00.810846505 +0000 UTC m=+0.155964536 container init 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:57:00 compute-0 podman[396623]: 2025-10-02 08:57:00.819741017 +0000 UTC m=+0.164859018 container start 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:57:00 compute-0 podman[396623]: 2025-10-02 08:57:00.824176528 +0000 UTC m=+0.169294579 container attach 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 08:57:00 compute-0 nova_compute[260603]: 2025-10-02 08:57:00.899 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:57:01 compute-0 ceph-mon[74477]: pgmap v2366: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]: {
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:     "0": [
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:         {
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "devices": [
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "/dev/loop3"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             ],
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_name": "ceph_lv0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_size": "21470642176",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "name": "ceph_lv0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "tags": {
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cluster_name": "ceph",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.crush_device_class": "",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.encrypted": "0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osd_id": "0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.type": "block",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.vdo": "0"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             },
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "type": "block",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "vg_name": "ceph_vg0"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:         }
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:     ],
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:     "1": [
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:         {
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "devices": [
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "/dev/loop4"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             ],
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_name": "ceph_lv1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_size": "21470642176",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "name": "ceph_lv1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "tags": {
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cluster_name": "ceph",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.crush_device_class": "",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.encrypted": "0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osd_id": "1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.type": "block",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.vdo": "0"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             },
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "type": "block",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "vg_name": "ceph_vg1"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:         }
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:     ],
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:     "2": [
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:         {
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "devices": [
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "/dev/loop5"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             ],
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_name": "ceph_lv2",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_size": "21470642176",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "name": "ceph_lv2",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "tags": {
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.cluster_name": "ceph",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.crush_device_class": "",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.encrypted": "0",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osd_id": "2",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.type": "block",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:                 "ceph.vdo": "0"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             },
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "type": "block",
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:             "vg_name": "ceph_vg2"
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:         }
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]:     ]
Oct 02 08:57:01 compute-0 quirky_bhaskara[396639]: }
Oct 02 08:57:01 compute-0 systemd[1]: libpod-9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5.scope: Deactivated successfully.
Oct 02 08:57:01 compute-0 podman[396623]: 2025-10-02 08:57:01.557552918 +0000 UTC m=+0.902670879 container died 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71-merged.mount: Deactivated successfully.
Oct 02 08:57:01 compute-0 podman[396623]: 2025-10-02 08:57:01.631004267 +0000 UTC m=+0.976122238 container remove 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 08:57:01 compute-0 systemd[1]: libpod-conmon-9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5.scope: Deactivated successfully.
Oct 02 08:57:01 compute-0 sudo[396514]: pam_unix(sudo:session): session closed for user root
Oct 02 08:57:01 compute-0 podman[396658]: 2025-10-02 08:57:01.692800566 +0000 UTC m=+0.094929671 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:57:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:57:01 compute-0 podman[396648]: 2025-10-02 08:57:01.716785036 +0000 UTC m=+0.116769433 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 08:57:01 compute-0 sudo[396696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:57:01 compute-0 sudo[396696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:57:01 compute-0 sudo[396696]: pam_unix(sudo:session): session closed for user root
Oct 02 08:57:01 compute-0 sudo[396725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:57:01 compute-0 sudo[396725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:57:01 compute-0 sudo[396725]: pam_unix(sudo:session): session closed for user root
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.879 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:57:01 compute-0 sudo[396750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:57:01 compute-0 sudo[396750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:57:01 compute-0 sudo[396750]: pam_unix(sudo:session): session closed for user root
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.908 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.908 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance network_info: |[{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.908 2 DEBUG oslo_concurrency.lockutils [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.908 2 DEBUG nova.network.neutron [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.911 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start _get_guest_xml network_info=[{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.916 2 WARNING nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.921 2 DEBUG nova.virt.libvirt.host [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.922 2 DEBUG nova.virt.libvirt.host [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.925 2 DEBUG nova.virt.libvirt.host [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.925 2 DEBUG nova.virt.libvirt.host [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.925 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.926 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.926 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.926 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.928 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.928 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.928 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:57:01 compute-0 nova_compute[260603]: 2025-10-02 08:57:01.931 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:01 compute-0 sudo[396775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:57:01 compute-0 sudo[396775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:57:02 compute-0 podman[396860]: 2025-10-02 08:57:02.31367477 +0000 UTC m=+0.050743070 container create c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:57:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:57:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006322213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:57:02 compute-0 systemd[1]: Started libpod-conmon-c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56.scope.
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.368 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:02 compute-0 podman[396860]: 2025-10-02 08:57:02.291671542 +0000 UTC m=+0.028739842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:57:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:57:02 compute-0 podman[396860]: 2025-10-02 08:57:02.400339087 +0000 UTC m=+0.137407367 container init c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:57:02 compute-0 podman[396860]: 2025-10-02 08:57:02.410140258 +0000 UTC m=+0.147208518 container start c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.409 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:02 compute-0 podman[396860]: 2025-10-02 08:57:02.413337349 +0000 UTC m=+0.150405609 container attach c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.413 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:02 compute-0 funny_carson[396879]: 167 167
Oct 02 08:57:02 compute-0 systemd[1]: libpod-c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56.scope: Deactivated successfully.
Oct 02 08:57:02 compute-0 podman[396860]: 2025-10-02 08:57:02.418017948 +0000 UTC m=+0.155086298 container died c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:57:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed77df4859749418d438cd2eb7d15d22706f844208b1287f7d6282d9b196ae71-merged.mount: Deactivated successfully.
Oct 02 08:57:02 compute-0 podman[396860]: 2025-10-02 08:57:02.460555326 +0000 UTC m=+0.197623586 container remove c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:57:02 compute-0 systemd[1]: libpod-conmon-c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56.scope: Deactivated successfully.
Oct 02 08:57:02 compute-0 podman[396921]: 2025-10-02 08:57:02.609793057 +0000 UTC m=+0.040109063 container create bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 08:57:02 compute-0 systemd[1]: Started libpod-conmon-bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74.scope.
Oct 02 08:57:02 compute-0 podman[396921]: 2025-10-02 08:57:02.591506428 +0000 UTC m=+0.021822444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:57:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:02 compute-0 podman[396921]: 2025-10-02 08:57:02.713615459 +0000 UTC m=+0.143931535 container init bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:57:02 compute-0 podman[396921]: 2025-10-02 08:57:02.724924417 +0000 UTC m=+0.155240433 container start bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:57:02 compute-0 podman[396921]: 2025-10-02 08:57:02.729040768 +0000 UTC m=+0.159356804 container attach bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 08:57:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:57:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126124369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.885 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.888 2 DEBUG nova.virt.libvirt.vif [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1392343358',display_name='tempest-TestNetworkBasicOps-server-1392343358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1392343358',id=127,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM7vAmstY/1wj8zilQusjnWxCH9HDy5ckxC0RSN69DfVAe97aVFxd/BU6NsgAvBbRW+DKoyLRAwETsr2HsjzpTSAXv0ZjguOo1+U6bZINpkB4yql6sk6skb6iUdvJViCMg==',key_name='tempest-TestNetworkBasicOps-399164218',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-k0j5xbkd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:56:58Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=44f2da71-ac35-415b-bb4b-6fbc3afe6cf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.889 2 DEBUG nova.network.os_vif_util [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.889 2 DEBUG nova.network.os_vif_util [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.891 2 DEBUG nova.objects.instance [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.915 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <uuid>44f2da71-ac35-415b-bb4b-6fbc3afe6cf9</uuid>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <name>instance-0000007f</name>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-1392343358</nova:name>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:57:01</nova:creationTime>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <nova:port uuid="d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424">
Oct 02 08:57:02 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <system>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <entry name="serial">44f2da71-ac35-415b-bb4b-6fbc3afe6cf9</entry>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <entry name="uuid">44f2da71-ac35-415b-bb4b-6fbc3afe6cf9</entry>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </system>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <os>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   </os>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <features>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   </features>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk">
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       </source>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config">
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       </source>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:57:02 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:1a:a0:55"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <target dev="tapd9fd4c9a-6e"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/console.log" append="off"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <video>
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </video>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:57:02 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:57:02 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:57:02 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:57:02 compute-0 nova_compute[260603]: </domain>
Oct 02 08:57:02 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.916 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Preparing to wait for external event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.917 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.917 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.917 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.918 2 DEBUG nova.virt.libvirt.vif [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1392343358',display_name='tempest-TestNetworkBasicOps-server-1392343358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1392343358',id=127,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM7vAmstY/1wj8zilQusjnWxCH9HDy5ckxC0RSN69DfVAe97aVFxd/BU6NsgAvBbRW+DKoyLRAwETsr2HsjzpTSAXv0ZjguOo1+U6bZINpkB4yql6sk6skb6iUdvJViCMg==',key_name='tempest-TestNetworkBasicOps-399164218',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-k0j5xbkd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:56:58Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=44f2da71-ac35-415b-bb4b-6fbc3afe6cf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.918 2 DEBUG nova.network.os_vif_util [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.918 2 DEBUG nova.network.os_vif_util [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.919 2 DEBUG os_vif [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9fd4c9a-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9fd4c9a-6e, col_values=(('external_ids', {'iface-id': 'd9fd4c9a-6e5f-4c8a-b17c-28f6fe795424', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:a0:55', 'vm-uuid': '44f2da71-ac35-415b-bb4b-6fbc3afe6cf9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:02 compute-0 NetworkManager[45129]: <info>  [1759395422.9281] manager: (tapd9fd4c9a-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.935 2 INFO os_vif [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e')
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.987 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.988 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.988 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:1a:a0:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:57:02 compute-0 nova_compute[260603]: 2025-10-02 08:57:02.988 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Using config drive
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.004 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:03 compute-0 ceph-mon[74477]: pgmap v2367: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:57:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2006322213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:57:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/126124369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.156 2 DEBUG nova.network.neutron [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updated VIF entry in instance network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.157 2 DEBUG nova.network.neutron [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.176 2 DEBUG oslo_concurrency.lockutils [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.352 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Creating config drive at /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.356 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5jh5is_6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.523 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5jh5is_6" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.552 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.557 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.728 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.729 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Deleting local config drive /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config because it was imported into RBD.
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]: {
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "osd_id": 2,
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "type": "bluestore"
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:     },
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "osd_id": 1,
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "type": "bluestore"
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:     },
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "osd_id": 0,
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:         "type": "bluestore"
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]:     }
Oct 02 08:57:03 compute-0 naughty_sinoussi[396956]: }
Oct 02 08:57:03 compute-0 systemd[1]: libpod-bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74.scope: Deactivated successfully.
Oct 02 08:57:03 compute-0 systemd[1]: libpod-bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74.scope: Consumed 1.023s CPU time.
Oct 02 08:57:03 compute-0 podman[396921]: 2025-10-02 08:57:03.753082613 +0000 UTC m=+1.183398589 container died bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 08:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020-merged.mount: Deactivated successfully.
Oct 02 08:57:03 compute-0 kernel: tapd9fd4c9a-6e: entered promiscuous mode
Oct 02 08:57:03 compute-0 NetworkManager[45129]: <info>  [1759395423.8065] manager: (tapd9fd4c9a-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Oct 02 08:57:03 compute-0 ovn_controller[152344]: 2025-10-02T08:57:03Z|01371|binding|INFO|Claiming lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 for this chassis.
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:03 compute-0 ovn_controller[152344]: 2025-10-02T08:57:03Z|01372|binding|INFO|d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424: Claiming fa:16:3e:1a:a0:55 10.100.0.9
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:03 compute-0 podman[396921]: 2025-10-02 08:57:03.819576732 +0000 UTC m=+1.249892708 container remove bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.825 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a0:55 10.100.0.9'], port_security=['fa:16:3e:1a:a0:55 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '44f2da71-ac35-415b-bb4b-6fbc3afe6cf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f973db-7622-438d-89a4-98949bc018c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38d1b2fd-ca30-472b-9abc-fb39d0f98549', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6dd5c4a2-6437-4d79-b5d7-6dfc11d7b30a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.826 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 in datapath 50f973db-7622-438d-89a4-98949bc018c7 bound to our chassis
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.827 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50f973db-7622-438d-89a4-98949bc018c7
Oct 02 08:57:03 compute-0 systemd[1]: libpod-conmon-bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74.scope: Deactivated successfully.
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.840 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[491c54c4-557a-4417-a59f-24c2d248322b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.842 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50f973db-71 in ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.843 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50f973db-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.844 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[629b2574-e653-4998-8384-cfb76e0cd022]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.844 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72729d87-e95e-4ec7-a70a-afe58c0ec815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 systemd-udevd[397076]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:57:03 compute-0 systemd-machined[214636]: New machine qemu-161-instance-0000007f.
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.855 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4461033e-2610-42f8-83f6-9e34e597cdf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 NetworkManager[45129]: <info>  [1759395423.8613] device (tapd9fd4c9a-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:57:03 compute-0 NetworkManager[45129]: <info>  [1759395423.8620] device (tapd9fd4c9a-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:57:03 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-0000007f.
Oct 02 08:57:03 compute-0 sudo[396775]: pam_unix(sudo:session): session closed for user root
Oct 02 08:57:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:57:03 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:57:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.885 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed5f0be-2455-470e-a21d-0f73244fb191]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:03 compute-0 ovn_controller[152344]: 2025-10-02T08:57:03Z|01373|binding|INFO|Setting lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 ovn-installed in OVS
Oct 02 08:57:03 compute-0 ovn_controller[152344]: 2025-10-02T08:57:03Z|01374|binding|INFO|Setting lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 up in Southbound
Oct 02 08:57:03 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 00cf357e-337a-4520-9f70-e52ecf4d11f6 does not exist
Oct 02 08:57:03 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3f4e2b77-cfef-43b7-b133-5d599384b783 does not exist
Oct 02 08:57:03 compute-0 nova_compute[260603]: 2025-10-02 08:57:03.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.942 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[515f21e0-0d26-46b6-b9e2-11d53d2b2433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.946 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9843866-546a-439f-8fc8-20cb5ec5973e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 NetworkManager[45129]: <info>  [1759395423.9477] manager: (tap50f973db-70): new Veth device (/org/freedesktop/NetworkManager/Devices/545)
Oct 02 08:57:03 compute-0 systemd-udevd[397079]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.974 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ceaf987-36e2-4525-b763-0c0d3e9de4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.977 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[20beffdb-6b71-4f41-b67e-96a832a68797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:03 compute-0 sudo[397085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:57:03 compute-0 sudo[397085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:57:04 compute-0 NetworkManager[45129]: <info>  [1759395424.0003] device (tap50f973db-70): carrier: link connected
Oct 02 08:57:04 compute-0 sudo[397085]: pam_unix(sudo:session): session closed for user root
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.006 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[da5a044b-8d75-47a0-ab40-881956ef037e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.022 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29695432-4423-42ed-a6d8-e94e7a21b8d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f973db-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:ed:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 389], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630260, 'reachable_time': 28777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397134, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.037 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5211b2-c9a5-4b04-a300-36df7b89be87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:eddf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630260, 'tstamp': 630260}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397146, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.055 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd596933-489b-4bd8-9e98-1dbcf7a7af52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f973db-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:ed:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 389], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630260, 'reachable_time': 28777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397158, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:04 compute-0 sudo[397133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:57:04 compute-0 sudo[397133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:57:04 compute-0 sudo[397133]: pam_unix(sudo:session): session closed for user root
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.091 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[45562fe8-0b12-4b13-b1b4-09ebcc3cf3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.170 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f42ac0c9-a155-4efe-b555-e67028bb14be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.172 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f973db-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.172 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.172 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f973db-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:04 compute-0 NetworkManager[45129]: <info>  [1759395424.1746] manager: (tap50f973db-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Oct 02 08:57:04 compute-0 kernel: tap50f973db-70: entered promiscuous mode
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.178 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50f973db-70, col_values=(('external_ids', {'iface-id': '02679a4c-7fb0-490c-8428-8668ec8ddf0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:04 compute-0 ovn_controller[152344]: 2025-10-02T08:57:04Z|01375|binding|INFO|Releasing lport 02679a4c-7fb0-490c-8428-8668ec8ddf0b from this chassis (sb_readonly=0)
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.213 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50f973db-7622-438d-89a4-98949bc018c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50f973db-7622-438d-89a4-98949bc018c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.214 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[515f0575-cbef-4829-b204-391758fbeb1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.215 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-50f973db-7622-438d-89a4-98949bc018c7
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/50f973db-7622-438d-89a4-98949bc018c7.pid.haproxy
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 50f973db-7622-438d-89a4-98949bc018c7
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:57:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.215 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'env', 'PROCESS_TAG=haproxy-50f973db-7622-438d-89a4-98949bc018c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50f973db-7622-438d-89a4-98949bc018c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.620 2 DEBUG nova.compute.manager [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.621 2 DEBUG oslo_concurrency.lockutils [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.621 2 DEBUG oslo_concurrency.lockutils [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.622 2 DEBUG oslo_concurrency.lockutils [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.622 2 DEBUG nova.compute.manager [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Processing event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:57:04 compute-0 podman[397233]: 2025-10-02 08:57:04.633361392 +0000 UTC m=+0.053810638 container create ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 08:57:04 compute-0 systemd[1]: Started libpod-conmon-ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f.scope.
Oct 02 08:57:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c715720152d167f38b377cf95f213b66460140411dee007ac5a683ebd4decaf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:57:04 compute-0 podman[397233]: 2025-10-02 08:57:04.608170983 +0000 UTC m=+0.028620219 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:57:04 compute-0 podman[397233]: 2025-10-02 08:57:04.711470198 +0000 UTC m=+0.131919544 container init ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:57:04 compute-0 podman[397233]: 2025-10-02 08:57:04.716669823 +0000 UTC m=+0.137119099 container start ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 08:57:04 compute-0 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [NOTICE]   (397252) : New worker (397254) forked
Oct 02 08:57:04 compute-0 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [NOTICE]   (397252) : Loading success.
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.882 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.884 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395424.881668, 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.884 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] VM Started (Lifecycle Event)
Oct 02 08:57:04 compute-0 ceph-mon[74477]: pgmap v2368: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:57:04 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:57:04 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.888 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.894 2 INFO nova.virt.libvirt.driver [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance spawned successfully.
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.895 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.920 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.928 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.934 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.935 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.936 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.937 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.937 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.938 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.974 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.975 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395424.8824563, 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:57:04 compute-0 nova_compute[260603]: 2025-10-02 08:57:04.975 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] VM Paused (Lifecycle Event)
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.008 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.013 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395424.8873615, 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.013 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] VM Resumed (Lifecycle Event)
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.023 2 INFO nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Took 6.48 seconds to spawn the instance on the hypervisor.
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.024 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.033 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.035 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.077 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.108 2 INFO nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Took 7.59 seconds to build instance.
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.130 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:05 compute-0 nova_compute[260603]: 2025-10-02 08:57:05.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:57:06 compute-0 nova_compute[260603]: 2025-10-02 08:57:06.695 2 DEBUG nova.compute.manager [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:06 compute-0 nova_compute[260603]: 2025-10-02 08:57:06.695 2 DEBUG oslo_concurrency.lockutils [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:06 compute-0 nova_compute[260603]: 2025-10-02 08:57:06.695 2 DEBUG oslo_concurrency.lockutils [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:06 compute-0 nova_compute[260603]: 2025-10-02 08:57:06.695 2 DEBUG oslo_concurrency.lockutils [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:06 compute-0 nova_compute[260603]: 2025-10-02 08:57:06.696 2 DEBUG nova.compute.manager [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] No waiting events found dispatching network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:57:06 compute-0 nova_compute[260603]: 2025-10-02 08:57:06.696 2 WARNING nova.compute.manager [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received unexpected event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 for instance with vm_state active and task_state None.
Oct 02 08:57:06 compute-0 ceph-mon[74477]: pgmap v2369: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:57:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Oct 02 08:57:07 compute-0 nova_compute[260603]: 2025-10-02 08:57:07.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:08 compute-0 ceph-mon[74477]: pgmap v2370: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Oct 02 08:57:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:57:09 compute-0 nova_compute[260603]: 2025-10-02 08:57:09.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:09 compute-0 ovn_controller[152344]: 2025-10-02T08:57:09Z|01376|binding|INFO|Releasing lport 02679a4c-7fb0-490c-8428-8668ec8ddf0b from this chassis (sb_readonly=0)
Oct 02 08:57:09 compute-0 NetworkManager[45129]: <info>  [1759395429.7798] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Oct 02 08:57:09 compute-0 NetworkManager[45129]: <info>  [1759395429.7814] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Oct 02 08:57:09 compute-0 ovn_controller[152344]: 2025-10-02T08:57:09Z|01377|binding|INFO|Releasing lport 02679a4c-7fb0-490c-8428-8668ec8ddf0b from this chassis (sb_readonly=0)
Oct 02 08:57:09 compute-0 nova_compute[260603]: 2025-10-02 08:57:09.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:09 compute-0 nova_compute[260603]: 2025-10-02 08:57:09.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:10 compute-0 nova_compute[260603]: 2025-10-02 08:57:10.145 2 DEBUG nova.compute.manager [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:10 compute-0 nova_compute[260603]: 2025-10-02 08:57:10.145 2 DEBUG nova.compute.manager [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing instance network info cache due to event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:57:10 compute-0 nova_compute[260603]: 2025-10-02 08:57:10.146 2 DEBUG oslo_concurrency.lockutils [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:57:10 compute-0 nova_compute[260603]: 2025-10-02 08:57:10.146 2 DEBUG oslo_concurrency.lockutils [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:57:10 compute-0 nova_compute[260603]: 2025-10-02 08:57:10.146 2 DEBUG nova.network.neutron [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:57:10 compute-0 nova_compute[260603]: 2025-10-02 08:57:10.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:10 compute-0 ceph-mon[74477]: pgmap v2371: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:57:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:57:11 compute-0 nova_compute[260603]: 2025-10-02 08:57:11.880 2 DEBUG nova.network.neutron [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updated VIF entry in instance network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:57:11 compute-0 nova_compute[260603]: 2025-10-02 08:57:11.880 2 DEBUG nova.network.neutron [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:57:11 compute-0 nova_compute[260603]: 2025-10-02 08:57:11.910 2 DEBUG oslo_concurrency.lockutils [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:57:12 compute-0 ceph-mon[74477]: pgmap v2372: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:57:12 compute-0 nova_compute[260603]: 2025-10-02 08:57:12.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:57:14 compute-0 ceph-mon[74477]: pgmap v2373: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:57:15 compute-0 nova_compute[260603]: 2025-10-02 08:57:15.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:57:16 compute-0 ceph-mon[74477]: pgmap v2374: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:57:17 compute-0 ovn_controller[152344]: 2025-10-02T08:57:17Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:a0:55 10.100.0.9
Oct 02 08:57:17 compute-0 ovn_controller[152344]: 2025-10-02T08:57:17Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:a0:55 10.100.0.9
Oct 02 08:57:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 103 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 96 op/s
Oct 02 08:57:17 compute-0 nova_compute[260603]: 2025-10-02 08:57:17.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:19 compute-0 ceph-mon[74477]: pgmap v2375: 305 pgs: 305 active+clean; 103 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 96 op/s
Oct 02 08:57:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 114 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 929 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 02 08:57:20 compute-0 nova_compute[260603]: 2025-10-02 08:57:20.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:21 compute-0 ceph-mon[74477]: pgmap v2376: 305 pgs: 305 active+clean; 114 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 929 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 02 08:57:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 114 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 08:57:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:57:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1835829058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:57:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:57:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1835829058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:57:22 compute-0 nova_compute[260603]: 2025-10-02 08:57:22.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:23 compute-0 ceph-mon[74477]: pgmap v2377: 305 pgs: 305 active+clean; 114 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 08:57:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1835829058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:57:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1835829058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:57:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:23 compute-0 nova_compute[260603]: 2025-10-02 08:57:23.443 2 INFO nova.compute.manager [None req-788b3887-a2da-42a8-a28c-6c0ed17f5037 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Get console output
Oct 02 08:57:23 compute-0 nova_compute[260603]: 2025-10-02 08:57:23.449 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:57:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:57:25 compute-0 ceph-mon[74477]: pgmap v2378: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:57:25 compute-0 nova_compute[260603]: 2025-10-02 08:57:25.169 2 DEBUG nova.compute.manager [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:25 compute-0 nova_compute[260603]: 2025-10-02 08:57:25.170 2 DEBUG nova.compute.manager [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing instance network info cache due to event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:57:25 compute-0 nova_compute[260603]: 2025-10-02 08:57:25.170 2 DEBUG oslo_concurrency.lockutils [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:57:25 compute-0 nova_compute[260603]: 2025-10-02 08:57:25.170 2 DEBUG oslo_concurrency.lockutils [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:57:25 compute-0 nova_compute[260603]: 2025-10-02 08:57:25.170 2 DEBUG nova.network.neutron [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:57:25 compute-0 nova_compute[260603]: 2025-10-02 08:57:25.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:57:26 compute-0 podman[397265]: 2025-10-02 08:57:26.000102593 +0000 UTC m=+0.058903508 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:57:26 compute-0 podman[397264]: 2025-10-02 08:57:26.020550081 +0000 UTC m=+0.084404886 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 08:57:26 compute-0 nova_compute[260603]: 2025-10-02 08:57:26.210 2 DEBUG nova.network.neutron [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updated VIF entry in instance network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:57:26 compute-0 nova_compute[260603]: 2025-10-02 08:57:26.211 2 DEBUG nova.network.neutron [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:57:26 compute-0 nova_compute[260603]: 2025-10-02 08:57:26.235 2 DEBUG oslo_concurrency.lockutils [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:57:26 compute-0 nova_compute[260603]: 2025-10-02 08:57:26.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:26.599 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:57:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:26.600 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:57:27 compute-0 ceph-mon[74477]: pgmap v2379: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:57:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:57:27 compute-0 nova_compute[260603]: 2025-10-02 08:57:27.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:57:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:57:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:57:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:57:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:57:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:57:28
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log']
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:57:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:57:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:57:29 compute-0 ceph-mon[74477]: pgmap v2380: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:57:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 981 KiB/s wr, 41 op/s
Oct 02 08:57:30 compute-0 nova_compute[260603]: 2025-10-02 08:57:30.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:30 compute-0 nova_compute[260603]: 2025-10-02 08:57:30.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:30 compute-0 nova_compute[260603]: 2025-10-02 08:57:30.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:57:31 compute-0 ceph-mon[74477]: pgmap v2381: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 981 KiB/s wr, 41 op/s
Oct 02 08:57:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 41 KiB/s wr, 6 op/s
Oct 02 08:57:32 compute-0 podman[397312]: 2025-10-02 08:57:32.008533654 +0000 UTC m=+0.064313919 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 08:57:32 compute-0 podman[397311]: 2025-10-02 08:57:32.008547805 +0000 UTC m=+0.067390228 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 08:57:32 compute-0 nova_compute[260603]: 2025-10-02 08:57:32.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:33 compute-0 ceph-mon[74477]: pgmap v2382: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 41 KiB/s wr, 6 op/s
Oct 02 08:57:33 compute-0 nova_compute[260603]: 2025-10-02 08:57:33.410 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:33 compute-0 nova_compute[260603]: 2025-10-02 08:57:33.411 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:33 compute-0 nova_compute[260603]: 2025-10-02 08:57:33.432 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:57:33 compute-0 nova_compute[260603]: 2025-10-02 08:57:33.519 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:33 compute-0 nova_compute[260603]: 2025-10-02 08:57:33.520 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:33 compute-0 nova_compute[260603]: 2025-10-02 08:57:33.532 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:57:33 compute-0 nova_compute[260603]: 2025-10-02 08:57:33.533 2 INFO nova.compute.claims [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:57:33 compute-0 nova_compute[260603]: 2025-10-02 08:57:33.667 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 44 KiB/s wr, 6 op/s
Oct 02 08:57:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:57:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1406298978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.142 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.149 2 DEBUG nova.compute.provider_tree [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.349 2 DEBUG nova.scheduler.client.report [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.377 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.378 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.440 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.441 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.468 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.492 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:57:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:34.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.627 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.630 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.630 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Creating image(s)
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.667 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.707 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.744 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.750 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.810 2 DEBUG nova.policy [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:57:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.857 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.858 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.858 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.859 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.889 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:34 compute-0 nova_compute[260603]: 2025-10-02 08:57:34.897 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 69862866-8141-4e02-bd49-4278bfdb7857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:35 compute-0 ceph-mon[74477]: pgmap v2383: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 44 KiB/s wr, 6 op/s
Oct 02 08:57:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1406298978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.261 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 69862866-8141-4e02-bd49-4278bfdb7857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.373 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.499 2 DEBUG nova.objects.instance [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 69862866-8141-4e02-bd49-4278bfdb7857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.520 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.520 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Ensure instance console log exists: /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.521 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.521 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.522 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 02 08:57:35 compute-0 nova_compute[260603]: 2025-10-02 08:57:35.763 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Successfully created port: 81ef7b43-41dd-4a20-aa31-ed776a4299b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:57:36 compute-0 nova_compute[260603]: 2025-10-02 08:57:36.805 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Successfully updated port: 81ef7b43-41dd-4a20-aa31-ed776a4299b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:57:36 compute-0 nova_compute[260603]: 2025-10-02 08:57:36.831 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:57:36 compute-0 nova_compute[260603]: 2025-10-02 08:57:36.832 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:57:36 compute-0 nova_compute[260603]: 2025-10-02 08:57:36.832 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:57:36 compute-0 nova_compute[260603]: 2025-10-02 08:57:36.934 2 DEBUG nova.compute.manager [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-changed-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:36 compute-0 nova_compute[260603]: 2025-10-02 08:57:36.935 2 DEBUG nova.compute.manager [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Refreshing instance network info cache due to event network-changed-81ef7b43-41dd-4a20-aa31-ed776a4299b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:57:36 compute-0 nova_compute[260603]: 2025-10-02 08:57:36.935 2 DEBUG oslo_concurrency.lockutils [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.067 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:57:37 compute-0 ceph-mon[74477]: pgmap v2384: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 02 08:57:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 146 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.1 MiB/s wr, 4 op/s
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.874 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updating instance_info_cache with network_info: [{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.898 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.899 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance network_info: |[{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.900 2 DEBUG oslo_concurrency.lockutils [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.900 2 DEBUG nova.network.neutron [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Refreshing network info cache for port 81ef7b43-41dd-4a20-aa31-ed776a4299b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.905 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start _get_guest_xml network_info=[{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.913 2 WARNING nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.926 2 DEBUG nova.virt.libvirt.host [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.927 2 DEBUG nova.virt.libvirt.host [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.932 2 DEBUG nova.virt.libvirt.host [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.933 2 DEBUG nova.virt.libvirt.host [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.934 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.934 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.935 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.936 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.936 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.936 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.937 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.937 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.938 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.938 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.939 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.939 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:57:37 compute-0 nova_compute[260603]: 2025-10-02 08:57:37.944 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:57:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882067832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.422 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.463 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.470 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:57:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942956624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.943 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.945 2 DEBUG nova.virt.libvirt.vif [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-564779768',display_name='tempest-TestNetworkBasicOps-server-564779768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-564779768',id=128,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhkz6+vSnvnT1XLzopmVXCIQtQIQCx9/0pOGIggxX0jLtkT3sEzs9rpRLJf4JbtSL6bjNbeilYykS+0k+nQ8Lt3eGZ1mjA3hrv/+JLu3lNSQ3HWi2ZIUcAuMwawnCYCeg==',key_name='tempest-TestNetworkBasicOps-620917724',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5t7zdpi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:57:34Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=69862866-8141-4e02-bd49-4278bfdb7857,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.945 2 DEBUG nova.network.os_vif_util [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.946 2 DEBUG nova.network.os_vif_util [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:57:38 compute-0 nova_compute[260603]: 2025-10-02 08:57:38.947 2 DEBUG nova.objects.instance [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69862866-8141-4e02-bd49-4278bfdb7857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009813318030865404 of space, bias 1.0, pg target 0.2943995409259621 quantized to 32 (current 32)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:57:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.022 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <uuid>69862866-8141-4e02-bd49-4278bfdb7857</uuid>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <name>instance-00000080</name>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-564779768</nova:name>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:57:37</nova:creationTime>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <nova:port uuid="81ef7b43-41dd-4a20-aa31-ed776a4299b0">
Oct 02 08:57:39 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <system>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <entry name="serial">69862866-8141-4e02-bd49-4278bfdb7857</entry>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <entry name="uuid">69862866-8141-4e02-bd49-4278bfdb7857</entry>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </system>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <os>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   </os>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <features>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   </features>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/69862866-8141-4e02-bd49-4278bfdb7857_disk">
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       </source>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/69862866-8141-4e02-bd49-4278bfdb7857_disk.config">
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       </source>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:57:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:0f:9b:69"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <target dev="tap81ef7b43-41"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/console.log" append="off"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <video>
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </video>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:57:39 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:57:39 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:57:39 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:57:39 compute-0 nova_compute[260603]: </domain>
Oct 02 08:57:39 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.024 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Preparing to wait for external event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.024 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.025 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.025 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.026 2 DEBUG nova.virt.libvirt.vif [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-564779768',display_name='tempest-TestNetworkBasicOps-server-564779768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-564779768',id=128,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhkz6+vSnvnT1XLzopmVXCIQtQIQCx9/0pOGIggxX0jLtkT3sEzs9rpRLJf4JbtSL6bjNbeilYykS+0k+nQ8Lt3eGZ1mjA3hrv/+JLu3lNSQ3HWi2ZIUcAuMwawnCYCeg==',key_name='tempest-TestNetworkBasicOps-620917724',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5t7zdpi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:57:34Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=69862866-8141-4e02-bd49-4278bfdb7857,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.026 2 DEBUG nova.network.os_vif_util [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.027 2 DEBUG nova.network.os_vif_util [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.027 2 DEBUG os_vif [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81ef7b43-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81ef7b43-41, col_values=(('external_ids', {'iface-id': '81ef7b43-41dd-4a20-aa31-ed776a4299b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:9b:69', 'vm-uuid': '69862866-8141-4e02-bd49-4278bfdb7857'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:39 compute-0 NetworkManager[45129]: <info>  [1759395459.0375] manager: (tap81ef7b43-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/549)
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.047 2 INFO os_vif [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41')
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.109 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.110 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.110 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:0f:9b:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.111 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Using config drive
Oct 02 08:57:39 compute-0 ceph-mon[74477]: pgmap v2385: 305 pgs: 305 active+clean; 146 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.1 MiB/s wr, 4 op/s
Oct 02 08:57:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2882067832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:57:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/942956624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:57:39 compute-0 nova_compute[260603]: 2025-10-02 08:57:39.138 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:57:40 compute-0 nova_compute[260603]: 2025-10-02 08:57:40.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:41 compute-0 ceph-mon[74477]: pgmap v2386: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:57:41 compute-0 nova_compute[260603]: 2025-10-02 08:57:41.635 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Creating config drive at /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config
Oct 02 08:57:41 compute-0 nova_compute[260603]: 2025-10-02 08:57:41.644 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdody5rxy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:57:41 compute-0 nova_compute[260603]: 2025-10-02 08:57:41.790 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdody5rxy" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:41 compute-0 nova_compute[260603]: 2025-10-02 08:57:41.828 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:57:41 compute-0 nova_compute[260603]: 2025-10-02 08:57:41.832 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config 69862866-8141-4e02-bd49-4278bfdb7857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:41 compute-0 nova_compute[260603]: 2025-10-02 08:57:41.992 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config 69862866-8141-4e02-bd49-4278bfdb7857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:41 compute-0 nova_compute[260603]: 2025-10-02 08:57:41.993 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Deleting local config drive /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config because it was imported into RBD.
Oct 02 08:57:42 compute-0 kernel: tap81ef7b43-41: entered promiscuous mode
Oct 02 08:57:42 compute-0 NetworkManager[45129]: <info>  [1759395462.0420] manager: (tap81ef7b43-41): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Oct 02 08:57:42 compute-0 ovn_controller[152344]: 2025-10-02T08:57:42Z|01378|binding|INFO|Claiming lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 for this chassis.
Oct 02 08:57:42 compute-0 ovn_controller[152344]: 2025-10-02T08:57:42Z|01379|binding|INFO|81ef7b43-41dd-4a20-aa31-ed776a4299b0: Claiming fa:16:3e:0f:9b:69 10.100.0.10
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.051 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:9b:69 10.100.0.10'], port_security=['fa:16:3e:0f:9b:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '69862866-8141-4e02-bd49-4278bfdb7857', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f973db-7622-438d-89a4-98949bc018c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ad0d47c-459b-4d77-aba0-ab968d7d2321', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6dd5c4a2-6437-4d79-b5d7-6dfc11d7b30a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=81ef7b43-41dd-4a20-aa31-ed776a4299b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.052 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 81ef7b43-41dd-4a20-aa31-ed776a4299b0 in datapath 50f973db-7622-438d-89a4-98949bc018c7 bound to our chassis
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.053 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50f973db-7622-438d-89a4-98949bc018c7
Oct 02 08:57:42 compute-0 ovn_controller[152344]: 2025-10-02T08:57:42Z|01380|binding|INFO|Setting lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 ovn-installed in OVS
Oct 02 08:57:42 compute-0 ovn_controller[152344]: 2025-10-02T08:57:42Z|01381|binding|INFO|Setting lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 up in Southbound
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:42 compute-0 systemd-machined[214636]: New machine qemu-162-instance-00000080.
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.076 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0514d44e-3a8a-4ddc-bf07-8315da8e0281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:42 compute-0 systemd-udevd[397674]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:57:42 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000080.
Oct 02 08:57:42 compute-0 NetworkManager[45129]: <info>  [1759395462.0942] device (tap81ef7b43-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:57:42 compute-0 NetworkManager[45129]: <info>  [1759395462.0951] device (tap81ef7b43-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.114 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9ff23f-8140-402b-8f35-4df86dbc65fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.117 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2531b553-bb8b-4d26-bf71-c83f3284212c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.147 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f2c5d3-620c-455e-a8d9-ad0e6026f43d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.161 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b94b11eb-38da-4ad8-884a-c5489c144483]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f973db-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:ed:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 389], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630260, 'reachable_time': 28777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397687, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa6f41-035e-4574-a68c-f233ec7482bd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap50f973db-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630272, 'tstamp': 630272}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397688, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap50f973db-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630276, 'tstamp': 630276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397688, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.179 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f973db-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.182 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f973db-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.182 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.182 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50f973db-70, col_values=(('external_ids', {'iface-id': '02679a4c-7fb0-490c-8428-8668ec8ddf0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:57:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.183 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.831 2 DEBUG nova.network.neutron [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updated VIF entry in instance network info cache for port 81ef7b43-41dd-4a20-aa31-ed776a4299b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.832 2 DEBUG nova.network.neutron [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updating instance_info_cache with network_info: [{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.911 2 DEBUG oslo_concurrency.lockutils [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.919 2 DEBUG nova.compute.manager [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.921 2 DEBUG oslo_concurrency.lockutils [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.922 2 DEBUG oslo_concurrency.lockutils [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.922 2 DEBUG oslo_concurrency.lockutils [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:42 compute-0 nova_compute[260603]: 2025-10-02 08:57:42.923 2 DEBUG nova.compute.manager [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Processing event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:57:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:43 compute-0 ceph-mon[74477]: pgmap v2387: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.271 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395463.2706242, 69862866-8141-4e02-bd49-4278bfdb7857 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.271 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] VM Started (Lifecycle Event)
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.273 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.276 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.279 2 INFO nova.virt.libvirt.driver [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance spawned successfully.
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.279 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.302 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.306 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.307 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.307 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.307 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.308 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.308 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.350 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.351 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395463.27091, 69862866-8141-4e02-bd49-4278bfdb7857 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.351 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] VM Paused (Lifecycle Event)
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.374 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.376 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395463.275378, 69862866-8141-4e02-bd49-4278bfdb7857 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.376 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] VM Resumed (Lifecycle Event)
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.395 2 INFO nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Took 8.77 seconds to spawn the instance on the hypervisor.
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.395 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.397 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.402 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.530 2 INFO nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Took 10.04 seconds to build instance.
Oct 02 08:57:43 compute-0 nova_compute[260603]: 2025-10-02 08:57:43.549 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 08:57:44 compute-0 nova_compute[260603]: 2025-10-02 08:57:44.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:44 compute-0 nova_compute[260603]: 2025-10-02 08:57:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.044 2 DEBUG nova.compute.manager [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.045 2 DEBUG oslo_concurrency.lockutils [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.047 2 DEBUG oslo_concurrency.lockutils [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.047 2 DEBUG oslo_concurrency.lockutils [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.048 2 DEBUG nova.compute.manager [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] No waiting events found dispatching network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.048 2 WARNING nova.compute.manager [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received unexpected event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 for instance with vm_state active and task_state None.
Oct 02 08:57:45 compute-0 ceph-mon[74477]: pgmap v2388: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:57:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.812 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.812 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.812 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:57:45 compute-0 nova_compute[260603]: 2025-10-02 08:57:45.812 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:57:47 compute-0 ceph-mon[74477]: pgmap v2389: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 08:57:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 02 08:57:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:48 compute-0 nova_compute[260603]: 2025-10-02 08:57:48.983 2 DEBUG nova.compute.manager [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-changed-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:57:48 compute-0 nova_compute[260603]: 2025-10-02 08:57:48.984 2 DEBUG nova.compute.manager [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Refreshing instance network info cache due to event network-changed-81ef7b43-41dd-4a20-aa31-ed776a4299b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:57:48 compute-0 nova_compute[260603]: 2025-10-02 08:57:48.984 2 DEBUG oslo_concurrency.lockutils [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:57:48 compute-0 nova_compute[260603]: 2025-10-02 08:57:48.984 2 DEBUG oslo_concurrency.lockutils [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:57:48 compute-0 nova_compute[260603]: 2025-10-02 08:57:48.985 2 DEBUG nova.network.neutron [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Refreshing network info cache for port 81ef7b43-41dd-4a20-aa31-ed776a4299b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:57:49 compute-0 nova_compute[260603]: 2025-10-02 08:57:49.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:49 compute-0 nova_compute[260603]: 2025-10-02 08:57:49.081 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:57:49 compute-0 nova_compute[260603]: 2025-10-02 08:57:49.109 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:57:49 compute-0 nova_compute[260603]: 2025-10-02 08:57:49.113 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:57:49 compute-0 nova_compute[260603]: 2025-10-02 08:57:49.115 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:49 compute-0 ceph-mon[74477]: pgmap v2390: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 02 08:57:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 666 KiB/s wr, 97 op/s
Oct 02 08:57:50 compute-0 nova_compute[260603]: 2025-10-02 08:57:50.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:51 compute-0 ceph-mon[74477]: pgmap v2391: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 666 KiB/s wr, 97 op/s
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.696 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.696 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.697 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.697 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.697 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.762 2 DEBUG nova.network.neutron [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updated VIF entry in instance network info cache for port 81ef7b43-41dd-4a20-aa31-ed776a4299b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.767 2 DEBUG nova.network.neutron [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updating instance_info_cache with network_info: [{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:57:51 compute-0 nova_compute[260603]: 2025-10-02 08:57:51.795 2 DEBUG oslo_concurrency.lockutils [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:57:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:57:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142177687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.198 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2142177687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.287 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.288 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.291 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.291 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.488 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.490 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3270MB free_disk=59.921817779541016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.490 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.491 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.597 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.597 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 69862866-8141-4e02-bd49-4278bfdb7857 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.597 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.597 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:57:52 compute-0 nova_compute[260603]: 2025-10-02 08:57:52.651 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:57:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:57:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863687521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:57:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:53 compute-0 nova_compute[260603]: 2025-10-02 08:57:53.092 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:57:53 compute-0 nova_compute[260603]: 2025-10-02 08:57:53.099 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:57:53 compute-0 nova_compute[260603]: 2025-10-02 08:57:53.118 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:57:53 compute-0 nova_compute[260603]: 2025-10-02 08:57:53.143 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:57:53 compute-0 nova_compute[260603]: 2025-10-02 08:57:53.143 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:57:53 compute-0 ceph-mon[74477]: pgmap v2392: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 08:57:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2863687521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:57:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Oct 02 08:57:54 compute-0 nova_compute[260603]: 2025-10-02 08:57:54.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:55 compute-0 ovn_controller[152344]: 2025-10-02T08:57:55Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:9b:69 10.100.0.10
Oct 02 08:57:55 compute-0 ovn_controller[152344]: 2025-10-02T08:57:55Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:9b:69 10.100.0.10
Oct 02 08:57:55 compute-0 ceph-mon[74477]: pgmap v2393: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Oct 02 08:57:55 compute-0 nova_compute[260603]: 2025-10-02 08:57:55.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 65 op/s
Oct 02 08:57:56 compute-0 nova_compute[260603]: 2025-10-02 08:57:56.140 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:57 compute-0 podman[397778]: 2025-10-02 08:57:57.051569837 +0000 UTC m=+0.099871377 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 08:57:57 compute-0 podman[397777]: 2025-10-02 08:57:57.054569932 +0000 UTC m=+0.110919947 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 08:57:57 compute-0 ceph-mon[74477]: pgmap v2394: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 65 op/s
Oct 02 08:57:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 192 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 93 op/s
Oct 02 08:57:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:57:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:57:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:57:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:57:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:57:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:57:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:57:58 compute-0 nova_compute[260603]: 2025-10-02 08:57:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:57:59 compute-0 nova_compute[260603]: 2025-10-02 08:57:59.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:57:59 compute-0 ceph-mon[74477]: pgmap v2395: 305 pgs: 305 active+clean; 192 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 93 op/s
Oct 02 08:57:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 200 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 08:58:00 compute-0 nova_compute[260603]: 2025-10-02 08:58:00.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:01 compute-0 ceph-mon[74477]: pgmap v2396: 305 pgs: 305 active+clean; 200 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 08:58:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 200 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.392 2 INFO nova.compute.manager [None req-d00bbb4c-41b6-4fb2-a628-8e73f5eee5b6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Get console output
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.398 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.732 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.733 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.733 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.733 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.734 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.735 2 INFO nova.compute.manager [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Terminating instance
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.736 2 DEBUG nova.compute.manager [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:58:02 compute-0 kernel: tap81ef7b43-41 (unregistering): left promiscuous mode
Oct 02 08:58:02 compute-0 NetworkManager[45129]: <info>  [1759395482.7870] device (tap81ef7b43-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:02 compute-0 ovn_controller[152344]: 2025-10-02T08:58:02Z|01382|binding|INFO|Releasing lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 from this chassis (sb_readonly=0)
Oct 02 08:58:02 compute-0 ovn_controller[152344]: 2025-10-02T08:58:02Z|01383|binding|INFO|Setting lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 down in Southbound
Oct 02 08:58:02 compute-0 ovn_controller[152344]: 2025-10-02T08:58:02Z|01384|binding|INFO|Removing iface tap81ef7b43-41 ovn-installed in OVS
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.867 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:9b:69 10.100.0.10'], port_security=['fa:16:3e:0f:9b:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '69862866-8141-4e02-bd49-4278bfdb7857', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f973db-7622-438d-89a4-98949bc018c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ad0d47c-459b-4d77-aba0-ab968d7d2321', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6dd5c4a2-6437-4d79-b5d7-6dfc11d7b30a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=81ef7b43-41dd-4a20-aa31-ed776a4299b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:58:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.870 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 81ef7b43-41dd-4a20-aa31-ed776a4299b0 in datapath 50f973db-7622-438d-89a4-98949bc018c7 unbound from our chassis
Oct 02 08:58:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.872 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50f973db-7622-438d-89a4-98949bc018c7
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.900 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48deb66b-34c0-4db4-98e2-1e592a5370d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:02 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct 02 08:58:02 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000080.scope: Consumed 13.355s CPU time.
Oct 02 08:58:02 compute-0 systemd-machined[214636]: Machine qemu-162-instance-00000080 terminated.
Oct 02 08:58:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.946 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd24b01-86ec-4a9f-b823-ba48d33e3004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.951 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1722d4ca-cbbe-480f-8696-3cbc964177d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:02 compute-0 podman[397822]: 2025-10-02 08:58:02.970831568 +0000 UTC m=+0.097169011 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.978 2 INFO nova.virt.libvirt.driver [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance destroyed successfully.
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.979 2 DEBUG nova.objects.instance [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 69862866-8141-4e02-bd49-4278bfdb7857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:58:02 compute-0 podman[397825]: 2025-10-02 08:58:02.987703183 +0000 UTC m=+0.108668286 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 08:58:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.993 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3662e541-9dec-4851-8714-3f7d582e4862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.996 2 DEBUG nova.virt.libvirt.vif [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-564779768',display_name='tempest-TestNetworkBasicOps-server-564779768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-564779768',id=128,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhkz6+vSnvnT1XLzopmVXCIQtQIQCx9/0pOGIggxX0jLtkT3sEzs9rpRLJf4JbtSL6bjNbeilYykS+0k+nQ8Lt3eGZ1mjA3hrv/+JLu3lNSQ3HWi2ZIUcAuMwawnCYCeg==',key_name='tempest-TestNetworkBasicOps-620917724',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:57:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5t7zdpi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:57:43Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=69862866-8141-4e02-bd49-4278bfdb7857,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.996 2 DEBUG nova.network.os_vif_util [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.997 2 DEBUG nova.network.os_vif_util [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:58:02 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.998 2 DEBUG os_vif [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:02.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81ef7b43-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.005 2 INFO os_vif [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41')
Oct 02 08:58:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.018 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[840a3de9-6adc-4a27-b107-d418b62830b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f973db-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:ed:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 389], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630260, 'reachable_time': 28777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397883, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[739182dc-6091-42c0-b98f-b1b6329508f3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap50f973db-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630272, 'tstamp': 630272}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397898, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap50f973db-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630276, 'tstamp': 630276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397898, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.040 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f973db-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.043 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f973db-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.044 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:58:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.044 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50f973db-70, col_values=(('external_ids', {'iface-id': '02679a4c-7fb0-490c-8428-8668ec8ddf0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.044 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:58:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:03 compute-0 ceph-mon[74477]: pgmap v2397: 305 pgs: 305 active+clean; 200 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.388 2 INFO nova.virt.libvirt.driver [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Deleting instance files /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857_del
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.390 2 INFO nova.virt.libvirt.driver [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Deletion of /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857_del complete
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.458 2 INFO nova.compute.manager [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.458 2 DEBUG oslo.service.loopingcall [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.459 2 DEBUG nova.compute.manager [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:58:03 compute-0 nova_compute[260603]: 2025-10-02 08:58:03.459 2 DEBUG nova.network.neutron [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:58:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 171 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 02 08:58:04 compute-0 nova_compute[260603]: 2025-10-02 08:58:04.008 2 DEBUG nova.compute.manager [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-unplugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:04 compute-0 nova_compute[260603]: 2025-10-02 08:58:04.009 2 DEBUG oslo_concurrency.lockutils [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:04 compute-0 nova_compute[260603]: 2025-10-02 08:58:04.010 2 DEBUG oslo_concurrency.lockutils [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:04 compute-0 nova_compute[260603]: 2025-10-02 08:58:04.010 2 DEBUG oslo_concurrency.lockutils [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:04 compute-0 nova_compute[260603]: 2025-10-02 08:58:04.010 2 DEBUG nova.compute.manager [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] No waiting events found dispatching network-vif-unplugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:58:04 compute-0 nova_compute[260603]: 2025-10-02 08:58:04.011 2 DEBUG nova.compute.manager [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-unplugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:58:04 compute-0 sudo[397904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:04 compute-0 sudo[397904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:04 compute-0 sudo[397904]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:04 compute-0 sudo[397929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:58:04 compute-0 sudo[397929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:04 compute-0 sudo[397929]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:04 compute-0 sudo[397954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:04 compute-0 sudo[397954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:04 compute-0 sudo[397954]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:04 compute-0 sudo[397979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:58:04 compute-0 sudo[397979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:05 compute-0 sudo[397979]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:58:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:58:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:58:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:58:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:58:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:58:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev fd88696e-892c-4b56-863c-4df5b396f19f does not exist
Oct 02 08:58:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b06ad103-6b34-4ab1-8d7f-3caec1f87123 does not exist
Oct 02 08:58:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5cb53a80-a0e6-48ce-ae1b-a0d3c7d380e0 does not exist
Oct 02 08:58:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:58:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:58:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:58:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:58:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:58:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:58:05 compute-0 sudo[398035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:05 compute-0 sudo[398035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:05 compute-0 sudo[398035]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:05 compute-0 sudo[398060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:58:05 compute-0 ceph-mon[74477]: pgmap v2398: 305 pgs: 305 active+clean; 171 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 02 08:58:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:58:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:58:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:58:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:58:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:58:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:58:05 compute-0 sudo[398060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:05 compute-0 sudo[398060]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:05 compute-0 nova_compute[260603]: 2025-10-02 08:58:05.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:05 compute-0 sudo[398085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:05 compute-0 sudo[398085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:05 compute-0 sudo[398085]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:05 compute-0 nova_compute[260603]: 2025-10-02 08:58:05.415 2 DEBUG nova.network.neutron [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:58:05 compute-0 nova_compute[260603]: 2025-10-02 08:58:05.435 2 INFO nova.compute.manager [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Took 1.98 seconds to deallocate network for instance.
Oct 02 08:58:05 compute-0 sudo[398110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:58:05 compute-0 sudo[398110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:05 compute-0 nova_compute[260603]: 2025-10-02 08:58:05.505 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:05 compute-0 nova_compute[260603]: 2025-10-02 08:58:05.505 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:05 compute-0 nova_compute[260603]: 2025-10-02 08:58:05.582 2 DEBUG oslo_concurrency.processutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 171 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 02 08:58:05 compute-0 podman[398194]: 2025-10-02 08:58:05.865611602 +0000 UTC m=+0.053490117 container create 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:58:05 compute-0 systemd[1]: Started libpod-conmon-65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1.scope.
Oct 02 08:58:05 compute-0 podman[398194]: 2025-10-02 08:58:05.836683215 +0000 UTC m=+0.024561780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:58:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:58:05 compute-0 podman[398194]: 2025-10-02 08:58:05.989146408 +0000 UTC m=+0.177024993 container init 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 08:58:06 compute-0 podman[398194]: 2025-10-02 08:58:06.003610707 +0000 UTC m=+0.191489232 container start 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 08:58:06 compute-0 podman[398194]: 2025-10-02 08:58:06.007837451 +0000 UTC m=+0.195715986 container attach 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:58:06 compute-0 funny_bose[398211]: 167 167
Oct 02 08:58:06 compute-0 systemd[1]: libpod-65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1.scope: Deactivated successfully.
Oct 02 08:58:06 compute-0 podman[398194]: 2025-10-02 08:58:06.012971284 +0000 UTC m=+0.200849809 container died 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 08:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cfb7a3649a6c1371ae0e4c0a506db2744990b8ad1f675ff57ea42619e5be318-merged.mount: Deactivated successfully.
Oct 02 08:58:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:58:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3183847465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:06 compute-0 podman[398194]: 2025-10-02 08:58:06.061050918 +0000 UTC m=+0.248929443 container remove 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.068 2 DEBUG oslo_concurrency.processutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.079 2 DEBUG nova.compute.provider_tree [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:58:06 compute-0 systemd[1]: libpod-conmon-65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1.scope: Deactivated successfully.
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.105 2 DEBUG nova.compute.manager [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.106 2 DEBUG oslo_concurrency.lockutils [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.106 2 DEBUG oslo_concurrency.lockutils [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.107 2 DEBUG oslo_concurrency.lockutils [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.107 2 DEBUG nova.compute.manager [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] No waiting events found dispatching network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.108 2 WARNING nova.compute.manager [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received unexpected event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 for instance with vm_state deleted and task_state None.
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.108 2 DEBUG nova.compute.manager [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-deleted-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.126 2 DEBUG nova.scheduler.client.report [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.151 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.193 2 INFO nova.scheduler.client.report [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 69862866-8141-4e02-bd49-4278bfdb7857
Oct 02 08:58:06 compute-0 podman[398238]: 2025-10-02 08:58:06.263192207 +0000 UTC m=+0.060751748 container create 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 08:58:06 compute-0 nova_compute[260603]: 2025-10-02 08:58:06.304 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:06 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3183847465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:06 compute-0 systemd[1]: Started libpod-conmon-535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6.scope.
Oct 02 08:58:06 compute-0 podman[398238]: 2025-10-02 08:58:06.228719464 +0000 UTC m=+0.026279045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:58:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:06 compute-0 podman[398238]: 2025-10-02 08:58:06.381122405 +0000 UTC m=+0.178681976 container init 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 08:58:06 compute-0 podman[398238]: 2025-10-02 08:58:06.388490078 +0000 UTC m=+0.186049589 container start 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:58:06 compute-0 podman[398238]: 2025-10-02 08:58:06.391864936 +0000 UTC m=+0.189424457 container attach 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:58:07 compute-0 ceph-mon[74477]: pgmap v2399: 305 pgs: 305 active+clean; 171 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 02 08:58:07 compute-0 eloquent_kepler[398254]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:58:07 compute-0 eloquent_kepler[398254]: --> relative data size: 1.0
Oct 02 08:58:07 compute-0 eloquent_kepler[398254]: --> All data devices are unavailable
Oct 02 08:58:07 compute-0 systemd[1]: libpod-535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6.scope: Deactivated successfully.
Oct 02 08:58:07 compute-0 systemd[1]: libpod-535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6.scope: Consumed 1.131s CPU time.
Oct 02 08:58:07 compute-0 podman[398283]: 2025-10-02 08:58:07.648521866 +0000 UTC m=+0.044865773 container died 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 08:58:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d-merged.mount: Deactivated successfully.
Oct 02 08:58:07 compute-0 podman[398283]: 2025-10-02 08:58:07.731516597 +0000 UTC m=+0.127860484 container remove 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:58:07 compute-0 systemd[1]: libpod-conmon-535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6.scope: Deactivated successfully.
Oct 02 08:58:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 121 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 08:58:07 compute-0 sudo[398110]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:07 compute-0 sudo[398298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:07 compute-0 sudo[398298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:07 compute-0 sudo[398298]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:07 compute-0 sudo[398323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:58:07 compute-0 sudo[398323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:08 compute-0 sudo[398323]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:08 compute-0 nova_compute[260603]: 2025-10-02 08:58:08.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:08 compute-0 sudo[398348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:08 compute-0 sudo[398348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:08 compute-0 sudo[398348]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:08 compute-0 sudo[398373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:58:08 compute-0 sudo[398373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:08 compute-0 ceph-mon[74477]: pgmap v2400: 305 pgs: 305 active+clean; 121 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 08:58:08 compute-0 podman[398436]: 2025-10-02 08:58:08.56685703 +0000 UTC m=+0.058380842 container create 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 08:58:08 compute-0 systemd[1]: Started libpod-conmon-055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9.scope.
Oct 02 08:58:08 compute-0 podman[398436]: 2025-10-02 08:58:08.545409831 +0000 UTC m=+0.036933653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:58:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:58:08 compute-0 podman[398436]: 2025-10-02 08:58:08.672099947 +0000 UTC m=+0.163623769 container init 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:58:08 compute-0 podman[398436]: 2025-10-02 08:58:08.683897341 +0000 UTC m=+0.175421153 container start 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 08:58:08 compute-0 podman[398436]: 2025-10-02 08:58:08.68826836 +0000 UTC m=+0.179792222 container attach 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 02 08:58:08 compute-0 inspiring_beaver[398453]: 167 167
Oct 02 08:58:08 compute-0 systemd[1]: libpod-055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9.scope: Deactivated successfully.
Oct 02 08:58:08 compute-0 podman[398436]: 2025-10-02 08:58:08.693179845 +0000 UTC m=+0.184703657 container died 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b106b768b3a0abc6d243f26a0643f8e7c187608d73e84992b6a7ccf721c9d18-merged.mount: Deactivated successfully.
Oct 02 08:58:08 compute-0 podman[398436]: 2025-10-02 08:58:08.744498042 +0000 UTC m=+0.236021854 container remove 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 08:58:08 compute-0 systemd[1]: libpod-conmon-055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9.scope: Deactivated successfully.
Oct 02 08:58:08 compute-0 podman[398476]: 2025-10-02 08:58:08.967049278 +0000 UTC m=+0.055927604 container create 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:58:09 compute-0 systemd[1]: Started libpod-conmon-0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22.scope.
Oct 02 08:58:09 compute-0 podman[398476]: 2025-10-02 08:58:08.944707429 +0000 UTC m=+0.033585735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:58:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:09 compute-0 podman[398476]: 2025-10-02 08:58:09.067770641 +0000 UTC m=+0.156648977 container init 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:58:09 compute-0 podman[398476]: 2025-10-02 08:58:09.082402915 +0000 UTC m=+0.171281211 container start 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:58:09 compute-0 podman[398476]: 2025-10-02 08:58:09.08603811 +0000 UTC m=+0.174916406 container attach 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.270 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.271 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.272 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.272 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.273 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.274 2 INFO nova.compute.manager [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Terminating instance
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.276 2 DEBUG nova.compute.manager [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 08:58:09 compute-0 kernel: tapd9fd4c9a-6e (unregistering): left promiscuous mode
Oct 02 08:58:09 compute-0 NetworkManager[45129]: <info>  [1759395489.3470] device (tapd9fd4c9a-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 08:58:09 compute-0 ovn_controller[152344]: 2025-10-02T08:58:09Z|01385|binding|INFO|Releasing lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 from this chassis (sb_readonly=0)
Oct 02 08:58:09 compute-0 ovn_controller[152344]: 2025-10-02T08:58:09Z|01386|binding|INFO|Setting lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 down in Southbound
Oct 02 08:58:09 compute-0 ovn_controller[152344]: 2025-10-02T08:58:09Z|01387|binding|INFO|Removing iface tapd9fd4c9a-6e ovn-installed in OVS
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.376 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a0:55 10.100.0.9'], port_security=['fa:16:3e:1a:a0:55 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '44f2da71-ac35-415b-bb4b-6fbc3afe6cf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f973db-7622-438d-89a4-98949bc018c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38d1b2fd-ca30-472b-9abc-fb39d0f98549', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6dd5c4a2-6437-4d79-b5d7-6dfc11d7b30a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.378 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 in datapath 50f973db-7622-438d-89a4-98949bc018c7 unbound from our chassis
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.380 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50f973db-7622-438d-89a4-98949bc018c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.382 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[728e16b3-0906-4f82-a6a2-abc386f93cbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.384 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 namespace which is not needed anymore
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:09 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct 02 08:58:09 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d0000007f.scope: Consumed 15.658s CPU time.
Oct 02 08:58:09 compute-0 systemd-machined[214636]: Machine qemu-161-instance-0000007f terminated.
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.530 2 INFO nova.virt.libvirt.driver [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance destroyed successfully.
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.531 2 DEBUG nova.objects.instance [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.546 2 DEBUG nova.virt.libvirt.vif [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1392343358',display_name='tempest-TestNetworkBasicOps-server-1392343358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1392343358',id=127,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM7vAmstY/1wj8zilQusjnWxCH9HDy5ckxC0RSN69DfVAe97aVFxd/BU6NsgAvBbRW+DKoyLRAwETsr2HsjzpTSAXv0ZjguOo1+U6bZINpkB4yql6sk6skb6iUdvJViCMg==',key_name='tempest-TestNetworkBasicOps-399164218',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:57:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-k0j5xbkd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:57:05Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=44f2da71-ac35-415b-bb4b-6fbc3afe6cf9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.546 2 DEBUG nova.network.os_vif_util [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.547 2 DEBUG nova.network.os_vif_util [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.548 2 DEBUG os_vif [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.551 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9fd4c9a-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.559 2 INFO os_vif [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e')
Oct 02 08:58:09 compute-0 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [NOTICE]   (397252) : haproxy version is 2.8.14-c23fe91
Oct 02 08:58:09 compute-0 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [NOTICE]   (397252) : path to executable is /usr/sbin/haproxy
Oct 02 08:58:09 compute-0 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [WARNING]  (397252) : Exiting Master process...
Oct 02 08:58:09 compute-0 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [WARNING]  (397252) : Exiting Master process...
Oct 02 08:58:09 compute-0 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [ALERT]    (397252) : Current worker (397254) exited with code 143 (Terminated)
Oct 02 08:58:09 compute-0 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [WARNING]  (397252) : All workers exited. Exiting... (0)
Oct 02 08:58:09 compute-0 systemd[1]: libpod-ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f.scope: Deactivated successfully.
Oct 02 08:58:09 compute-0 podman[398524]: 2025-10-02 08:58:09.60711356 +0000 UTC m=+0.080377959 container died ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 08:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f-userdata-shm.mount: Deactivated successfully.
Oct 02 08:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c715720152d167f38b377cf95f213b66460140411dee007ac5a683ebd4decaf-merged.mount: Deactivated successfully.
Oct 02 08:58:09 compute-0 podman[398524]: 2025-10-02 08:58:09.6869237 +0000 UTC m=+0.160188089 container cleanup ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:58:09 compute-0 systemd[1]: libpod-conmon-ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f.scope: Deactivated successfully.
Oct 02 08:58:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 121 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 511 KiB/s wr, 65 op/s
Oct 02 08:58:09 compute-0 podman[398577]: 2025-10-02 08:58:09.800703058 +0000 UTC m=+0.078279013 container remove ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a4b3ff-e960-4e0c-a4b9-cd4cdd2c7adf]: (4, ('Thu Oct  2 08:58:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 (ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f)\nddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f\nThu Oct  2 08:58:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 (ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f)\nddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.817 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[276188e3-c187-483d-a398-b6442c277b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.819 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f973db-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:09 compute-0 kernel: tap50f973db-70: left promiscuous mode
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a167cb0-620d-442b-a97d-ae5e2d59110c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.867 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7de3092f-c878-43be-8050-02cf9300d0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.869 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8eb0bb-2017-40c2-93b9-d17af03dba31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.869 2 DEBUG nova.compute.manager [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-unplugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.869 2 DEBUG oslo_concurrency.lockutils [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.869 2 DEBUG oslo_concurrency.lockutils [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.870 2 DEBUG oslo_concurrency.lockutils [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.870 2 DEBUG nova.compute.manager [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] No waiting events found dispatching network-vif-unplugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:58:09 compute-0 nova_compute[260603]: 2025-10-02 08:58:09.870 2 DEBUG nova.compute.manager [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-unplugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 08:58:09 compute-0 nice_colden[398492]: {
Oct 02 08:58:09 compute-0 nice_colden[398492]:     "0": [
Oct 02 08:58:09 compute-0 nice_colden[398492]:         {
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "devices": [
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "/dev/loop3"
Oct 02 08:58:09 compute-0 nice_colden[398492]:             ],
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_name": "ceph_lv0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_size": "21470642176",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "name": "ceph_lv0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "tags": {
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cluster_name": "ceph",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.crush_device_class": "",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.encrypted": "0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osd_id": "0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.type": "block",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.vdo": "0"
Oct 02 08:58:09 compute-0 nice_colden[398492]:             },
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "type": "block",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "vg_name": "ceph_vg0"
Oct 02 08:58:09 compute-0 nice_colden[398492]:         }
Oct 02 08:58:09 compute-0 nice_colden[398492]:     ],
Oct 02 08:58:09 compute-0 nice_colden[398492]:     "1": [
Oct 02 08:58:09 compute-0 nice_colden[398492]:         {
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "devices": [
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "/dev/loop4"
Oct 02 08:58:09 compute-0 nice_colden[398492]:             ],
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_name": "ceph_lv1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_size": "21470642176",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "name": "ceph_lv1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "tags": {
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cluster_name": "ceph",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.crush_device_class": "",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.encrypted": "0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osd_id": "1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.type": "block",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.vdo": "0"
Oct 02 08:58:09 compute-0 nice_colden[398492]:             },
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "type": "block",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "vg_name": "ceph_vg1"
Oct 02 08:58:09 compute-0 nice_colden[398492]:         }
Oct 02 08:58:09 compute-0 nice_colden[398492]:     ],
Oct 02 08:58:09 compute-0 nice_colden[398492]:     "2": [
Oct 02 08:58:09 compute-0 nice_colden[398492]:         {
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "devices": [
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "/dev/loop5"
Oct 02 08:58:09 compute-0 nice_colden[398492]:             ],
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_name": "ceph_lv2",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_size": "21470642176",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "name": "ceph_lv2",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "tags": {
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.cluster_name": "ceph",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.crush_device_class": "",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.encrypted": "0",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osd_id": "2",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.type": "block",
Oct 02 08:58:09 compute-0 nice_colden[398492]:                 "ceph.vdo": "0"
Oct 02 08:58:09 compute-0 nice_colden[398492]:             },
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "type": "block",
Oct 02 08:58:09 compute-0 nice_colden[398492]:             "vg_name": "ceph_vg2"
Oct 02 08:58:09 compute-0 nice_colden[398492]:         }
Oct 02 08:58:09 compute-0 nice_colden[398492]:     ]
Oct 02 08:58:09 compute-0 nice_colden[398492]: }
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.906 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7ca739-2413-4de5-8305-761cd402a062]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630253, 'reachable_time': 37397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398596, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d50f973db\x2d7622\x2d438d\x2d89a4\x2d98949bc018c7.mount: Deactivated successfully.
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.913 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 08:58:09 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.913 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[40d29d55-6e34-4c79-88b7-8c50a46af58f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:09 compute-0 systemd[1]: libpod-0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22.scope: Deactivated successfully.
Oct 02 08:58:09 compute-0 conmon[398492]: conmon 0ed26643a50b36de853d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22.scope/container/memory.events
Oct 02 08:58:09 compute-0 podman[398476]: 2025-10-02 08:58:09.950968581 +0000 UTC m=+1.039846927 container died 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46-merged.mount: Deactivated successfully.
Oct 02 08:58:10 compute-0 podman[398476]: 2025-10-02 08:58:10.035276195 +0000 UTC m=+1.124154481 container remove 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 08:58:10 compute-0 systemd[1]: libpod-conmon-0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22.scope: Deactivated successfully.
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.077 2 INFO nova.virt.libvirt.driver [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Deleting instance files /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_del
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.078 2 INFO nova.virt.libvirt.driver [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Deletion of /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_del complete
Oct 02 08:58:10 compute-0 sudo[398373]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.137 2 INFO nova.compute.manager [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.138 2 DEBUG oslo.service.loopingcall [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.138 2 DEBUG nova.compute.manager [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.139 2 DEBUG nova.network.neutron [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 08:58:10 compute-0 sudo[398610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:10 compute-0 sudo[398610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:10 compute-0 sudo[398610]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:10 compute-0 sudo[398635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:58:10 compute-0 sudo[398635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:10 compute-0 sudo[398635]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:10 compute-0 sudo[398660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:10 compute-0 sudo[398660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:10 compute-0 sudo[398660]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:10 compute-0 sudo[398685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:58:10 compute-0 sudo[398685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.627 2 DEBUG nova.network.neutron [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.642 2 INFO nova.compute.manager [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Took 0.50 seconds to deallocate network for instance.
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.682 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.683 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:10 compute-0 nova_compute[260603]: 2025-10-02 08:58:10.723 2 DEBUG oslo_concurrency.processutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:10 compute-0 ceph-mon[74477]: pgmap v2401: 305 pgs: 305 active+clean; 121 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 511 KiB/s wr, 65 op/s
Oct 02 08:58:10 compute-0 podman[398770]: 2025-10-02 08:58:10.982194595 +0000 UTC m=+0.062776641 container create ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 08:58:11 compute-0 podman[398770]: 2025-10-02 08:58:10.948477296 +0000 UTC m=+0.029059322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:58:11 compute-0 systemd[1]: Started libpod-conmon-ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e.scope.
Oct 02 08:58:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:58:11 compute-0 podman[398770]: 2025-10-02 08:58:11.110455221 +0000 UTC m=+0.191037287 container init ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 08:58:11 compute-0 podman[398770]: 2025-10-02 08:58:11.119853119 +0000 UTC m=+0.200435175 container start ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:58:11 compute-0 podman[398770]: 2025-10-02 08:58:11.124240398 +0000 UTC m=+0.204822504 container attach ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 08:58:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:58:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156203065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:11 compute-0 musing_euler[398786]: 167 167
Oct 02 08:58:11 compute-0 systemd[1]: libpod-ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e.scope: Deactivated successfully.
Oct 02 08:58:11 compute-0 podman[398770]: 2025-10-02 08:58:11.129538426 +0000 UTC m=+0.210120482 container died ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.145 2 DEBUG oslo_concurrency.processutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.156 2 DEBUG nova.compute.provider_tree [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-196be8544f85941d8104e1783719c8cb81c0439dbd958aa604d4d5c142bbc250-merged.mount: Deactivated successfully.
Oct 02 08:58:11 compute-0 podman[398770]: 2025-10-02 08:58:11.180842952 +0000 UTC m=+0.261424968 container remove ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.181 2 DEBUG nova.scheduler.client.report [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:58:11 compute-0 systemd[1]: libpod-conmon-ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e.scope: Deactivated successfully.
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.204 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.238 2 INFO nova.scheduler.client.report [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.302 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:11 compute-0 podman[398811]: 2025-10-02 08:58:11.423299919 +0000 UTC m=+0.058934379 container create 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:58:11 compute-0 systemd[1]: Started libpod-conmon-0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a.scope.
Oct 02 08:58:11 compute-0 podman[398811]: 2025-10-02 08:58:11.392738311 +0000 UTC m=+0.028372801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:58:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:11 compute-0 podman[398811]: 2025-10-02 08:58:11.52047412 +0000 UTC m=+0.156108650 container init 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 02 08:58:11 compute-0 podman[398811]: 2025-10-02 08:58:11.533991179 +0000 UTC m=+0.169625649 container start 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:58:11 compute-0 podman[398811]: 2025-10-02 08:58:11.537634235 +0000 UTC m=+0.173268785 container attach 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:58:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 121 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 02 08:58:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/156203065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.985 2 DEBUG nova.compute.manager [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.987 2 DEBUG oslo_concurrency.lockutils [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.987 2 DEBUG oslo_concurrency.lockutils [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.988 2 DEBUG oslo_concurrency.lockutils [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.988 2 DEBUG nova.compute.manager [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] No waiting events found dispatching network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.988 2 WARNING nova.compute.manager [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received unexpected event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 for instance with vm_state deleted and task_state None.
Oct 02 08:58:11 compute-0 nova_compute[260603]: 2025-10-02 08:58:11.989 2 DEBUG nova.compute.manager [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-deleted-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:12 compute-0 trusting_newton[398828]: {
Oct 02 08:58:12 compute-0 trusting_newton[398828]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "osd_id": 2,
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "type": "bluestore"
Oct 02 08:58:12 compute-0 trusting_newton[398828]:     },
Oct 02 08:58:12 compute-0 trusting_newton[398828]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "osd_id": 1,
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "type": "bluestore"
Oct 02 08:58:12 compute-0 trusting_newton[398828]:     },
Oct 02 08:58:12 compute-0 trusting_newton[398828]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "osd_id": 0,
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:58:12 compute-0 trusting_newton[398828]:         "type": "bluestore"
Oct 02 08:58:12 compute-0 trusting_newton[398828]:     }
Oct 02 08:58:12 compute-0 trusting_newton[398828]: }
Oct 02 08:58:12 compute-0 systemd[1]: libpod-0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a.scope: Deactivated successfully.
Oct 02 08:58:12 compute-0 podman[398811]: 2025-10-02 08:58:12.645195648 +0000 UTC m=+1.280830148 container died 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:58:12 compute-0 systemd[1]: libpod-0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a.scope: Consumed 1.115s CPU time.
Oct 02 08:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d-merged.mount: Deactivated successfully.
Oct 02 08:58:12 compute-0 podman[398811]: 2025-10-02 08:58:12.718032427 +0000 UTC m=+1.353666917 container remove 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:58:12 compute-0 systemd[1]: libpod-conmon-0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a.scope: Deactivated successfully.
Oct 02 08:58:12 compute-0 sudo[398685]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:58:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:58:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:58:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:58:12 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c4a9fae0-60ef-40d9-ae38-8eb64b17c316 does not exist
Oct 02 08:58:12 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 7d6fcf71-c06b-4188-a1da-9ddc9f5e46db does not exist
Oct 02 08:58:12 compute-0 ceph-mon[74477]: pgmap v2402: 305 pgs: 305 active+clean; 121 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 02 08:58:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:58:12 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:58:12 compute-0 sudo[398873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:58:12 compute-0 sudo[398873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:12 compute-0 sudo[398873]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:12 compute-0 sudo[398898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:58:12 compute-0 sudo[398898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:58:12 compute-0 sudo[398898]: pam_unix(sudo:session): session closed for user root
Oct 02 08:58:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 02 08:58:14 compute-0 nova_compute[260603]: 2025-10-02 08:58:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:14 compute-0 nova_compute[260603]: 2025-10-02 08:58:14.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:14 compute-0 nova_compute[260603]: 2025-10-02 08:58:14.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:14 compute-0 ceph-mon[74477]: pgmap v2403: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 02 08:58:15 compute-0 nova_compute[260603]: 2025-10-02 08:58:15.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 KiB/s wr, 38 op/s
Oct 02 08:58:16 compute-0 ceph-mon[74477]: pgmap v2404: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 KiB/s wr, 38 op/s
Oct 02 08:58:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 KiB/s wr, 38 op/s
Oct 02 08:58:17 compute-0 nova_compute[260603]: 2025-10-02 08:58:17.973 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395482.9719012, 69862866-8141-4e02-bd49-4278bfdb7857 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:58:17 compute-0 nova_compute[260603]: 2025-10-02 08:58:17.974 2 INFO nova.compute.manager [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] VM Stopped (Lifecycle Event)
Oct 02 08:58:17 compute-0 nova_compute[260603]: 2025-10-02 08:58:17.995 2 DEBUG nova.compute.manager [None req-b53f4112-be05-4e07-ae55-77eb47dcd3df - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:58:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:18 compute-0 ceph-mon[74477]: pgmap v2405: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 KiB/s wr, 38 op/s
Oct 02 08:58:19 compute-0 nova_compute[260603]: 2025-10-02 08:58:19.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Oct 02 08:58:20 compute-0 nova_compute[260603]: 2025-10-02 08:58:20.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:20 compute-0 ceph-mon[74477]: pgmap v2406: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Oct 02 08:58:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:58:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:58:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903598391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:58:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:58:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903598391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:58:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:58:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 39K writes, 154K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.75 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4589 writes, 18K keys, 4589 commit groups, 1.0 writes per commit group, ingest: 22.57 MB, 0.04 MB/s
                                           Interval WAL: 4589 writes, 1752 syncs, 2.62 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:58:22 compute-0 ceph-mon[74477]: pgmap v2407: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:58:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1903598391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:58:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1903598391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:58:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:58:24 compute-0 nova_compute[260603]: 2025-10-02 08:58:24.529 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395489.5268977, 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:58:24 compute-0 nova_compute[260603]: 2025-10-02 08:58:24.529 2 INFO nova.compute.manager [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] VM Stopped (Lifecycle Event)
Oct 02 08:58:24 compute-0 nova_compute[260603]: 2025-10-02 08:58:24.553 2 DEBUG nova.compute.manager [None req-39a443d8-17a8-48bd-8d0b-eb28af3d785f - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:58:24 compute-0 nova_compute[260603]: 2025-10-02 08:58:24.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:24 compute-0 ceph-mon[74477]: pgmap v2408: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 08:58:25 compute-0 nova_compute[260603]: 2025-10-02 08:58:25.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:58:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:58:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 41K writes, 160K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 41K writes, 15K syncs, 2.73 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4414 writes, 18K keys, 4414 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s
                                           Interval WAL: 4414 writes, 1688 syncs, 2.61 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:58:26 compute-0 ceph-mon[74477]: pgmap v2409: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:58:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:58:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:58:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:58:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:58:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:58:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:58:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:58:28 compute-0 podman[398925]: 2025-10-02 08:58:28.036155905 +0000 UTC m=+0.082262399 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:58:28
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.meta']
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:58:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:28 compute-0 podman[398924]: 2025-10-02 08:58:28.136994601 +0000 UTC m=+0.182209627 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:58:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:58:28 compute-0 nova_compute[260603]: 2025-10-02 08:58:28.688 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:28 compute-0 nova_compute[260603]: 2025-10-02 08:58:28.689 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:28 compute-0 nova_compute[260603]: 2025-10-02 08:58:28.706 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:58:28 compute-0 nova_compute[260603]: 2025-10-02 08:58:28.785 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:28 compute-0 nova_compute[260603]: 2025-10-02 08:58:28.786 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:28 compute-0 nova_compute[260603]: 2025-10-02 08:58:28.797 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:58:28 compute-0 nova_compute[260603]: 2025-10-02 08:58:28.798 2 INFO nova.compute.claims [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:58:28 compute-0 ceph-mon[74477]: pgmap v2410: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:58:28 compute-0 nova_compute[260603]: 2025-10-02 08:58:28.914 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:58:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306506091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.385 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.395 2 DEBUG nova.compute.provider_tree [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.418 2 DEBUG nova.scheduler.client.report [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.454 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.455 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.511 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.513 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.542 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.564 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.704 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.706 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.707 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Creating image(s)
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.740 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:58:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.777 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.813 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.818 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.880 2 DEBUG nova.policy [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:58:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/306506091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.926 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.927 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.928 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.929 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.961 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:58:29 compute-0 nova_compute[260603]: 2025-10-02 08:58:29.968 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:30 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.335 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.410 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.506 2 DEBUG nova.objects.instance [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.526 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.526 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Ensure instance console log exists: /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.527 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.527 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.527 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:30.856 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:58:30 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:30.859 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:58:30 compute-0 ceph-mon[74477]: pgmap v2411: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:58:30 compute-0 nova_compute[260603]: 2025-10-02 08:58:30.943 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Successfully created port: 5ea70a9c-8299-4593-b2b2-5c3315870d73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:58:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 08:58:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 32K writes, 124K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.73 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3436 writes, 13K keys, 3436 commit groups, 1.0 writes per commit group, ingest: 14.09 MB, 0.02 MB/s
                                           Interval WAL: 3436 writes, 1397 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 08:58:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:58:31 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:31.862 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:31 compute-0 nova_compute[260603]: 2025-10-02 08:58:31.874 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Successfully updated port: 5ea70a9c-8299-4593-b2b2-5c3315870d73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:58:31 compute-0 nova_compute[260603]: 2025-10-02 08:58:31.894 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:58:31 compute-0 nova_compute[260603]: 2025-10-02 08:58:31.894 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:58:31 compute-0 nova_compute[260603]: 2025-10-02 08:58:31.894 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:58:31 compute-0 nova_compute[260603]: 2025-10-02 08:58:31.948 2 DEBUG nova.compute.manager [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:31 compute-0 nova_compute[260603]: 2025-10-02 08:58:31.949 2 DEBUG nova.compute.manager [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:58:31 compute-0 nova_compute[260603]: 2025-10-02 08:58:31.949 2 DEBUG oslo_concurrency.lockutils [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.020 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:58:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.899 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:58:32 compute-0 ceph-mon[74477]: pgmap v2412: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.925 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.925 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance network_info: |[{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.926 2 DEBUG oslo_concurrency.lockutils [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.927 2 DEBUG nova.network.neutron [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.932 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start _get_guest_xml network_info=[{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.939 2 WARNING nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.946 2 DEBUG nova.virt.libvirt.host [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.947 2 DEBUG nova.virt.libvirt.host [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.952 2 DEBUG nova.virt.libvirt.host [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.953 2 DEBUG nova.virt.libvirt.host [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.954 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.954 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.955 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.956 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.957 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.957 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.958 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.958 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.959 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.960 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.960 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.961 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:58:32 compute-0 nova_compute[260603]: 2025-10-02 08:58:32.967 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:58:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145870380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.396 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.428 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.433 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:58:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:58:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719254403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.861 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.863 2 DEBUG nova.virt.libvirt.vif [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:58:29Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.863 2 DEBUG nova.network.os_vif_util [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.864 2 DEBUG nova.network.os_vif_util [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.866 2 DEBUG nova.objects.instance [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.881 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <name>instance-00000081</name>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:58:32</nova:creationTime>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct 02 08:58:33 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <system>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <entry name="serial">b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <entry name="uuid">b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </system>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <os>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   </os>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <features>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   </features>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk">
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       </source>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config">
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       </source>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:58:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:42:41:53"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <target dev="tap5ea70a9c-82"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log" append="off"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <video>
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </video>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:58:33 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:58:33 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:58:33 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:58:33 compute-0 nova_compute[260603]: </domain>
Oct 02 08:58:33 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.882 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Preparing to wait for external event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.883 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.883 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.883 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.884 2 DEBUG nova.virt.libvirt.vif [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:58:29Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.884 2 DEBUG nova.network.os_vif_util [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.885 2 DEBUG nova.network.os_vif_util [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.885 2 DEBUG os_vif [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.897 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ea70a9c-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ea70a9c-82, col_values=(('external_ids', {'iface-id': '5ea70a9c-8299-4593-b2b2-5c3315870d73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:41:53', 'vm-uuid': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4145870380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:58:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2719254403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:58:33 compute-0 NetworkManager[45129]: <info>  [1759395513.9351] manager: (tap5ea70a9c-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:33 compute-0 nova_compute[260603]: 2025-10-02 08:58:33.947 2 INFO os_vif [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82')
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.034 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.035 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.035 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:42:41:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.035 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Using config drive
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.058 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:58:34 compute-0 podman[399218]: 2025-10-02 08:58:34.066017862 +0000 UTC m=+0.091402478 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 08:58:34 compute-0 podman[399221]: 2025-10-02 08:58:34.091328795 +0000 UTC m=+0.113432748 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.327 2 DEBUG nova.network.neutron [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.329 2 DEBUG nova.network.neutron [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.352 2 DEBUG oslo_concurrency.lockutils [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.381 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Creating config drive at /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.391 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztyre93w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.560 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztyre93w" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.598 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.606 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.838 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.840 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Deleting local config drive /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config because it was imported into RBD.
Oct 02 08:58:34 compute-0 kernel: tap5ea70a9c-82: entered promiscuous mode
Oct 02 08:58:34 compute-0 ceph-mon[74477]: pgmap v2413: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:58:34 compute-0 NetworkManager[45129]: <info>  [1759395514.9426] manager: (tap5ea70a9c-82): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Oct 02 08:58:34 compute-0 ovn_controller[152344]: 2025-10-02T08:58:34Z|01388|binding|INFO|Claiming lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 for this chassis.
Oct 02 08:58:34 compute-0 ovn_controller[152344]: 2025-10-02T08:58:34Z|01389|binding|INFO|5ea70a9c-8299-4593-b2b2-5c3315870d73: Claiming fa:16:3e:42:41:53 10.100.0.12
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:34 compute-0 nova_compute[260603]: 2025-10-02 08:58:34.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.986 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:41:53 10.100.0.12'], port_security=['fa:16:3e:42:41:53 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-531b0560-b279-49fe-a565-b902507e886d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41a1668d-b8be-4a47-8e43-ab11db6fabeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1a7ccfd-5e72-4548-90da-40016d961198, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5ea70a9c-8299-4593-b2b2-5c3315870d73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:58:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.988 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5ea70a9c-8299-4593-b2b2-5c3315870d73 in datapath 531b0560-b279-49fe-a565-b902507e886d bound to our chassis
Oct 02 08:58:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.990 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 531b0560-b279-49fe-a565-b902507e886d
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5477b588-0da9-4c13-a071-0e699b2fe999]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.012 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap531b0560-b1 in ovnmeta-531b0560-b279-49fe-a565-b902507e886d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.014 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap531b0560-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.014 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d51f2-9822-4e53-a40d-5cecbc319ce2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.016 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d85c61b4-f87b-4527-8147-2d30937e5468]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 systemd-machined[214636]: New machine qemu-163-instance-00000081.
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.035 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b9eff316-a108-4cda-a1dd-52820fc78533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:35 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000081.
Oct 02 08:58:35 compute-0 ovn_controller[152344]: 2025-10-02T08:58:35Z|01390|binding|INFO|Setting lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 ovn-installed in OVS
Oct 02 08:58:35 compute-0 ovn_controller[152344]: 2025-10-02T08:58:35Z|01391|binding|INFO|Setting lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 up in Southbound
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.054 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[569caf3b-8968-4903-b056-7e245fdadb91]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 systemd-udevd[399333]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:58:35 compute-0 NetworkManager[45129]: <info>  [1759395515.0845] device (tap5ea70a9c-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:58:35 compute-0 NetworkManager[45129]: <info>  [1759395515.0852] device (tap5ea70a9c-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.088 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5044c39e-2a53-4386-a6d0-008c13039c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 NetworkManager[45129]: <info>  [1759395515.0950] manager: (tap531b0560-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/553)
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.094 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0a3c1d-2c29-4dfb-8259-249ec5d060d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.129 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d4036b8b-8ac7-4931-869f-3da57726f7ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.133 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e331c4f6-c629-4778-b18d-f579b756f479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 NetworkManager[45129]: <info>  [1759395515.1521] device (tap531b0560-b0): carrier: link connected
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.155 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc22a7f-e784-4fda-bc43-1a295c2b0005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa86a756-7f3b-4b6e-84b1-083ac8f0c874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap531b0560-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:97:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639375, 'reachable_time': 19284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399362, 'error': None, 'target': 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7552b4-b15c-4e5e-b54d-151e67c301f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:972e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639375, 'tstamp': 639375}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399363, 'error': None, 'target': 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.227 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[54783a43-7475-4c8f-873a-ca162ea4d459]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap531b0560-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:97:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639375, 'reachable_time': 19284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399364, 'error': None, 'target': 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.272 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c031f4-1511-41e3-b4c5-f21f78ade978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.357 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f3908d-fa46-481f-8a0c-b2928c7c619a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.359 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap531b0560-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.359 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.360 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap531b0560-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:35 compute-0 NetworkManager[45129]: <info>  [1759395515.3629] manager: (tap531b0560-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Oct 02 08:58:35 compute-0 kernel: tap531b0560-b0: entered promiscuous mode
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.366 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap531b0560-b0, col_values=(('external_ids', {'iface-id': '34739963-aa72-473b-8b1d-5d0d09f0b1de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:35 compute-0 ovn_controller[152344]: 2025-10-02T08:58:35Z|01392|binding|INFO|Releasing lport 34739963-aa72-473b-8b1d-5d0d09f0b1de from this chassis (sb_readonly=0)
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.393 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/531b0560-b279-49fe-a565-b902507e886d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/531b0560-b279-49fe-a565-b902507e886d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.395 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4adadb-e839-4247-8de4-800cb52cc422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.396 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-531b0560-b279-49fe-a565-b902507e886d
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/531b0560-b279-49fe-a565-b902507e886d.pid.haproxy
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 531b0560-b279-49fe-a565-b902507e886d
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:58:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.398 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'env', 'PROCESS_TAG=haproxy-531b0560-b279-49fe-a565-b902507e886d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/531b0560-b279-49fe-a565-b902507e886d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.508 2 DEBUG nova.compute.manager [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.509 2 DEBUG oslo_concurrency.lockutils [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.510 2 DEBUG oslo_concurrency.lockutils [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.514 2 DEBUG oslo_concurrency.lockutils [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:35 compute-0 nova_compute[260603]: 2025-10-02 08:58:35.516 2 DEBUG nova.compute.manager [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Processing event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:58:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:58:35 compute-0 podman[399438]: 2025-10-02 08:58:35.80991716 +0000 UTC m=+0.057093811 container create 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:58:35 compute-0 systemd[1]: Started libpod-conmon-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd.scope.
Oct 02 08:58:35 compute-0 podman[399438]: 2025-10-02 08:58:35.781715906 +0000 UTC m=+0.028892567 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:58:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2789c6dcd510ee067d39b52a38d6115997f3323341899d6c11680c2dc54061e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:58:35 compute-0 podman[399438]: 2025-10-02 08:58:35.913268946 +0000 UTC m=+0.160445607 container init 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 08:58:35 compute-0 podman[399438]: 2025-10-02 08:58:35.921574209 +0000 UTC m=+0.168750850 container start 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:58:35 compute-0 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [NOTICE]   (399457) : New worker (399459) forked
Oct 02 08:58:35 compute-0 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [NOTICE]   (399457) : Loading success.
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.099 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395516.099375, b4eacfa3-8b31-492a-b3c5-829a890a4aae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.100 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] VM Started (Lifecycle Event)
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.101 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.105 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.108 2 INFO nova.virt.libvirt.driver [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance spawned successfully.
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.108 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.126 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.132 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.138 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.139 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.139 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.140 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.141 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.141 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.167 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.167 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395516.0995774, b4eacfa3-8b31-492a-b3c5-829a890a4aae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.168 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] VM Paused (Lifecycle Event)
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.193 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.197 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395516.104985, b4eacfa3-8b31-492a-b3c5-829a890a4aae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.197 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] VM Resumed (Lifecycle Event)
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.380 2 INFO nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Took 6.67 seconds to spawn the instance on the hypervisor.
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.380 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.382 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.396 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.427 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.467 2 INFO nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Took 7.72 seconds to build instance.
Oct 02 08:58:36 compute-0 nova_compute[260603]: 2025-10-02 08:58:36.529 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:36 compute-0 ceph-mon[74477]: pgmap v2414: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:58:37 compute-0 nova_compute[260603]: 2025-10-02 08:58:37.590 2 DEBUG nova.compute.manager [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:37 compute-0 nova_compute[260603]: 2025-10-02 08:58:37.591 2 DEBUG oslo_concurrency.lockutils [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:37 compute-0 nova_compute[260603]: 2025-10-02 08:58:37.592 2 DEBUG oslo_concurrency.lockutils [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:37 compute-0 nova_compute[260603]: 2025-10-02 08:58:37.592 2 DEBUG oslo_concurrency.lockutils [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:37 compute-0 nova_compute[260603]: 2025-10-02 08:58:37.592 2 DEBUG nova.compute.manager [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:58:37 compute-0 nova_compute[260603]: 2025-10-02 08:58:37.593 2 WARNING nova.compute.manager [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 for instance with vm_state active and task_state None.
Oct 02 08:58:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 984 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 02 08:58:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:38 compute-0 nova_compute[260603]: 2025-10-02 08:58:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:38 compute-0 ceph-mon[74477]: pgmap v2415: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 984 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034834989744090644 of space, bias 1.0, pg target 0.10450496923227193 quantized to 32 (current 32)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:58:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:58:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 02 08:58:40 compute-0 nova_compute[260603]: 2025-10-02 08:58:40.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:40 compute-0 ceph-mon[74477]: pgmap v2416: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 02 08:58:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 02 08:58:41 compute-0 ovn_controller[152344]: 2025-10-02T08:58:41Z|01393|binding|INFO|Releasing lport 34739963-aa72-473b-8b1d-5d0d09f0b1de from this chassis (sb_readonly=0)
Oct 02 08:58:41 compute-0 NetworkManager[45129]: <info>  [1759395521.8830] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Oct 02 08:58:41 compute-0 NetworkManager[45129]: <info>  [1759395521.8843] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Oct 02 08:58:41 compute-0 nova_compute[260603]: 2025-10-02 08:58:41.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:41 compute-0 ovn_controller[152344]: 2025-10-02T08:58:41Z|01394|binding|INFO|Releasing lport 34739963-aa72-473b-8b1d-5d0d09f0b1de from this chassis (sb_readonly=0)
Oct 02 08:58:41 compute-0 nova_compute[260603]: 2025-10-02 08:58:41.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:42 compute-0 nova_compute[260603]: 2025-10-02 08:58:42.163 2 DEBUG nova.compute.manager [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:58:42 compute-0 nova_compute[260603]: 2025-10-02 08:58:42.165 2 DEBUG nova.compute.manager [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:58:42 compute-0 nova_compute[260603]: 2025-10-02 08:58:42.165 2 DEBUG oslo_concurrency.lockutils [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:58:42 compute-0 nova_compute[260603]: 2025-10-02 08:58:42.165 2 DEBUG oslo_concurrency.lockutils [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:58:42 compute-0 nova_compute[260603]: 2025-10-02 08:58:42.166 2 DEBUG nova.network.neutron [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:58:42 compute-0 ceph-mon[74477]: pgmap v2417: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 02 08:58:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:58:43 compute-0 nova_compute[260603]: 2025-10-02 08:58:43.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:44 compute-0 nova_compute[260603]: 2025-10-02 08:58:44.096 2 DEBUG nova.network.neutron [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:58:44 compute-0 nova_compute[260603]: 2025-10-02 08:58:44.096 2 DEBUG nova.network.neutron [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:58:44 compute-0 nova_compute[260603]: 2025-10-02 08:58:44.124 2 DEBUG oslo_concurrency.lockutils [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:58:44 compute-0 ceph-mon[74477]: pgmap v2418: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 08:58:45 compute-0 nova_compute[260603]: 2025-10-02 08:58:45.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:45 compute-0 nova_compute[260603]: 2025-10-02 08:58:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:58:46 compute-0 ceph-mon[74477]: pgmap v2419: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 08:58:47 compute-0 nova_compute[260603]: 2025-10-02 08:58:47.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:47 compute-0 nova_compute[260603]: 2025-10-02 08:58:47.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:58:47 compute-0 nova_compute[260603]: 2025-10-02 08:58:47.566 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 08:58:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 105 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 107 op/s
Oct 02 08:58:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:48 compute-0 ovn_controller[152344]: 2025-10-02T08:58:48Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:41:53 10.100.0.12
Oct 02 08:58:48 compute-0 ovn_controller[152344]: 2025-10-02T08:58:48Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:41:53 10.100.0.12
Oct 02 08:58:48 compute-0 nova_compute[260603]: 2025-10-02 08:58:48.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:48 compute-0 ceph-mon[74477]: pgmap v2420: 305 pgs: 305 active+clean; 105 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 107 op/s
Oct 02 08:58:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 114 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 02 08:58:50 compute-0 nova_compute[260603]: 2025-10-02 08:58:50.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:50 compute-0 nova_compute[260603]: 2025-10-02 08:58:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:51 compute-0 ceph-mon[74477]: pgmap v2421: 305 pgs: 305 active+clean; 114 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 02 08:58:51 compute-0 nova_compute[260603]: 2025-10-02 08:58:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:51 compute-0 nova_compute[260603]: 2025-10-02 08:58:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:51 compute-0 nova_compute[260603]: 2025-10-02 08:58:51.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:51 compute-0 nova_compute[260603]: 2025-10-02 08:58:51.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:51 compute-0 nova_compute[260603]: 2025-10-02 08:58:51.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:51 compute-0 nova_compute[260603]: 2025-10-02 08:58:51.560 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:58:51 compute-0 nova_compute[260603]: 2025-10-02 08:58:51.561 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 114 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 719 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:58:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:58:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031318051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:51 compute-0 nova_compute[260603]: 2025-10-02 08:58:51.978 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1031318051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.047 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.047 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.227 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.228 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3453MB free_disk=59.94355773925781GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.228 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.229 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.421 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance b4eacfa3-8b31-492a-b3c5-829a890a4aae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.421 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.422 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.562 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:58:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:58:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/549809416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.991 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:58:52 compute-0 nova_compute[260603]: 2025-10-02 08:58:52.997 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:58:53 compute-0 ceph-mon[74477]: pgmap v2422: 305 pgs: 305 active+clean; 114 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 719 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 08:58:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/549809416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:58:53 compute-0 nova_compute[260603]: 2025-10-02 08:58:53.025 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:58:53 compute-0 nova_compute[260603]: 2025-10-02 08:58:53.050 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:58:53 compute-0 nova_compute[260603]: 2025-10-02 08:58:53.050 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:58:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:53 compute-0 nova_compute[260603]: 2025-10-02 08:58:53.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:53 compute-0 nova_compute[260603]: 2025-10-02 08:58:53.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:53 compute-0 nova_compute[260603]: 2025-10-02 08:58:53.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 08:58:53 compute-0 nova_compute[260603]: 2025-10-02 08:58:53.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 08:58:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 778 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:58:53 compute-0 nova_compute[260603]: 2025-10-02 08:58:53.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:55 compute-0 ceph-mon[74477]: pgmap v2423: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 778 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 08:58:55 compute-0 nova_compute[260603]: 2025-10-02 08:58:55.280 2 INFO nova.compute.manager [None req-e4604733-4f94-4e89-9ed4-72bbb11abc21 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Get console output
Oct 02 08:58:55 compute-0 nova_compute[260603]: 2025-10-02 08:58:55.286 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 08:58:55 compute-0 nova_compute[260603]: 2025-10-02 08:58:55.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:55 compute-0 nova_compute[260603]: 2025-10-02 08:58:55.531 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 08:58:57 compute-0 ceph-mon[74477]: pgmap v2424: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 08:58:57 compute-0 nova_compute[260603]: 2025-10-02 08:58:57.740 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:58:57 compute-0 nova_compute[260603]: 2025-10-02 08:58:57.740 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:58:57 compute-0 nova_compute[260603]: 2025-10-02 08:58:57.741 2 DEBUG nova.objects.instance [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'flavor' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:58:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 08:58:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:58:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:58:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:58:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:58:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:58:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:58:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:58:58 compute-0 nova_compute[260603]: 2025-10-02 08:58:58.837 2 DEBUG nova.objects.instance [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_requests' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:58:58 compute-0 nova_compute[260603]: 2025-10-02 08:58:58.851 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:58:58 compute-0 nova_compute[260603]: 2025-10-02 08:58:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:58:58 compute-0 podman[399517]: 2025-10-02 08:58:58.992190297 +0000 UTC m=+0.060645123 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 08:58:59 compute-0 podman[399516]: 2025-10-02 08:58:59.015391462 +0000 UTC m=+0.083885680 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 08:58:59 compute-0 ceph-mon[74477]: pgmap v2425: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 08:58:59 compute-0 nova_compute[260603]: 2025-10-02 08:58:59.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:58:59 compute-0 nova_compute[260603]: 2025-10-02 08:58:59.651 2 DEBUG nova.policy [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:58:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 861 KiB/s wr, 27 op/s
Oct 02 08:59:00 compute-0 nova_compute[260603]: 2025-10-02 08:59:00.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:00 compute-0 nova_compute[260603]: 2025-10-02 08:59:00.806 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Successfully created port: 4e4cec89-b01e-4202-bc3e-a65ce8864017 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:59:01 compute-0 ceph-mon[74477]: pgmap v2426: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 861 KiB/s wr, 27 op/s
Oct 02 08:59:01 compute-0 nova_compute[260603]: 2025-10-02 08:59:01.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 74 KiB/s wr, 12 op/s
Oct 02 08:59:01 compute-0 nova_compute[260603]: 2025-10-02 08:59:01.896 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Successfully updated port: 4e4cec89-b01e-4202-bc3e-a65ce8864017 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:59:01 compute-0 nova_compute[260603]: 2025-10-02 08:59:01.914 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:59:01 compute-0 nova_compute[260603]: 2025-10-02 08:59:01.915 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:59:01 compute-0 nova_compute[260603]: 2025-10-02 08:59:01.915 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:59:02 compute-0 nova_compute[260603]: 2025-10-02 08:59:02.008 2 DEBUG nova.compute.manager [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:59:02 compute-0 nova_compute[260603]: 2025-10-02 08:59:02.009 2 DEBUG nova.compute.manager [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-4e4cec89-b01e-4202-bc3e-a65ce8864017. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:59:02 compute-0 nova_compute[260603]: 2025-10-02 08:59:02.009 2 DEBUG oslo_concurrency.lockutils [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:59:03 compute-0 ceph-mon[74477]: pgmap v2427: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 74 KiB/s wr, 12 op/s
Oct 02 08:59:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.628 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.766 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.767 2 DEBUG oslo_concurrency.lockutils [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.767 2 DEBUG nova.network.neutron [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 4e4cec89-b01e-4202-bc3e-a65ce8864017 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.770 2 DEBUG nova.virt.libvirt.vif [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.771 2 DEBUG nova.network.os_vif_util [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.771 2 DEBUG nova.network.os_vif_util [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.772 2 DEBUG os_vif [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 78 KiB/s wr, 12 op/s
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e4cec89-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e4cec89-b0, col_values=(('external_ids', {'iface-id': '4e4cec89-b01e-4202-bc3e-a65ce8864017', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:88:e1', 'vm-uuid': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:03 compute-0 NetworkManager[45129]: <info>  [1759395543.7802] manager: (tap4e4cec89-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.788 2 INFO os_vif [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0')
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.788 2 DEBUG nova.virt.libvirt.vif [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.789 2 DEBUG nova.network.os_vif_util [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.789 2 DEBUG nova.network.os_vif_util [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.792 2 DEBUG nova.virt.libvirt.guest [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] attach device xml: <interface type="ethernet">
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:45:88:e1"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <target dev="tap4e4cec89-b0"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]: </interface>
Oct 02 08:59:03 compute-0 nova_compute[260603]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 08:59:03 compute-0 kernel: tap4e4cec89-b0: entered promiscuous mode
Oct 02 08:59:03 compute-0 NetworkManager[45129]: <info>  [1759395543.8041] manager: (tap4e4cec89-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Oct 02 08:59:03 compute-0 ovn_controller[152344]: 2025-10-02T08:59:03Z|01395|binding|INFO|Claiming lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 for this chassis.
Oct 02 08:59:03 compute-0 ovn_controller[152344]: 2025-10-02T08:59:03Z|01396|binding|INFO|4e4cec89-b01e-4202-bc3e-a65ce8864017: Claiming fa:16:3e:45:88:e1 10.100.0.28
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:03 compute-0 systemd-udevd[399565]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.850 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:88:e1 10.100.0.28'], port_security=['fa:16:3e:45:88:e1 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf29bdfe-eec4-4bc1-887c-39f99d9387e9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4e4cec89-b01e-4202-bc3e-a65ce8864017) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:59:03 compute-0 NetworkManager[45129]: <info>  [1759395543.8514] device (tap4e4cec89-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:03 compute-0 NetworkManager[45129]: <info>  [1759395543.8521] device (tap4e4cec89-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.852 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4e4cec89-b01e-4202-bc3e-a65ce8864017 in datapath 5d0f6b84-ebf5-436d-83fe-b7739dc629d9 bound to our chassis
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.853 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d0f6b84-ebf5-436d-83fe-b7739dc629d9
Oct 02 08:59:03 compute-0 ovn_controller[152344]: 2025-10-02T08:59:03Z|01397|binding|INFO|Setting lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 ovn-installed in OVS
Oct 02 08:59:03 compute-0 ovn_controller[152344]: 2025-10-02T08:59:03Z|01398|binding|INFO|Setting lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 up in Southbound
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.863 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e8f9e0-4c0b-4e68-af1e-7d07b62c23f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.864 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d0f6b84-e1 in ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.866 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d0f6b84-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.866 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83bf3796-b19d-4ecd-b085-ec69c2549a35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.867 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[baaf6604-8ffd-4df2-afe0-268f53f8f836]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.876 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f98bde0f-3b42-4c76-84c2-4ab5b156f2f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.898 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbfa6b2-6d88-4de0-ba52-c5704f4002f5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.925 2 DEBUG nova.virt.libvirt.driver [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.926 2 DEBUG nova.virt.libvirt.driver [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.926 2 DEBUG nova.virt.libvirt.driver [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:42:41:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.926 2 DEBUG nova.virt.libvirt.driver [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:45:88:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.926 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d3276dd1-8e33-4f40-83dc-bed4d76ae83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.931 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8238dd6-9f3c-4439-a364-4e522f9e9fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 NetworkManager[45129]: <info>  [1759395543.9327] manager: (tap5d0f6b84-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/559)
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.957 2 DEBUG nova.virt.libvirt.guest [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:59:03</nova:creationTime>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct 02 08:59:03 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     <nova:port uuid="4e4cec89-b01e-4202-bc3e-a65ce8864017">
Oct 02 08:59:03 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 02 08:59:03 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 08:59:03 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 08:59:03 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 08:59:03 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.967 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd5373d-e862-404b-8f1e-a66da4c3a909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.970 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e11230ff-aee5-49fa-bc4c-771d76677433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:03 compute-0 nova_compute[260603]: 2025-10-02 08:59:03.988 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:03 compute-0 NetworkManager[45129]: <info>  [1759395543.9957] device (tap5d0f6b84-e0): carrier: link connected
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.002 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[551ce90e-975a-445b-9b4f-24f5120834b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.018 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1179fd6e-b2a7-4cd6-96e5-b2c3286bf3a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d0f6b84-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:31:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642259, 'reachable_time': 20336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399592, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.033 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c41f4d79-667a-4140-a63f-4049ac7c15d4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:3180'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642259, 'tstamp': 642259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399593, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.048 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e849f778-d822-425e-83fe-1ffb636152e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d0f6b84-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:31:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642259, 'reachable_time': 20336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399594, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.077 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8cef4d82-cc5e-48da-960a-56b674f661c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.097 2 DEBUG nova.compute.manager [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.098 2 DEBUG oslo_concurrency.lockutils [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.098 2 DEBUG oslo_concurrency.lockutils [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.099 2 DEBUG oslo_concurrency.lockutils [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.099 2 DEBUG nova.compute.manager [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.099 2 WARNING nova.compute.manager [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 for instance with vm_state active and task_state None.
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.141 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f30eae9-a7cf-4445-8676-45e853898dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.142 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d0f6b84-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.143 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.143 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d0f6b84-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:04 compute-0 NetworkManager[45129]: <info>  [1759395544.1456] manager: (tap5d0f6b84-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Oct 02 08:59:04 compute-0 kernel: tap5d0f6b84-e0: entered promiscuous mode
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.147 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d0f6b84-e0, col_values=(('external_ids', {'iface-id': 'f202c767-dd88-4dcf-bf75-a2c0dfdb6c1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:04 compute-0 ovn_controller[152344]: 2025-10-02T08:59:04Z|01399|binding|INFO|Releasing lport f202c767-dd88-4dcf-bf75-a2c0dfdb6c1d from this chassis (sb_readonly=0)
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:04 compute-0 nova_compute[260603]: 2025-10-02 08:59:04.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.162 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d0f6b84-ebf5-436d-83fe-b7739dc629d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d0f6b84-ebf5-436d-83fe-b7739dc629d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9a464b8a-18f8-4d28-aceb-20d59b06e0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.164 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: global
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-5d0f6b84-ebf5-436d-83fe-b7739dc629d9
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/5d0f6b84-ebf5-436d-83fe-b7739dc629d9.pid.haproxy
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 5d0f6b84-ebf5-436d-83fe-b7739dc629d9
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 08:59:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.165 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'env', 'PROCESS_TAG=haproxy-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d0f6b84-ebf5-436d-83fe-b7739dc629d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 08:59:04 compute-0 podman[399626]: 2025-10-02 08:59:04.579926116 +0000 UTC m=+0.057530695 container create 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 08:59:04 compute-0 systemd[1]: Started libpod-conmon-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb.scope.
Oct 02 08:59:04 compute-0 podman[399626]: 2025-10-02 08:59:04.547335102 +0000 UTC m=+0.024939691 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 08:59:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168d60d32efd9370edb94bf9730fe02355e22ea7cfe7215ac4f9b026d57c9878/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:04 compute-0 podman[399626]: 2025-10-02 08:59:04.68571584 +0000 UTC m=+0.163320409 container init 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 08:59:04 compute-0 podman[399626]: 2025-10-02 08:59:04.692841456 +0000 UTC m=+0.170445985 container start 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 08:59:04 compute-0 podman[399643]: 2025-10-02 08:59:04.704273798 +0000 UTC m=+0.057170683 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 08:59:04 compute-0 podman[399639]: 2025-10-02 08:59:04.704807125 +0000 UTC m=+0.066430717 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct 02 08:59:04 compute-0 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [NOTICE]   (399684) : New worker (399688) forked
Oct 02 08:59:04 compute-0 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [NOTICE]   (399684) : Loading success.
Oct 02 08:59:05 compute-0 nova_compute[260603]: 2025-10-02 08:59:05.051 2 DEBUG nova.network.neutron [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 4e4cec89-b01e-4202-bc3e-a65ce8864017. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:59:05 compute-0 nova_compute[260603]: 2025-10-02 08:59:05.053 2 DEBUG nova.network.neutron [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:59:05 compute-0 ceph-mon[74477]: pgmap v2428: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 78 KiB/s wr, 12 op/s
Oct 02 08:59:05 compute-0 nova_compute[260603]: 2025-10-02 08:59:05.070 2 DEBUG oslo_concurrency.lockutils [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:59:05 compute-0 nova_compute[260603]: 2025-10-02 08:59:05.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:05 compute-0 nova_compute[260603]: 2025-10-02 08:59:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 0 op/s
Oct 02 08:59:06 compute-0 ovn_controller[152344]: 2025-10-02T08:59:06Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:88:e1 10.100.0.28
Oct 02 08:59:06 compute-0 ovn_controller[152344]: 2025-10-02T08:59:06Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:88:e1 10.100.0.28
Oct 02 08:59:06 compute-0 nova_compute[260603]: 2025-10-02 08:59:06.231 2 DEBUG nova.compute.manager [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:59:06 compute-0 nova_compute[260603]: 2025-10-02 08:59:06.231 2 DEBUG oslo_concurrency.lockutils [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:06 compute-0 nova_compute[260603]: 2025-10-02 08:59:06.231 2 DEBUG oslo_concurrency.lockutils [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:06 compute-0 nova_compute[260603]: 2025-10-02 08:59:06.232 2 DEBUG oslo_concurrency.lockutils [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:06 compute-0 nova_compute[260603]: 2025-10-02 08:59:06.232 2 DEBUG nova.compute.manager [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:59:06 compute-0 nova_compute[260603]: 2025-10-02 08:59:06.232 2 WARNING nova.compute.manager [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 for instance with vm_state active and task_state None.
Oct 02 08:59:07 compute-0 ceph-mon[74477]: pgmap v2429: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 0 op/s
Oct 02 08:59:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 19 KiB/s wr, 1 op/s
Oct 02 08:59:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:08 compute-0 nova_compute[260603]: 2025-10-02 08:59:08.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:09 compute-0 ceph-mon[74477]: pgmap v2430: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 19 KiB/s wr, 1 op/s
Oct 02 08:59:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 0 op/s
Oct 02 08:59:10 compute-0 nova_compute[260603]: 2025-10-02 08:59:10.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:10 compute-0 nova_compute[260603]: 2025-10-02 08:59:10.531 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:10 compute-0 nova_compute[260603]: 2025-10-02 08:59:10.532 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 08:59:11 compute-0 ceph-mon[74477]: pgmap v2431: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 0 op/s
Oct 02 08:59:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 0 op/s
Oct 02 08:59:11 compute-0 nova_compute[260603]: 2025-10-02 08:59:11.915 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:11 compute-0 nova_compute[260603]: 2025-10-02 08:59:11.916 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:11 compute-0 nova_compute[260603]: 2025-10-02 08:59:11.935 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.002 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.002 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.010 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.011 2 INFO nova.compute.claims [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.116 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:59:12 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051964863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.566 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.576 2 DEBUG nova.compute.provider_tree [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.593 2 DEBUG nova.scheduler.client.report [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.613 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.614 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.658 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.658 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.679 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.701 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.788 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.790 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.790 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Creating image(s)
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.816 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.843 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.871 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.875 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.974 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.975 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.976 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:12 compute-0 nova_compute[260603]: 2025-10-02 08:59:12.976 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.010 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.017 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f141f189-a224-4ac7-88b5-c28f198944e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:13 compute-0 sudo[399774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:13 compute-0 sudo[399774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:13 compute-0 sudo[399774]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:13 compute-0 sudo[399820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:59:13 compute-0 sudo[399820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:13 compute-0 sudo[399820]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:13 compute-0 ceph-mon[74477]: pgmap v2432: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 0 op/s
Oct 02 08:59:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3051964863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:59:13 compute-0 sudo[399863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:13 compute-0 sudo[399863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:13 compute-0 sudo[399863]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.315 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f141f189-a224-4ac7-88b5-c28f198944e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:13 compute-0 sudo[399888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 08:59:13 compute-0 sudo[399888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.409 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.503 2 DEBUG nova.objects.instance [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid f141f189-a224-4ac7-88b5-c28f198944e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.528 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.529 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Ensure instance console log exists: /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.530 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.530 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.531 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.689 2 DEBUG nova.policy [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 08:59:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 131 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 481 KiB/s wr, 12 op/s
Oct 02 08:59:13 compute-0 nova_compute[260603]: 2025-10-02 08:59:13.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:13 compute-0 sudo[399888]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:59:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:59:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 08:59:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:59:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 08:59:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:59:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c0b6fb01-0bed-4b4b-a56a-27deff671a21 does not exist
Oct 02 08:59:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2b68cb83-a7f4-40b9-936c-9f3b8f9b44b5 does not exist
Oct 02 08:59:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b74cc849-e8b7-4bd8-a642-003e15510055 does not exist
Oct 02 08:59:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 08:59:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:59:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 08:59:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:59:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 08:59:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:59:13 compute-0 sudo[400016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:13 compute-0 sudo[400016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:13 compute-0 sudo[400016]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:13 compute-0 sudo[400041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:59:14 compute-0 sudo[400041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:14 compute-0 sudo[400041]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:14 compute-0 sudo[400066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:14 compute-0 sudo[400066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:14 compute-0 sudo[400066]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:14 compute-0 sudo[400091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 08:59:14 compute-0 sudo[400091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:59:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 08:59:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:59:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 08:59:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 08:59:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 08:59:14 compute-0 podman[400156]: 2025-10-02 08:59:14.447210775 +0000 UTC m=+0.047332403 container create 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 08:59:14 compute-0 systemd[1]: Started libpod-conmon-5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5.scope.
Oct 02 08:59:14 compute-0 podman[400156]: 2025-10-02 08:59:14.425330831 +0000 UTC m=+0.025452499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:59:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:59:14 compute-0 podman[400156]: 2025-10-02 08:59:14.542367011 +0000 UTC m=+0.142488669 container init 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:59:14 compute-0 podman[400156]: 2025-10-02 08:59:14.553027519 +0000 UTC m=+0.153149167 container start 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 08:59:14 compute-0 podman[400156]: 2025-10-02 08:59:14.556999985 +0000 UTC m=+0.157121633 container attach 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:59:14 compute-0 hungry_cartwright[400172]: 167 167
Oct 02 08:59:14 compute-0 systemd[1]: libpod-5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5.scope: Deactivated successfully.
Oct 02 08:59:14 compute-0 podman[400156]: 2025-10-02 08:59:14.560849967 +0000 UTC m=+0.160971605 container died 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 02 08:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7023d175c066b291013a3c6702d6740aa066d4fdbc3c2f34ada44fa37e46df8-merged.mount: Deactivated successfully.
Oct 02 08:59:14 compute-0 podman[400156]: 2025-10-02 08:59:14.601125064 +0000 UTC m=+0.201246682 container remove 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:59:14 compute-0 systemd[1]: libpod-conmon-5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5.scope: Deactivated successfully.
Oct 02 08:59:14 compute-0 podman[400196]: 2025-10-02 08:59:14.811194344 +0000 UTC m=+0.049603434 container create a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 08:59:14 compute-0 systemd[1]: Started libpod-conmon-a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59.scope.
Oct 02 08:59:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:14 compute-0 podman[400196]: 2025-10-02 08:59:14.785130838 +0000 UTC m=+0.023539958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:14 compute-0 podman[400196]: 2025-10-02 08:59:14.888391281 +0000 UTC m=+0.126800381 container init a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 08:59:14 compute-0 podman[400196]: 2025-10-02 08:59:14.897009394 +0000 UTC m=+0.135418484 container start a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 08:59:14 compute-0 podman[400196]: 2025-10-02 08:59:14.900514106 +0000 UTC m=+0.138923206 container attach a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 08:59:15 compute-0 ceph-mon[74477]: pgmap v2433: 305 pgs: 305 active+clean; 131 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 481 KiB/s wr, 12 op/s
Oct 02 08:59:15 compute-0 nova_compute[260603]: 2025-10-02 08:59:15.265 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Successfully created port: 48cfc997-bd30-40b4-a387-f40a75731793 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 08:59:15 compute-0 nova_compute[260603]: 2025-10-02 08:59:15.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 131 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 476 KiB/s wr, 12 op/s
Oct 02 08:59:16 compute-0 modest_kare[400212]: --> passed data devices: 0 physical, 3 LVM
Oct 02 08:59:16 compute-0 modest_kare[400212]: --> relative data size: 1.0
Oct 02 08:59:16 compute-0 modest_kare[400212]: --> All data devices are unavailable
Oct 02 08:59:16 compute-0 systemd[1]: libpod-a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59.scope: Deactivated successfully.
Oct 02 08:59:16 compute-0 podman[400196]: 2025-10-02 08:59:16.050840835 +0000 UTC m=+1.289249915 container died a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:59:16 compute-0 systemd[1]: libpod-a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59.scope: Consumed 1.052s CPU time.
Oct 02 08:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d-merged.mount: Deactivated successfully.
Oct 02 08:59:16 compute-0 podman[400196]: 2025-10-02 08:59:16.216355542 +0000 UTC m=+1.454764622 container remove a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 08:59:16 compute-0 systemd[1]: libpod-conmon-a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59.scope: Deactivated successfully.
Oct 02 08:59:16 compute-0 sudo[400091]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:16 compute-0 sudo[400255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:16 compute-0 sudo[400255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:16 compute-0 sudo[400255]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:16 compute-0 sudo[400280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:59:16 compute-0 sudo[400280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:16 compute-0 sudo[400280]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:16 compute-0 sudo[400305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:16 compute-0 sudo[400305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:16 compute-0 sudo[400305]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:16 compute-0 sudo[400330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 08:59:16 compute-0 sudo[400330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:16 compute-0 nova_compute[260603]: 2025-10-02 08:59:16.650 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Successfully updated port: 48cfc997-bd30-40b4-a387-f40a75731793 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 08:59:16 compute-0 nova_compute[260603]: 2025-10-02 08:59:16.669 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:59:16 compute-0 nova_compute[260603]: 2025-10-02 08:59:16.670 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:59:16 compute-0 nova_compute[260603]: 2025-10-02 08:59:16.670 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 08:59:16 compute-0 nova_compute[260603]: 2025-10-02 08:59:16.775 2 DEBUG nova.compute.manager [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-changed-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:59:16 compute-0 nova_compute[260603]: 2025-10-02 08:59:16.775 2 DEBUG nova.compute.manager [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Refreshing instance network info cache due to event network-changed-48cfc997-bd30-40b4-a387-f40a75731793. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:59:16 compute-0 nova_compute[260603]: 2025-10-02 08:59:16.776 2 DEBUG oslo_concurrency.lockutils [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:59:16 compute-0 nova_compute[260603]: 2025-10-02 08:59:16.878 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 08:59:16 compute-0 podman[400398]: 2025-10-02 08:59:16.956672883 +0000 UTC m=+0.077156537 container create c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:59:17 compute-0 systemd[1]: Started libpod-conmon-c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2.scope.
Oct 02 08:59:17 compute-0 podman[400398]: 2025-10-02 08:59:16.92659719 +0000 UTC m=+0.047080894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:59:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:59:17 compute-0 podman[400398]: 2025-10-02 08:59:17.061656571 +0000 UTC m=+0.182140285 container init c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:59:17 compute-0 podman[400398]: 2025-10-02 08:59:17.072432263 +0000 UTC m=+0.192915887 container start c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:59:17 compute-0 podman[400398]: 2025-10-02 08:59:17.076424299 +0000 UTC m=+0.196908013 container attach c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:59:17 compute-0 hardcore_chaplygin[400414]: 167 167
Oct 02 08:59:17 compute-0 systemd[1]: libpod-c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2.scope: Deactivated successfully.
Oct 02 08:59:17 compute-0 podman[400398]: 2025-10-02 08:59:17.080209259 +0000 UTC m=+0.200692963 container died c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 08:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-204697e4a0f8d029d99359bef7c99e743ce1fc32c0fbd68b2aaf4038d4fc00a2-merged.mount: Deactivated successfully.
Oct 02 08:59:17 compute-0 podman[400398]: 2025-10-02 08:59:17.129105149 +0000 UTC m=+0.249588773 container remove c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 08:59:17 compute-0 systemd[1]: libpod-conmon-c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2.scope: Deactivated successfully.
Oct 02 08:59:17 compute-0 ceph-mon[74477]: pgmap v2434: 305 pgs: 305 active+clean; 131 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 476 KiB/s wr, 12 op/s
Oct 02 08:59:17 compute-0 podman[400439]: 2025-10-02 08:59:17.406287227 +0000 UTC m=+0.067239763 container create 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:59:17 compute-0 systemd[1]: Started libpod-conmon-943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93.scope.
Oct 02 08:59:17 compute-0 podman[400439]: 2025-10-02 08:59:17.382641567 +0000 UTC m=+0.043594173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:59:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:17 compute-0 podman[400439]: 2025-10-02 08:59:17.512020679 +0000 UTC m=+0.172973285 container init 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:59:17 compute-0 podman[400439]: 2025-10-02 08:59:17.533138799 +0000 UTC m=+0.194091325 container start 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 08:59:17 compute-0 podman[400439]: 2025-10-02 08:59:17.536763983 +0000 UTC m=+0.197716539 container attach 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:59:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 167 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 02 08:59:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]: {
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:     "0": [
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:         {
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "devices": [
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "/dev/loop3"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             ],
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_name": "ceph_lv0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_size": "21470642176",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "name": "ceph_lv0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "tags": {
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cluster_name": "ceph",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.crush_device_class": "",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.encrypted": "0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osd_id": "0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.type": "block",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.vdo": "0"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             },
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "type": "block",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "vg_name": "ceph_vg0"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:         }
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:     ],
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:     "1": [
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:         {
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "devices": [
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "/dev/loop4"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             ],
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_name": "ceph_lv1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_size": "21470642176",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "name": "ceph_lv1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "tags": {
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cluster_name": "ceph",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.crush_device_class": "",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.encrypted": "0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osd_id": "1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.type": "block",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.vdo": "0"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             },
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "type": "block",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "vg_name": "ceph_vg1"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:         }
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:     ],
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:     "2": [
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:         {
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "devices": [
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "/dev/loop5"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             ],
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_name": "ceph_lv2",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_size": "21470642176",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "name": "ceph_lv2",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "tags": {
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.cluster_name": "ceph",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.crush_device_class": "",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.encrypted": "0",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osd_id": "2",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.type": "block",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:                 "ceph.vdo": "0"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             },
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "type": "block",
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:             "vg_name": "ceph_vg2"
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:         }
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]:     ]
Oct 02 08:59:18 compute-0 frosty_dijkstra[400455]: }
Oct 02 08:59:18 compute-0 systemd[1]: libpod-943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93.scope: Deactivated successfully.
Oct 02 08:59:18 compute-0 podman[400439]: 2025-10-02 08:59:18.307329033 +0000 UTC m=+0.968281559 container died 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 08:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442-merged.mount: Deactivated successfully.
Oct 02 08:59:18 compute-0 podman[400439]: 2025-10-02 08:59:18.387382341 +0000 UTC m=+1.048334867 container remove 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 08:59:18 compute-0 systemd[1]: libpod-conmon-943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93.scope: Deactivated successfully.
Oct 02 08:59:18 compute-0 sudo[400330]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:18 compute-0 sudo[400479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:18 compute-0 sudo[400479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:18 compute-0 sudo[400479]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:18 compute-0 sudo[400504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 08:59:18 compute-0 sudo[400504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:18 compute-0 sudo[400504]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:18 compute-0 sudo[400529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:18 compute-0 sudo[400529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:18 compute-0 sudo[400529]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:18 compute-0 sudo[400554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 08:59:18 compute-0 sudo[400554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.851 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Updating instance_info_cache with network_info: [{"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.879 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.880 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance network_info: |[{"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.881 2 DEBUG oslo_concurrency.lockutils [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.881 2 DEBUG nova.network.neutron [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Refreshing network info cache for port 48cfc997-bd30-40b4-a387-f40a75731793 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.886 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start _get_guest_xml network_info=[{"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.895 2 WARNING nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.914 2 DEBUG nova.virt.libvirt.host [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.915 2 DEBUG nova.virt.libvirt.host [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.920 2 DEBUG nova.virt.libvirt.host [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.921 2 DEBUG nova.virt.libvirt.host [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.922 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.922 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.923 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.923 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.924 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.924 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.924 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.924 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.925 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.925 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.925 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.926 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 08:59:18 compute-0 nova_compute[260603]: 2025-10-02 08:59:18.931 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:19 compute-0 ceph-mon[74477]: pgmap v2435: 305 pgs: 305 active+clean; 167 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 02 08:59:19 compute-0 podman[400637]: 2025-10-02 08:59:19.31722694 +0000 UTC m=+0.059486687 container create 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 08:59:19 compute-0 systemd[1]: Started libpod-conmon-1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee.scope.
Oct 02 08:59:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:59:19 compute-0 podman[400637]: 2025-10-02 08:59:19.286871208 +0000 UTC m=+0.029130985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:59:19 compute-0 podman[400637]: 2025-10-02 08:59:19.389686287 +0000 UTC m=+0.131946054 container init 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:59:19 compute-0 podman[400637]: 2025-10-02 08:59:19.399297352 +0000 UTC m=+0.141557099 container start 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 08:59:19 compute-0 podman[400637]: 2025-10-02 08:59:19.40239092 +0000 UTC m=+0.144650697 container attach 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 08:59:19 compute-0 intelligent_rhodes[400654]: 167 167
Oct 02 08:59:19 compute-0 systemd[1]: libpod-1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee.scope: Deactivated successfully.
Oct 02 08:59:19 compute-0 podman[400637]: 2025-10-02 08:59:19.408943158 +0000 UTC m=+0.151202915 container died 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 08:59:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:59:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882428886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-29e8f6feb5bb5784d851269a6099e8b279d475a28e1ea7494c98b408192c895c-merged.mount: Deactivated successfully.
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.444 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:19 compute-0 podman[400637]: 2025-10-02 08:59:19.44748163 +0000 UTC m=+0.189741387 container remove 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:59:19 compute-0 systemd[1]: libpod-conmon-1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee.scope: Deactivated successfully.
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.477 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.483 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:19 compute-0 podman[400697]: 2025-10-02 08:59:19.643580267 +0000 UTC m=+0.041755744 container create b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 08:59:19 compute-0 systemd[1]: Started libpod-conmon-b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a.scope.
Oct 02 08:59:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 08:59:19 compute-0 podman[400697]: 2025-10-02 08:59:19.626164005 +0000 UTC m=+0.024339502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 08:59:19 compute-0 podman[400697]: 2025-10-02 08:59:19.74145274 +0000 UTC m=+0.139628267 container init b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 08:59:19 compute-0 podman[400697]: 2025-10-02 08:59:19.748792223 +0000 UTC m=+0.146967700 container start b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 08:59:19 compute-0 podman[400697]: 2025-10-02 08:59:19.7524821 +0000 UTC m=+0.150657617 container attach b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 08:59:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:59:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 08:59:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766387724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.938 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.941 2 DEBUG nova.virt.libvirt.vif [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:59:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-300987939',display_name='tempest-TestNetworkBasicOps-server-300987939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-300987939',id=130,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCJjWfAj56mQeyeIvU6XGvvjANhAOW4ymkjaU7nDK7hdgdQPn23X31SFk66pyTpQCxaiWSbcIfBie1wBVKGMKO2QIcNLprOGSD1fN9DoZ9nw1hLigEh1SZTI3GQ/zevNFg==',key_name='tempest-TestNetworkBasicOps-1976267539',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-3sfliqo6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:59:12Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=f141f189-a224-4ac7-88b5-c28f198944e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.941 2 DEBUG nova.network.os_vif_util [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.942 2 DEBUG nova.network.os_vif_util [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.944 2 DEBUG nova.objects.instance [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid f141f189-a224-4ac7-88b5-c28f198944e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.969 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <uuid>f141f189-a224-4ac7-88b5-c28f198944e4</uuid>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <name>instance-00000082</name>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <metadata>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-300987939</nova:name>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 08:59:18</nova:creationTime>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <nova:port uuid="48cfc997-bd30-40b4-a387-f40a75731793">
Oct 02 08:59:19 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   </metadata>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <system>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <entry name="serial">f141f189-a224-4ac7-88b5-c28f198944e4</entry>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <entry name="uuid">f141f189-a224-4ac7-88b5-c28f198944e4</entry>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </system>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <os>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   </os>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <features>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <apic/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   </features>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   </clock>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   </cpu>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   <devices>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f141f189-a224-4ac7-88b5-c28f198944e4_disk">
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/f141f189-a224-4ac7-88b5-c28f198944e4_disk.config">
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       </source>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 08:59:19 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       </auth>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </disk>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:51:dd:9b"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <target dev="tap48cfc997-bd"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </interface>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/console.log" append="off"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </serial>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <video>
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </video>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </rng>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 08:59:19 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 08:59:19 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 08:59:19 compute-0 nova_compute[260603]:   </devices>
Oct 02 08:59:19 compute-0 nova_compute[260603]: </domain>
Oct 02 08:59:19 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.970 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Preparing to wait for external event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.971 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.971 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.971 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.972 2 DEBUG nova.virt.libvirt.vif [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:59:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-300987939',display_name='tempest-TestNetworkBasicOps-server-300987939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-300987939',id=130,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCJjWfAj56mQeyeIvU6XGvvjANhAOW4ymkjaU7nDK7hdgdQPn23X31SFk66pyTpQCxaiWSbcIfBie1wBVKGMKO2QIcNLprOGSD1fN9DoZ9nw1hLigEh1SZTI3GQ/zevNFg==',key_name='tempest-TestNetworkBasicOps-1976267539',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-3sfliqo6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:59:12Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=f141f189-a224-4ac7-88b5-c28f198944e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.972 2 DEBUG nova.network.os_vif_util [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.972 2 DEBUG nova.network.os_vif_util [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.973 2 DEBUG os_vif [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.974 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.974 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48cfc997-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:19 compute-0 nova_compute[260603]: 2025-10-02 08:59:19.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap48cfc997-bd, col_values=(('external_ids', {'iface-id': '48cfc997-bd30-40b4-a387-f40a75731793', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:dd:9b', 'vm-uuid': 'f141f189-a224-4ac7-88b5-c28f198944e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:20 compute-0 NetworkManager[45129]: <info>  [1759395560.0055] manager: (tap48cfc997-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.017 2 INFO os_vif [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd')
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.099 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.100 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.100 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:51:dd:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.101 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Using config drive
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.132 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:59:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/882428886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:59:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2766387724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 08:59:20 compute-0 nova_compute[260603]: 2025-10-02 08:59:20.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:20 compute-0 youthful_neumann[400732]: {
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "osd_id": 2,
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "type": "bluestore"
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:     },
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "osd_id": 1,
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "type": "bluestore"
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:     },
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "osd_id": 0,
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:         "type": "bluestore"
Oct 02 08:59:20 compute-0 youthful_neumann[400732]:     }
Oct 02 08:59:20 compute-0 youthful_neumann[400732]: }
Oct 02 08:59:20 compute-0 systemd[1]: libpod-b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a.scope: Deactivated successfully.
Oct 02 08:59:20 compute-0 podman[400787]: 2025-10-02 08:59:20.790999824 +0000 UTC m=+0.030032923 container died b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 08:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d-merged.mount: Deactivated successfully.
Oct 02 08:59:20 compute-0 podman[400787]: 2025-10-02 08:59:20.859436003 +0000 UTC m=+0.098469022 container remove b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 08:59:20 compute-0 systemd[1]: libpod-conmon-b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a.scope: Deactivated successfully.
Oct 02 08:59:20 compute-0 sudo[400554]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 08:59:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:59:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 08:59:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:59:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 89fdd375-cd66-4865-8678-8614ec26bc1f does not exist
Oct 02 08:59:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev eb3b960d-0c60-424a-8e9f-d40ad4e5a9c3 does not exist
Oct 02 08:59:21 compute-0 sudo[400802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 08:59:21 compute-0 sudo[400802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:21 compute-0 sudo[400802]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:21 compute-0 sudo[400827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 08:59:21 compute-0 sudo[400827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 08:59:21 compute-0 sudo[400827]: pam_unix(sudo:session): session closed for user root
Oct 02 08:59:21 compute-0 ceph-mon[74477]: pgmap v2436: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:59:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:59:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 08:59:21 compute-0 nova_compute[260603]: 2025-10-02 08:59:21.618 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Creating config drive at /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config
Oct 02 08:59:21 compute-0 nova_compute[260603]: 2025-10-02 08:59:21.623 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9izmn284 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:59:21 compute-0 nova_compute[260603]: 2025-10-02 08:59:21.784 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9izmn284" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:21 compute-0 nova_compute[260603]: 2025-10-02 08:59:21.814 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 08:59:21 compute-0 nova_compute[260603]: 2025-10-02 08:59:21.818 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config f141f189-a224-4ac7-88b5-c28f198944e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:21 compute-0 nova_compute[260603]: 2025-10-02 08:59:21.994 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config f141f189-a224-4ac7-88b5-c28f198944e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:21 compute-0 nova_compute[260603]: 2025-10-02 08:59:21.996 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Deleting local config drive /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config because it was imported into RBD.
Oct 02 08:59:22 compute-0 kernel: tap48cfc997-bd: entered promiscuous mode
Oct 02 08:59:22 compute-0 NetworkManager[45129]: <info>  [1759395562.0621] manager: (tap48cfc997-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/562)
Oct 02 08:59:22 compute-0 ovn_controller[152344]: 2025-10-02T08:59:22Z|01400|binding|INFO|Claiming lport 48cfc997-bd30-40b4-a387-f40a75731793 for this chassis.
Oct 02 08:59:22 compute-0 ovn_controller[152344]: 2025-10-02T08:59:22Z|01401|binding|INFO|48cfc997-bd30-40b4-a387-f40a75731793: Claiming fa:16:3e:51:dd:9b 10.100.0.27
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.090 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:dd:9b 10.100.0.27'], port_security=['fa:16:3e:51:dd:9b 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'f141f189-a224-4ac7-88b5-c28f198944e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3569eb1-74db-4df0-93c4-3e65c3d95428', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf29bdfe-eec4-4bc1-887c-39f99d9387e9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=48cfc997-bd30-40b4-a387-f40a75731793) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.091 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 48cfc997-bd30-40b4-a387-f40a75731793 in datapath 5d0f6b84-ebf5-436d-83fe-b7739dc629d9 bound to our chassis
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.093 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d0f6b84-ebf5-436d-83fe-b7739dc629d9
Oct 02 08:59:22 compute-0 systemd-udevd[400904]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 08:59:22 compute-0 systemd-machined[214636]: New machine qemu-164-instance-00000082.
Oct 02 08:59:22 compute-0 ovn_controller[152344]: 2025-10-02T08:59:22Z|01402|binding|INFO|Setting lport 48cfc997-bd30-40b4-a387-f40a75731793 ovn-installed in OVS
Oct 02 08:59:22 compute-0 ovn_controller[152344]: 2025-10-02T08:59:22Z|01403|binding|INFO|Setting lport 48cfc997-bd30-40b4-a387-f40a75731793 up in Southbound
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9be95f4-179e-4c35-bad1-ed68e1d038a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:22 compute-0 NetworkManager[45129]: <info>  [1759395562.1191] device (tap48cfc997-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 08:59:22 compute-0 NetworkManager[45129]: <info>  [1759395562.1201] device (tap48cfc997-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 08:59:22 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000082.
Oct 02 08:59:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 08:59:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1851268524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:59:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 08:59:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1851268524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.158 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9d930683-1a5b-434c-b85b-c34f8b62a9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.164 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[39c277a7-7b48-47a6-bd56-26754fda75ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.207 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ff688b8c-bd89-4e26-8e27-263bb16f6f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.225 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa23c17f-eee6-4ca0-9a89-a8ffdf2f8034]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d0f6b84-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:31:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642259, 'reachable_time': 20336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400919, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.245 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7637380f-50a0-4732-b412-c8e9948749d9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d0f6b84-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642270, 'tstamp': 642270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400920, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap5d0f6b84-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642273, 'tstamp': 642273}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400920, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.247 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d0f6b84-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1851268524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 08:59:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1851268524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.255 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d0f6b84-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.255 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.256 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d0f6b84-e0, col_values=(('external_ids', {'iface-id': 'f202c767-dd88-4dcf-bf75-a2c0dfdb6c1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.257 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.600 2 DEBUG nova.network.neutron [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Updated VIF entry in instance network info cache for port 48cfc997-bd30-40b4-a387-f40a75731793. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.601 2 DEBUG nova.network.neutron [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Updating instance_info_cache with network_info: [{"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.617 2 DEBUG oslo_concurrency.lockutils [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.754 2 DEBUG nova.compute.manager [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.755 2 DEBUG oslo_concurrency.lockutils [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.755 2 DEBUG oslo_concurrency.lockutils [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.756 2 DEBUG oslo_concurrency.lockutils [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:22 compute-0 nova_compute[260603]: 2025-10-02 08:59:22.756 2 DEBUG nova.compute.manager [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Processing event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 08:59:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:23 compute-0 ceph-mon[74477]: pgmap v2437: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 08:59:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.124 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395564.12463, f141f189-a224-4ac7-88b5-c28f198944e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.126 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] VM Started (Lifecycle Event)
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.127 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.133 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.137 2 INFO nova.virt.libvirt.driver [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance spawned successfully.
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.138 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.145 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.150 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.160 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.161 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.162 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.163 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.163 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.164 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.177 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.177 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395564.124697, f141f189-a224-4ac7-88b5-c28f198944e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.178 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] VM Paused (Lifecycle Event)
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.207 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.212 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395564.1313515, f141f189-a224-4ac7-88b5-c28f198944e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.212 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] VM Resumed (Lifecycle Event)
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.237 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.241 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.248 2 INFO nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Took 11.46 seconds to spawn the instance on the hypervisor.
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.249 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.263 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.314 2 INFO nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Took 12.34 seconds to build instance.
Oct 02 08:59:24 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.331 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.865 2 DEBUG nova.compute.manager [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.865 2 DEBUG oslo_concurrency.lockutils [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.865 2 DEBUG oslo_concurrency.lockutils [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.865 2 DEBUG oslo_concurrency.lockutils [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.866 2 DEBUG nova.compute.manager [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] No waiting events found dispatching network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 08:59:24 compute-0 nova_compute[260603]: 2025-10-02 08:59:24.866 2 WARNING nova.compute.manager [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received unexpected event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 for instance with vm_state active and task_state None.
Oct 02 08:59:25 compute-0 nova_compute[260603]: 2025-10-02 08:59:25.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:25 compute-0 ceph-mon[74477]: pgmap v2438: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 08:59:25 compute-0 nova_compute[260603]: 2025-10-02 08:59:25.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 21 op/s
Oct 02 08:59:27 compute-0 ceph-mon[74477]: pgmap v2439: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 21 op/s
Oct 02 08:59:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 108 op/s
Oct 02 08:59:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:59:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:59:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:59:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:59:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:59:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:59:28
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'volumes', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 08:59:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:28 compute-0 ceph-mon[74477]: pgmap v2440: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 108 op/s
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:59:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 08:59:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 136 op/s
Oct 02 08:59:30 compute-0 nova_compute[260603]: 2025-10-02 08:59:30.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:30 compute-0 podman[400965]: 2025-10-02 08:59:30.002345705 +0000 UTC m=+0.068640098 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 08:59:30 compute-0 podman[400964]: 2025-10-02 08:59:30.056625095 +0000 UTC m=+0.122675781 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 08:59:30 compute-0 nova_compute[260603]: 2025-10-02 08:59:30.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:30 compute-0 ceph-mon[74477]: pgmap v2441: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 136 op/s
Oct 02 08:59:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 134 op/s
Oct 02 08:59:32 compute-0 ceph-mon[74477]: pgmap v2442: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 134 op/s
Oct 02 08:59:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 134 op/s
Oct 02 08:59:34 compute-0 nova_compute[260603]: 2025-10-02 08:59:34.538 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:34 compute-0 nova_compute[260603]: 2025-10-02 08:59:34.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 08:59:34 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 02 08:59:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:34.840 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:34 compute-0 ceph-mon[74477]: pgmap v2443: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 134 op/s
Oct 02 08:59:35 compute-0 nova_compute[260603]: 2025-10-02 08:59:35.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:35 compute-0 podman[401007]: 2025-10-02 08:59:35.048876345 +0000 UTC m=+0.096548531 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 08:59:35 compute-0 podman[401008]: 2025-10-02 08:59:35.057904522 +0000 UTC m=+0.101627153 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 08:59:35 compute-0 nova_compute[260603]: 2025-10-02 08:59:35.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 KiB/s wr, 129 op/s
Oct 02 08:59:35 compute-0 ovn_controller[152344]: 2025-10-02T08:59:35Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:51:dd:9b 10.100.0.27
Oct 02 08:59:35 compute-0 ovn_controller[152344]: 2025-10-02T08:59:35Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:dd:9b 10.100.0.27
Oct 02 08:59:36 compute-0 ceph-mon[74477]: pgmap v2444: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 KiB/s wr, 129 op/s
Oct 02 08:59:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 182 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 163 op/s
Oct 02 08:59:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:38 compute-0 ceph-mon[74477]: pgmap v2445: 305 pgs: 305 active+clean; 182 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 163 op/s
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013817624902324672 of space, bias 1.0, pg target 0.41452874706974013 quantized to 32 (current 32)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 08:59:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 08:59:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 200 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 898 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 02 08:59:40 compute-0 nova_compute[260603]: 2025-10-02 08:59:40.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:40 compute-0 nova_compute[260603]: 2025-10-02 08:59:40.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:40 compute-0 ceph-mon[74477]: pgmap v2446: 305 pgs: 305 active+clean; 200 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 898 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 02 08:59:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 200 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 08:59:42 compute-0 ceph-mon[74477]: pgmap v2447: 305 pgs: 305 active+clean; 200 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 08:59:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:59:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:44.729 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 08:59:44 compute-0 nova_compute[260603]: 2025-10-02 08:59:44.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:44.731 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 08:59:44 compute-0 nova_compute[260603]: 2025-10-02 08:59:44.877 2 DEBUG nova.compute.manager [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 08:59:44 compute-0 nova_compute[260603]: 2025-10-02 08:59:44.878 2 DEBUG nova.compute.manager [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-4e4cec89-b01e-4202-bc3e-a65ce8864017. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 08:59:44 compute-0 nova_compute[260603]: 2025-10-02 08:59:44.878 2 DEBUG oslo_concurrency.lockutils [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:59:44 compute-0 nova_compute[260603]: 2025-10-02 08:59:44.879 2 DEBUG oslo_concurrency.lockutils [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:59:44 compute-0 nova_compute[260603]: 2025-10-02 08:59:44.879 2 DEBUG nova.network.neutron [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 4e4cec89-b01e-4202-bc3e-a65ce8864017 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 08:59:44 compute-0 ceph-mon[74477]: pgmap v2448: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 08:59:45 compute-0 nova_compute[260603]: 2025-10-02 08:59:45.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:45 compute-0 nova_compute[260603]: 2025-10-02 08:59:45.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 08:59:45.732 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 08:59:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 08:59:46 compute-0 nova_compute[260603]: 2025-10-02 08:59:46.431 2 DEBUG nova.network.neutron [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 4e4cec89-b01e-4202-bc3e-a65ce8864017. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 08:59:46 compute-0 nova_compute[260603]: 2025-10-02 08:59:46.432 2 DEBUG nova.network.neutron [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:59:46 compute-0 nova_compute[260603]: 2025-10-02 08:59:46.453 2 DEBUG oslo_concurrency.lockutils [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:59:46 compute-0 nova_compute[260603]: 2025-10-02 08:59:46.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:46 compute-0 ceph-mon[74477]: pgmap v2449: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 08:59:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 08:59:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:48 compute-0 nova_compute[260603]: 2025-10-02 08:59:48.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:48 compute-0 nova_compute[260603]: 2025-10-02 08:59:48.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 08:59:48 compute-0 nova_compute[260603]: 2025-10-02 08:59:48.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 08:59:48 compute-0 ceph-mon[74477]: pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 08:59:49 compute-0 nova_compute[260603]: 2025-10-02 08:59:49.621 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 08:59:49 compute-0 nova_compute[260603]: 2025-10-02 08:59:49.622 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 08:59:49 compute-0 nova_compute[260603]: 2025-10-02 08:59:49.622 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 08:59:49 compute-0 nova_compute[260603]: 2025-10-02 08:59:49.623 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 08:59:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 767 KiB/s wr, 28 op/s
Oct 02 08:59:50 compute-0 nova_compute[260603]: 2025-10-02 08:59:50.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:50 compute-0 nova_compute[260603]: 2025-10-02 08:59:50.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:50 compute-0 ceph-mon[74477]: pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 767 KiB/s wr, 28 op/s
Oct 02 08:59:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 02 08:59:52 compute-0 ceph-mon[74477]: pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 02 08:59:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s wr, 1 op/s
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.536 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.559 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.559 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.560 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.560 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.561 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.561 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.591 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.592 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 08:59:54 compute-0 nova_compute[260603]: 2025-10-02 08:59:54.593 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:54 compute-0 ceph-mon[74477]: pgmap v2453: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s wr, 1 op/s
Oct 02 08:59:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:59:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736591728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.064 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.154 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.155 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.160 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.161 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.406 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.407 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3261MB free_disk=59.89701843261719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.407 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.407 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.486 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance b4eacfa3-8b31-492a-b3c5-829a890a4aae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.486 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance f141f189-a224-4ac7-88b5-c28f198944e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.486 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.487 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.503 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.522 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.522 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.535 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.561 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 08:59:55 compute-0 nova_compute[260603]: 2025-10-02 08:59:55.614 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 08:59:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Oct 02 08:59:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1736591728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:59:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 08:59:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894883514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:59:56 compute-0 nova_compute[260603]: 2025-10-02 08:59:56.029 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 08:59:56 compute-0 nova_compute[260603]: 2025-10-02 08:59:56.038 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 08:59:56 compute-0 nova_compute[260603]: 2025-10-02 08:59:56.070 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 08:59:56 compute-0 nova_compute[260603]: 2025-10-02 08:59:56.097 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 08:59:56 compute-0 nova_compute[260603]: 2025-10-02 08:59:56.097 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 08:59:56 compute-0 ceph-mon[74477]: pgmap v2454: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Oct 02 08:59:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2894883514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 08:59:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct 02 08:59:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:59:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:59:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:59:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:59:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 08:59:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 08:59:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 08:59:58 compute-0 ceph-mon[74477]: pgmap v2455: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct 02 08:59:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct 02 09:00:00 compute-0 nova_compute[260603]: 2025-10-02 09:00:00.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:00 compute-0 nova_compute[260603]: 2025-10-02 09:00:00.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:00 compute-0 ceph-mon[74477]: pgmap v2456: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct 02 09:00:01 compute-0 podman[401095]: 2025-10-02 09:00:01.035074663 +0000 UTC m=+0.084859612 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 09:00:01 compute-0 podman[401094]: 2025-10-02 09:00:01.084253702 +0000 UTC m=+0.138755580 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 09:00:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct 02 09:00:02 compute-0 ceph-mon[74477]: pgmap v2457: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct 02 09:00:03 compute-0 nova_compute[260603]: 2025-10-02 09:00:03.055 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:03 compute-0 nova_compute[260603]: 2025-10-02 09:00:03.055 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 2 op/s
Oct 02 09:00:04 compute-0 ceph-mon[74477]: pgmap v2458: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 2 op/s
Oct 02 09:00:05 compute-0 nova_compute[260603]: 2025-10-02 09:00:05.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:05 compute-0 nova_compute[260603]: 2025-10-02 09:00:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 09:00:06 compute-0 podman[401136]: 2025-10-02 09:00:06.005778663 +0000 UTC m=+0.075777543 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 09:00:06 compute-0 podman[401137]: 2025-10-02 09:00:06.043293332 +0000 UTC m=+0.098978689 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 09:00:07 compute-0 ceph-mon[74477]: pgmap v2459: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 09:00:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Oct 02 09:00:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:09 compute-0 ceph-mon[74477]: pgmap v2460: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Oct 02 09:00:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Oct 02 09:00:10 compute-0 nova_compute[260603]: 2025-10-02 09:00:10.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:10 compute-0 nova_compute[260603]: 2025-10-02 09:00:10.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:11 compute-0 ceph-mon[74477]: pgmap v2461: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Oct 02 09:00:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Oct 02 09:00:13 compute-0 ceph-mon[74477]: pgmap v2462: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Oct 02 09:00:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Oct 02 09:00:14 compute-0 ovn_controller[152344]: 2025-10-02T09:00:14Z|01404|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct 02 09:00:15 compute-0 nova_compute[260603]: 2025-10-02 09:00:15.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:15 compute-0 ceph-mon[74477]: pgmap v2463: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Oct 02 09:00:15 compute-0 nova_compute[260603]: 2025-10-02 09:00:15.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 09:00:17 compute-0 ceph-mon[74477]: pgmap v2464: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 09:00:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s wr, 1 op/s
Oct 02 09:00:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.868 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.869 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.869 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.869 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.870 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.871 2 INFO nova.compute.manager [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Terminating instance
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.872 2 DEBUG nova.compute.manager [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:00:18 compute-0 kernel: tap48cfc997-bd (unregistering): left promiscuous mode
Oct 02 09:00:18 compute-0 NetworkManager[45129]: <info>  [1759395618.9215] device (tap48cfc997-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:00:18 compute-0 ovn_controller[152344]: 2025-10-02T09:00:18Z|01405|binding|INFO|Releasing lport 48cfc997-bd30-40b4-a387-f40a75731793 from this chassis (sb_readonly=0)
Oct 02 09:00:18 compute-0 ovn_controller[152344]: 2025-10-02T09:00:18Z|01406|binding|INFO|Setting lport 48cfc997-bd30-40b4-a387-f40a75731793 down in Southbound
Oct 02 09:00:18 compute-0 ovn_controller[152344]: 2025-10-02T09:00:18Z|01407|binding|INFO|Removing iface tap48cfc997-bd ovn-installed in OVS
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.943 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:dd:9b 10.100.0.27'], port_security=['fa:16:3e:51:dd:9b 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'f141f189-a224-4ac7-88b5-c28f198944e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3569eb1-74db-4df0-93c4-3e65c3d95428', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf29bdfe-eec4-4bc1-887c-39f99d9387e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=48cfc997-bd30-40b4-a387-f40a75731793) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:00:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.944 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 48cfc997-bd30-40b4-a387-f40a75731793 in datapath 5d0f6b84-ebf5-436d-83fe-b7739dc629d9 unbound from our chassis
Oct 02 09:00:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.945 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d0f6b84-ebf5-436d-83fe-b7739dc629d9
Oct 02 09:00:18 compute-0 nova_compute[260603]: 2025-10-02 09:00:18.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1084c1c2-f3d1-419f-b72b-c30cc9ee5945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:18 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct 02 09:00:18 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000082.scope: Consumed 15.627s CPU time.
Oct 02 09:00:18 compute-0 systemd-machined[214636]: Machine qemu-164-instance-00000082 terminated.
Oct 02 09:00:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.989 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f20e80-e6e2-4321-87af-a43e7f60f7b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.992 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d97fd89f-2a98-47f3-a8e7-f7afc479ee06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.019 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8df78092-76f2-426e-bcf1-cab1ee101a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.040 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f952da-c781-4c5f-9448-d0a7a95e0a64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d0f6b84-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:31:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 7, 'rx_bytes': 1222, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 7, 'rx_bytes': 1222, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642259, 'reachable_time': 20336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 10, 'inoctets': 872, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 10, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 872, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 10, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401187, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.066 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19bf9fe9-fa51-4f22-a0d4-04ed01686bd4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d0f6b84-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642270, 'tstamp': 642270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401188, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap5d0f6b84-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642273, 'tstamp': 642273}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401188, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.067 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d0f6b84-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.075 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d0f6b84-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.076 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:00:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.076 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d0f6b84-e0, col_values=(('external_ids', {'iface-id': 'f202c767-dd88-4dcf-bf75-a2c0dfdb6c1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:19 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.077 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:00:19 compute-0 ceph-mon[74477]: pgmap v2465: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s wr, 1 op/s
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.114 2 INFO nova.virt.libvirt.driver [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance destroyed successfully.
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.114 2 DEBUG nova.objects.instance [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid f141f189-a224-4ac7-88b5-c28f198944e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.132 2 DEBUG nova.virt.libvirt.vif [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:59:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-300987939',display_name='tempest-TestNetworkBasicOps-server-300987939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-300987939',id=130,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCJjWfAj56mQeyeIvU6XGvvjANhAOW4ymkjaU7nDK7hdgdQPn23X31SFk66pyTpQCxaiWSbcIfBie1wBVKGMKO2QIcNLprOGSD1fN9DoZ9nw1hLigEh1SZTI3GQ/zevNFg==',key_name='tempest-TestNetworkBasicOps-1976267539',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:59:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-3sfliqo6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:59:24Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=f141f189-a224-4ac7-88b5-c28f198944e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.132 2 DEBUG nova.network.os_vif_util [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.133 2 DEBUG nova.network.os_vif_util [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.134 2 DEBUG os_vif [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.138242) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619138311, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2051, "num_deletes": 251, "total_data_size": 3390672, "memory_usage": 3460240, "flush_reason": "Manual Compaction"}
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48cfc997-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.142 2 INFO os_vif [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd')
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619156356, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3335379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49775, "largest_seqno": 51825, "table_properties": {"data_size": 3325992, "index_size": 5945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18802, "raw_average_key_size": 20, "raw_value_size": 3307475, "raw_average_value_size": 3541, "num_data_blocks": 263, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395390, "oldest_key_time": 1759395390, "file_creation_time": 1759395619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 18148 microseconds, and 9269 cpu microseconds.
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.156399) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3335379 bytes OK
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.156418) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.157939) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.157957) EVENT_LOG_v1 {"time_micros": 1759395619157950, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.157974) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3382088, prev total WAL file size 3382088, number of live WAL files 2.
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.158925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3257KB)], [116(8268KB)]
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619158958, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11802611, "oldest_snapshot_seqno": -1}
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7366 keys, 10085821 bytes, temperature: kUnknown
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619204165, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10085821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10036578, "index_size": 29739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 190632, "raw_average_key_size": 25, "raw_value_size": 9904985, "raw_average_value_size": 1344, "num_data_blocks": 1165, "num_entries": 7366, "num_filter_entries": 7366, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.204443) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10085821 bytes
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.205627) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 260.6 rd, 222.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 7880, records dropped: 514 output_compression: NoCompression
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.205649) EVENT_LOG_v1 {"time_micros": 1759395619205639, "job": 70, "event": "compaction_finished", "compaction_time_micros": 45290, "compaction_time_cpu_micros": 22873, "output_level": 6, "num_output_files": 1, "total_output_size": 10085821, "num_input_records": 7880, "num_output_records": 7366, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619206744, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619209035, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.158877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:19 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.553 2 INFO nova.virt.libvirt.driver [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Deleting instance files /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4_del
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.555 2 INFO nova.virt.libvirt.driver [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Deletion of /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4_del complete
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.622 2 INFO nova.compute.manager [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.623 2 DEBUG oslo.service.loopingcall [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.623 2 DEBUG nova.compute.manager [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.623 2 DEBUG nova.network.neutron [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:00:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.957 2 DEBUG nova.compute.manager [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-unplugged-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.957 2 DEBUG oslo_concurrency.lockutils [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.957 2 DEBUG oslo_concurrency.lockutils [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.958 2 DEBUG oslo_concurrency.lockutils [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.958 2 DEBUG nova.compute.manager [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] No waiting events found dispatching network-vif-unplugged-48cfc997-bd30-40b4-a387-f40a75731793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.958 2 DEBUG nova.compute.manager [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-unplugged-48cfc997-bd30-40b4-a387-f40a75731793 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.963 2 WARNING nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.963 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.964 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid f141f189-a224-4ac7-88b5-c28f198944e4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.965 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.965 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.965 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:19 compute-0 nova_compute[260603]: 2025-10-02 09:00:19.998 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:20 compute-0 nova_compute[260603]: 2025-10-02 09:00:20.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:21 compute-0 ceph-mon[74477]: pgmap v2466: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 09:00:21 compute-0 sudo[401221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:21 compute-0 sudo[401221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:21 compute-0 sudo[401221]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:21 compute-0 sudo[401246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:00:21 compute-0 sudo[401246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:21 compute-0 sudo[401246]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:21 compute-0 sudo[401271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:21 compute-0 sudo[401271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:21 compute-0 sudo[401271]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:21 compute-0 sudo[401296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 09:00:21 compute-0 sudo[401296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:21 compute-0 nova_compute[260603]: 2025-10-02 09:00:21.690 2 DEBUG nova.network.neutron [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:00:21 compute-0 sudo[401296]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:21 compute-0 nova_compute[260603]: 2025-10-02 09:00:21.728 2 INFO nova.compute.manager [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Took 2.10 seconds to deallocate network for instance.
Oct 02 09:00:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:00:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:00:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:21 compute-0 nova_compute[260603]: 2025-10-02 09:00:21.773 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:21 compute-0 nova_compute[260603]: 2025-10-02 09:00:21.774 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 02 09:00:21 compute-0 sudo[401341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:21 compute-0 sudo[401341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:21 compute-0 sudo[401341]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:21 compute-0 nova_compute[260603]: 2025-10-02 09:00:21.868 2 DEBUG oslo_concurrency.processutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:00:21 compute-0 sudo[401366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:00:21 compute-0 sudo[401366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:21 compute-0 sudo[401366]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:21 compute-0 sudo[401392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:21 compute-0 sudo[401392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:22 compute-0 sudo[401392]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.079 2 DEBUG nova.compute.manager [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.080 2 DEBUG oslo_concurrency.lockutils [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.081 2 DEBUG oslo_concurrency.lockutils [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.081 2 DEBUG oslo_concurrency.lockutils [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.081 2 DEBUG nova.compute.manager [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] No waiting events found dispatching network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.082 2 WARNING nova.compute.manager [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received unexpected event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 for instance with vm_state deleted and task_state None.
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.082 2 DEBUG nova.compute.manager [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-deleted-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:22 compute-0 sudo[401419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:00:22 compute-0 sudo[401419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717669239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717669239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454614570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.331 2 DEBUG oslo_concurrency.processutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.340 2 DEBUG nova.compute.provider_tree [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.361 2 DEBUG nova.scheduler.client.report [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.406 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.440 2 INFO nova.scheduler.client.report [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance f141f189-a224-4ac7-88b5-c28f198944e4
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.519 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.520 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.521 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 02 09:00:22 compute-0 nova_compute[260603]: 2025-10-02 09:00:22.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:22 compute-0 sudo[401419]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:22 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 957ad525-df82-4530-ab4f-ba561d5d9e45 does not exist
Oct 02 09:00:22 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f612839f-e552-447d-af1d-4612eb0e1e97 does not exist
Oct 02 09:00:22 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0a2c0a03-d564-4049-b5b4-add4f911d7fc does not exist
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:00:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:22 compute-0 ceph-mon[74477]: pgmap v2467: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2717669239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2717669239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1454614570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:00:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:00:22 compute-0 sudo[401496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:22 compute-0 sudo[401496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:22 compute-0 sudo[401496]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:22 compute-0 sudo[401521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:00:22 compute-0 sudo[401521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:22 compute-0 sudo[401521]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:22 compute-0 sudo[401546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:22 compute-0 sudo[401546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:22 compute-0 sudo[401546]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:22 compute-0 sudo[401571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:00:22 compute-0 sudo[401571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:23 compute-0 podman[401637]: 2025-10-02 09:00:23.333472528 +0000 UTC m=+0.047693043 container create c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:00:23 compute-0 systemd[1]: Started libpod-conmon-c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372.scope.
Oct 02 09:00:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:00:23 compute-0 podman[401637]: 2025-10-02 09:00:23.306687785 +0000 UTC m=+0.020908310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:00:23 compute-0 podman[401637]: 2025-10-02 09:00:23.418293263 +0000 UTC m=+0.132513758 container init c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:00:23 compute-0 podman[401637]: 2025-10-02 09:00:23.433063312 +0000 UTC m=+0.147283817 container start c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:00:23 compute-0 podman[401637]: 2025-10-02 09:00:23.436740217 +0000 UTC m=+0.150960712 container attach c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:00:23 compute-0 blissful_cray[401654]: 167 167
Oct 02 09:00:23 compute-0 systemd[1]: libpod-c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372.scope: Deactivated successfully.
Oct 02 09:00:23 compute-0 conmon[401654]: conmon c61008c9e6ff5a7c935b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372.scope/container/memory.events
Oct 02 09:00:23 compute-0 podman[401637]: 2025-10-02 09:00:23.439981798 +0000 UTC m=+0.154202273 container died c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-050580085cd34342ca23409f75bd5880f758e0624022ab8d48111ae4b4de85e3-merged.mount: Deactivated successfully.
Oct 02 09:00:23 compute-0 podman[401637]: 2025-10-02 09:00:23.475548663 +0000 UTC m=+0.189769138 container remove c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:00:23 compute-0 systemd[1]: libpod-conmon-c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372.scope: Deactivated successfully.
Oct 02 09:00:23 compute-0 podman[401678]: 2025-10-02 09:00:23.708051188 +0000 UTC m=+0.039524899 container create 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:00:23 compute-0 systemd[1]: Started libpod-conmon-16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84.scope.
Oct 02 09:00:23 compute-0 podman[401678]: 2025-10-02 09:00:23.690428321 +0000 UTC m=+0.021902042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:00:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 02 09:00:23 compute-0 podman[401678]: 2025-10-02 09:00:23.808705236 +0000 UTC m=+0.140178957 container init 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:00:23 compute-0 podman[401678]: 2025-10-02 09:00:23.818843972 +0000 UTC m=+0.150317683 container start 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:00:23 compute-0 podman[401678]: 2025-10-02 09:00:23.822281798 +0000 UTC m=+0.153755539 container attach 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.122 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-4e4cec89-b01e-4202-bc3e-a65ce8864017" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.123 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-4e4cec89-b01e-4202-bc3e-a65ce8864017" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.140 2 DEBUG nova.objects.instance [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'flavor' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.166 2 DEBUG nova.virt.libvirt.vif [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.167 2 DEBUG nova.network.os_vif_util [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.168 2 DEBUG nova.network.os_vif_util [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.173 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.178 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.182 2 DEBUG nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Attempting to detach device tap4e4cec89-b0 from instance b4eacfa3-8b31-492a-b3c5-829a890a4aae from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.182 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] detach device xml: <interface type="ethernet">
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:45:88:e1"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <target dev="tap4e4cec89-b0"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]: </interface>
Oct 02 09:00:24 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.193 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.197 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface>not found in domain: <domain type='kvm' id='163'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <name>instance-00000081</name>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:59:03</nova:creationTime>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:port uuid="4e4cec89-b01e-4202-bc3e-a65ce8864017">
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 09:00:24 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <resource>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </resource>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <system>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='serial'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='uuid'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </system>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <os>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </os>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <features>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </features>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk' index='2'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </source>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config' index='1'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </source>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:42:41:53'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target dev='tap5ea70a9c-82'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:45:88:e1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target dev='tap4e4cec89-b0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='net1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </target>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </console>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </graphics>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <video>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </video>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c363,c569</label>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c363,c569</imagelabel>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 09:00:24 compute-0 nova_compute[260603]: </domain>
Oct 02 09:00:24 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.198 2 INFO nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully detached device tap4e4cec89-b0 from instance b4eacfa3-8b31-492a-b3c5-829a890a4aae from the persistent domain config.
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.198 2 DEBUG nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] (1/8): Attempting to detach device tap4e4cec89-b0 with device alias net1 from instance b4eacfa3-8b31-492a-b3c5-829a890a4aae from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.199 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] detach device xml: <interface type="ethernet">
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <mac address="fa:16:3e:45:88:e1"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <model type="virtio"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <mtu size="1442"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <target dev="tap4e4cec89-b0"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]: </interface>
Oct 02 09:00:24 compute-0 nova_compute[260603]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 09:00:24 compute-0 kernel: tap4e4cec89-b0 (unregistering): left promiscuous mode
Oct 02 09:00:24 compute-0 NetworkManager[45129]: <info>  [1759395624.3202] device (tap4e4cec89-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:00:24 compute-0 ovn_controller[152344]: 2025-10-02T09:00:24Z|01408|binding|INFO|Releasing lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 from this chassis (sb_readonly=0)
Oct 02 09:00:24 compute-0 ovn_controller[152344]: 2025-10-02T09:00:24Z|01409|binding|INFO|Setting lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 down in Southbound
Oct 02 09:00:24 compute-0 ovn_controller[152344]: 2025-10-02T09:00:24Z|01410|binding|INFO|Removing iface tap4e4cec89-b0 ovn-installed in OVS
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.333 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:88:e1 10.100.0.28', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf29bdfe-eec4-4bc1-887c-39f99d9387e9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4e4cec89-b01e-4202-bc3e-a65ce8864017) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.336 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4e4cec89-b01e-4202-bc3e-a65ce8864017 in datapath 5d0f6b84-ebf5-436d-83fe-b7739dc629d9 unbound from our chassis
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.338 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d0f6b84-ebf5-436d-83fe-b7739dc629d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.340 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a31415e0-f8f5-437b-a354-3f78f93d7c61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.342 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 namespace which is not needed anymore
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.363 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759395624.360665, b4eacfa3-8b31-492a-b3c5-829a890a4aae => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.363 2 DEBUG nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Start waiting for the detach event from libvirt for device tap4e4cec89-b0 with device alias net1 for instance b4eacfa3-8b31-492a-b3c5-829a890a4aae _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.364 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.376 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface>not found in domain: <domain type='kvm' id='163'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <name>instance-00000081</name>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 08:59:03</nova:creationTime>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:port uuid="4e4cec89-b01e-4202-bc3e-a65ce8864017">
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 09:00:24 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <resource>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </resource>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <system>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='serial'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='uuid'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </system>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <os>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </os>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <features>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </features>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk' index='2'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </source>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config' index='1'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </source>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:42:41:53'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target dev='tap5ea70a9c-82'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       </target>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </console>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </graphics>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <video>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </video>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c363,c569</label>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c363,c569</imagelabel>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 09:00:24 compute-0 nova_compute[260603]: </domain>
Oct 02 09:00:24 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.376 2 INFO nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully detached device tap4e4cec89-b0 from instance b4eacfa3-8b31-492a-b3c5-829a890a4aae from the live domain config.
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.378 2 DEBUG nova.virt.libvirt.vif [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.378 2 DEBUG nova.network.os_vif_util [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.379 2 DEBUG nova.network.os_vif_util [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.380 2 DEBUG os_vif [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e4cec89-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.394 2 INFO os_vif [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0')
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.395 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 09:00:24</nova:creationTime>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct 02 09:00:24 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 09:00:24 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 09:00:24 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 09:00:24 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 09:00:24 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 09:00:24 compute-0 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [NOTICE]   (399684) : haproxy version is 2.8.14-c23fe91
Oct 02 09:00:24 compute-0 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [NOTICE]   (399684) : path to executable is /usr/sbin/haproxy
Oct 02 09:00:24 compute-0 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [WARNING]  (399684) : Exiting Master process...
Oct 02 09:00:24 compute-0 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [WARNING]  (399684) : Exiting Master process...
Oct 02 09:00:24 compute-0 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [ALERT]    (399684) : Current worker (399688) exited with code 143 (Terminated)
Oct 02 09:00:24 compute-0 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [WARNING]  (399684) : All workers exited. Exiting... (0)
Oct 02 09:00:24 compute-0 systemd[1]: libpod-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb.scope: Deactivated successfully.
Oct 02 09:00:24 compute-0 conmon[399649]: conmon 9849527b8e31c0013aa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb.scope/container/memory.events
Oct 02 09:00:24 compute-0 podman[401722]: 2025-10-02 09:00:24.555291757 +0000 UTC m=+0.057160817 container died 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb-userdata-shm.mount: Deactivated successfully.
Oct 02 09:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-168d60d32efd9370edb94bf9730fe02355e22ea7cfe7215ac4f9b026d57c9878-merged.mount: Deactivated successfully.
Oct 02 09:00:24 compute-0 podman[401722]: 2025-10-02 09:00:24.620299497 +0000 UTC m=+0.122168477 container cleanup 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:00:24 compute-0 systemd[1]: libpod-conmon-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb.scope: Deactivated successfully.
Oct 02 09:00:24 compute-0 podman[401759]: 2025-10-02 09:00:24.691373595 +0000 UTC m=+0.044536864 container remove 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78ed4d84-d9d0-4682-b285-6b8814062ecd]: (4, ('Thu Oct  2 09:00:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 (9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb)\n9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb\nThu Oct  2 09:00:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 (9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb)\n9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.702 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39ddc4ca-be14-44c5-a9cc-e9de5b43a030]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.703 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d0f6b84-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 kernel: tap5d0f6b84-e0: left promiscuous mode
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d777e9b-2a8d-4781-9191-5076df536782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d32944da-ba8b-4545-bde9-f2b6b1174ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba0caf1-03dc-40d2-8f8d-1b218f63b95b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.776 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb38a1f-21cb-4acc-96f5-2e11a9382526]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642252, 'reachable_time': 16942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401781, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.781 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:00:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.782 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[96d20189-6b67-49ab-99db-a598f5e5cc8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d0f6b84\x2debf5\x2d436d\x2d83fe\x2db7739dc629d9.mount: Deactivated successfully.
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.798 2 DEBUG nova.compute.manager [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-unplugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.799 2 DEBUG oslo_concurrency.lockutils [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.799 2 DEBUG oslo_concurrency.lockutils [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.800 2 DEBUG oslo_concurrency.lockutils [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.800 2 DEBUG nova.compute.manager [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-unplugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:00:24 compute-0 nova_compute[260603]: 2025-10-02 09:00:24.800 2 WARNING nova.compute.manager [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-unplugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 for instance with vm_state active and task_state None.
Oct 02 09:00:24 compute-0 ceph-mon[74477]: pgmap v2468: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 02 09:00:24 compute-0 quizzical_cartwright[401694]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:00:24 compute-0 quizzical_cartwright[401694]: --> relative data size: 1.0
Oct 02 09:00:24 compute-0 quizzical_cartwright[401694]: --> All data devices are unavailable
Oct 02 09:00:24 compute-0 systemd[1]: libpod-16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84.scope: Deactivated successfully.
Oct 02 09:00:24 compute-0 systemd[1]: libpod-16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84.scope: Consumed 1.038s CPU time.
Oct 02 09:00:24 compute-0 podman[401678]: 2025-10-02 09:00:24.981067939 +0000 UTC m=+1.312541690 container died 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 09:00:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0-merged.mount: Deactivated successfully.
Oct 02 09:00:25 compute-0 podman[401678]: 2025-10-02 09:00:25.045029666 +0000 UTC m=+1.376503367 container remove 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 09:00:25 compute-0 systemd[1]: libpod-conmon-16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84.scope: Deactivated successfully.
Oct 02 09:00:25 compute-0 sudo[401571]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:25 compute-0 nova_compute[260603]: 2025-10-02 09:00:25.094 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:00:25 compute-0 nova_compute[260603]: 2025-10-02 09:00:25.095 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:00:25 compute-0 nova_compute[260603]: 2025-10-02 09:00:25.095 2 DEBUG nova.network.neutron [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:00:25 compute-0 sudo[401806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:25 compute-0 sudo[401806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:25 compute-0 sudo[401806]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:25 compute-0 sudo[401831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:00:25 compute-0 sudo[401831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:25 compute-0 sudo[401831]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:25 compute-0 sudo[401856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:25 compute-0 sudo[401856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:25 compute-0 sudo[401856]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:25 compute-0 sudo[401881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:00:25 compute-0 sudo[401881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:25 compute-0 nova_compute[260603]: 2025-10-02 09:00:25.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:25 compute-0 podman[401947]: 2025-10-02 09:00:25.755200886 +0000 UTC m=+0.052510314 container create 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:00:25 compute-0 systemd[1]: Started libpod-conmon-528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb.scope.
Oct 02 09:00:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:00:25 compute-0 podman[401947]: 2025-10-02 09:00:25.730246679 +0000 UTC m=+0.027556127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:00:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:00:25 compute-0 podman[401947]: 2025-10-02 09:00:25.857652079 +0000 UTC m=+0.154961577 container init 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 02 09:00:25 compute-0 podman[401947]: 2025-10-02 09:00:25.874214874 +0000 UTC m=+0.171524272 container start 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:00:25 compute-0 podman[401947]: 2025-10-02 09:00:25.877462905 +0000 UTC m=+0.174772393 container attach 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:00:25 compute-0 laughing_moore[401963]: 167 167
Oct 02 09:00:25 compute-0 systemd[1]: libpod-528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb.scope: Deactivated successfully.
Oct 02 09:00:25 compute-0 podman[401947]: 2025-10-02 09:00:25.881975555 +0000 UTC m=+0.179284963 container died 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:00:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca703e9529d711bdc86a830faa6f1f1a1fa44b256921dc0bed78563f2f4f10c-merged.mount: Deactivated successfully.
Oct 02 09:00:25 compute-0 podman[401947]: 2025-10-02 09:00:25.938063477 +0000 UTC m=+0.235372915 container remove 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:00:25 compute-0 systemd[1]: libpod-conmon-528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb.scope: Deactivated successfully.
Oct 02 09:00:26 compute-0 podman[401987]: 2025-10-02 09:00:26.172172862 +0000 UTC m=+0.070873833 container create 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:00:26 compute-0 systemd[1]: Started libpod-conmon-63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178.scope.
Oct 02 09:00:26 compute-0 podman[401987]: 2025-10-02 09:00:26.147220567 +0000 UTC m=+0.045921628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:00:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:26 compute-0 podman[401987]: 2025-10-02 09:00:26.303908187 +0000 UTC m=+0.202609228 container init 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:00:26 compute-0 podman[401987]: 2025-10-02 09:00:26.315504097 +0000 UTC m=+0.214205088 container start 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:00:26 compute-0 podman[401987]: 2025-10-02 09:00:26.318979455 +0000 UTC m=+0.217680456 container attach 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 09:00:26 compute-0 ceph-mon[74477]: pgmap v2469: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]: {
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:     "0": [
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:         {
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "devices": [
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "/dev/loop3"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             ],
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_name": "ceph_lv0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_size": "21470642176",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "name": "ceph_lv0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "tags": {
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cluster_name": "ceph",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.crush_device_class": "",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.encrypted": "0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osd_id": "0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.type": "block",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.vdo": "0"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             },
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "type": "block",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "vg_name": "ceph_vg0"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:         }
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:     ],
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:     "1": [
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:         {
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "devices": [
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "/dev/loop4"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             ],
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_name": "ceph_lv1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_size": "21470642176",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "name": "ceph_lv1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "tags": {
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cluster_name": "ceph",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.crush_device_class": "",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.encrypted": "0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osd_id": "1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.type": "block",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.vdo": "0"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             },
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "type": "block",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "vg_name": "ceph_vg1"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:         }
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:     ],
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:     "2": [
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:         {
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "devices": [
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "/dev/loop5"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             ],
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_name": "ceph_lv2",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_size": "21470642176",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "name": "ceph_lv2",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "tags": {
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.cluster_name": "ceph",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.crush_device_class": "",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.encrypted": "0",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osd_id": "2",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.type": "block",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:                 "ceph.vdo": "0"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             },
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "type": "block",
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:             "vg_name": "ceph_vg2"
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:         }
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]:     ]
Oct 02 09:00:27 compute-0 strange_chaplygin[402004]: }
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.054 2 DEBUG nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.055 2 DEBUG oslo_concurrency.lockutils [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.058 2 DEBUG oslo_concurrency.lockutils [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.058 2 DEBUG oslo_concurrency.lockutils [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.058 2 DEBUG nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.058 2 WARNING nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 for instance with vm_state active and task_state None.
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.058 2 DEBUG nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-deleted-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.059 2 INFO nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Neutron deleted interface 4e4cec89-b01e-4202-bc3e-a65ce8864017; detaching it from the instance and deleting it from the info cache
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.059 2 DEBUG nova.network.neutron [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:00:27 compute-0 systemd[1]: libpod-63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178.scope: Deactivated successfully.
Oct 02 09:00:27 compute-0 podman[401987]: 2025-10-02 09:00:27.074428741 +0000 UTC m=+0.973129702 container died 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:00:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4-merged.mount: Deactivated successfully.
Oct 02 09:00:27 compute-0 podman[401987]: 2025-10-02 09:00:27.142112184 +0000 UTC m=+1.040813155 container remove 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:00:27 compute-0 systemd[1]: libpod-conmon-63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178.scope: Deactivated successfully.
Oct 02 09:00:27 compute-0 sudo[401881]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.225 2 DEBUG nova.objects.instance [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'system_metadata' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:00:27 compute-0 sudo[402029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:27 compute-0 sudo[402029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:27 compute-0 sudo[402029]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.299 2 DEBUG nova.objects.instance [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'flavor' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.348 2 DEBUG nova.virt.libvirt.vif [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.349 2 DEBUG nova.network.os_vif_util [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.350 2 DEBUG nova.network.os_vif_util [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.356 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 09:00:27 compute-0 sudo[402054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:00:27 compute-0 sudo[402054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.362 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface>not found in domain: <domain type='kvm' id='163'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <name>instance-00000081</name>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 09:00:24</nova:creationTime>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 09:00:27 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <resource>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </resource>
Oct 02 09:00:27 compute-0 sudo[402054]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <system>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='serial'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='uuid'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </system>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <os>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </os>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <features>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </features>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk' index='2'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </source>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config' index='1'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </source>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:42:41:53'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target dev='tap5ea70a9c-82'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </target>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </console>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </graphics>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <video>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </video>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c363,c569</label>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c363,c569</imagelabel>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 09:00:27 compute-0 nova_compute[260603]: </domain>
Oct 02 09:00:27 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.363 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.373 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface>not found in domain: <domain type='kvm' id='163'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <name>instance-00000081</name>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 09:00:24</nova:creationTime>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 09:00:27 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <memory unit='KiB'>131072</memory>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <vcpu placement='static'>1</vcpu>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <resource>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <partition>/machine</partition>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </resource>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <sysinfo type='smbios'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <system>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='manufacturer'>RDO</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='serial'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='uuid'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <entry name='family'>Virtual Machine</entry>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </system>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <os>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <boot dev='hd'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <smbios mode='sysinfo'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </os>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <features>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <vmcoreinfo state='on'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </features>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <vendor>AMD</vendor>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='x2apic'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc-deadline'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='hypervisor'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='tsc_adjust'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='spec-ctrl'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='stibp'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='arch-capabilities'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='ssbd'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='cmp_legacy'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='overflow-recov'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='succor'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='ibrs'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='amd-ssbd'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='virt-ssbd'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='lbrv'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='tsc-scale'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='vmcb-clean'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='flushbyasid'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='pause-filter'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='pfthreshold'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='rdctl-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='mds-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='pschange-mc-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='gds-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='rfds-no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='xsaves'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='svm'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='require' name='topoext'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='npt'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <feature policy='disable' name='nrip-save'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <clock offset='utc'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <timer name='hpet' present='no'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <on_poweroff>destroy</on_poweroff>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <on_reboot>restart</on_reboot>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <on_crash>destroy</on_crash>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <disk type='network' device='disk'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk' index='2'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </source>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target dev='vda' bus='virtio'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='virtio-disk0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <disk type='network' device='cdrom'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <auth username='openstack'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config' index='1'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <host name='192.168.122.100' port='6789'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </source>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target dev='sda' bus='sata'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <readonly/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='sata0-0-0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pcie.0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='1' port='0x10'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='2' port='0x11'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='3' port='0x12'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.3'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='4' port='0x13'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.4'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='5' port='0x14'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.5'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='6' port='0x15'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.6'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='7' port='0x16'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.7'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='8' port='0x17'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.8'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='9' port='0x18'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.9'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='10' port='0x19'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.10'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='11' port='0x1a'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.11'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='12' port='0x1b'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.12'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='13' port='0x1c'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.13'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='14' port='0x1d'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.14'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='15' port='0x1e'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.15'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='16' port='0x1f'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.16'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='17' port='0x20'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.17'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='18' port='0x21'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.18'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='19' port='0x22'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.19'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='20' port='0x23'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.20'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='21' port='0x24'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.21'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='22' port='0x25'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.22'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='23' port='0x26'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.23'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='24' port='0x27'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.24'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-root-port'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target chassis='25' port='0x28'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.25'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model name='pcie-pci-bridge'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='pci.26'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='usb'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <controller type='sata' index='0'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='ide'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </controller>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <interface type='ethernet'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <mac address='fa:16:3e:42:41:53'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target dev='tap5ea70a9c-82'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model type='virtio'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <mtu size='1442'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='net0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <serial type='pty'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target type='isa-serial' port='0'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:         <model name='isa-serial'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       </target>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <console type='pty' tty='/dev/pts/0'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <source path='/dev/pts/0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <target type='serial' port='0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='serial0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </console>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <input type='tablet' bus='usb'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='input0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='usb' bus='0' port='1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <input type='mouse' bus='ps2'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='input1'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <input type='keyboard' bus='ps2'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='input2'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </input>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <listen type='address' address='::0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </graphics>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <audio id='1' type='none'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <video>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='video0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </video>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <watchdog model='itco' action='reset'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='watchdog0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </watchdog>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <memballoon model='virtio'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <stats period='10'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='balloon0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <rng model='virtio'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <backend model='random'>/dev/urandom</backend>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <alias name='rng0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <label>system_u:system_r:svirt_t:s0:c363,c569</label>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c363,c569</imagelabel>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <label>+107:+107</label>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <imagelabel>+107:+107</imagelabel>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </seclabel>
Oct 02 09:00:27 compute-0 nova_compute[260603]: </domain>
Oct 02 09:00:27 compute-0 nova_compute[260603]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.373 2 WARNING nova.virt.libvirt.driver [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Detaching interface fa:16:3e:45:88:e1 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap4e4cec89-b0' not found.
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.375 2 DEBUG nova.virt.libvirt.vif [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.376 2 DEBUG nova.network.os_vif_util [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.377 2 DEBUG nova.network.os_vif_util [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.378 2 DEBUG os_vif [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e4cec89-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.385 2 INFO os_vif [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0')
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.386 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:creationTime>2025-10-02 09:00:27</nova:creationTime>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:flavor name="m1.nano">
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:memory>128</nova:memory>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:disk>1</nova:disk>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:swap>0</nova:swap>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:vcpus>1</nova:vcpus>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:flavor>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:owner>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:owner>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   <nova:ports>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct 02 09:00:27 compute-0 nova_compute[260603]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 09:00:27 compute-0 nova_compute[260603]:     </nova:port>
Oct 02 09:00:27 compute-0 nova_compute[260603]:   </nova:ports>
Oct 02 09:00:27 compute-0 nova_compute[260603]: </nova:instance>
Oct 02 09:00:27 compute-0 nova_compute[260603]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 09:00:27 compute-0 sudo[402079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:27 compute-0 sudo[402079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:27 compute-0 sudo[402079]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:27 compute-0 sudo[402104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:00:27 compute-0 sudo[402104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.656 2 INFO nova.network.neutron [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Port 4e4cec89-b01e-4202-bc3e-a65ce8864017 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.656 2 DEBUG nova.network.neutron [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.681 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:00:27 compute-0 nova_compute[260603]: 2025-10-02 09:00:27.758 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-4e4cec89-b01e-4202-bc3e-a65ce8864017" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:00:27 compute-0 podman[402170]: 2025-10-02 09:00:27.931369911 +0000 UTC m=+0.053687099 container create dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 09:00:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:00:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:00:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:00:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:00:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:00:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:00:27 compute-0 systemd[1]: Started libpod-conmon-dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a.scope.
Oct 02 09:00:27 compute-0 podman[402170]: 2025-10-02 09:00:27.901830384 +0000 UTC m=+0.024147632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:00:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:00:28 compute-0 ovn_controller[152344]: 2025-10-02T09:00:28Z|01411|binding|INFO|Releasing lport 34739963-aa72-473b-8b1d-5d0d09f0b1de from this chassis (sb_readonly=0)
Oct 02 09:00:28 compute-0 podman[402170]: 2025-10-02 09:00:28.021209373 +0000 UTC m=+0.143526551 container init dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 09:00:28 compute-0 podman[402170]: 2025-10-02 09:00:28.036022303 +0000 UTC m=+0.158339461 container start dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:00:28 compute-0 podman[402170]: 2025-10-02 09:00:28.039874113 +0000 UTC m=+0.162191311 container attach dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:00:28 compute-0 festive_brattain[402186]: 167 167
Oct 02 09:00:28 compute-0 systemd[1]: libpod-dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a.scope: Deactivated successfully.
Oct 02 09:00:28 compute-0 conmon[402186]: conmon dee73864900d5101de5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a.scope/container/memory.events
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:00:28
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:00:28 compute-0 podman[402170]: 2025-10-02 09:00:28.047605974 +0000 UTC m=+0.169923162 container died dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', '.rgw.root', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms']
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:00:28 compute-0 nova_compute[260603]: 2025-10-02 09:00:28.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bd4b2766eb78863dd305654839772bef4d759280a31b1332f89d46ed0b3b563-merged.mount: Deactivated successfully.
Oct 02 09:00:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:28 compute-0 podman[402170]: 2025-10-02 09:00:28.114928996 +0000 UTC m=+0.237246154 container remove dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 09:00:28 compute-0 systemd[1]: libpod-conmon-dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a.scope: Deactivated successfully.
Oct 02 09:00:28 compute-0 podman[402212]: 2025-10-02 09:00:28.305431466 +0000 UTC m=+0.039933663 container create 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 02 09:00:28 compute-0 systemd[1]: Started libpod-conmon-830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4.scope.
Oct 02 09:00:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:00:28 compute-0 podman[402212]: 2025-10-02 09:00:28.288230421 +0000 UTC m=+0.022732628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:00:28 compute-0 podman[402212]: 2025-10-02 09:00:28.411598134 +0000 UTC m=+0.146100331 container init 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:00:28 compute-0 podman[402212]: 2025-10-02 09:00:28.419550271 +0000 UTC m=+0.154052458 container start 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:00:28 compute-0 podman[402212]: 2025-10-02 09:00:28.423631808 +0000 UTC m=+0.158134025 container attach 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:00:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:00:28 compute-0 ceph-mon[74477]: pgmap v2470: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.019 2 DEBUG nova.compute.manager [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.021 2 DEBUG nova.compute.manager [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.021 2 DEBUG oslo_concurrency.lockutils [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.022 2 DEBUG oslo_concurrency.lockutils [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.022 2 DEBUG nova.network.neutron [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.299 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.300 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.300 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.301 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.301 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.304 2 INFO nova.compute.manager [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Terminating instance
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.306 2 DEBUG nova.compute.manager [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:00:29 compute-0 kernel: tap5ea70a9c-82 (unregistering): left promiscuous mode
Oct 02 09:00:29 compute-0 NetworkManager[45129]: <info>  [1759395629.3755] device (tap5ea70a9c-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:29 compute-0 ovn_controller[152344]: 2025-10-02T09:00:29Z|01412|binding|INFO|Releasing lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 from this chassis (sb_readonly=0)
Oct 02 09:00:29 compute-0 ovn_controller[152344]: 2025-10-02T09:00:29Z|01413|binding|INFO|Setting lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 down in Southbound
Oct 02 09:00:29 compute-0 ovn_controller[152344]: 2025-10-02T09:00:29Z|01414|binding|INFO|Removing iface tap5ea70a9c-82 ovn-installed in OVS
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.432 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:41:53 10.100.0.12'], port_security=['fa:16:3e:42:41:53 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-531b0560-b279-49fe-a565-b902507e886d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41a1668d-b8be-4a47-8e43-ab11db6fabeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1a7ccfd-5e72-4548-90da-40016d961198, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5ea70a9c-8299-4593-b2b2-5c3315870d73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.434 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5ea70a9c-8299-4593-b2b2-5c3315870d73 in datapath 531b0560-b279-49fe-a565-b902507e886d unbound from our chassis
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.435 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 531b0560-b279-49fe-a565-b902507e886d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3298e7-4e43-4feb-ba09-ce38a5b35d2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.437 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-531b0560-b279-49fe-a565-b902507e886d namespace which is not needed anymore
Oct 02 09:00:29 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct 02 09:00:29 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000081.scope: Consumed 18.246s CPU time.
Oct 02 09:00:29 compute-0 systemd-machined[214636]: Machine qemu-163-instance-00000081 terminated.
Oct 02 09:00:29 compute-0 naughty_franklin[402229]: {
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "osd_id": 2,
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "type": "bluestore"
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:     },
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "osd_id": 1,
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "type": "bluestore"
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:     },
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "osd_id": 0,
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:         "type": "bluestore"
Oct 02 09:00:29 compute-0 naughty_franklin[402229]:     }
Oct 02 09:00:29 compute-0 naughty_franklin[402229]: }
Oct 02 09:00:29 compute-0 systemd[1]: libpod-830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4.scope: Deactivated successfully.
Oct 02 09:00:29 compute-0 systemd[1]: libpod-830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4.scope: Consumed 1.097s CPU time.
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.550 2 INFO nova.virt.libvirt.driver [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance destroyed successfully.
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.551 2 DEBUG nova.objects.instance [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.581 2 DEBUG nova.virt.libvirt.vif [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.581 2 DEBUG nova.network.os_vif_util [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.582 2 DEBUG nova.network.os_vif_util [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.582 2 DEBUG os_vif [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.583 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ea70a9c-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.588 2 INFO os_vif [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82')
Oct 02 09:00:29 compute-0 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [NOTICE]   (399457) : haproxy version is 2.8.14-c23fe91
Oct 02 09:00:29 compute-0 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [NOTICE]   (399457) : path to executable is /usr/sbin/haproxy
Oct 02 09:00:29 compute-0 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [WARNING]  (399457) : Exiting Master process...
Oct 02 09:00:29 compute-0 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [ALERT]    (399457) : Current worker (399459) exited with code 143 (Terminated)
Oct 02 09:00:29 compute-0 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [WARNING]  (399457) : All workers exited. Exiting... (0)
Oct 02 09:00:29 compute-0 systemd[1]: libpod-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd.scope: Deactivated successfully.
Oct 02 09:00:29 compute-0 conmon[399453]: conmon 42856d274aa3e208f62e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd.scope/container/memory.events
Oct 02 09:00:29 compute-0 podman[402290]: 2025-10-02 09:00:29.610335996 +0000 UTC m=+0.060695876 container died 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 09:00:29 compute-0 podman[402291]: 2025-10-02 09:00:29.622572027 +0000 UTC m=+0.071318157 container died 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2789c6dcd510ee067d39b52a38d6115997f3323341899d6c11680c2dc54061e9-merged.mount: Deactivated successfully.
Oct 02 09:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd-userdata-shm.mount: Deactivated successfully.
Oct 02 09:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306-merged.mount: Deactivated successfully.
Oct 02 09:00:29 compute-0 podman[402290]: 2025-10-02 09:00:29.670464995 +0000 UTC m=+0.120824865 container cleanup 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:00:29 compute-0 systemd[1]: libpod-conmon-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd.scope: Deactivated successfully.
Oct 02 09:00:29 compute-0 podman[402291]: 2025-10-02 09:00:29.701045315 +0000 UTC m=+0.149791435 container remove 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:00:29 compute-0 systemd[1]: libpod-conmon-830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4.scope: Deactivated successfully.
Oct 02 09:00:29 compute-0 sudo[402104]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:00:29 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:00:29 compute-0 podman[402358]: 2025-10-02 09:00:29.771168694 +0000 UTC m=+0.061282745 container remove 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:00:29 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:29 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1984a701-eaa2-4397-90d6-351fba2e6e76 does not exist
Oct 02 09:00:29 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f14c438c-ea04-4619-bf18-525265cfdb4e does not exist
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.779 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd75fef4-8176-4fee-9b90-1d53bda0a304]: (4, ('Thu Oct  2 09:00:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d (42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd)\n42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd\nThu Oct  2 09:00:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d (42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd)\n42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.782 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[31f74bf6-780b-41a0-ae47-e01c827a86b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.783 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap531b0560-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:29 compute-0 kernel: tap531b0560-b0: left promiscuous mode
Oct 02 09:00:29 compute-0 nova_compute[260603]: 2025-10-02 09:00:29.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.805 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[337843cd-d2ec-481a-bfec-20b8e6e0a727]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.834 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[691ceab7-45e2-4b04-bbdf-2ef74ad7c281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.836 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95337864-72e5-45e7-9fdc-03f48e3ac8da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:29 compute-0 sudo[402371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.858 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[298fb04e-df6c-4006-a26c-c02f1f71b720]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639368, 'reachable_time': 36024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402396, 'error': None, 'target': 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:29 compute-0 sudo[402371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d531b0560\x2db279\x2d49fe\x2da565\x2db902507e886d.mount: Deactivated successfully.
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.864 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-531b0560-b279-49fe-a565-b902507e886d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:00:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.865 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4d04ee-8f69-4b8e-871a-d348746d805f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:00:29 compute-0 sudo[402371]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:29 compute-0 sudo[402399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:00:29 compute-0 sudo[402399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:00:29 compute-0 sudo[402399]: pam_unix(sudo:session): session closed for user root
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.004 2 DEBUG nova.compute.manager [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-unplugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.005 2 DEBUG oslo_concurrency.lockutils [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.005 2 DEBUG oslo_concurrency.lockutils [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.005 2 DEBUG oslo_concurrency.lockutils [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.006 2 DEBUG nova.compute.manager [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-unplugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.006 2 DEBUG nova.compute.manager [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-unplugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.086 2 INFO nova.virt.libvirt.driver [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Deleting instance files /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae_del
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.087 2 INFO nova.virt.libvirt.driver [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Deletion of /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae_del complete
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.465 2 INFO nova.compute.manager [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Took 1.16 seconds to destroy the instance on the hypervisor.
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.466 2 DEBUG oslo.service.loopingcall [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.466 2 DEBUG nova.compute.manager [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.466 2 DEBUG nova.network.neutron [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:30 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:30 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:00:30 compute-0 ceph-mon[74477]: pgmap v2471: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.873 2 DEBUG nova.network.neutron [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:00:30 compute-0 nova_compute[260603]: 2025-10-02 09:00:30.874 2 DEBUG nova.network.neutron [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.100 2 DEBUG oslo_concurrency.lockutils [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.258 2 DEBUG nova.network.neutron [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.300 2 INFO nova.compute.manager [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Took 0.83 seconds to deallocate network for instance.
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.310 2 DEBUG nova.compute.manager [req-276629d8-4a55-4825-a47d-c6b6d9d4e91c req-4837ab2c-3b7f-415a-b5ed-565352f699ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-deleted-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.310 2 INFO nova.compute.manager [req-276629d8-4a55-4825-a47d-c6b6d9d4e91c req-4837ab2c-3b7f-415a-b5ed-565352f699ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Neutron deleted interface 5ea70a9c-8299-4593-b2b2-5c3315870d73; detaching it from the instance and deleting it from the info cache
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.311 2 DEBUG nova.network.neutron [req-276629d8-4a55-4825-a47d-c6b6d9d4e91c req-4837ab2c-3b7f-415a-b5ed-565352f699ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.345 2 DEBUG nova.compute.manager [req-276629d8-4a55-4825-a47d-c6b6d9d4e91c req-4837ab2c-3b7f-415a-b5ed-565352f699ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Detach interface failed, port_id=5ea70a9c-8299-4593-b2b2-5c3315870d73, reason: Instance b4eacfa3-8b31-492a-b3c5-829a890a4aae could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.363 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.364 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.426 2 DEBUG oslo_concurrency.processutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:00:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:00:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:00:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321827995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.893 2 DEBUG oslo_concurrency.processutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.900 2 DEBUG nova.compute.provider_tree [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.946 2 DEBUG nova.scheduler.client.report [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:00:31 compute-0 nova_compute[260603]: 2025-10-02 09:00:31.993 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:32 compute-0 podman[402448]: 2025-10-02 09:00:32.001415531 +0000 UTC m=+0.061229734 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 09:00:32 compute-0 nova_compute[260603]: 2025-10-02 09:00:32.024 2 INFO nova.scheduler.client.report [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance b4eacfa3-8b31-492a-b3c5-829a890a4aae
Oct 02 09:00:32 compute-0 podman[402447]: 2025-10-02 09:00:32.02937886 +0000 UTC m=+0.095770597 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:00:32 compute-0 nova_compute[260603]: 2025-10-02 09:00:32.132 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:32 compute-0 nova_compute[260603]: 2025-10-02 09:00:32.146 2 DEBUG nova.compute.manager [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:00:32 compute-0 nova_compute[260603]: 2025-10-02 09:00:32.146 2 DEBUG oslo_concurrency.lockutils [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:32 compute-0 nova_compute[260603]: 2025-10-02 09:00:32.147 2 DEBUG oslo_concurrency.lockutils [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:32 compute-0 nova_compute[260603]: 2025-10-02 09:00:32.147 2 DEBUG oslo_concurrency.lockutils [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:32 compute-0 nova_compute[260603]: 2025-10-02 09:00:32.147 2 DEBUG nova.compute.manager [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:00:32 compute-0 nova_compute[260603]: 2025-10-02 09:00:32.147 2 WARNING nova.compute.manager [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 for instance with vm_state deleted and task_state None.
Oct 02 09:00:32 compute-0 ceph-mon[74477]: pgmap v2472: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:00:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3321827995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Oct 02 09:00:34 compute-0 nova_compute[260603]: 2025-10-02 09:00:34.112 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395619.1108572, f141f189-a224-4ac7-88b5-c28f198944e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:00:34 compute-0 nova_compute[260603]: 2025-10-02 09:00:34.113 2 INFO nova.compute.manager [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] VM Stopped (Lifecycle Event)
Oct 02 09:00:34 compute-0 nova_compute[260603]: 2025-10-02 09:00:34.142 2 DEBUG nova.compute.manager [None req-751b0ccc-3709-4626-8576-d57c4339be47 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:00:34 compute-0 nova_compute[260603]: 2025-10-02 09:00:34.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:34 compute-0 nova_compute[260603]: 2025-10-02 09:00:34.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:00:34 compute-0 nova_compute[260603]: 2025-10-02 09:00:34.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:34.840 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:34.840 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:34 compute-0 ceph-mon[74477]: pgmap v2473: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Oct 02 09:00:35 compute-0 nova_compute[260603]: 2025-10-02 09:00:35.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 09:00:36 compute-0 ceph-mon[74477]: pgmap v2474: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 09:00:37 compute-0 podman[402493]: 2025-10-02 09:00:37.033489566 +0000 UTC m=+0.093449285 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:00:37 compute-0 podman[402494]: 2025-10-02 09:00:37.038222963 +0000 UTC m=+0.101008740 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:00:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 09:00:37 compute-0 nova_compute[260603]: 2025-10-02 09:00:37.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:38 compute-0 nova_compute[260603]: 2025-10-02 09:00:38.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:38 compute-0 ceph-mon[74477]: pgmap v2475: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 09:00:38 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:00:39 compute-0 nova_compute[260603]: 2025-10-02 09:00:39.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 09:00:40 compute-0 nova_compute[260603]: 2025-10-02 09:00:40.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:40 compute-0 ceph-mon[74477]: pgmap v2476: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 09:00:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:00:42 compute-0 ceph-mon[74477]: pgmap v2477: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:00:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.115782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643115814, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 481, "num_deletes": 257, "total_data_size": 455397, "memory_usage": 465896, "flush_reason": "Manual Compaction"}
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643120161, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 440919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51826, "largest_seqno": 52306, "table_properties": {"data_size": 438152, "index_size": 803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6586, "raw_average_key_size": 18, "raw_value_size": 432528, "raw_average_value_size": 1218, "num_data_blocks": 35, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395620, "oldest_key_time": 1759395620, "file_creation_time": 1759395643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 4415 microseconds, and 1878 cpu microseconds.
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.120200) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 440919 bytes OK
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.120214) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121417) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121430) EVENT_LOG_v1 {"time_micros": 1759395643121425, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121444) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 452493, prev total WAL file size 452493, number of live WAL files 2.
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121922) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303033' seq:72057594037927935, type:22 .. '6C6F676D0032323536' seq:0, type:0; will stop at (end)
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(430KB)], [119(9849KB)]
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643122012, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10526740, "oldest_snapshot_seqno": -1}
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7195 keys, 10406469 bytes, temperature: kUnknown
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643188992, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10406469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10357309, "index_size": 30040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 188040, "raw_average_key_size": 26, "raw_value_size": 10227702, "raw_average_value_size": 1421, "num_data_blocks": 1175, "num_entries": 7195, "num_filter_entries": 7195, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.189195) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10406469 bytes
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.190476) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.0 rd, 155.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(47.5) write-amplify(23.6) OK, records in: 7721, records dropped: 526 output_compression: NoCompression
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.190490) EVENT_LOG_v1 {"time_micros": 1759395643190483, "job": 72, "event": "compaction_finished", "compaction_time_micros": 67030, "compaction_time_cpu_micros": 49369, "output_level": 6, "num_output_files": 1, "total_output_size": 10406469, "num_input_records": 7721, "num_output_records": 7195, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643190671, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643192361, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:00:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:00:44 compute-0 nova_compute[260603]: 2025-10-02 09:00:44.546 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395629.5459328, b4eacfa3-8b31-492a-b3c5-829a890a4aae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:00:44 compute-0 nova_compute[260603]: 2025-10-02 09:00:44.547 2 INFO nova.compute.manager [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] VM Stopped (Lifecycle Event)
Oct 02 09:00:44 compute-0 nova_compute[260603]: 2025-10-02 09:00:44.565 2 DEBUG nova.compute.manager [None req-553c7103-6137-4268-9bd2-2b598cd38dda - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:00:44 compute-0 nova_compute[260603]: 2025-10-02 09:00:44.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:44 compute-0 nova_compute[260603]: 2025-10-02 09:00:44.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:44.835 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:00:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:44.837 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:00:45 compute-0 ceph-mon[74477]: pgmap v2478: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:00:45 compute-0 nova_compute[260603]: 2025-10-02 09:00:45.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:47 compute-0 ceph-mon[74477]: pgmap v2479: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:48 compute-0 nova_compute[260603]: 2025-10-02 09:00:48.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:49 compute-0 ceph-mon[74477]: pgmap v2480: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:49 compute-0 nova_compute[260603]: 2025-10-02 09:00:49.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:50 compute-0 nova_compute[260603]: 2025-10-02 09:00:50.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:50 compute-0 nova_compute[260603]: 2025-10-02 09:00:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:50 compute-0 nova_compute[260603]: 2025-10-02 09:00:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:00:50 compute-0 nova_compute[260603]: 2025-10-02 09:00:50.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:00:51 compute-0 ceph-mon[74477]: pgmap v2481: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:51 compute-0 nova_compute[260603]: 2025-10-02 09:00:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:53 compute-0 ceph-mon[74477]: pgmap v2482: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:53 compute-0 nova_compute[260603]: 2025-10-02 09:00:53.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:53 compute-0 nova_compute[260603]: 2025-10-02 09:00:53.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:53 compute-0 nova_compute[260603]: 2025-10-02 09:00:53.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:53 compute-0 nova_compute[260603]: 2025-10-02 09:00:53.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:53 compute-0 nova_compute[260603]: 2025-10-02 09:00:53.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:53 compute-0 nova_compute[260603]: 2025-10-02 09:00:53.554 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:00:53 compute-0 nova_compute[260603]: 2025-10-02 09:00:53.555 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:00:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:00:53.839 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:00:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:00:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691722777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.011 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:00:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1691722777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.229 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.233 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3678MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.233 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.234 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.354 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:00:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2928471322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.798 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.804 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.822 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.841 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:00:54 compute-0 nova_compute[260603]: 2025-10-02 09:00:54.841 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:55 compute-0 ceph-mon[74477]: pgmap v2483: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2928471322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:55 compute-0 nova_compute[260603]: 2025-10-02 09:00:55.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:55 compute-0 nova_compute[260603]: 2025-10-02 09:00:55.840 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:56 compute-0 nova_compute[260603]: 2025-10-02 09:00:56.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:00:57 compute-0 ceph-mon[74477]: pgmap v2484: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:57 compute-0 nova_compute[260603]: 2025-10-02 09:00:57.924 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:57 compute-0 nova_compute[260603]: 2025-10-02 09:00:57.925 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:57 compute-0 nova_compute[260603]: 2025-10-02 09:00:57.942 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:00:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:00:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:00:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:00:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:00:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:00:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:00:57 compute-0 nova_compute[260603]: 2025-10-02 09:00:57.999 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:57.999 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.009 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.009 2 INFO nova.compute.claims [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.109 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:00:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:00:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:00:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1855295427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.569 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.576 2 DEBUG nova.compute.provider_tree [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.598 2 DEBUG nova.scheduler.client.report [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.625 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.627 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.672 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.673 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.695 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.716 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.818 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.820 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.821 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Creating image(s)
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.854 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.889 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.925 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:00:58 compute-0 nova_compute[260603]: 2025-10-02 09:00:58.930 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.029 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.030 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.031 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.031 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.058 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.061 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 05300b1b-c030-499f-af29-cc94f4bf9e11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.136 2 DEBUG nova.policy [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:00:59 compute-0 ceph-mon[74477]: pgmap v2485: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:00:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1855295427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.355 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 05300b1b-c030-499f-af29-cc94f4bf9e11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.435 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.531 2 DEBUG nova.objects.instance [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 05300b1b-c030-499f-af29-cc94f4bf9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.548 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.549 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Ensure instance console log exists: /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.550 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.551 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.551 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:00:59 compute-0 nova_compute[260603]: 2025-10-02 09:00:59.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:00:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.211 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Successfully updated port: e14f3e85-84ff-49b3-8817-d926cb709f80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.229 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.230 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.230 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.315 2 DEBUG nova.compute.manager [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.315 2 DEBUG nova.compute.manager [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Refreshing instance network info cache due to event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.316 2 DEBUG oslo_concurrency.lockutils [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:00 compute-0 nova_compute[260603]: 2025-10-02 09:01:00.667 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:01:01 compute-0 ceph-mon[74477]: pgmap v2486: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:01:01 compute-0 CROND[402765]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 09:01:01 compute-0 run-parts[402768]: (/etc/cron.hourly) starting 0anacron
Oct 02 09:01:01 compute-0 run-parts[402774]: (/etc/cron.hourly) finished 0anacron
Oct 02 09:01:01 compute-0 CROND[402764]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.665 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.687 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.688 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance network_info: |[{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.688 2 DEBUG oslo_concurrency.lockutils [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.688 2 DEBUG nova.network.neutron [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Refreshing network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.691 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start _get_guest_xml network_info=[{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.698 2 WARNING nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.705 2 DEBUG nova.virt.libvirt.host [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.706 2 DEBUG nova.virt.libvirt.host [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.716 2 DEBUG nova.virt.libvirt.host [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.717 2 DEBUG nova.virt.libvirt.host [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.717 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.718 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.719 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.719 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.720 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.720 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.721 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.721 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.722 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.722 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.723 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.724 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:01:01 compute-0 nova_compute[260603]: 2025-10-02 09:01:01.729 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:01:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:01:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873591614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.161 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.195 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3873591614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.203 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:01:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321709715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.700 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.702 2 DEBUG nova.virt.libvirt.vif [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-755603083',display_name='tempest-TestNetworkBasicOps-server-755603083',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-755603083',id=131,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBmg0ihNhYo/3Tzh1Pt7lcbXy6uOqrL/08u+LngrhuZOWILyEHTHWbsg89FOrPiGPon4wJrkMN6ZCefE/Caz8hqQjrlNm3r99qN4W1mTg5pj+yc4yq/l4Zq52/JNOcbaw==',key_name='tempest-TestNetworkBasicOps-1951820191',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-uj66bhc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:00:58Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=05300b1b-c030-499f-af29-cc94f4bf9e11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.703 2 DEBUG nova.network.os_vif_util [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.705 2 DEBUG nova.network.os_vif_util [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.707 2 DEBUG nova.objects.instance [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 05300b1b-c030-499f-af29-cc94f4bf9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.725 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <uuid>05300b1b-c030-499f-af29-cc94f4bf9e11</uuid>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <name>instance-00000083</name>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-755603083</nova:name>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:01:01</nova:creationTime>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <nova:port uuid="e14f3e85-84ff-49b3-8817-d926cb709f80">
Oct 02 09:01:02 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <system>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <entry name="serial">05300b1b-c030-499f-af29-cc94f4bf9e11</entry>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <entry name="uuid">05300b1b-c030-499f-af29-cc94f4bf9e11</entry>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </system>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <os>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   </os>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <features>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   </features>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/05300b1b-c030-499f-af29-cc94f4bf9e11_disk">
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       </source>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config">
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       </source>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:01:02 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:56:61:55"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <target dev="tape14f3e85-84"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/console.log" append="off"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <video>
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </video>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:01:02 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:01:02 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:01:02 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:01:02 compute-0 nova_compute[260603]: </domain>
Oct 02 09:01:02 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.727 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Preparing to wait for external event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.728 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.728 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.729 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.730 2 DEBUG nova.virt.libvirt.vif [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-755603083',display_name='tempest-TestNetworkBasicOps-server-755603083',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-755603083',id=131,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBmg0ihNhYo/3Tzh1Pt7lcbXy6uOqrL/08u+LngrhuZOWILyEHTHWbsg89FOrPiGPon4wJrkMN6ZCefE/Caz8hqQjrlNm3r99qN4W1mTg5pj+yc4yq/l4Zq52/JNOcbaw==',key_name='tempest-TestNetworkBasicOps-1951820191',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-uj66bhc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:00:58Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=05300b1b-c030-499f-af29-cc94f4bf9e11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.731 2 DEBUG nova.network.os_vif_util [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.732 2 DEBUG nova.network.os_vif_util [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.733 2 DEBUG os_vif [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape14f3e85-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape14f3e85-84, col_values=(('external_ids', {'iface-id': 'e14f3e85-84ff-49b3-8817-d926cb709f80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:61:55', 'vm-uuid': '05300b1b-c030-499f-af29-cc94f4bf9e11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:02 compute-0 NetworkManager[45129]: <info>  [1759395662.7601] manager: (tape14f3e85-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.772 2 INFO os_vif [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84')
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.837 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.838 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.838 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:56:61:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.839 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Using config drive
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.871 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.958 2 DEBUG nova.network.neutron [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updated VIF entry in instance network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.959 2 DEBUG nova.network.neutron [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:01:02 compute-0 nova_compute[260603]: 2025-10-02 09:01:02.976 2 DEBUG oslo_concurrency.lockutils [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:01:03 compute-0 podman[402858]: 2025-10-02 09:01:03.021162174 +0000 UTC m=+0.080416331 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:01:03 compute-0 podman[402857]: 2025-10-02 09:01:03.064901583 +0000 UTC m=+0.124368696 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:01:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:03 compute-0 ceph-mon[74477]: pgmap v2487: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:01:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/321709715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.233 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Creating config drive at /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.243 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1sb28og execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.417 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1sb28og" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.450 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.455 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.653 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.654 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Deleting local config drive /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config because it was imported into RBD.
Oct 02 09:01:03 compute-0 NetworkManager[45129]: <info>  [1759395663.7078] manager: (tape14f3e85-84): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Oct 02 09:01:03 compute-0 kernel: tape14f3e85-84: entered promiscuous mode
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:03 compute-0 ovn_controller[152344]: 2025-10-02T09:01:03Z|01415|binding|INFO|Claiming lport e14f3e85-84ff-49b3-8817-d926cb709f80 for this chassis.
Oct 02 09:01:03 compute-0 ovn_controller[152344]: 2025-10-02T09:01:03Z|01416|binding|INFO|e14f3e85-84ff-49b3-8817-d926cb709f80: Claiming fa:16:3e:56:61:55 10.100.0.6
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.733 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:55 10.100.0.6'], port_security=['fa:16:3e:56:61:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '05300b1b-c030-499f-af29-cc94f4bf9e11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=393b5666-d876-4099-9af5-9247fa9f1ed9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e14f3e85-84ff-49b3-8817-d926cb709f80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.734 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e14f3e85-84ff-49b3-8817-d926cb709f80 in datapath 577c4506-1f5a-48bf-b267-75fd69fe5e1c bound to our chassis
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.735 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct 02 09:01:03 compute-0 systemd-udevd[402953]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:01:03 compute-0 NetworkManager[45129]: <info>  [1759395663.7461] device (tape14f3e85-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:01:03 compute-0 NetworkManager[45129]: <info>  [1759395663.7470] device (tape14f3e85-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[702d1e6a-1ecc-41f4-b16f-958f9053a14a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.752 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap577c4506-11 in ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.754 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap577c4506-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.754 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd26fa3-c1dd-4473-9b06-bc445cad4bc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.755 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e81f1a8e-c88d-4c16-b5cd-8f532e203858]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 systemd-machined[214636]: New machine qemu-165-instance-00000083.
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.773 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[24b11793-7f0c-493c-bb83-2af2ab578c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000083.
Oct 02 09:01:03 compute-0 ovn_controller[152344]: 2025-10-02T09:01:03Z|01417|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 ovn-installed in OVS
Oct 02 09:01:03 compute-0 ovn_controller[152344]: 2025-10-02T09:01:03Z|01418|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 up in Southbound
Oct 02 09:01:03 compute-0 nova_compute[260603]: 2025-10-02 09:01:03.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0251abe-6d82-4cc1-8ab9-23ac7f00e44c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.844 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[29ec9ba8-be1b-46ca-a33a-0319ed879302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 NetworkManager[45129]: <info>  [1759395663.8504] manager: (tap577c4506-10): new Veth device (/org/freedesktop/NetworkManager/Devices/565)
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.849 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d6e47a-5c06-4f03-8699-6a5b52e4f568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 systemd-udevd[402957]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.890 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb19562f-aee2-4587-b755-493dd5ca9de3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.893 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[14c90486-e92b-4671-8a8f-88220a512962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 NetworkManager[45129]: <info>  [1759395663.9229] device (tap577c4506-10): carrier: link connected
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.927 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[77b974ec-e0b3-45f8-a3cf-8bc1823104af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[decf1ad3-ad27-411f-97a8-7d60432f9e7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap577c4506-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:0e:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654252, 'reachable_time': 15701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402987, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f28868-622d-4a51-98d7-cf32899fc4fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:e80'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654252, 'tstamp': 654252}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402988, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.981 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7757ec-aed7-4b51-83a1-eecb36550e11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap577c4506-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:0e:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654252, 'reachable_time': 15701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402989, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.005 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7cbc6a-040b-49b0-b575-78f08954abf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.059 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[adc0f373-ea87-4064-8694-0b62565df3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.060 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap577c4506-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.061 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.061 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap577c4506-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:04 compute-0 NetworkManager[45129]: <info>  [1759395664.0639] manager: (tap577c4506-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Oct 02 09:01:04 compute-0 kernel: tap577c4506-10: entered promiscuous mode
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.066 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap577c4506-10, col_values=(('external_ids', {'iface-id': '8acd815f-105a-41fb-b7bf-32e4f419ccc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.068 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:01:04 compute-0 ovn_controller[152344]: 2025-10-02T09:01:04Z|01419|binding|INFO|Releasing lport 8acd815f-105a-41fb-b7bf-32e4f419ccc2 from this chassis (sb_readonly=0)
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.069 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62af19a0-8dca-4d4b-80c3-06c865310cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.070 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:01:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.071 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'env', 'PROCESS_TAG=haproxy-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/577c4506-1f5a-48bf-b267-75fd69fe5e1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:04 compute-0 podman[403063]: 2025-10-02 09:01:04.416947169 +0000 UTC m=+0.051266634 container create d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:01:04 compute-0 systemd[1]: Started libpod-conmon-d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52.scope.
Oct 02 09:01:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:01:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8827b62d4c301f68ef38c01deef5bc5779710bd69d47a8e0e2f3379393e462c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:04 compute-0 podman[403063]: 2025-10-02 09:01:04.393714436 +0000 UTC m=+0.028033921 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:01:04 compute-0 podman[403063]: 2025-10-02 09:01:04.486466399 +0000 UTC m=+0.120785884 container init d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 09:01:04 compute-0 podman[403063]: 2025-10-02 09:01:04.491510056 +0000 UTC m=+0.125829521 container start d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 09:01:04 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [NOTICE]   (403082) : New worker (403084) forked
Oct 02 09:01:04 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [NOTICE]   (403082) : Loading success.
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.566 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395664.5659397, 05300b1b-c030-499f-af29-cc94f4bf9e11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.567 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] VM Started (Lifecycle Event)
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.589 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.592 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395664.5663855, 05300b1b-c030-499f-af29-cc94f4bf9e11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.593 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] VM Paused (Lifecycle Event)
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.610 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.612 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.633 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.693 2 DEBUG nova.compute.manager [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.693 2 DEBUG oslo_concurrency.lockutils [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.694 2 DEBUG oslo_concurrency.lockutils [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.694 2 DEBUG oslo_concurrency.lockutils [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.694 2 DEBUG nova.compute.manager [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Processing event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.695 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.699 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395664.6988919, 05300b1b-c030-499f-af29-cc94f4bf9e11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.699 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] VM Resumed (Lifecycle Event)
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.701 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.704 2 INFO nova.virt.libvirt.driver [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance spawned successfully.
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.704 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.717 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.724 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.728 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.728 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.729 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.729 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.729 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.730 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.754 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.782 2 INFO nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Took 5.96 seconds to spawn the instance on the hypervisor.
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.782 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.839 2 INFO nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Took 6.86 seconds to build instance.
Oct 02 09:01:04 compute-0 nova_compute[260603]: 2025-10-02 09:01:04.858 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:05 compute-0 ceph-mon[74477]: pgmap v2488: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:01:05 compute-0 nova_compute[260603]: 2025-10-02 09:01:05.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:01:06 compute-0 nova_compute[260603]: 2025-10-02 09:01:06.799 2 DEBUG nova.compute.manager [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:06 compute-0 nova_compute[260603]: 2025-10-02 09:01:06.800 2 DEBUG oslo_concurrency.lockutils [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:06 compute-0 nova_compute[260603]: 2025-10-02 09:01:06.800 2 DEBUG oslo_concurrency.lockutils [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:06 compute-0 nova_compute[260603]: 2025-10-02 09:01:06.800 2 DEBUG oslo_concurrency.lockutils [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:06 compute-0 nova_compute[260603]: 2025-10-02 09:01:06.801 2 DEBUG nova.compute.manager [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] No waiting events found dispatching network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:01:06 compute-0 nova_compute[260603]: 2025-10-02 09:01:06.801 2 WARNING nova.compute.manager [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received unexpected event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with vm_state active and task_state None.
Oct 02 09:01:07 compute-0 ceph-mon[74477]: pgmap v2489: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:01:07 compute-0 nova_compute[260603]: 2025-10-02 09:01:07.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 809 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 09:01:08 compute-0 podman[403093]: 2025-10-02 09:01:08.007005533 +0000 UTC m=+0.066036733 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:01:08 compute-0 podman[403094]: 2025-10-02 09:01:08.018869781 +0000 UTC m=+0.069986686 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 09:01:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:09 compute-0 ceph-mon[74477]: pgmap v2490: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 809 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 09:01:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:01:10 compute-0 nova_compute[260603]: 2025-10-02 09:01:10.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:10 compute-0 nova_compute[260603]: 2025-10-02 09:01:10.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:10 compute-0 NetworkManager[45129]: <info>  [1759395670.8582] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Oct 02 09:01:10 compute-0 NetworkManager[45129]: <info>  [1759395670.8605] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Oct 02 09:01:10 compute-0 ovn_controller[152344]: 2025-10-02T09:01:10Z|01420|binding|INFO|Releasing lport 8acd815f-105a-41fb-b7bf-32e4f419ccc2 from this chassis (sb_readonly=0)
Oct 02 09:01:10 compute-0 ovn_controller[152344]: 2025-10-02T09:01:10Z|01421|binding|INFO|Releasing lport 8acd815f-105a-41fb-b7bf-32e4f419ccc2 from this chassis (sb_readonly=0)
Oct 02 09:01:10 compute-0 nova_compute[260603]: 2025-10-02 09:01:10.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:10 compute-0 nova_compute[260603]: 2025-10-02 09:01:10.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:11 compute-0 ceph-mon[74477]: pgmap v2491: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:01:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.255 2 DEBUG nova.compute.manager [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.256 2 DEBUG nova.compute.manager [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Refreshing instance network info cache due to event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.256 2 DEBUG oslo_concurrency.lockutils [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.256 2 DEBUG oslo_concurrency.lockutils [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.256 2 DEBUG nova.network.neutron [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Refreshing network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.471 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.472 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.472 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.473 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.473 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.475 2 INFO nova.compute.manager [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Terminating instance
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.476 2 DEBUG nova.compute.manager [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:01:12 compute-0 kernel: tape14f3e85-84 (unregistering): left promiscuous mode
Oct 02 09:01:12 compute-0 NetworkManager[45129]: <info>  [1759395672.5269] device (tape14f3e85-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 ovn_controller[152344]: 2025-10-02T09:01:12Z|01422|binding|INFO|Releasing lport e14f3e85-84ff-49b3-8817-d926cb709f80 from this chassis (sb_readonly=0)
Oct 02 09:01:12 compute-0 ovn_controller[152344]: 2025-10-02T09:01:12Z|01423|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 down in Southbound
Oct 02 09:01:12 compute-0 ovn_controller[152344]: 2025-10-02T09:01:12Z|01424|binding|INFO|Removing iface tape14f3e85-84 ovn-installed in OVS
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.546 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:55 10.100.0.6'], port_security=['fa:16:3e:56:61:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '05300b1b-c030-499f-af29-cc94f4bf9e11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=393b5666-d876-4099-9af5-9247fa9f1ed9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e14f3e85-84ff-49b3-8817-d926cb709f80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.548 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e14f3e85-84ff-49b3-8817-d926cb709f80 in datapath 577c4506-1f5a-48bf-b267-75fd69fe5e1c unbound from our chassis
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.550 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 577c4506-1f5a-48bf-b267-75fd69fe5e1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.551 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44fa93b0-9a6f-44b1-b847-20619f8a6bfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.552 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c namespace which is not needed anymore
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct 02 09:01:12 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000083.scope: Consumed 8.591s CPU time.
Oct 02 09:01:12 compute-0 systemd-machined[214636]: Machine qemu-165-instance-00000083 terminated.
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.715 2 INFO nova.virt.libvirt.driver [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance destroyed successfully.
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.716 2 DEBUG nova.objects.instance [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 05300b1b-c030-499f-af29-cc94f4bf9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:01:12 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [NOTICE]   (403082) : haproxy version is 2.8.14-c23fe91
Oct 02 09:01:12 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [NOTICE]   (403082) : path to executable is /usr/sbin/haproxy
Oct 02 09:01:12 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [WARNING]  (403082) : Exiting Master process...
Oct 02 09:01:12 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [ALERT]    (403082) : Current worker (403084) exited with code 143 (Terminated)
Oct 02 09:01:12 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [WARNING]  (403082) : All workers exited. Exiting... (0)
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.731 2 DEBUG nova.virt.libvirt.vif [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-755603083',display_name='tempest-TestNetworkBasicOps-server-755603083',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-755603083',id=131,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBmg0ihNhYo/3Tzh1Pt7lcbXy6uOqrL/08u+LngrhuZOWILyEHTHWbsg89FOrPiGPon4wJrkMN6ZCefE/Caz8hqQjrlNm3r99qN4W1mTg5pj+yc4yq/l4Zq52/JNOcbaw==',key_name='tempest-TestNetworkBasicOps-1951820191',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:01:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-uj66bhc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:01:04Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=05300b1b-c030-499f-af29-cc94f4bf9e11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:01:12 compute-0 systemd[1]: libpod-d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52.scope: Deactivated successfully.
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.732 2 DEBUG nova.network.os_vif_util [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.733 2 DEBUG nova.network.os_vif_util [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.734 2 DEBUG os_vif [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:01:12 compute-0 podman[403158]: 2025-10-02 09:01:12.735114802 +0000 UTC m=+0.072741022 container died d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14f3e85-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.741 2 INFO os_vif [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84')
Oct 02 09:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52-userdata-shm.mount: Deactivated successfully.
Oct 02 09:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8827b62d4c301f68ef38c01deef5bc5779710bd69d47a8e0e2f3379393e462c6-merged.mount: Deactivated successfully.
Oct 02 09:01:12 compute-0 podman[403158]: 2025-10-02 09:01:12.779681078 +0000 UTC m=+0.117307308 container cleanup d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.793 2 DEBUG nova.compute.manager [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.794 2 DEBUG oslo_concurrency.lockutils [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.794 2 DEBUG oslo_concurrency.lockutils [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.797 2 DEBUG oslo_concurrency.lockutils [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.797 2 DEBUG nova.compute.manager [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] No waiting events found dispatching network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.797 2 DEBUG nova.compute.manager [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:01:12 compute-0 systemd[1]: libpod-conmon-d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52.scope: Deactivated successfully.
Oct 02 09:01:12 compute-0 podman[403210]: 2025-10-02 09:01:12.856263477 +0000 UTC m=+0.049575551 container remove d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.861 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa3a54b-f084-44bf-ad02-ca86bb2b6604]: (4, ('Thu Oct  2 09:01:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c (d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52)\nd25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52\nThu Oct  2 09:01:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c (d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52)\nd25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.863 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[69d6cc48-0cb5-43a8-bc8f-fa51579ba5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.865 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap577c4506-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 kernel: tap577c4506-10: left promiscuous mode
Oct 02 09:01:12 compute-0 nova_compute[260603]: 2025-10-02 09:01:12.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.883 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78aebcda-7969-4e6b-bfe2-d600c16d87f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.906 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13c1dca8-951d-4020-8186-b90186fee4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.907 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83233edd-86a0-480c-a73d-6c295a9c2250]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.933 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[235a096c-dd2a-4655-9fb4-57d2f867b52d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654243, 'reachable_time': 38871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403231, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d577c4506\x2d1f5a\x2d48bf\x2db267\x2d75fd69fe5e1c.mount: Deactivated successfully.
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.936 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:01:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.936 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[2567aa57-3b2e-4dda-b02a-e23b27a5ee4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:13 compute-0 nova_compute[260603]: 2025-10-02 09:01:13.095 2 INFO nova.virt.libvirt.driver [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Deleting instance files /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11_del
Oct 02 09:01:13 compute-0 nova_compute[260603]: 2025-10-02 09:01:13.096 2 INFO nova.virt.libvirt.driver [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Deletion of /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11_del complete
Oct 02 09:01:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:13 compute-0 nova_compute[260603]: 2025-10-02 09:01:13.173 2 INFO nova.compute.manager [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 02 09:01:13 compute-0 nova_compute[260603]: 2025-10-02 09:01:13.173 2 DEBUG oslo.service.loopingcall [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:01:13 compute-0 nova_compute[260603]: 2025-10-02 09:01:13.173 2 DEBUG nova.compute.manager [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:01:13 compute-0 nova_compute[260603]: 2025-10-02 09:01:13.174 2 DEBUG nova.network.neutron [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:01:13 compute-0 ceph-mon[74477]: pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:01:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 66 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.583 2 DEBUG nova.network.neutron [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updated VIF entry in instance network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.583 2 DEBUG nova.network.neutron [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.609 2 DEBUG oslo_concurrency.lockutils [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.928 2 DEBUG nova.compute.manager [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.929 2 DEBUG oslo_concurrency.lockutils [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.929 2 DEBUG oslo_concurrency.lockutils [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.930 2 DEBUG oslo_concurrency.lockutils [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.930 2 DEBUG nova.compute.manager [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] No waiting events found dispatching network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:01:14 compute-0 nova_compute[260603]: 2025-10-02 09:01:14.931 2 WARNING nova.compute.manager [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received unexpected event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with vm_state active and task_state deleting.
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.111 2 DEBUG nova.network.neutron [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.134 2 INFO nova.compute.manager [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Took 1.96 seconds to deallocate network for instance.
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.209 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.211 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:15 compute-0 ceph-mon[74477]: pgmap v2493: 305 pgs: 305 active+clean; 66 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.282 2 DEBUG oslo_concurrency.processutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:01:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3785030230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.752 2 DEBUG oslo_concurrency.processutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.762 2 DEBUG nova.compute.provider_tree [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.788 2 DEBUG nova.scheduler.client.report [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.821 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 66 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.854 2 INFO nova.scheduler.client.report [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 05300b1b-c030-499f-af29-cc94f4bf9e11
Oct 02 09:01:15 compute-0 nova_compute[260603]: 2025-10-02 09:01:15.949 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3785030230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:17 compute-0 ceph-mon[74477]: pgmap v2494: 305 pgs: 305 active+clean; 66 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Oct 02 09:01:17 compute-0 nova_compute[260603]: 2025-10-02 09:01:17.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 41 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 02 09:01:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:19 compute-0 ceph-mon[74477]: pgmap v2495: 305 pgs: 305 active+clean; 41 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 02 09:01:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.5 KiB/s wr, 65 op/s
Oct 02 09:01:20 compute-0 nova_compute[260603]: 2025-10-02 09:01:20.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:21 compute-0 ceph-mon[74477]: pgmap v2496: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.5 KiB/s wr, 65 op/s
Oct 02 09:01:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:01:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:01:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074779171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:01:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:01:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074779171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:01:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2074779171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:01:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2074779171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:01:22 compute-0 nova_compute[260603]: 2025-10-02 09:01:22.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:23 compute-0 ceph-mon[74477]: pgmap v2497: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:01:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:01:25 compute-0 ceph-mon[74477]: pgmap v2498: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:01:25 compute-0 nova_compute[260603]: 2025-10-02 09:01:25.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Oct 02 09:01:26 compute-0 nova_compute[260603]: 2025-10-02 09:01:26.985 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:26 compute-0 nova_compute[260603]: 2025-10-02 09:01:26.986 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.018 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.104 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.105 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.115 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.116 2 INFO nova.compute.claims [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.221 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:27 compute-0 ceph-mon[74477]: pgmap v2499: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Oct 02 09:01:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:01:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3182663924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.659 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.666 2 DEBUG nova.compute.provider_tree [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.687 2 DEBUG nova.scheduler.client.report [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.714 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395672.713664, 05300b1b-c030-499f-af29-cc94f4bf9e11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.714 2 INFO nova.compute.manager [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] VM Stopped (Lifecycle Event)
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.716 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.717 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.761 2 DEBUG nova.compute.manager [None req-db783599-46f3-4398-ae7b-2f75fd51270c - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.777 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.777 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.797 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.817 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:01:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.906 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.907 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.907 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Creating image(s)
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.928 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:01:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:01:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:01:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:01:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:01:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.950 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.973 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:27 compute-0 nova_compute[260603]: 2025-10-02 09:01:27.977 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:01:28
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.067 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.067 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.068 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.068 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.088 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.091 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3182663924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.342 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.426 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:01:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.527 2 DEBUG nova.objects.instance [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f4391e7-f233-4092-b54f-89a4c7840cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.546 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.547 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Ensure instance console log exists: /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.547 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.548 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.548 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:28 compute-0 nova_compute[260603]: 2025-10-02 09:01:28.888 2 DEBUG nova.policy [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:01:29 compute-0 ceph-mon[74477]: pgmap v2500: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Oct 02 09:01:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 67 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Oct 02 09:01:30 compute-0 sudo[403444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:30 compute-0 sudo[403444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:30 compute-0 sudo[403444]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:30 compute-0 sudo[403469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:01:30 compute-0 sudo[403469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:30 compute-0 sudo[403469]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:30 compute-0 sudo[403494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:30 compute-0 sudo[403494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:30 compute-0 sudo[403494]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:30 compute-0 sudo[403519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 09:01:30 compute-0 sudo[403519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:30 compute-0 nova_compute[260603]: 2025-10-02 09:01:30.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:30 compute-0 nova_compute[260603]: 2025-10-02 09:01:30.971 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Successfully updated port: e14f3e85-84ff-49b3-8817-d926cb709f80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:01:30 compute-0 nova_compute[260603]: 2025-10-02 09:01:30.989 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:01:30 compute-0 nova_compute[260603]: 2025-10-02 09:01:30.989 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:01:30 compute-0 nova_compute[260603]: 2025-10-02 09:01:30.989 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:01:31 compute-0 podman[403618]: 2025-10-02 09:01:31.014916805 +0000 UTC m=+0.079236423 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 09:01:31 compute-0 nova_compute[260603]: 2025-10-02 09:01:31.064 2 DEBUG nova.compute.manager [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:31 compute-0 nova_compute[260603]: 2025-10-02 09:01:31.064 2 DEBUG nova.compute.manager [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Refreshing instance network info cache due to event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:01:31 compute-0 nova_compute[260603]: 2025-10-02 09:01:31.064 2 DEBUG oslo_concurrency.lockutils [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:01:31 compute-0 podman[403618]: 2025-10-02 09:01:31.121319941 +0000 UTC m=+0.185639550 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 09:01:31 compute-0 ceph-mon[74477]: pgmap v2501: 305 pgs: 305 active+clean; 67 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Oct 02 09:01:31 compute-0 nova_compute[260603]: 2025-10-02 09:01:31.694 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:01:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 67 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Oct 02 09:01:31 compute-0 sudo[403519]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:01:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:01:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:32 compute-0 sudo[403774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:32 compute-0 sudo[403774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:32 compute-0 sudo[403774]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:32 compute-0 sudo[403799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:01:32 compute-0 sudo[403799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:32 compute-0 sudo[403799]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:32 compute-0 sudo[403824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:32 compute-0 sudo[403824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:32 compute-0 sudo[403824]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:32 compute-0 sudo[403849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:01:32 compute-0 sudo[403849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.745 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.767 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.767 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance network_info: |[{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.768 2 DEBUG oslo_concurrency.lockutils [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.768 2 DEBUG nova.network.neutron [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Refreshing network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.771 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start _get_guest_xml network_info=[{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.777 2 WARNING nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.786 2 DEBUG nova.virt.libvirt.host [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.786 2 DEBUG nova.virt.libvirt.host [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.790 2 DEBUG nova.virt.libvirt.host [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.790 2 DEBUG nova.virt.libvirt.host [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.791 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.791 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.792 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.792 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.792 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.793 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.793 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.793 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.793 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.794 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.794 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.794 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.797 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:32 compute-0 ceph-mon[74477]: pgmap v2502: 305 pgs: 305 active+clean; 67 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Oct 02 09:01:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:32 compute-0 nova_compute[260603]: 2025-10-02 09:01:32.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:32 compute-0 sudo[403849]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 09:01:32 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:01:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:01:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:01:33 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:01:33 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:33 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2fb9b33e-3176-4f4f-87e6-487b915c4415 does not exist
Oct 02 09:01:33 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev fe764605-ff52-4471-9f99-aa9508867e49 does not exist
Oct 02 09:01:33 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 482b979a-d50b-4a64-bc21-d53d1df72a78 does not exist
Oct 02 09:01:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:01:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:01:33 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:01:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:33 compute-0 sudo[403924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:33 compute-0 sudo[403924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:33 compute-0 sudo[403924]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:33 compute-0 sudo[403958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:01:33 compute-0 sudo[403958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:33 compute-0 podman[403950]: 2025-10-02 09:01:33.225230164 +0000 UTC m=+0.077302343 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:01:33 compute-0 sudo[403958]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:33 compute-0 podman[403949]: 2025-10-02 09:01:33.271403528 +0000 UTC m=+0.133929592 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 09:01:33 compute-0 sudo[404013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:33 compute-0 sudo[404013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:33 compute-0 sudo[404013]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:01:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569992552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:01:33 compute-0 sudo[404043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:01:33 compute-0 sudo[404043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.373 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.400 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.404 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:33 compute-0 podman[404149]: 2025-10-02 09:01:33.822152694 +0000 UTC m=+0.060807061 container create 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:01:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:01:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85280588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.853 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.857 2 DEBUG nova.virt.libvirt.vif [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-442472618',display_name='tempest-TestNetworkBasicOps-server-442472618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-442472618',id=132,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhdM2B/LrbDEwL4IycEJOfFLzMu6CpL77K1ym75rDBPo/cEgtvAALjq8eovnQtT7xFChCXYYkzy7ApfLKPYhw2LEullF9QdZuX88UmbA5oS4LeC35DxK9kYFS0r2dt6vg==',key_name='tempest-TestNetworkBasicOps-18474127',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-b1cmahn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:01:27Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=2f4391e7-f233-4092-b54f-89a4c7840cd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.857 2 DEBUG nova.network.os_vif_util [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.859 2 DEBUG nova.network.os_vif_util [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.862 2 DEBUG nova.objects.instance [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f4391e7-f233-4092-b54f-89a4c7840cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:01:33 compute-0 systemd[1]: Started libpod-conmon-573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd.scope.
Oct 02 09:01:33 compute-0 podman[404149]: 2025-10-02 09:01:33.793870254 +0000 UTC m=+0.032524661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.890 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <uuid>2f4391e7-f233-4092-b54f-89a4c7840cd8</uuid>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <name>instance-00000084</name>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-442472618</nova:name>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:01:32</nova:creationTime>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <nova:port uuid="e14f3e85-84ff-49b3-8817-d926cb709f80">
Oct 02 09:01:33 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <system>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <entry name="serial">2f4391e7-f233-4092-b54f-89a4c7840cd8</entry>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <entry name="uuid">2f4391e7-f233-4092-b54f-89a4c7840cd8</entry>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </system>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <os>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   </os>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <features>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   </features>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/2f4391e7-f233-4092-b54f-89a4c7840cd8_disk">
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       </source>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config">
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       </source>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:01:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:56:61:55"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <target dev="tape14f3e85-84"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/console.log" append="off"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <video>
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </video>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:01:33 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:01:33 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:01:33 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:01:33 compute-0 nova_compute[260603]: </domain>
Oct 02 09:01:33 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.893 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Preparing to wait for external event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.894 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.894 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.895 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.896 2 DEBUG nova.virt.libvirt.vif [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-442472618',display_name='tempest-TestNetworkBasicOps-server-442472618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-442472618',id=132,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhdM2B/LrbDEwL4IycEJOfFLzMu6CpL77K1ym75rDBPo/cEgtvAALjq8eovnQtT7xFChCXYYkzy7ApfLKPYhw2LEullF9QdZuX88UmbA5oS4LeC35DxK9kYFS0r2dt6vg==',key_name='tempest-TestNetworkBasicOps-18474127',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-b1cmahn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:01:27Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=2f4391e7-f233-4092-b54f-89a4c7840cd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.897 2 DEBUG nova.network.os_vif_util [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.899 2 DEBUG nova.network.os_vif_util [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.900 2 DEBUG os_vif [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape14f3e85-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape14f3e85-84, col_values=(('external_ids', {'iface-id': 'e14f3e85-84ff-49b3-8817-d926cb709f80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:61:55', 'vm-uuid': '2f4391e7-f233-4092-b54f-89a4c7840cd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1569992552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:01:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/85280588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:01:33 compute-0 NetworkManager[45129]: <info>  [1759395693.9548] manager: (tape14f3e85-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:33 compute-0 nova_compute[260603]: 2025-10-02 09:01:33.966 2 INFO os_vif [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84')
Oct 02 09:01:33 compute-0 podman[404149]: 2025-10-02 09:01:33.979601866 +0000 UTC m=+0.218256313 container init 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:01:33 compute-0 podman[404149]: 2025-10-02 09:01:33.988014577 +0000 UTC m=+0.226668924 container start 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:01:33 compute-0 podman[404149]: 2025-10-02 09:01:33.991404583 +0000 UTC m=+0.230058960 container attach 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:01:33 compute-0 strange_allen[404167]: 167 167
Oct 02 09:01:33 compute-0 systemd[1]: libpod-573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd.scope: Deactivated successfully.
Oct 02 09:01:33 compute-0 podman[404149]: 2025-10-02 09:01:33.998672599 +0000 UTC m=+0.237326946 container died 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-03d45cd62560ee12ac27f1bf28c7db46914abca9a0f6c9b9612b1348d820d19f-merged.mount: Deactivated successfully.
Oct 02 09:01:34 compute-0 podman[404149]: 2025-10-02 09:01:34.041962915 +0000 UTC m=+0.280617262 container remove 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.056 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.056 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.056 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:56:61:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.057 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Using config drive
Oct 02 09:01:34 compute-0 systemd[1]: libpod-conmon-573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd.scope: Deactivated successfully.
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.081 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:34 compute-0 podman[404211]: 2025-10-02 09:01:34.249694729 +0000 UTC m=+0.049509589 container create d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:01:34 compute-0 systemd[1]: Started libpod-conmon-d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea.scope.
Oct 02 09:01:34 compute-0 podman[404211]: 2025-10-02 09:01:34.226907491 +0000 UTC m=+0.026722351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:01:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:34 compute-0 podman[404211]: 2025-10-02 09:01:34.373846847 +0000 UTC m=+0.173661678 container init d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:01:34 compute-0 podman[404211]: 2025-10-02 09:01:34.390774464 +0000 UTC m=+0.190589324 container start d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:01:34 compute-0 podman[404211]: 2025-10-02 09:01:34.39449552 +0000 UTC m=+0.194310380 container attach d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.572 2 DEBUG nova.network.neutron [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Updated VIF entry in instance network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.574 2 DEBUG nova.network.neutron [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.606 2 DEBUG oslo_concurrency.lockutils [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.670 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Creating config drive at /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.679 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv_qhdrbe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:34.840 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:34.841 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:34.841 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.850 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv_qhdrbe" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.892 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:01:34 compute-0 nova_compute[260603]: 2025-10-02 09:01:34.900 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:35 compute-0 ceph-mon[74477]: pgmap v2503: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.113 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.116 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Deleting local config drive /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config because it was imported into RBD.
Oct 02 09:01:35 compute-0 kernel: tape14f3e85-84: entered promiscuous mode
Oct 02 09:01:35 compute-0 NetworkManager[45129]: <info>  [1759395695.1909] manager: (tape14f3e85-84): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Oct 02 09:01:35 compute-0 ovn_controller[152344]: 2025-10-02T09:01:35Z|01425|binding|INFO|Claiming lport e14f3e85-84ff-49b3-8817-d926cb709f80 for this chassis.
Oct 02 09:01:35 compute-0 ovn_controller[152344]: 2025-10-02T09:01:35Z|01426|binding|INFO|e14f3e85-84ff-49b3-8817-d926cb709f80: Claiming fa:16:3e:56:61:55 10.100.0.6
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:35 compute-0 ovn_controller[152344]: 2025-10-02T09:01:35Z|01427|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 ovn-installed in OVS
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:35 compute-0 systemd-udevd[404295]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:01:35 compute-0 NetworkManager[45129]: <info>  [1759395695.2459] device (tape14f3e85-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:01:35 compute-0 NetworkManager[45129]: <info>  [1759395695.2477] device (tape14f3e85-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:01:35 compute-0 ovn_controller[152344]: 2025-10-02T09:01:35Z|01428|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 up in Southbound
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.248 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:55 10.100.0.6'], port_security=['fa:16:3e:56:61:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2f4391e7-f233-4092-b54f-89a4c7840cd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=393b5666-d876-4099-9af5-9247fa9f1ed9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e14f3e85-84ff-49b3-8817-d926cb709f80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.249 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e14f3e85-84ff-49b3-8817-d926cb709f80 in datapath 577c4506-1f5a-48bf-b267-75fd69fe5e1c bound to our chassis
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.251 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.268 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac11afe4-08be-4883-b929-5bfc8a64fbf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.269 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap577c4506-11 in ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.271 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap577c4506-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.272 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83cce7a1-fc69-440a-a2cc-6cfe54646e74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.273 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f32c605c-9f04-4033-9e91-ae293b16dc00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.287 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[531e94aa-0155-4f8a-b925-5c740e491406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 systemd-machined[214636]: New machine qemu-166-instance-00000084.
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.304 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[224caecb-ac33-45c4-bd6f-00a296411579]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000084.
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.353 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbf93d1-6854-43e9-b9ce-0376ece9e9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.360 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[88d832af-3767-4de1-950d-79d2696561dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 NetworkManager[45129]: <info>  [1759395695.3629] manager: (tap577c4506-10): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.412 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[70ab1354-915d-42eb-bd56-21e9a7fa33ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.417 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d630763d-ebaf-40ad-b94c-bde9fc8744ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 NetworkManager[45129]: <info>  [1759395695.4527] device (tap577c4506-10): carrier: link connected
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.469 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a4be2566-0117-4249-885f-bdde2f6a190e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.472 2 DEBUG nova.compute.manager [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.473 2 DEBUG oslo_concurrency.lockutils [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.473 2 DEBUG oslo_concurrency.lockutils [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.473 2 DEBUG oslo_concurrency.lockutils [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.473 2 DEBUG nova.compute.manager [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Processing event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.493 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4531fd-5002-4941-aaf6-8cf615bbb67b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap577c4506-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:0e:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657405, 'reachable_time': 26949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404342, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 romantic_galois[404228]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:01:35 compute-0 romantic_galois[404228]: --> relative data size: 1.0
Oct 02 09:01:35 compute-0 romantic_galois[404228]: --> All data devices are unavailable
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.521 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d785a274-168d-4492-a5f3-787e42d5c471]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:e80'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657405, 'tstamp': 657405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404344, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:35 compute-0 systemd[1]: libpod-d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea.scope: Deactivated successfully.
Oct 02 09:01:35 compute-0 systemd[1]: libpod-d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea.scope: Consumed 1.058s CPU time.
Oct 02 09:01:35 compute-0 podman[404211]: 2025-10-02 09:01:35.540485813 +0000 UTC m=+1.340300633 container died d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab831c67-6170-433d-a105-12dee78ce919]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap577c4506-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:0e:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657405, 'reachable_time': 26949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404345, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1-merged.mount: Deactivated successfully.
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d12f2961-0f97-43af-96ec-ba4834f03998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 podman[404211]: 2025-10-02 09:01:35.603344486 +0000 UTC m=+1.403159306 container remove d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:01:35 compute-0 systemd[1]: libpod-conmon-d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea.scope: Deactivated successfully.
Oct 02 09:01:35 compute-0 sudo[404043]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[efd31fdc-e593-4460-9514-f217e7503b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.667 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap577c4506-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.668 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.668 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap577c4506-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:35 compute-0 kernel: tap577c4506-10: entered promiscuous mode
Oct 02 09:01:35 compute-0 NetworkManager[45129]: <info>  [1759395695.6719] manager: (tap577c4506-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.674 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap577c4506-10, col_values=(('external_ids', {'iface-id': '8acd815f-105a-41fb-b7bf-32e4f419ccc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:35 compute-0 ovn_controller[152344]: 2025-10-02T09:01:35Z|01429|binding|INFO|Releasing lport 8acd815f-105a-41fb-b7bf-32e4f419ccc2 from this chassis (sb_readonly=0)
Oct 02 09:01:35 compute-0 nova_compute[260603]: 2025-10-02 09:01:35.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.691 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.692 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[74895d1a-3d13-4b99-ab09-a498ccba9456]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.694 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:01:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.697 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'env', 'PROCESS_TAG=haproxy-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/577c4506-1f5a-48bf-b267-75fd69fe5e1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:01:35 compute-0 sudo[404363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:35 compute-0 sudo[404363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:35 compute-0 sudo[404363]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:35 compute-0 sudo[404427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:01:35 compute-0 sudo[404427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:35 compute-0 sudo[404427]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:01:35 compute-0 sudo[404457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:35 compute-0 sudo[404457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:35 compute-0 sudo[404457]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:35 compute-0 sudo[404483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:01:35 compute-0 sudo[404483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:36 compute-0 podman[404527]: 2025-10-02 09:01:36.094683055 +0000 UTC m=+0.055247818 container create d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:01:36 compute-0 systemd[1]: Started libpod-conmon-d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26.scope.
Oct 02 09:01:36 compute-0 podman[404527]: 2025-10-02 09:01:36.067884372 +0000 UTC m=+0.028449165 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:01:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82b48f74f74c0fe00f86049b4511e73bea7d27c0713f68a9f2381a5dc573159/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:36 compute-0 podman[404527]: 2025-10-02 09:01:36.187080257 +0000 UTC m=+0.147645020 container init d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:01:36 compute-0 podman[404527]: 2025-10-02 09:01:36.194032152 +0000 UTC m=+0.154596905 container start d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 09:01:36 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [NOTICE]   (404569) : New worker (404571) forked
Oct 02 09:01:36 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [NOTICE]   (404569) : Loading success.
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.293 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395696.2929516, 2f4391e7-f233-4092-b54f-89a4c7840cd8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.293 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] VM Started (Lifecycle Event)
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.295 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.298 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.301 2 INFO nova.virt.libvirt.driver [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance spawned successfully.
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.302 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.321 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.329 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.334 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.334 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.335 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.335 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.335 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.336 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.360 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.360 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395696.2936819, 2f4391e7-f233-4092-b54f-89a4c7840cd8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.361 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] VM Paused (Lifecycle Event)
Oct 02 09:01:36 compute-0 podman[404594]: 2025-10-02 09:01:36.362731505 +0000 UTC m=+0.048875940 container create f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.383 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.387 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395696.297624, 2f4391e7-f233-4092-b54f-89a4c7840cd8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.387 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] VM Resumed (Lifecycle Event)
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.394 2 INFO nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Took 8.49 seconds to spawn the instance on the hypervisor.
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.395 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:36 compute-0 systemd[1]: Started libpod-conmon-f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50.scope.
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.403 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.407 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.422 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:01:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:01:36 compute-0 podman[404594]: 2025-10-02 09:01:36.343109626 +0000 UTC m=+0.029254061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:01:36 compute-0 podman[404594]: 2025-10-02 09:01:36.445678433 +0000 UTC m=+0.131822878 container init f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.448 2 INFO nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Took 9.38 seconds to build instance.
Oct 02 09:01:36 compute-0 podman[404594]: 2025-10-02 09:01:36.452499505 +0000 UTC m=+0.138643920 container start f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:01:36 compute-0 podman[404594]: 2025-10-02 09:01:36.45523998 +0000 UTC m=+0.141384425 container attach f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:01:36 compute-0 vibrant_carver[404610]: 167 167
Oct 02 09:01:36 compute-0 systemd[1]: libpod-f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50.scope: Deactivated successfully.
Oct 02 09:01:36 compute-0 podman[404594]: 2025-10-02 09:01:36.458099299 +0000 UTC m=+0.144243714 container died f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:01:36 compute-0 nova_compute[260603]: 2025-10-02 09:01:36.462 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe6bd4eb3b65bc4a2000ce87d585c95cee4497ee8f840b7830995826ec3f9395-merged.mount: Deactivated successfully.
Oct 02 09:01:36 compute-0 podman[404594]: 2025-10-02 09:01:36.499197106 +0000 UTC m=+0.185341521 container remove f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:01:36 compute-0 systemd[1]: libpod-conmon-f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50.scope: Deactivated successfully.
Oct 02 09:01:36 compute-0 podman[404632]: 2025-10-02 09:01:36.71235816 +0000 UTC m=+0.056941941 container create 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:01:36 compute-0 systemd[1]: Started libpod-conmon-99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149.scope.
Oct 02 09:01:36 compute-0 podman[404632]: 2025-10-02 09:01:36.693721681 +0000 UTC m=+0.038305432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:01:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:36 compute-0 podman[404632]: 2025-10-02 09:01:36.825827377 +0000 UTC m=+0.170411138 container init 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:01:36 compute-0 podman[404632]: 2025-10-02 09:01:36.849869704 +0000 UTC m=+0.194453465 container start 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:01:36 compute-0 podman[404632]: 2025-10-02 09:01:36.853415914 +0000 UTC m=+0.197999845 container attach 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:01:37 compute-0 ceph-mon[74477]: pgmap v2504: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:01:37 compute-0 nova_compute[260603]: 2025-10-02 09:01:37.630 2 DEBUG nova.compute.manager [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:37 compute-0 nova_compute[260603]: 2025-10-02 09:01:37.630 2 DEBUG oslo_concurrency.lockutils [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:37 compute-0 nova_compute[260603]: 2025-10-02 09:01:37.631 2 DEBUG oslo_concurrency.lockutils [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:37 compute-0 nova_compute[260603]: 2025-10-02 09:01:37.631 2 DEBUG oslo_concurrency.lockutils [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:37 compute-0 nova_compute[260603]: 2025-10-02 09:01:37.636 2 DEBUG nova.compute.manager [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] No waiting events found dispatching network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:01:37 compute-0 nova_compute[260603]: 2025-10-02 09:01:37.637 2 WARNING nova.compute.manager [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received unexpected event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with vm_state active and task_state None.
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]: {
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:     "0": [
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:         {
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "devices": [
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "/dev/loop3"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             ],
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_name": "ceph_lv0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_size": "21470642176",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "name": "ceph_lv0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "tags": {
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cluster_name": "ceph",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.crush_device_class": "",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.encrypted": "0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osd_id": "0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.type": "block",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.vdo": "0"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             },
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "type": "block",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "vg_name": "ceph_vg0"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:         }
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:     ],
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:     "1": [
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:         {
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "devices": [
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "/dev/loop4"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             ],
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_name": "ceph_lv1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_size": "21470642176",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "name": "ceph_lv1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "tags": {
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cluster_name": "ceph",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.crush_device_class": "",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.encrypted": "0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osd_id": "1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.type": "block",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.vdo": "0"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             },
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "type": "block",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "vg_name": "ceph_vg1"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:         }
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:     ],
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:     "2": [
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:         {
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "devices": [
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "/dev/loop5"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             ],
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_name": "ceph_lv2",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_size": "21470642176",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "name": "ceph_lv2",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "tags": {
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.cluster_name": "ceph",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.crush_device_class": "",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.encrypted": "0",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osd_id": "2",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.type": "block",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:                 "ceph.vdo": "0"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             },
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "type": "block",
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:             "vg_name": "ceph_vg2"
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:         }
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]:     ]
Oct 02 09:01:37 compute-0 adoring_wescoff[404649]: }
Oct 02 09:01:37 compute-0 systemd[1]: libpod-99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149.scope: Deactivated successfully.
Oct 02 09:01:37 compute-0 podman[404632]: 2025-10-02 09:01:37.749310195 +0000 UTC m=+1.093893986 container died 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 09:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e-merged.mount: Deactivated successfully.
Oct 02 09:01:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Oct 02 09:01:37 compute-0 podman[404632]: 2025-10-02 09:01:37.85439286 +0000 UTC m=+1.198976621 container remove 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:01:37 compute-0 systemd[1]: libpod-conmon-99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149.scope: Deactivated successfully.
Oct 02 09:01:37 compute-0 sudo[404483]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:38 compute-0 sudo[404672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:38 compute-0 sudo[404672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:38 compute-0 sudo[404672]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:38 compute-0 sudo[404697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:01:38 compute-0 sudo[404697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:38 compute-0 sudo[404697]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:38 compute-0 podman[404722]: 2025-10-02 09:01:38.224674617 +0000 UTC m=+0.080521753 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 09:01:38 compute-0 podman[404721]: 2025-10-02 09:01:38.228710982 +0000 UTC m=+0.085872729 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:01:38 compute-0 sudo[404734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:38 compute-0 sudo[404734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:38 compute-0 sudo[404734]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:38 compute-0 sudo[404784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:01:38 compute-0 sudo[404784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:38 compute-0 podman[404850]: 2025-10-02 09:01:38.836601793 +0000 UTC m=+0.110604368 container create c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 09:01:38 compute-0 podman[404850]: 2025-10-02 09:01:38.770371885 +0000 UTC m=+0.044374540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:01:38 compute-0 systemd[1]: Started libpod-conmon-c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41.scope.
Oct 02 09:01:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:01:38 compute-0 nova_compute[260603]: 2025-10-02 09:01:38.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:38 compute-0 podman[404850]: 2025-10-02 09:01:38.957961525 +0000 UTC m=+0.231964120 container init c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 09:01:38 compute-0 podman[404850]: 2025-10-02 09:01:38.970841224 +0000 UTC m=+0.244843909 container start c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:01:38 compute-0 objective_tu[404866]: 167 167
Oct 02 09:01:38 compute-0 systemd[1]: libpod-c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41.scope: Deactivated successfully.
Oct 02 09:01:38 compute-0 podman[404850]: 2025-10-02 09:01:38.98838748 +0000 UTC m=+0.262390155 container attach c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:01:38 compute-0 podman[404850]: 2025-10-02 09:01:38.989236516 +0000 UTC m=+0.263239131 container died c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:01:39 compute-0 ceph-mon[74477]: pgmap v2505: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Oct 02 09:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e86c1f62c105ae2a20ceddb2c7c23a43056760b1298122941114b3e37c6a7a55-merged.mount: Deactivated successfully.
Oct 02 09:01:39 compute-0 podman[404850]: 2025-10-02 09:01:39.091874106 +0000 UTC m=+0.365876701 container remove c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:01:39 compute-0 systemd[1]: libpod-conmon-c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41.scope: Deactivated successfully.
Oct 02 09:01:39 compute-0 podman[404894]: 2025-10-02 09:01:39.384771978 +0000 UTC m=+0.072670899 container create 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 09:01:39 compute-0 systemd[1]: Started libpod-conmon-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope.
Oct 02 09:01:39 compute-0 podman[404894]: 2025-10-02 09:01:39.357453409 +0000 UTC m=+0.045352360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:01:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:01:39 compute-0 podman[404894]: 2025-10-02 09:01:39.498020527 +0000 UTC m=+0.185919528 container init 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 02 09:01:39 compute-0 podman[404894]: 2025-10-02 09:01:39.512789236 +0000 UTC m=+0.200688157 container start 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:01:39 compute-0 podman[404894]: 2025-10-02 09:01:39.516634925 +0000 UTC m=+0.204533876 container attach 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:01:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.208 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.212 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.213 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.215 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.216 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.217 2 INFO nova.compute.manager [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Terminating instance
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.219 2 DEBUG nova.compute.manager [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:01:40 compute-0 kernel: tape14f3e85-84 (unregistering): left promiscuous mode
Oct 02 09:01:40 compute-0 NetworkManager[45129]: <info>  [1759395700.2668] device (tape14f3e85-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:01:40 compute-0 ovn_controller[152344]: 2025-10-02T09:01:40Z|01430|binding|INFO|Releasing lport e14f3e85-84ff-49b3-8817-d926cb709f80 from this chassis (sb_readonly=0)
Oct 02 09:01:40 compute-0 ovn_controller[152344]: 2025-10-02T09:01:40Z|01431|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 down in Southbound
Oct 02 09:01:40 compute-0 ovn_controller[152344]: 2025-10-02T09:01:40Z|01432|binding|INFO|Removing iface tape14f3e85-84 ovn-installed in OVS
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.286 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:55 10.100.0.6'], port_security=['fa:16:3e:56:61:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2f4391e7-f233-4092-b54f-89a4c7840cd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=393b5666-d876-4099-9af5-9247fa9f1ed9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e14f3e85-84ff-49b3-8817-d926cb709f80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.288 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e14f3e85-84ff-49b3-8817-d926cb709f80 in datapath 577c4506-1f5a-48bf-b267-75fd69fe5e1c unbound from our chassis
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.289 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 577c4506-1f5a-48bf-b267-75fd69fe5e1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.292 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6645510c-85e3-4fb3-a25d-a642955965d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.292 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c namespace which is not needed anymore
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 02 09:01:40 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000084.scope: Consumed 4.893s CPU time.
Oct 02 09:01:40 compute-0 systemd-machined[214636]: Machine qemu-166-instance-00000084 terminated.
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.464 2 INFO nova.virt.libvirt.driver [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance destroyed successfully.
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.466 2 DEBUG nova.objects.instance [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 2f4391e7-f233-4092-b54f-89a4c7840cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:01:40 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [NOTICE]   (404569) : haproxy version is 2.8.14-c23fe91
Oct 02 09:01:40 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [NOTICE]   (404569) : path to executable is /usr/sbin/haproxy
Oct 02 09:01:40 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [WARNING]  (404569) : Exiting Master process...
Oct 02 09:01:40 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [WARNING]  (404569) : Exiting Master process...
Oct 02 09:01:40 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [ALERT]    (404569) : Current worker (404571) exited with code 143 (Terminated)
Oct 02 09:01:40 compute-0 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [WARNING]  (404569) : All workers exited. Exiting... (0)
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.484 2 DEBUG nova.virt.libvirt.vif [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-442472618',display_name='tempest-TestNetworkBasicOps-server-442472618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-442472618',id=132,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhdM2B/LrbDEwL4IycEJOfFLzMu6CpL77K1ym75rDBPo/cEgtvAALjq8eovnQtT7xFChCXYYkzy7ApfLKPYhw2LEullF9QdZuX88UmbA5oS4LeC35DxK9kYFS0r2dt6vg==',key_name='tempest-TestNetworkBasicOps-18474127',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:01:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-b1cmahn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:01:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=2f4391e7-f233-4092-b54f-89a4c7840cd8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.484 2 DEBUG nova.network.os_vif_util [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.485 2 DEBUG nova.network.os_vif_util [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:01:40 compute-0 systemd[1]: libpod-d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26.scope: Deactivated successfully.
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.486 2 DEBUG os_vif [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14f3e85-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 podman[404960]: 2025-10-02 09:01:40.493887644 +0000 UTC m=+0.067448657 container died d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.496 2 INFO os_vif [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84')
Oct 02 09:01:40 compute-0 admiring_cannon[404910]: {
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "osd_id": 2,
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "type": "bluestore"
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:     },
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "osd_id": 1,
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "type": "bluestore"
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:     },
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "osd_id": 0,
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:         "type": "bluestore"
Oct 02 09:01:40 compute-0 admiring_cannon[404910]:     }
Oct 02 09:01:40 compute-0 admiring_cannon[404910]: }
Oct 02 09:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26-userdata-shm.mount: Deactivated successfully.
Oct 02 09:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d82b48f74f74c0fe00f86049b4511e73bea7d27c0713f68a9f2381a5dc573159-merged.mount: Deactivated successfully.
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 systemd[1]: libpod-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope: Deactivated successfully.
Oct 02 09:01:40 compute-0 systemd[1]: libpod-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope: Consumed 1.028s CPU time.
Oct 02 09:01:40 compute-0 conmon[404910]: conmon 040aebf735a84c5f97c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope/container/memory.events
Oct 02 09:01:40 compute-0 podman[404894]: 2025-10-02 09:01:40.550462813 +0000 UTC m=+1.238361734 container died 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 02 09:01:40 compute-0 podman[404960]: 2025-10-02 09:01:40.581003691 +0000 UTC m=+0.154564694 container cleanup d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 09:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a-merged.mount: Deactivated successfully.
Oct 02 09:01:40 compute-0 systemd[1]: libpod-conmon-d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26.scope: Deactivated successfully.
Oct 02 09:01:40 compute-0 podman[404894]: 2025-10-02 09:01:40.617592279 +0000 UTC m=+1.305491200 container remove 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:01:40 compute-0 systemd[1]: libpod-conmon-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope: Deactivated successfully.
Oct 02 09:01:40 compute-0 sudo[404784]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:01:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:40 compute-0 podman[405040]: 2025-10-02 09:01:40.661471173 +0000 UTC m=+0.052694539 container remove d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 09:01:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:01:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d01b29a7-e352-43b8-8a06-edd278ac8d6c does not exist
Oct 02 09:01:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 879cecf4-5919-447a-aa60-39b3ce169e04 does not exist
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.671 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e08d5c-9309-4e67-9a62-080881f9fe2b]: (4, ('Thu Oct  2 09:01:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c (d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26)\nd028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26\nThu Oct  2 09:01:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c (d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26)\nd028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.675 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e25632e-68d4-4def-94f2-1af5739f3d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.676 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap577c4506-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 kernel: tap577c4506-10: left promiscuous mode
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.698 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05d6bc7c-feee-43a6-b1d9-56f6b806ff41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba08c17b-657f-4c50-a5e8-c384d6e70e14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.722 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e245973-fde8-46df-aeba-03bfb97042cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.739 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46390a86-e9d3-4003-a739-be3ef04bf9d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657395, 'reachable_time': 24370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405078, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d577c4506\x2d1f5a\x2d48bf\x2db267\x2d75fd69fe5e1c.mount: Deactivated successfully.
Oct 02 09:01:40 compute-0 sudo[405053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.745 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:01:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.745 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[98c471e1-764e-417a-bbe6-1d22abea8b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:01:40 compute-0 sudo[405053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:40 compute-0 sudo[405053]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:40 compute-0 sudo[405082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:01:40 compute-0 sudo[405082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:01:40 compute-0 sudo[405082]: pam_unix(sudo:session): session closed for user root
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.870 2 DEBUG nova.compute.manager [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.871 2 DEBUG oslo_concurrency.lockutils [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.871 2 DEBUG oslo_concurrency.lockutils [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.872 2 DEBUG oslo_concurrency.lockutils [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.872 2 DEBUG nova.compute.manager [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] No waiting events found dispatching network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.872 2 DEBUG nova.compute.manager [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.917 2 INFO nova.virt.libvirt.driver [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Deleting instance files /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8_del
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.918 2 INFO nova.virt.libvirt.driver [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Deletion of /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8_del complete
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.966 2 INFO nova.compute.manager [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.967 2 DEBUG oslo.service.loopingcall [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.968 2 DEBUG nova.compute.manager [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:01:40 compute-0 nova_compute[260603]: 2025-10-02 09:01:40.968 2 DEBUG nova.network.neutron [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:01:41 compute-0 ceph-mon[74477]: pgmap v2506: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 09:01:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:01:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 773 KiB/s wr, 82 op/s
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.224 2 DEBUG nova.network.neutron [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.240 2 INFO nova.compute.manager [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Took 1.27 seconds to deallocate network for instance.
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.282 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.283 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.351 2 DEBUG oslo_concurrency.processutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:01:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3000227654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.832 2 DEBUG oslo_concurrency.processutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.839 2 DEBUG nova.compute.provider_tree [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.857 2 DEBUG nova.scheduler.client.report [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.885 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.920 2 INFO nova.scheduler.client.report [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 2f4391e7-f233-4092-b54f-89a4c7840cd8
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.964 2 DEBUG nova.compute.manager [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.965 2 DEBUG oslo_concurrency.lockutils [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.965 2 DEBUG oslo_concurrency.lockutils [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.965 2 DEBUG oslo_concurrency.lockutils [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.966 2 DEBUG nova.compute.manager [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] No waiting events found dispatching network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.966 2 WARNING nova.compute.manager [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received unexpected event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with vm_state deleted and task_state None.
Oct 02 09:01:42 compute-0 nova_compute[260603]: 2025-10-02 09:01:42.981 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:43 compute-0 ceph-mon[74477]: pgmap v2507: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 773 KiB/s wr, 82 op/s
Oct 02 09:01:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3000227654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 775 KiB/s wr, 122 op/s
Oct 02 09:01:45 compute-0 nova_compute[260603]: 2025-10-02 09:01:45.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:45.032 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:01:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:45.034 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:01:45 compute-0 ceph-mon[74477]: pgmap v2508: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 775 KiB/s wr, 122 op/s
Oct 02 09:01:45 compute-0 nova_compute[260603]: 2025-10-02 09:01:45.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:45 compute-0 nova_compute[260603]: 2025-10-02 09:01:45.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Oct 02 09:01:47 compute-0 ceph-mon[74477]: pgmap v2509: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Oct 02 09:01:47 compute-0 nova_compute[260603]: 2025-10-02 09:01:47.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:47 compute-0 nova_compute[260603]: 2025-10-02 09:01:47.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 02 09:01:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:48 compute-0 nova_compute[260603]: 2025-10-02 09:01:48.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:49 compute-0 ceph-mon[74477]: pgmap v2510: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 02 09:01:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 KiB/s wr, 78 op/s
Oct 02 09:01:50 compute-0 nova_compute[260603]: 2025-10-02 09:01:50.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:50 compute-0 nova_compute[260603]: 2025-10-02 09:01:50.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:50 compute-0 nova_compute[260603]: 2025-10-02 09:01:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:01:50 compute-0 nova_compute[260603]: 2025-10-02 09:01:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:01:50 compute-0 nova_compute[260603]: 2025-10-02 09:01:50.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:01:50 compute-0 nova_compute[260603]: 2025-10-02 09:01:50.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:51 compute-0 ceph-mon[74477]: pgmap v2511: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 KiB/s wr, 78 op/s
Oct 02 09:01:51 compute-0 nova_compute[260603]: 2025-10-02 09:01:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct 02 09:01:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:01:53.037 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:01:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:53 compute-0 ceph-mon[74477]: pgmap v2512: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct 02 09:01:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct 02 09:01:54 compute-0 nova_compute[260603]: 2025-10-02 09:01:54.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:54 compute-0 nova_compute[260603]: 2025-10-02 09:01:54.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:54 compute-0 nova_compute[260603]: 2025-10-02 09:01:54.627 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:54 compute-0 nova_compute[260603]: 2025-10-02 09:01:54.628 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:54 compute-0 nova_compute[260603]: 2025-10-02 09:01:54.628 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:54 compute-0 nova_compute[260603]: 2025-10-02 09:01:54.628 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:01:54 compute-0 nova_compute[260603]: 2025-10-02 09:01:54.629 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:01:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/75075170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.091 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:55 compute-0 ceph-mon[74477]: pgmap v2513: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct 02 09:01:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/75075170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.254 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.255 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3687MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.255 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.255 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.350 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.350 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.366 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.462 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395700.4615595, 2f4391e7-f233-4092-b54f-89a4c7840cd8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.463 2 INFO nova.compute.manager [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] VM Stopped (Lifecycle Event)
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.492 2 DEBUG nova.compute.manager [None req-dd906b1b-348e-4e40-b27d-03906e0ca8a9 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:01:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:01:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/591262340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.783 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.790 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.803 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.833 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:01:55 compute-0 nova_compute[260603]: 2025-10-02 09:01:55.833 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:01:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:01:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/591262340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:01:56 compute-0 nova_compute[260603]: 2025-10-02 09:01:56.834 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:57 compute-0 ceph-mon[74477]: pgmap v2514: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:01:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:01:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:01:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:01:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:01:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:01:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:01:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:01:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:01:58 compute-0 nova_compute[260603]: 2025-10-02 09:01:58.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:01:59 compute-0 ceph-mon[74477]: pgmap v2515: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:01:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:00 compute-0 nova_compute[260603]: 2025-10-02 09:02:00.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:00 compute-0 nova_compute[260603]: 2025-10-02 09:02:00.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:01 compute-0 ceph-mon[74477]: pgmap v2516: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:03 compute-0 ceph-mon[74477]: pgmap v2517: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:04 compute-0 podman[405176]: 2025-10-02 09:02:04.059260075 +0000 UTC m=+0.108141782 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:02:04 compute-0 podman[405175]: 2025-10-02 09:02:04.091179097 +0000 UTC m=+0.141888870 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 09:02:04 compute-0 nova_compute[260603]: 2025-10-02 09:02:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:02:05 compute-0 ceph-mon[74477]: pgmap v2518: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:05 compute-0 nova_compute[260603]: 2025-10-02 09:02:05.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:05 compute-0 nova_compute[260603]: 2025-10-02 09:02:05.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:06 compute-0 nova_compute[260603]: 2025-10-02 09:02:06.997 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:06 compute-0 nova_compute[260603]: 2025-10-02 09:02:06.998 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.018 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.100 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.101 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.111 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.111 2 INFO nova.compute.claims [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:02:07 compute-0 ceph-mon[74477]: pgmap v2519: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.215 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:02:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1225500844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.699 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.709 2 DEBUG nova.compute.provider_tree [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.726 2 DEBUG nova.scheduler.client.report [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.758 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.759 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.806 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.807 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.828 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.847 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:02:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.939 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.941 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.941 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Creating image(s)
Oct 02 09:02:07 compute-0 nova_compute[260603]: 2025-10-02 09:02:07.979 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.016 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.048 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.052 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.123 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.124 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.125 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.125 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.144 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.147 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3e3c7092-c580-477b-8596-4fd3b719e700_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1225500844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.420 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3e3c7092-c580-477b-8596-4fd3b719e700_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.492 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.595 2 DEBUG nova.objects.instance [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e3c7092-c580-477b-8596-4fd3b719e700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.613 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.614 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Ensure instance console log exists: /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.614 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.615 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.615 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:08 compute-0 nova_compute[260603]: 2025-10-02 09:02:08.761 2 DEBUG nova.policy [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:02:09 compute-0 podman[405410]: 2025-10-02 09:02:09.03533267 +0000 UTC m=+0.093243779 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 09:02:09 compute-0 podman[405409]: 2025-10-02 09:02:09.03565219 +0000 UTC m=+0.103169746 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 09:02:09 compute-0 ceph-mon[74477]: pgmap v2520: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 43 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 92 KiB/s wr, 3 op/s
Oct 02 09:02:10 compute-0 nova_compute[260603]: 2025-10-02 09:02:10.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:10 compute-0 nova_compute[260603]: 2025-10-02 09:02:10.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:10 compute-0 nova_compute[260603]: 2025-10-02 09:02:10.800 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Successfully created port: 43ff7902-a749-4e8e-9c64-efd972acf1a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:02:11 compute-0 ceph-mon[74477]: pgmap v2521: 305 pgs: 305 active+clean; 43 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 92 KiB/s wr, 3 op/s
Oct 02 09:02:11 compute-0 nova_compute[260603]: 2025-10-02 09:02:11.716 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Successfully updated port: 43ff7902-a749-4e8e-9c64-efd972acf1a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:02:11 compute-0 nova_compute[260603]: 2025-10-02 09:02:11.730 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:02:11 compute-0 nova_compute[260603]: 2025-10-02 09:02:11.730 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:02:11 compute-0 nova_compute[260603]: 2025-10-02 09:02:11.730 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:02:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 43 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 92 KiB/s wr, 3 op/s
Oct 02 09:02:11 compute-0 nova_compute[260603]: 2025-10-02 09:02:11.968 2 DEBUG nova.compute.manager [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:02:11 compute-0 nova_compute[260603]: 2025-10-02 09:02:11.969 2 DEBUG nova.compute.manager [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing instance network info cache due to event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:02:11 compute-0 nova_compute[260603]: 2025-10-02 09:02:11.969 2 DEBUG oslo_concurrency.lockutils [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.042 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.909 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.927 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.927 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance network_info: |[{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.928 2 DEBUG oslo_concurrency.lockutils [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.928 2 DEBUG nova.network.neutron [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.931 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start _get_guest_xml network_info=[{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.935 2 WARNING nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.939 2 DEBUG nova.virt.libvirt.host [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.940 2 DEBUG nova.virt.libvirt.host [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.945 2 DEBUG nova.virt.libvirt.host [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.945 2 DEBUG nova.virt.libvirt.host [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.946 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.946 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.946 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.946 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.948 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.948 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.948 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:02:12 compute-0 nova_compute[260603]: 2025-10-02 09:02:12.951 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:13 compute-0 ceph-mon[74477]: pgmap v2522: 305 pgs: 305 active+clean; 43 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 92 KiB/s wr, 3 op/s
Oct 02 09:02:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:02:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203845190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:02:13 compute-0 nova_compute[260603]: 2025-10-02 09:02:13.426 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:13 compute-0 nova_compute[260603]: 2025-10-02 09:02:13.445 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:02:13 compute-0 nova_compute[260603]: 2025-10-02 09:02:13.451 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:02:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:02:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/714042361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:02:13 compute-0 nova_compute[260603]: 2025-10-02 09:02:13.983 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:13 compute-0 nova_compute[260603]: 2025-10-02 09:02:13.985 2 DEBUG nova.virt.libvirt.vif [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141391792',display_name='tempest-TestNetworkBasicOps-server-2141391792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141391792',id=133,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDp11F2lbrxxMl62/j7fHXkV6+flkvleRV3SlQtVIKr0IAxEI8xda+NGodZ4HaJd7EmZSJjX6G2CVi6Mz+byyGzwwm3n9HV8PIJ995e7gObIrbEd/QY1wwigSXqVllt8g==',key_name='tempest-TestNetworkBasicOps-1602850552',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-2y5fgdwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:02:07Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=3e3c7092-c580-477b-8596-4fd3b719e700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:02:13 compute-0 nova_compute[260603]: 2025-10-02 09:02:13.985 2 DEBUG nova.network.os_vif_util [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:02:13 compute-0 nova_compute[260603]: 2025-10-02 09:02:13.986 2 DEBUG nova.network.os_vif_util [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:02:13 compute-0 nova_compute[260603]: 2025-10-02 09:02:13.987 2 DEBUG nova.objects.instance [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e3c7092-c580-477b-8596-4fd3b719e700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.003 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <uuid>3e3c7092-c580-477b-8596-4fd3b719e700</uuid>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <name>instance-00000085</name>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-2141391792</nova:name>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:02:12</nova:creationTime>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <nova:port uuid="43ff7902-a749-4e8e-9c64-efd972acf1a7">
Oct 02 09:02:14 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <system>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <entry name="serial">3e3c7092-c580-477b-8596-4fd3b719e700</entry>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <entry name="uuid">3e3c7092-c580-477b-8596-4fd3b719e700</entry>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </system>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <os>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   </os>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <features>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   </features>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/3e3c7092-c580-477b-8596-4fd3b719e700_disk">
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       </source>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/3e3c7092-c580-477b-8596-4fd3b719e700_disk.config">
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       </source>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:02:14 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:ad:49:2a"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <target dev="tap43ff7902-a7"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/console.log" append="off"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <video>
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </video>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:02:14 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:02:14 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:02:14 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:02:14 compute-0 nova_compute[260603]: </domain>
Oct 02 09:02:14 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.004 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Preparing to wait for external event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.004 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.004 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.005 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.005 2 DEBUG nova.virt.libvirt.vif [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141391792',display_name='tempest-TestNetworkBasicOps-server-2141391792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141391792',id=133,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDp11F2lbrxxMl62/j7fHXkV6+flkvleRV3SlQtVIKr0IAxEI8xda+NGodZ4HaJd7EmZSJjX6G2CVi6Mz+byyGzwwm3n9HV8PIJ995e7gObIrbEd/QY1wwigSXqVllt8g==',key_name='tempest-TestNetworkBasicOps-1602850552',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-2y5fgdwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:02:07Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=3e3c7092-c580-477b-8596-4fd3b719e700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.005 2 DEBUG nova.network.os_vif_util [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.006 2 DEBUG nova.network.os_vif_util [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.006 2 DEBUG os_vif [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.007 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ff7902-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43ff7902-a7, col_values=(('external_ids', {'iface-id': '43ff7902-a749-4e8e-9c64-efd972acf1a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:49:2a', 'vm-uuid': '3e3c7092-c580-477b-8596-4fd3b719e700'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:14 compute-0 NetworkManager[45129]: <info>  [1759395734.0157] manager: (tap43ff7902-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.022 2 INFO os_vif [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7')
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.072 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.072 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.072 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:ad:49:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.073 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Using config drive
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.094 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:02:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1203845190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:02:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/714042361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.563 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Creating config drive at /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.573 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkfusyop0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.638 2 DEBUG nova.network.neutron [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updated VIF entry in instance network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.640 2 DEBUG nova.network.neutron [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.660 2 DEBUG oslo_concurrency.lockutils [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.750 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkfusyop0" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.792 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:02:14 compute-0 nova_compute[260603]: 2025-10-02 09:02:14.798 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.001 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.003 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Deleting local config drive /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config because it was imported into RBD.
Oct 02 09:02:15 compute-0 kernel: tap43ff7902-a7: entered promiscuous mode
Oct 02 09:02:15 compute-0 NetworkManager[45129]: <info>  [1759395735.1037] manager: (tap43ff7902-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/574)
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 ovn_controller[152344]: 2025-10-02T09:02:15Z|01433|binding|INFO|Claiming lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 for this chassis.
Oct 02 09:02:15 compute-0 ovn_controller[152344]: 2025-10-02T09:02:15Z|01434|binding|INFO|43ff7902-a749-4e8e-9c64-efd972acf1a7: Claiming fa:16:3e:ad:49:2a 10.100.0.6
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.125 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:49:2a 10.100.0.6'], port_security=['fa:16:3e:ad:49:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e3c7092-c580-477b-8596-4fd3b719e700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87157236-5092-4eb4-a9f0-535aee31f502', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96e8905a-c47c-4bff-936a-cf4b9c9d02f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dca513d-9223-4241-809a-b7da8c65d0fb, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=43ff7902-a749-4e8e-9c64-efd972acf1a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.126 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 43ff7902-a749-4e8e-9c64-efd972acf1a7 in datapath 87157236-5092-4eb4-a9f0-535aee31f502 bound to our chassis
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.127 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87157236-5092-4eb4-a9f0-535aee31f502
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.143 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[31a3f820-a873-4118-b5b5-17013bb1fc76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.144 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87157236-51 in ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.148 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87157236-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.148 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3f1dc8-0729-43ab-ae7b-a2f155391d8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.149 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[64ff5bad-b3b0-4efe-8e47-fb47acfb3664]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.165 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1ca1b4-3d20-44b6-8fc8-8e419b6fdc48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 systemd-machined[214636]: New machine qemu-167-instance-00000085.
Oct 02 09:02:15 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000085.
Oct 02 09:02:15 compute-0 systemd-udevd[405587]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ff7801-2a7e-414b-a821-ed2f664a0abb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 ovn_controller[152344]: 2025-10-02T09:02:15Z|01435|binding|INFO|Setting lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 ovn-installed in OVS
Oct 02 09:02:15 compute-0 ovn_controller[152344]: 2025-10-02T09:02:15Z|01436|binding|INFO|Setting lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 up in Southbound
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 NetworkManager[45129]: <info>  [1759395735.2318] device (tap43ff7902-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:02:15 compute-0 NetworkManager[45129]: <info>  [1759395735.2355] device (tap43ff7902-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.254 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5dffd124-ec9f-42b5-8c52-74e559babd78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ceph-mon[74477]: pgmap v2523: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:02:15 compute-0 NetworkManager[45129]: <info>  [1759395735.2640] manager: (tap87157236-50): new Veth device (/org/freedesktop/NetworkManager/Devices/575)
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.263 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6473ce0-5a02-47d0-8767-5322f3c61297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[337c7674-2b0d-4af1-b4b7-a680e38cfd79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.300 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d05542e-48f6-474a-9d1d-15091760c7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 NetworkManager[45129]: <info>  [1759395735.3191] device (tap87157236-50): carrier: link connected
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.324 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[aafbb723-f388-43a9-a136-c54c1d32db23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.339 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93b667a1-c4dc-473b-9efa-c18561a47bda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87157236-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:3c:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661392, 'reachable_time': 26033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405617, 'error': None, 'target': 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.353 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4cb069-fe4f-44e2-a045-2e781445e653]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:3c41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661392, 'tstamp': 661392}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405618, 'error': None, 'target': 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.368 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a47492-984a-45cc-b7ff-b787d1d7b049]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87157236-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:3c:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661392, 'reachable_time': 26033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405619, 'error': None, 'target': 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.393 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eff6b800-e905-4ea9-b30e-6aa4aac41359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.447 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc4add9-d8c7-43b9-979c-28e7d4ca9f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.449 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87157236-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.450 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.451 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87157236-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 NetworkManager[45129]: <info>  [1759395735.4546] manager: (tap87157236-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Oct 02 09:02:15 compute-0 kernel: tap87157236-50: entered promiscuous mode
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.456 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87157236-50, col_values=(('external_ids', {'iface-id': '9693941d-7c4e-431b-8552-b9f105de9d56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:15 compute-0 ovn_controller[152344]: 2025-10-02T09:02:15Z|01437|binding|INFO|Releasing lport 9693941d-7c4e-431b-8552-b9f105de9d56 from this chassis (sb_readonly=0)
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.465 2 DEBUG nova.compute.manager [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.466 2 DEBUG oslo_concurrency.lockutils [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.467 2 DEBUG oslo_concurrency.lockutils [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.467 2 DEBUG oslo_concurrency.lockutils [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.467 2 DEBUG nova.compute.manager [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Processing event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.471 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87157236-5092-4eb4-a9f0-535aee31f502.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87157236-5092-4eb4-a9f0-535aee31f502.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.472 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[547a39e0-197a-440a-8084-2a91df011ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.473 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-87157236-5092-4eb4-a9f0-535aee31f502
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/87157236-5092-4eb4-a9f0-535aee31f502.pid.haproxy
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 87157236-5092-4eb4-a9f0-535aee31f502
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:02:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.473 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'env', 'PROCESS_TAG=haproxy-87157236-5092-4eb4-a9f0-535aee31f502', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87157236-5092-4eb4-a9f0-535aee31f502.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:02:15 compute-0 nova_compute[260603]: 2025-10-02 09:02:15.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:15 compute-0 podman[405692]: 2025-10-02 09:02:15.838431763 +0000 UTC m=+0.063942458 container create ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:02:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:02:15 compute-0 systemd[1]: Started libpod-conmon-ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e.scope.
Oct 02 09:02:15 compute-0 podman[405692]: 2025-10-02 09:02:15.79974458 +0000 UTC m=+0.025255295 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:02:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e266c57055b81fe307404eb21b2013f29455cb3246668f7032d1fc218411294e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:15 compute-0 podman[405692]: 2025-10-02 09:02:15.937271954 +0000 UTC m=+0.162782710 container init ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 09:02:15 compute-0 podman[405692]: 2025-10-02 09:02:15.942073464 +0000 UTC m=+0.167584179 container start ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:02:15 compute-0 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [NOTICE]   (405712) : New worker (405714) forked
Oct 02 09:02:15 compute-0 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [NOTICE]   (405712) : Loading success.
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.205 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395736.204791, 3e3c7092-c580-477b-8596-4fd3b719e700 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.206 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] VM Started (Lifecycle Event)
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.208 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.213 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.216 2 INFO nova.virt.libvirt.driver [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance spawned successfully.
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.217 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.261 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.267 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.268 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.269 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.269 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.270 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.270 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.276 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.316 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.317 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395736.2055516, 3e3c7092-c580-477b-8596-4fd3b719e700 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.317 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] VM Paused (Lifecycle Event)
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.347 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.350 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395736.210921, 3e3c7092-c580-477b-8596-4fd3b719e700 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.351 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] VM Resumed (Lifecycle Event)
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.355 2 INFO nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Took 8.42 seconds to spawn the instance on the hypervisor.
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.356 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.366 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.369 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.397 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.417 2 INFO nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Took 9.35 seconds to build instance.
Oct 02 09:02:16 compute-0 nova_compute[260603]: 2025-10-02 09:02:16.439 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:17 compute-0 ceph-mon[74477]: pgmap v2524: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:02:17 compute-0 nova_compute[260603]: 2025-10-02 09:02:17.591 2 DEBUG nova.compute.manager [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:02:17 compute-0 nova_compute[260603]: 2025-10-02 09:02:17.592 2 DEBUG oslo_concurrency.lockutils [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:17 compute-0 nova_compute[260603]: 2025-10-02 09:02:17.593 2 DEBUG oslo_concurrency.lockutils [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:17 compute-0 nova_compute[260603]: 2025-10-02 09:02:17.593 2 DEBUG oslo_concurrency.lockutils [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:17 compute-0 nova_compute[260603]: 2025-10-02 09:02:17.593 2 DEBUG nova.compute.manager [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] No waiting events found dispatching network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:02:17 compute-0 nova_compute[260603]: 2025-10-02 09:02:17.594 2 WARNING nova.compute.manager [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received unexpected event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 for instance with vm_state active and task_state None.
Oct 02 09:02:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 09:02:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:19 compute-0 nova_compute[260603]: 2025-10-02 09:02:19.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:19 compute-0 ceph-mon[74477]: pgmap v2525: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 09:02:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:02:19 compute-0 ovn_controller[152344]: 2025-10-02T09:02:19Z|01438|binding|INFO|Releasing lport 9693941d-7c4e-431b-8552-b9f105de9d56 from this chassis (sb_readonly=0)
Oct 02 09:02:19 compute-0 nova_compute[260603]: 2025-10-02 09:02:19.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:19 compute-0 NetworkManager[45129]: <info>  [1759395739.9991] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/577)
Oct 02 09:02:19 compute-0 NetworkManager[45129]: <info>  [1759395739.9999] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/578)
Oct 02 09:02:20 compute-0 ovn_controller[152344]: 2025-10-02T09:02:20Z|01439|binding|INFO|Releasing lport 9693941d-7c4e-431b-8552-b9f105de9d56 from this chassis (sb_readonly=0)
Oct 02 09:02:20 compute-0 nova_compute[260603]: 2025-10-02 09:02:20.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:20 compute-0 nova_compute[260603]: 2025-10-02 09:02:20.201 2 DEBUG nova.compute.manager [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:02:20 compute-0 nova_compute[260603]: 2025-10-02 09:02:20.201 2 DEBUG nova.compute.manager [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing instance network info cache due to event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:02:20 compute-0 nova_compute[260603]: 2025-10-02 09:02:20.202 2 DEBUG oslo_concurrency.lockutils [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:02:20 compute-0 nova_compute[260603]: 2025-10-02 09:02:20.202 2 DEBUG oslo_concurrency.lockutils [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:02:20 compute-0 nova_compute[260603]: 2025-10-02 09:02:20.202 2 DEBUG nova.network.neutron [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:02:20 compute-0 nova_compute[260603]: 2025-10-02 09:02:20.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:21 compute-0 ceph-mon[74477]: pgmap v2526: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:02:21 compute-0 nova_compute[260603]: 2025-10-02 09:02:21.469 2 DEBUG nova.network.neutron [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updated VIF entry in instance network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:02:21 compute-0 nova_compute[260603]: 2025-10-02 09:02:21.469 2 DEBUG nova.network.neutron [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:02:21 compute-0 nova_compute[260603]: 2025-10-02 09:02:21.493 2 DEBUG oslo_concurrency.lockutils [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:02:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 02 09:02:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:02:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/130922234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:02:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:02:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/130922234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:02:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/130922234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:02:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/130922234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:02:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:23 compute-0 ceph-mon[74477]: pgmap v2527: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 02 09:02:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 02 09:02:24 compute-0 nova_compute[260603]: 2025-10-02 09:02:24.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:25 compute-0 ceph-mon[74477]: pgmap v2528: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 02 09:02:25 compute-0 nova_compute[260603]: 2025-10-02 09:02:25.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:02:27 compute-0 ceph-mon[74477]: pgmap v2529: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:02:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 106 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Oct 02 09:02:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:02:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:02:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:02:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:02:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:02:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:02:28
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'backups', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:02:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:28 compute-0 ovn_controller[152344]: 2025-10-02T09:02:28Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:49:2a 10.100.0.6
Oct 02 09:02:28 compute-0 ovn_controller[152344]: 2025-10-02T09:02:28Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:49:2a 10.100.0.6
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:02:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:02:29 compute-0 nova_compute[260603]: 2025-10-02 09:02:29.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:29 compute-0 ceph-mon[74477]: pgmap v2530: 305 pgs: 305 active+clean; 106 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Oct 02 09:02:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 118 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:02:30 compute-0 nova_compute[260603]: 2025-10-02 09:02:30.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:31 compute-0 ceph-mon[74477]: pgmap v2531: 305 pgs: 305 active+clean; 118 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:02:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 118 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 02 09:02:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:33 compute-0 ceph-mon[74477]: pgmap v2532: 305 pgs: 305 active+clean; 118 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 02 09:02:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:02:34 compute-0 nova_compute[260603]: 2025-10-02 09:02:34.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:34.841 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:35 compute-0 podman[405725]: 2025-10-02 09:02:35.01081942 +0000 UTC m=+0.053421271 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:02:35 compute-0 podman[405724]: 2025-10-02 09:02:35.041773732 +0000 UTC m=+0.097121229 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:02:35 compute-0 nova_compute[260603]: 2025-10-02 09:02:35.207 2 INFO nova.compute.manager [None req-4f446e10-c0be-4a7b-839f-ea5ebc58a922 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Get console output
Oct 02 09:02:35 compute-0 nova_compute[260603]: 2025-10-02 09:02:35.212 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 09:02:35 compute-0 ceph-mon[74477]: pgmap v2533: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:02:35 compute-0 nova_compute[260603]: 2025-10-02 09:02:35.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:02:35 compute-0 nova_compute[260603]: 2025-10-02 09:02:35.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:02:35 compute-0 nova_compute[260603]: 2025-10-02 09:02:35.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:02:36 compute-0 ceph-mon[74477]: pgmap v2534: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:02:37 compute-0 ovn_controller[152344]: 2025-10-02T09:02:37Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:49:2a 10.100.0.6
Oct 02 09:02:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:02:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:38 compute-0 ceph-mon[74477]: pgmap v2535: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007587643257146578 of space, bias 1.0, pg target 0.22762929771439736 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:02:39 compute-0 nova_compute[260603]: 2025-10-02 09:02:39.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 480 KiB/s wr, 22 op/s
Oct 02 09:02:40 compute-0 podman[405769]: 2025-10-02 09:02:40.006626599 +0000 UTC m=+0.059237011 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:02:40 compute-0 podman[405768]: 2025-10-02 09:02:40.039717738 +0000 UTC m=+0.088232313 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 09:02:40 compute-0 ovn_controller[152344]: 2025-10-02T09:02:40Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:49:2a 10.100.0.6
Oct 02 09:02:40 compute-0 nova_compute[260603]: 2025-10-02 09:02:40.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:40.706 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:02:40 compute-0 nova_compute[260603]: 2025-10-02 09:02:40.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:40.708 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:02:40 compute-0 sudo[405808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:40 compute-0 sudo[405808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:40 compute-0 sudo[405808]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:40 compute-0 ceph-mon[74477]: pgmap v2536: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 480 KiB/s wr, 22 op/s
Oct 02 09:02:40 compute-0 sudo[405833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:02:40 compute-0 sudo[405833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:40 compute-0 sudo[405833]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:41 compute-0 sudo[405858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.026 2 DEBUG nova.compute.manager [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.027 2 DEBUG nova.compute.manager [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing instance network info cache due to event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.027 2 DEBUG oslo_concurrency.lockutils [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.027 2 DEBUG oslo_concurrency.lockutils [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.027 2 DEBUG nova.network.neutron [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:02:41 compute-0 sudo[405858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:41 compute-0 sudo[405858]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:41 compute-0 sudo[405883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:02:41 compute-0 sudo[405883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.115 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.116 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.116 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.116 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.116 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.117 2 INFO nova.compute.manager [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Terminating instance
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.118 2 DEBUG nova.compute.manager [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:02:41 compute-0 kernel: tap43ff7902-a7 (unregistering): left promiscuous mode
Oct 02 09:02:41 compute-0 NetworkManager[45129]: <info>  [1759395761.1676] device (tap43ff7902-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:02:41 compute-0 ovn_controller[152344]: 2025-10-02T09:02:41Z|01440|binding|INFO|Releasing lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 from this chassis (sb_readonly=0)
Oct 02 09:02:41 compute-0 ovn_controller[152344]: 2025-10-02T09:02:41Z|01441|binding|INFO|Setting lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 down in Southbound
Oct 02 09:02:41 compute-0 ovn_controller[152344]: 2025-10-02T09:02:41Z|01442|binding|INFO|Removing iface tap43ff7902-a7 ovn-installed in OVS
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.196 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:49:2a 10.100.0.6'], port_security=['fa:16:3e:ad:49:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e3c7092-c580-477b-8596-4fd3b719e700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87157236-5092-4eb4-a9f0-535aee31f502', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96e8905a-c47c-4bff-936a-cf4b9c9d02f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dca513d-9223-4241-809a-b7da8c65d0fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=43ff7902-a749-4e8e-9c64-efd972acf1a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.197 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 43ff7902-a749-4e8e-9c64-efd972acf1a7 in datapath 87157236-5092-4eb4-a9f0-535aee31f502 unbound from our chassis
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.198 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87157236-5092-4eb4-a9f0-535aee31f502, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5a1688-393d-46b4-ac72-9c9055a544e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.199 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 namespace which is not needed anymore
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct 02 09:02:41 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Consumed 12.599s CPU time.
Oct 02 09:02:41 compute-0 systemd-machined[214636]: Machine qemu-167-instance-00000085 terminated.
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [NOTICE]   (405712) : haproxy version is 2.8.14-c23fe91
Oct 02 09:02:41 compute-0 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [NOTICE]   (405712) : path to executable is /usr/sbin/haproxy
Oct 02 09:02:41 compute-0 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [WARNING]  (405712) : Exiting Master process...
Oct 02 09:02:41 compute-0 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [WARNING]  (405712) : Exiting Master process...
Oct 02 09:02:41 compute-0 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [ALERT]    (405712) : Current worker (405714) exited with code 143 (Terminated)
Oct 02 09:02:41 compute-0 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [WARNING]  (405712) : All workers exited. Exiting... (0)
Oct 02 09:02:41 compute-0 systemd[1]: libpod-ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e.scope: Deactivated successfully.
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.359 2 INFO nova.virt.libvirt.driver [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance destroyed successfully.
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.360 2 DEBUG nova.objects.instance [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 3e3c7092-c580-477b-8596-4fd3b719e700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:02:41 compute-0 podman[405949]: 2025-10-02 09:02:41.365226939 +0000 UTC m=+0.061766441 container died ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.375 2 DEBUG nova.virt.libvirt.vif [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141391792',display_name='tempest-TestNetworkBasicOps-server-2141391792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141391792',id=133,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDp11F2lbrxxMl62/j7fHXkV6+flkvleRV3SlQtVIKr0IAxEI8xda+NGodZ4HaJd7EmZSJjX6G2CVi6Mz+byyGzwwm3n9HV8PIJ995e7gObIrbEd/QY1wwigSXqVllt8g==',key_name='tempest-TestNetworkBasicOps-1602850552',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:02:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-2y5fgdwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:02:16Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=3e3c7092-c580-477b-8596-4fd3b719e700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.375 2 DEBUG nova.network.os_vif_util [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.376 2 DEBUG nova.network.os_vif_util [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.376 2 DEBUG os_vif [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ff7902-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.383 2 INFO os_vif [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7')
Oct 02 09:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e-userdata-shm.mount: Deactivated successfully.
Oct 02 09:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e266c57055b81fe307404eb21b2013f29455cb3246668f7032d1fc218411294e-merged.mount: Deactivated successfully.
Oct 02 09:02:41 compute-0 podman[405949]: 2025-10-02 09:02:41.438430323 +0000 UTC m=+0.134969825 container cleanup ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 09:02:41 compute-0 systemd[1]: libpod-conmon-ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e.scope: Deactivated successfully.
Oct 02 09:02:41 compute-0 podman[406010]: 2025-10-02 09:02:41.509901644 +0000 UTC m=+0.040514910 container remove ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:02:41 compute-0 sudo[405883]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.529 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bab0c7ba-1290-4965-a4ae-ed0ee0bf4a65]: (4, ('Thu Oct  2 09:02:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 (ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e)\nea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e\nThu Oct  2 09:02:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 (ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e)\nea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.534 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ae72a6-56da-428d-b775-96d6bf809cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.537 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87157236-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 kernel: tap87157236-50: left promiscuous mode
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.562 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62975d89-a539-4c0c-8bc6-97ef34400421]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:02:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:02:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:02:41 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:02:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.589 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[33ab6a3f-948e-4d74-96f9-d53de015e3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.592 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29fef401-8dc6-476b-a242-3317b4776925]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:41 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:02:41 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8771f0df-d64d-46e3-8839-72e177e77685 does not exist
Oct 02 09:02:41 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9eb2f582-8ac8-4d90-9148-9ead670ad731 does not exist
Oct 02 09:02:41 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 983ff226-60d0-485f-a12d-8af2b57c3e5c does not exist
Oct 02 09:02:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:02:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:02:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:02:41 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:02:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:02:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[598cec1b-faaf-4086-98d9-b23edef69119]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661384, 'reachable_time': 33485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406035, 'error': None, 'target': 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.616 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.616 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6f23d3-65a2-4788-8ce4-dee1df818590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:02:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d87157236\x2d5092\x2d4eb4\x2da9f0\x2d535aee31f502.mount: Deactivated successfully.
Oct 02 09:02:41 compute-0 sudo[406036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:41 compute-0 sudo[406036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:41 compute-0 sudo[406036]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:41 compute-0 sudo[406061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:02:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.709 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:02:41 compute-0 sudo[406061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:41 compute-0 sudo[406061]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.730 2 INFO nova.virt.libvirt.driver [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Deleting instance files /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700_del
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.730 2 INFO nova.virt.libvirt.driver [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Deletion of /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700_del complete
Oct 02 09:02:41 compute-0 sudo[406086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:41 compute-0 sudo[406086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:41 compute-0 sudo[406086]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.784 2 INFO nova.compute.manager [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Took 0.67 seconds to destroy the instance on the hypervisor.
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.784 2 DEBUG oslo.service.loopingcall [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.785 2 DEBUG nova.compute.manager [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:02:41 compute-0 nova_compute[260603]: 2025-10-02 09:02:41.785 2 DEBUG nova.network.neutron [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:02:41 compute-0 sudo[406111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:02:41 compute-0 sudo[406111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct 02 09:02:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:02:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:02:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:02:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:02:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:02:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:02:42 compute-0 podman[406171]: 2025-10-02 09:02:42.222133418 +0000 UTC m=+0.060791310 container create 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:02:42 compute-0 systemd[1]: Started libpod-conmon-1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca.scope.
Oct 02 09:02:42 compute-0 podman[406171]: 2025-10-02 09:02:42.190542306 +0000 UTC m=+0.029200288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:02:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:02:42 compute-0 podman[406171]: 2025-10-02 09:02:42.313167667 +0000 UTC m=+0.151825589 container init 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:02:42 compute-0 podman[406171]: 2025-10-02 09:02:42.318996618 +0000 UTC m=+0.157654510 container start 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 09:02:42 compute-0 podman[406171]: 2025-10-02 09:02:42.321953089 +0000 UTC m=+0.160611011 container attach 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:02:42 compute-0 jovial_khorana[406187]: 167 167
Oct 02 09:02:42 compute-0 systemd[1]: libpod-1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca.scope: Deactivated successfully.
Oct 02 09:02:42 compute-0 podman[406171]: 2025-10-02 09:02:42.327877503 +0000 UTC m=+0.166535395 container died 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1edc8c5b8c9e3be7f7a230f507eef2fcce2f1286c12005e51807c6a39ac3b06-merged.mount: Deactivated successfully.
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.364 2 DEBUG nova.network.neutron [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updated VIF entry in instance network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.365 2 DEBUG nova.network.neutron [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:02:42 compute-0 podman[406171]: 2025-10-02 09:02:42.372796329 +0000 UTC m=+0.211454281 container remove 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.384 2 DEBUG oslo_concurrency.lockutils [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:02:42 compute-0 systemd[1]: libpod-conmon-1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca.scope: Deactivated successfully.
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.522 2 DEBUG nova.network.neutron [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.543 2 INFO nova.compute.manager [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Took 0.76 seconds to deallocate network for instance.
Oct 02 09:02:42 compute-0 podman[406211]: 2025-10-02 09:02:42.556053114 +0000 UTC m=+0.050574982 container create 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:02:42 compute-0 systemd[1]: Started libpod-conmon-56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17.scope.
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.591 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.592 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.610 2 DEBUG nova.compute.manager [req-60f82552-0747-4868-8545-a2318ecb20ff req-7a8da71b-d40f-4e24-ad68-ee8ea978b0c2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-deleted-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:02:42 compute-0 podman[406211]: 2025-10-02 09:02:42.623623564 +0000 UTC m=+0.118145462 container init 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:02:42 compute-0 podman[406211]: 2025-10-02 09:02:42.528537159 +0000 UTC m=+0.023059087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:02:42 compute-0 podman[406211]: 2025-10-02 09:02:42.632721427 +0000 UTC m=+0.127243315 container start 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:02:42 compute-0 podman[406211]: 2025-10-02 09:02:42.63764226 +0000 UTC m=+0.132164138 container attach 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:02:42 compute-0 nova_compute[260603]: 2025-10-02 09:02:42.671 2 DEBUG oslo_concurrency.processutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:42 compute-0 ceph-mon[74477]: pgmap v2537: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct 02 09:02:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:02:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552772987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.100 2 DEBUG oslo_concurrency.processutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.107 2 DEBUG nova.compute.provider_tree [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.116 2 DEBUG nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-unplugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] No waiting events found dispatching network-vif-unplugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.117 2 WARNING nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received unexpected event network-vif-unplugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 for instance with vm_state deleted and task_state None.
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.118 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.118 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.118 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.118 2 DEBUG nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] No waiting events found dispatching network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.118 2 WARNING nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received unexpected event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 for instance with vm_state deleted and task_state None.
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.124 2 DEBUG nova.scheduler.client.report [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:02:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.150 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.174 2 INFO nova.scheduler.client.report [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 3e3c7092-c580-477b-8596-4fd3b719e700
Oct 02 09:02:43 compute-0 nova_compute[260603]: 2025-10-02 09:02:43.240 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:43 compute-0 trusting_mendeleev[406228]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:02:43 compute-0 trusting_mendeleev[406228]: --> relative data size: 1.0
Oct 02 09:02:43 compute-0 trusting_mendeleev[406228]: --> All data devices are unavailable
Oct 02 09:02:43 compute-0 systemd[1]: libpod-56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17.scope: Deactivated successfully.
Oct 02 09:02:43 compute-0 systemd[1]: libpod-56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17.scope: Consumed 1.006s CPU time.
Oct 02 09:02:43 compute-0 podman[406211]: 2025-10-02 09:02:43.694897665 +0000 UTC m=+1.189419573 container died 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21-merged.mount: Deactivated successfully.
Oct 02 09:02:43 compute-0 podman[406211]: 2025-10-02 09:02:43.766264733 +0000 UTC m=+1.260786621 container remove 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 02 09:02:43 compute-0 systemd[1]: libpod-conmon-56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17.scope: Deactivated successfully.
Oct 02 09:02:43 compute-0 sudo[406111]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 32 KiB/s wr, 37 op/s
Oct 02 09:02:43 compute-0 sudo[406293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:43 compute-0 sudo[406293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:43 compute-0 sudo[406293]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3552772987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:02:43 compute-0 sudo[406318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:02:43 compute-0 sudo[406318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:43 compute-0 sudo[406318]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:44 compute-0 sudo[406343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:44 compute-0 sudo[406343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:44 compute-0 sudo[406343]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:44 compute-0 sudo[406368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:02:44 compute-0 sudo[406368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:44 compute-0 podman[406433]: 2025-10-02 09:02:44.636481845 +0000 UTC m=+0.084531938 container create 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 09:02:44 compute-0 systemd[1]: Started libpod-conmon-33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69.scope.
Oct 02 09:02:44 compute-0 podman[406433]: 2025-10-02 09:02:44.598868986 +0000 UTC m=+0.046919139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:02:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:02:44 compute-0 podman[406433]: 2025-10-02 09:02:44.753988997 +0000 UTC m=+0.202039160 container init 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 09:02:44 compute-0 podman[406433]: 2025-10-02 09:02:44.766119874 +0000 UTC m=+0.214169927 container start 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:02:44 compute-0 podman[406433]: 2025-10-02 09:02:44.773688349 +0000 UTC m=+0.221738502 container attach 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:02:44 compute-0 cool_mcnulty[406449]: 167 167
Oct 02 09:02:44 compute-0 systemd[1]: libpod-33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69.scope: Deactivated successfully.
Oct 02 09:02:44 compute-0 podman[406433]: 2025-10-02 09:02:44.777102335 +0000 UTC m=+0.225152398 container died 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-538ac7d8497c896b51df05aaf530a38102c7a60acc8b34691cce52b6e5b90a5f-merged.mount: Deactivated successfully.
Oct 02 09:02:44 compute-0 podman[406433]: 2025-10-02 09:02:44.840818025 +0000 UTC m=+0.288868118 container remove 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:02:44 compute-0 systemd[1]: libpod-conmon-33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69.scope: Deactivated successfully.
Oct 02 09:02:44 compute-0 ceph-mon[74477]: pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 32 KiB/s wr, 37 op/s
Oct 02 09:02:45 compute-0 podman[406473]: 2025-10-02 09:02:45.11451469 +0000 UTC m=+0.076675143 container create 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 09:02:45 compute-0 systemd[1]: Started libpod-conmon-6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559.scope.
Oct 02 09:02:45 compute-0 podman[406473]: 2025-10-02 09:02:45.083867539 +0000 UTC m=+0.046028022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:02:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:45 compute-0 podman[406473]: 2025-10-02 09:02:45.212465914 +0000 UTC m=+0.174626387 container init 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 09:02:45 compute-0 podman[406473]: 2025-10-02 09:02:45.220117312 +0000 UTC m=+0.182277755 container start 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:02:45 compute-0 podman[406473]: 2025-10-02 09:02:45.230149904 +0000 UTC m=+0.192310367 container attach 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 09:02:45 compute-0 nova_compute[260603]: 2025-10-02 09:02:45.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 18 KiB/s wr, 28 op/s
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]: {
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:     "0": [
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:         {
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "devices": [
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "/dev/loop3"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             ],
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_name": "ceph_lv0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_size": "21470642176",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "name": "ceph_lv0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "tags": {
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cluster_name": "ceph",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.crush_device_class": "",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.encrypted": "0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osd_id": "0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.type": "block",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.vdo": "0"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             },
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "type": "block",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "vg_name": "ceph_vg0"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:         }
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:     ],
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:     "1": [
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:         {
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "devices": [
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "/dev/loop4"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             ],
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_name": "ceph_lv1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_size": "21470642176",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "name": "ceph_lv1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "tags": {
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cluster_name": "ceph",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.crush_device_class": "",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.encrypted": "0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osd_id": "1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.type": "block",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.vdo": "0"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             },
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "type": "block",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "vg_name": "ceph_vg1"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:         }
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:     ],
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:     "2": [
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:         {
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "devices": [
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "/dev/loop5"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             ],
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_name": "ceph_lv2",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_size": "21470642176",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "name": "ceph_lv2",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "tags": {
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.cluster_name": "ceph",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.crush_device_class": "",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.encrypted": "0",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osd_id": "2",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.type": "block",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:                 "ceph.vdo": "0"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             },
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "type": "block",
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:             "vg_name": "ceph_vg2"
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:         }
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]:     ]
Oct 02 09:02:45 compute-0 suspicious_brattain[406490]: }
Oct 02 09:02:45 compute-0 systemd[1]: libpod-6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559.scope: Deactivated successfully.
Oct 02 09:02:45 compute-0 podman[406473]: 2025-10-02 09:02:45.960353175 +0000 UTC m=+0.922513618 container died 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95-merged.mount: Deactivated successfully.
Oct 02 09:02:46 compute-0 podman[406473]: 2025-10-02 09:02:46.039696151 +0000 UTC m=+1.001856604 container remove 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 09:02:46 compute-0 systemd[1]: libpod-conmon-6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559.scope: Deactivated successfully.
Oct 02 09:02:46 compute-0 sudo[406368]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:46 compute-0 sudo[406511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:46 compute-0 sudo[406511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:46 compute-0 sudo[406511]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:46 compute-0 sudo[406536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:02:46 compute-0 sudo[406536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:46 compute-0 sudo[406536]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:46 compute-0 sudo[406561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:46 compute-0 sudo[406561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:46 compute-0 sudo[406561]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:46 compute-0 sudo[406586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:02:46 compute-0 sudo[406586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:46 compute-0 nova_compute[260603]: 2025-10-02 09:02:46.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:46 compute-0 nova_compute[260603]: 2025-10-02 09:02:46.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:46 compute-0 nova_compute[260603]: 2025-10-02 09:02:46.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:46 compute-0 podman[406653]: 2025-10-02 09:02:46.769780419 +0000 UTC m=+0.053660718 container create 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 09:02:46 compute-0 systemd[1]: Started libpod-conmon-1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438.scope.
Oct 02 09:02:46 compute-0 podman[406653]: 2025-10-02 09:02:46.743335227 +0000 UTC m=+0.027215576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:02:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:02:46 compute-0 podman[406653]: 2025-10-02 09:02:46.856987899 +0000 UTC m=+0.140868218 container init 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:02:46 compute-0 podman[406653]: 2025-10-02 09:02:46.86665755 +0000 UTC m=+0.150537849 container start 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 09:02:46 compute-0 podman[406653]: 2025-10-02 09:02:46.869856439 +0000 UTC m=+0.153736758 container attach 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 09:02:46 compute-0 epic_brown[406669]: 167 167
Oct 02 09:02:46 compute-0 systemd[1]: libpod-1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438.scope: Deactivated successfully.
Oct 02 09:02:46 compute-0 conmon[406669]: conmon 1198e98236c38bccc9fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438.scope/container/memory.events
Oct 02 09:02:46 compute-0 podman[406653]: 2025-10-02 09:02:46.874233805 +0000 UTC m=+0.158114074 container died 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 09:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-64edf0f366bfef4201bada809d6f03329668547195fa5e940555f50a232c0e7a-merged.mount: Deactivated successfully.
Oct 02 09:02:46 compute-0 podman[406653]: 2025-10-02 09:02:46.920238074 +0000 UTC m=+0.204118373 container remove 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:02:46 compute-0 systemd[1]: libpod-conmon-1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438.scope: Deactivated successfully.
Oct 02 09:02:46 compute-0 ceph-mon[74477]: pgmap v2539: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 18 KiB/s wr, 28 op/s
Oct 02 09:02:47 compute-0 podman[406694]: 2025-10-02 09:02:47.148659073 +0000 UTC m=+0.063238986 container create 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 09:02:47 compute-0 systemd[1]: Started libpod-conmon-74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460.scope.
Oct 02 09:02:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:47 compute-0 podman[406694]: 2025-10-02 09:02:47.123673016 +0000 UTC m=+0.038253059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:02:47 compute-0 podman[406694]: 2025-10-02 09:02:47.230722304 +0000 UTC m=+0.145302217 container init 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:02:47 compute-0 podman[406694]: 2025-10-02 09:02:47.254960847 +0000 UTC m=+0.169540730 container start 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:02:47 compute-0 podman[406694]: 2025-10-02 09:02:47.258550658 +0000 UTC m=+0.173130621 container attach 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 09:02:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 18 KiB/s wr, 28 op/s
Oct 02 09:02:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:48 compute-0 reverent_jackson[406712]: {
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "osd_id": 2,
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "type": "bluestore"
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:     },
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "osd_id": 1,
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "type": "bluestore"
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:     },
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "osd_id": 0,
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:         "type": "bluestore"
Oct 02 09:02:48 compute-0 reverent_jackson[406712]:     }
Oct 02 09:02:48 compute-0 reverent_jackson[406712]: }
Oct 02 09:02:48 compute-0 systemd[1]: libpod-74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460.scope: Deactivated successfully.
Oct 02 09:02:48 compute-0 conmon[406712]: conmon 74f76364917a7f523133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460.scope/container/memory.events
Oct 02 09:02:48 compute-0 podman[406694]: 2025-10-02 09:02:48.225074984 +0000 UTC m=+1.139654867 container died 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f-merged.mount: Deactivated successfully.
Oct 02 09:02:48 compute-0 podman[406694]: 2025-10-02 09:02:48.279837895 +0000 UTC m=+1.194417768 container remove 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:02:48 compute-0 systemd[1]: libpod-conmon-74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460.scope: Deactivated successfully.
Oct 02 09:02:48 compute-0 sudo[406586]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:02:48 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:02:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:02:48 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:02:48 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 637675f4-f21b-4e1e-be77-380d6b7ffa31 does not exist
Oct 02 09:02:48 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ee515401-6d49-4c0c-bd52-ac81da1fbcf7 does not exist
Oct 02 09:02:48 compute-0 sudo[406757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:02:48 compute-0 sudo[406757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:48 compute-0 sudo[406757]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:48 compute-0 sudo[406782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:02:48 compute-0 sudo[406782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:02:48 compute-0 sudo[406782]: pam_unix(sudo:session): session closed for user root
Oct 02 09:02:48 compute-0 nova_compute[260603]: 2025-10-02 09:02:48.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:02:49 compute-0 ceph-mon[74477]: pgmap v2540: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 18 KiB/s wr, 28 op/s
Oct 02 09:02:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:02:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:02:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct 02 09:02:50 compute-0 nova_compute[260603]: 2025-10-02 09:02:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:02:50 compute-0 nova_compute[260603]: 2025-10-02 09:02:50.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:02:50 compute-0 nova_compute[260603]: 2025-10-02 09:02:50.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:02:50 compute-0 nova_compute[260603]: 2025-10-02 09:02:50.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:02:50 compute-0 nova_compute[260603]: 2025-10-02 09:02:50.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:51 compute-0 ceph-mon[74477]: pgmap v2541: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct 02 09:02:51 compute-0 nova_compute[260603]: 2025-10-02 09:02:51.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct 02 09:02:52 compute-0 nova_compute[260603]: 2025-10-02 09:02:52.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:02:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:53 compute-0 ceph-mon[74477]: pgmap v2542: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct 02 09:02:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct 02 09:02:55 compute-0 ceph-mon[74477]: pgmap v2543: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct 02 09:02:55 compute-0 nova_compute[260603]: 2025-10-02 09:02:55.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.354 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395761.353559, 3e3c7092-c580-477b-8596-4fd3b719e700 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.355 2 INFO nova.compute.manager [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] VM Stopped (Lifecycle Event)
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.378 2 DEBUG nova.compute.manager [None req-6f52d5c5-aedd-483e-ade1-22bc85ef95ff - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.555 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.555 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:02:56 compute-0 nova_compute[260603]: 2025-10-02 09:02:56.556 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:02:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292687566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.025 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.208 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.209 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3633MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.209 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.209 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.314 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.315 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.344 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:02:57 compute-0 ceph-mon[74477]: pgmap v2544: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1292687566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:02:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:02:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1718814926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.851 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.858 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.878 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:02:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.909 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:02:57 compute-0 nova_compute[260603]: 2025-10-02 09:02:57.910 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:02:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:02:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:02:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:02:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:02:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:02:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:02:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:02:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1718814926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:02:59 compute-0 ceph-mon[74477]: pgmap v2545: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:02:59 compute-0 nova_compute[260603]: 2025-10-02 09:02:59.905 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:00 compute-0 nova_compute[260603]: 2025-10-02 09:03:00.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:01 compute-0 ceph-mon[74477]: pgmap v2546: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:01 compute-0 nova_compute[260603]: 2025-10-02 09:03:01.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.153666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783153714, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1405, "num_deletes": 250, "total_data_size": 2146852, "memory_usage": 2186512, "flush_reason": "Manual Compaction"}
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783165441, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1262814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52307, "largest_seqno": 53711, "table_properties": {"data_size": 1257915, "index_size": 2231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13047, "raw_average_key_size": 20, "raw_value_size": 1247115, "raw_average_value_size": 1979, "num_data_blocks": 102, "num_entries": 630, "num_filter_entries": 630, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395644, "oldest_key_time": 1759395644, "file_creation_time": 1759395783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 11844 microseconds, and 6953 cpu microseconds.
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.165507) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1262814 bytes OK
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.165533) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.167434) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.167451) EVENT_LOG_v1 {"time_micros": 1759395783167445, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.167468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2140631, prev total WAL file size 2140631, number of live WAL files 2.
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.168537) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303032' seq:72057594037927935, type:22 .. '6D6772737461740032323533' seq:0, type:0; will stop at (end)
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1233KB)], [122(10162KB)]
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783168639, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11669283, "oldest_snapshot_seqno": -1}
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7377 keys, 9210485 bytes, temperature: kUnknown
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783236449, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9210485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9163109, "index_size": 27808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18501, "raw_key_size": 191974, "raw_average_key_size": 26, "raw_value_size": 9033229, "raw_average_value_size": 1224, "num_data_blocks": 1088, "num_entries": 7377, "num_filter_entries": 7377, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.236879) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9210485 bytes
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.238566) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.8 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(16.5) write-amplify(7.3) OK, records in: 7825, records dropped: 448 output_compression: NoCompression
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.238597) EVENT_LOG_v1 {"time_micros": 1759395783238583, "job": 74, "event": "compaction_finished", "compaction_time_micros": 67914, "compaction_time_cpu_micros": 47230, "output_level": 6, "num_output_files": 1, "total_output_size": 9210485, "num_input_records": 7825, "num_output_records": 7377, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783239327, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783243500, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.168368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:03 compute-0 ceph-mon[74477]: pgmap v2547: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:05 compute-0 ceph-mon[74477]: pgmap v2548: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:05 compute-0 nova_compute[260603]: 2025-10-02 09:03:05.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:06 compute-0 podman[406853]: 2025-10-02 09:03:06.049425541 +0000 UTC m=+0.105378496 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:03:06 compute-0 podman[406852]: 2025-10-02 09:03:06.087634849 +0000 UTC m=+0.152900143 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:03:06 compute-0 nova_compute[260603]: 2025-10-02 09:03:06.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:06 compute-0 ceph-mon[74477]: pgmap v2549: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:06 compute-0 nova_compute[260603]: 2025-10-02 09:03:06.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:07 compute-0 nova_compute[260603]: 2025-10-02 09:03:07.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:08 compute-0 ceph-mon[74477]: pgmap v2550: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.007 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.007 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.023 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.104 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.104 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.113 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.114 2 INFO nova.compute.claims [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.215 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:03:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286293280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.655 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.664 2 DEBUG nova.compute.provider_tree [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.689 2 DEBUG nova.scheduler.client.report [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.716 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.717 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.775 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.775 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.802 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.830 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.947 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.948 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.949 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Creating image(s)
Oct 02 09:03:10 compute-0 nova_compute[260603]: 2025-10-02 09:03:10.974 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:10 compute-0 ceph-mon[74477]: pgmap v2551: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3286293280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:03:11 compute-0 podman[406918]: 2025-10-02 09:03:11.000704696 +0000 UTC m=+0.071649558 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:03:11 compute-0 podman[406919]: 2025-10-02 09:03:11.018936832 +0000 UTC m=+0.078790608 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.031 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.062 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.065 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.134 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.135 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.136 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.136 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.157 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.160 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.402 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.477 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.566 2 DEBUG nova.objects.instance [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.588 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.588 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Ensure instance console log exists: /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.589 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.590 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.590 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:11 compute-0 nova_compute[260603]: 2025-10-02 09:03:11.757 2 DEBUG nova.policy [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:03:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:12 compute-0 nova_compute[260603]: 2025-10-02 09:03:12.853 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Successfully created port: 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:03:13 compute-0 ceph-mon[74477]: pgmap v2552: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:03:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:13 compute-0 nova_compute[260603]: 2025-10-02 09:03:13.516 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Successfully updated port: 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:03:13 compute-0 nova_compute[260603]: 2025-10-02 09:03:13.531 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:13 compute-0 nova_compute[260603]: 2025-10-02 09:03:13.532 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:13 compute-0 nova_compute[260603]: 2025-10-02 09:03:13.532 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:03:13 compute-0 nova_compute[260603]: 2025-10-02 09:03:13.598 2 DEBUG nova.compute.manager [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:13 compute-0 nova_compute[260603]: 2025-10-02 09:03:13.599 2 DEBUG nova.compute.manager [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:03:13 compute-0 nova_compute[260603]: 2025-10-02 09:03:13.599 2 DEBUG oslo_concurrency.lockutils [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:13 compute-0 nova_compute[260603]: 2025-10-02 09:03:13.729 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:03:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.698 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.721 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.722 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance network_info: |[{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.724 2 DEBUG oslo_concurrency.lockutils [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.724 2 DEBUG nova.network.neutron [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.730 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start _get_guest_xml network_info=[{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.737 2 WARNING nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.748 2 DEBUG nova.virt.libvirt.host [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.749 2 DEBUG nova.virt.libvirt.host [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.754 2 DEBUG nova.virt.libvirt.host [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.755 2 DEBUG nova.virt.libvirt.host [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.755 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.756 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.757 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.758 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.758 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.759 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.759 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.760 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.761 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.761 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.762 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.762 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:03:14 compute-0 nova_compute[260603]: 2025-10-02 09:03:14.767 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:15 compute-0 ceph-mon[74477]: pgmap v2553: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:03:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:03:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030734456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.275 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.308 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.314 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:03:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617887475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.778 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.779 2 DEBUG nova.virt.libvirt.vif [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:03:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2044507795',display_name='tempest-TestNetworkBasicOps-server-2044507795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2044507795',id=134,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJyVGuJYWkz5cUrKkan7kCe4MbiRPaa7v1Q6d5xWchgY/tX4JduFQB6JZ0q369VSitON6EJsRLVHtNMGsTz7PTKCbeKQtDY6sbIs7RX5gGPDqTs/0LJrpZ68VxyA10mrYQ==',key_name='tempest-TestNetworkBasicOps-2118371598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-oalw9fqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:03:10Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=64577e3d-aa56-4fa9-a1b5-dc76a7a80754,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.780 2 DEBUG nova.network.os_vif_util [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.781 2 DEBUG nova.network.os_vif_util [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.782 2 DEBUG nova.objects.instance [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.799 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <uuid>64577e3d-aa56-4fa9-a1b5-dc76a7a80754</uuid>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <name>instance-00000086</name>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-2044507795</nova:name>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:03:14</nova:creationTime>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <nova:port uuid="3df4c898-fd96-4b0f-90ee-add24ca56aa2">
Oct 02 09:03:15 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <system>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <entry name="serial">64577e3d-aa56-4fa9-a1b5-dc76a7a80754</entry>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <entry name="uuid">64577e3d-aa56-4fa9-a1b5-dc76a7a80754</entry>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </system>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <os>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   </os>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <features>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   </features>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk">
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       </source>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config">
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       </source>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:03:15 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:b0:95:1a"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <target dev="tap3df4c898-fd"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/console.log" append="off"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <video>
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </video>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:03:15 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:03:15 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:03:15 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:03:15 compute-0 nova_compute[260603]: </domain>
Oct 02 09:03:15 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.800 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Preparing to wait for external event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.801 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.801 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.801 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.802 2 DEBUG nova.virt.libvirt.vif [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:03:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2044507795',display_name='tempest-TestNetworkBasicOps-server-2044507795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2044507795',id=134,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJyVGuJYWkz5cUrKkan7kCe4MbiRPaa7v1Q6d5xWchgY/tX4JduFQB6JZ0q369VSitON6EJsRLVHtNMGsTz7PTKCbeKQtDY6sbIs7RX5gGPDqTs/0LJrpZ68VxyA10mrYQ==',key_name='tempest-TestNetworkBasicOps-2118371598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-oalw9fqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:03:10Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=64577e3d-aa56-4fa9-a1b5-dc76a7a80754,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.802 2 DEBUG nova.network.os_vif_util [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.803 2 DEBUG nova.network.os_vif_util [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.803 2 DEBUG os_vif [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3df4c898-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3df4c898-fd, col_values=(('external_ids', {'iface-id': '3df4c898-fd96-4b0f-90ee-add24ca56aa2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:95:1a', 'vm-uuid': '64577e3d-aa56-4fa9-a1b5-dc76a7a80754'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:15 compute-0 NetworkManager[45129]: <info>  [1759395795.8115] manager: (tap3df4c898-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.821 2 INFO os_vif [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd')
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.864 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.864 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.865 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:b0:95:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.865 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Using config drive
Oct 02 09:03:15 compute-0 nova_compute[260603]: 2025-10-02 09:03:15.885 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:03:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3030734456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:03:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3617887475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:03:16 compute-0 nova_compute[260603]: 2025-10-02 09:03:16.249 2 DEBUG nova.network.neutron [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:03:16 compute-0 nova_compute[260603]: 2025-10-02 09:03:16.249 2 DEBUG nova.network.neutron [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:03:16 compute-0 nova_compute[260603]: 2025-10-02 09:03:16.270 2 DEBUG oslo_concurrency.lockutils [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:03:16 compute-0 nova_compute[260603]: 2025-10-02 09:03:16.704 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Creating config drive at /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config
Oct 02 09:03:16 compute-0 nova_compute[260603]: 2025-10-02 09:03:16.708 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzpt5vlk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:16 compute-0 nova_compute[260603]: 2025-10-02 09:03:16.873 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzpt5vlk" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:16 compute-0 nova_compute[260603]: 2025-10-02 09:03:16.896 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:16 compute-0 nova_compute[260603]: 2025-10-02 09:03:16.900 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:17 compute-0 ceph-mon[74477]: pgmap v2554: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.076 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.077 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Deleting local config drive /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config because it was imported into RBD.
Oct 02 09:03:17 compute-0 kernel: tap3df4c898-fd: entered promiscuous mode
Oct 02 09:03:17 compute-0 NetworkManager[45129]: <info>  [1759395797.1556] manager: (tap3df4c898-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/580)
Oct 02 09:03:17 compute-0 ovn_controller[152344]: 2025-10-02T09:03:17Z|01443|binding|INFO|Claiming lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 for this chassis.
Oct 02 09:03:17 compute-0 ovn_controller[152344]: 2025-10-02T09:03:17Z|01444|binding|INFO|3df4c898-fd96-4b0f-90ee-add24ca56aa2: Claiming fa:16:3e:b0:95:1a 10.100.0.12
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.179 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:95:1a 10.100.0.12'], port_security=['fa:16:3e:b0:95:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '64577e3d-aa56-4fa9-a1b5-dc76a7a80754', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f414f43-ca38-4bce-aa83-1fdd3cd738fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3df4c898-fd96-4b0f-90ee-add24ca56aa2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.181 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 in datapath 2c605495-2750-431a-94c8-fc1511dea80b bound to our chassis
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.182 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.204 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad13dbb-3039-442e-8ce0-2aedc2d1c8e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.205 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2c605495-21 in ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.213 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2c605495-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.213 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dfab6cb9-609f-4ad0-908c-7ec3b36d0484]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.214 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5337f8a9-8756-4d64-b32c-d30a5d55ab7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 systemd-udevd[407260]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:03:17 compute-0 systemd-machined[214636]: New machine qemu-168-instance-00000086.
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.233 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0a68b7c8-7d4e-4374-8c1c-54d39f209855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 NetworkManager[45129]: <info>  [1759395797.2401] device (tap3df4c898-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:03:17 compute-0 NetworkManager[45129]: <info>  [1759395797.2411] device (tap3df4c898-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.257 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48aef28c-f20e-46cd-b7fa-d592cfbe8c05]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000086.
Oct 02 09:03:17 compute-0 ovn_controller[152344]: 2025-10-02T09:03:17Z|01445|binding|INFO|Setting lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 ovn-installed in OVS
Oct 02 09:03:17 compute-0 ovn_controller[152344]: 2025-10-02T09:03:17Z|01446|binding|INFO|Setting lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 up in Southbound
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.289 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[003705d8-ce02-4487-9ae4-279335ff522c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.294 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42f8b8a0-402c-4366-922d-b8324c776e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 NetworkManager[45129]: <info>  [1759395797.2958] manager: (tap2c605495-20): new Veth device (/org/freedesktop/NetworkManager/Devices/581)
Oct 02 09:03:17 compute-0 systemd-udevd[407263]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.333 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4eeda1-03d2-496a-bb92-8b4300abe5c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.336 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d51ea629-9763-4ab0-9cd6-00b0cf519b5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 NetworkManager[45129]: <info>  [1759395797.3615] device (tap2c605495-20): carrier: link connected
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.365 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[573fb78f-363b-4232-891e-f652907d76cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffa9ef8-e663-436c-afd1-c26cba4f4ace]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 18629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407292, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e231521-cf72-4d90-928a-b2baa7db1ffc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:5b4e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667596, 'tstamp': 667596}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407293, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.418 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1520d7a-a6d1-4733-be87-2ae868747fd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 18629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407294, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.457 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8ab912-a9a3-4f28-8799-f7beb4c34cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.517 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e589147-ce1f-4901-9667-6fde94942c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.518 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.519 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.519 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:17 compute-0 NetworkManager[45129]: <info>  [1759395797.5222] manager: (tap2c605495-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Oct 02 09:03:17 compute-0 kernel: tap2c605495-20: entered promiscuous mode
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.525 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:17 compute-0 ovn_controller[152344]: 2025-10-02T09:03:17Z|01447|binding|INFO|Releasing lport 88f0a719-bef7-4fa7-ad0c-3658148f5bdf from this chassis (sb_readonly=0)
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.539 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2c605495-2750-431a-94c8-fc1511dea80b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2c605495-2750-431a-94c8-fc1511dea80b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.540 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0db28c-322e-4386-8e3e-29a470c71bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.541 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-2c605495-2750-431a-94c8-fc1511dea80b
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/2c605495-2750-431a-94c8-fc1511dea80b.pid.haproxy
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 2c605495-2750-431a-94c8-fc1511dea80b
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:03:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.541 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'env', 'PROCESS_TAG=haproxy-2c605495-2750-431a-94c8-fc1511dea80b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2c605495-2750-431a-94c8-fc1511dea80b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.847 2 DEBUG nova.compute.manager [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.849 2 DEBUG oslo_concurrency.lockutils [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.849 2 DEBUG oslo_concurrency.lockutils [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.850 2 DEBUG oslo_concurrency.lockutils [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:17 compute-0 nova_compute[260603]: 2025-10-02 09:03:17.850 2 DEBUG nova.compute.manager [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Processing event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:03:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 09:03:17 compute-0 podman[407367]: 2025-10-02 09:03:17.931077183 +0000 UTC m=+0.048828478 container create 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 09:03:17 compute-0 systemd[1]: Started libpod-conmon-620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43.scope.
Oct 02 09:03:18 compute-0 podman[407367]: 2025-10-02 09:03:17.906576072 +0000 UTC m=+0.024327407 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:03:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:03:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/addf8c19743aa1800a38e7a2c27551ad430b760bd20734d7536da7cb23fa4ebc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.033 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395798.0318964, 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.033 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] VM Started (Lifecycle Event)
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.035 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.038 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.041 2 INFO nova.virt.libvirt.driver [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance spawned successfully.
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.042 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:03:18 compute-0 podman[407367]: 2025-10-02 09:03:18.048379388 +0000 UTC m=+0.166130693 container init 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:03:18 compute-0 podman[407367]: 2025-10-02 09:03:18.053862669 +0000 UTC m=+0.171613974 container start 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.057 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.061 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.062 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.062 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.063 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.063 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.064 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.068 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:03:18 compute-0 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [NOTICE]   (407387) : New worker (407389) forked
Oct 02 09:03:18 compute-0 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [NOTICE]   (407387) : Loading success.
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.109 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.109 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395798.031998, 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.109 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] VM Paused (Lifecycle Event)
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.146 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.150 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395798.037696, 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.151 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] VM Resumed (Lifecycle Event)
Oct 02 09:03:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.155 2 INFO nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Took 7.21 seconds to spawn the instance on the hypervisor.
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.156 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.168 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.171 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.209 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.227 2 INFO nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Took 8.16 seconds to build instance.
Oct 02 09:03:18 compute-0 nova_compute[260603]: 2025-10-02 09:03:18.254 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:19 compute-0 ceph-mon[74477]: pgmap v2555: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 09:03:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 09:03:19 compute-0 nova_compute[260603]: 2025-10-02 09:03:19.990 2 DEBUG nova.compute.manager [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:19 compute-0 nova_compute[260603]: 2025-10-02 09:03:19.991 2 DEBUG oslo_concurrency.lockutils [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:19 compute-0 nova_compute[260603]: 2025-10-02 09:03:19.991 2 DEBUG oslo_concurrency.lockutils [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:19 compute-0 nova_compute[260603]: 2025-10-02 09:03:19.991 2 DEBUG oslo_concurrency.lockutils [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:19 compute-0 nova_compute[260603]: 2025-10-02 09:03:19.991 2 DEBUG nova.compute.manager [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:03:19 compute-0 nova_compute[260603]: 2025-10-02 09:03:19.992 2 WARNING nova.compute.manager [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.
Oct 02 09:03:20 compute-0 nova_compute[260603]: 2025-10-02 09:03:20.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:20 compute-0 nova_compute[260603]: 2025-10-02 09:03:20.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:21 compute-0 ceph-mon[74477]: pgmap v2556: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 09:03:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 09:03:22 compute-0 ovn_controller[152344]: 2025-10-02T09:03:22Z|01448|binding|INFO|Releasing lport 88f0a719-bef7-4fa7-ad0c-3658148f5bdf from this chassis (sb_readonly=0)
Oct 02 09:03:22 compute-0 nova_compute[260603]: 2025-10-02 09:03:22.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:22 compute-0 NetworkManager[45129]: <info>  [1759395802.0437] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Oct 02 09:03:22 compute-0 NetworkManager[45129]: <info>  [1759395802.0449] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Oct 02 09:03:22 compute-0 nova_compute[260603]: 2025-10-02 09:03:22.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:22 compute-0 ovn_controller[152344]: 2025-10-02T09:03:22Z|01449|binding|INFO|Releasing lport 88f0a719-bef7-4fa7-ad0c-3658148f5bdf from this chassis (sb_readonly=0)
Oct 02 09:03:22 compute-0 nova_compute[260603]: 2025-10-02 09:03:22.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:03:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482455865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:03:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:03:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482455865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:03:22 compute-0 nova_compute[260603]: 2025-10-02 09:03:22.306 2 DEBUG nova.compute.manager [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:22 compute-0 nova_compute[260603]: 2025-10-02 09:03:22.306 2 DEBUG nova.compute.manager [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:03:22 compute-0 nova_compute[260603]: 2025-10-02 09:03:22.307 2 DEBUG oslo_concurrency.lockutils [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:22 compute-0 nova_compute[260603]: 2025-10-02 09:03:22.307 2 DEBUG oslo_concurrency.lockutils [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:22 compute-0 nova_compute[260603]: 2025-10-02 09:03:22.308 2 DEBUG nova.network.neutron [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:03:23 compute-0 ceph-mon[74477]: pgmap v2557: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 09:03:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1482455865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:03:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1482455865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:03:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:03:23 compute-0 nova_compute[260603]: 2025-10-02 09:03:23.979 2 DEBUG nova.network.neutron [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:03:23 compute-0 nova_compute[260603]: 2025-10-02 09:03:23.980 2 DEBUG nova.network.neutron [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:03:24 compute-0 nova_compute[260603]: 2025-10-02 09:03:24.014 2 DEBUG oslo_concurrency.lockutils [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:03:25 compute-0 ceph-mon[74477]: pgmap v2558: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:03:25 compute-0 nova_compute[260603]: 2025-10-02 09:03:25.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:25 compute-0 nova_compute[260603]: 2025-10-02 09:03:25.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:03:27 compute-0 ceph-mon[74477]: pgmap v2559: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:03:27 compute-0 nova_compute[260603]: 2025-10-02 09:03:27.470 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:27 compute-0 nova_compute[260603]: 2025-10-02 09:03:27.470 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:27 compute-0 nova_compute[260603]: 2025-10-02 09:03:27.496 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:03:27 compute-0 nova_compute[260603]: 2025-10-02 09:03:27.591 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:27 compute-0 nova_compute[260603]: 2025-10-02 09:03:27.591 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:27 compute-0 nova_compute[260603]: 2025-10-02 09:03:27.603 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:03:27 compute-0 nova_compute[260603]: 2025-10-02 09:03:27.603 2 INFO nova.compute.claims [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:03:27 compute-0 nova_compute[260603]: 2025-10-02 09:03:27.800 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:03:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:03:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:03:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:03:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:03:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:03:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:03:28
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:03:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:03:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2377320350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.328 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.336 2 DEBUG nova.compute.provider_tree [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.351 2 DEBUG nova.scheduler.client.report [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.376 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.377 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.426 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.427 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.446 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.470 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:03:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.561 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.563 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.563 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Creating image(s)
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.594 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.624 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.654 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.658 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.730 2 DEBUG nova.policy [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.750 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.751 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.751 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.752 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.774 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:28 compute-0 nova_compute[260603]: 2025-10-02 09:03:28.777 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.051 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:29 compute-0 ceph-mon[74477]: pgmap v2560: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:03:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2377320350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.118 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.220 2 DEBUG nova.objects.instance [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid def7636a-ab83-489d-ba8d-6f3dd1ccc841 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.235 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.236 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Ensure instance console log exists: /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.236 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.237 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.237 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:29 compute-0 ovn_controller[152344]: 2025-10-02T09:03:29Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:95:1a 10.100.0.12
Oct 02 09:03:29 compute-0 ovn_controller[152344]: 2025-10-02T09:03:29Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:95:1a 10.100.0.12
Oct 02 09:03:29 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 02 09:03:29 compute-0 nova_compute[260603]: 2025-10-02 09:03:29.808 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Successfully created port: 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:03:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:03:30 compute-0 nova_compute[260603]: 2025-10-02 09:03:30.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:30 compute-0 nova_compute[260603]: 2025-10-02 09:03:30.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:30 compute-0 nova_compute[260603]: 2025-10-02 09:03:30.979 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Successfully updated port: 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:03:31 compute-0 nova_compute[260603]: 2025-10-02 09:03:31.003 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:31 compute-0 nova_compute[260603]: 2025-10-02 09:03:31.004 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:31 compute-0 nova_compute[260603]: 2025-10-02 09:03:31.004 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:03:31 compute-0 ceph-mon[74477]: pgmap v2561: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:03:31 compute-0 nova_compute[260603]: 2025-10-02 09:03:31.100 2 DEBUG nova.compute.manager [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:31 compute-0 nova_compute[260603]: 2025-10-02 09:03:31.101 2 DEBUG nova.compute.manager [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing instance network info cache due to event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:03:31 compute-0 nova_compute[260603]: 2025-10-02 09:03:31.101 2 DEBUG oslo_concurrency.lockutils [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:31 compute-0 nova_compute[260603]: 2025-10-02 09:03:31.187 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:03:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 67 op/s
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.064 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.088 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.089 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance network_info: |[{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.089 2 DEBUG oslo_concurrency.lockutils [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.090 2 DEBUG nova.network.neutron [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.098 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start _get_guest_xml network_info=[{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.103 2 WARNING nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.110 2 DEBUG nova.virt.libvirt.host [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.111 2 DEBUG nova.virt.libvirt.host [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.121 2 DEBUG nova.virt.libvirt.host [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.121 2 DEBUG nova.virt.libvirt.host [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.122 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.123 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.123 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.124 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.125 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.125 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.125 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.126 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.126 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.127 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.127 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.128 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.133 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:03:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364713645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.651 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.679 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:32 compute-0 nova_compute[260603]: 2025-10-02 09:03:32.684 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:33 compute-0 ceph-mon[74477]: pgmap v2562: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 67 op/s
Oct 02 09:03:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2364713645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:03:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:03:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813651229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:03:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.169 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.172 2 DEBUG nova.virt.libvirt.vif [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-556721783',display_name='tempest-TestNetworkBasicOps-server-556721783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-556721783',id=135,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBWd49oFUTm6XgyrgFAJAvAD9R5S9h35IghxF+WDuWxqO67NuyfvmqcjpF3R4Okql0uPjy7xGWqOKFWo5bhMt5wCOH87LjC+Dpu6giEiY38iIQYyWXpiLlgRndqhZX6/w==',key_name='tempest-TestNetworkBasicOps-141517795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-kkoo640r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:03:28Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=def7636a-ab83-489d-ba8d-6f3dd1ccc841,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.172 2 DEBUG nova.network.os_vif_util [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.174 2 DEBUG nova.network.os_vif_util [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.175 2 DEBUG nova.objects.instance [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid def7636a-ab83-489d-ba8d-6f3dd1ccc841 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.192 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <uuid>def7636a-ab83-489d-ba8d-6f3dd1ccc841</uuid>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <name>instance-00000087</name>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-556721783</nova:name>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:03:32</nova:creationTime>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <nova:port uuid="0d6d454c-ed95-44d0-8bd1-e20589c708d1">
Oct 02 09:03:33 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <system>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <entry name="serial">def7636a-ab83-489d-ba8d-6f3dd1ccc841</entry>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <entry name="uuid">def7636a-ab83-489d-ba8d-6f3dd1ccc841</entry>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </system>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <os>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   </os>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <features>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   </features>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk">
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       </source>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config">
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       </source>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:03:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:ac:f1:a7"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <target dev="tap0d6d454c-ed"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/console.log" append="off"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <video>
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </video>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:03:33 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:03:33 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:03:33 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:03:33 compute-0 nova_compute[260603]: </domain>
Oct 02 09:03:33 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.192 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Preparing to wait for external event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.193 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.193 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.194 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.195 2 DEBUG nova.virt.libvirt.vif [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-556721783',display_name='tempest-TestNetworkBasicOps-server-556721783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-556721783',id=135,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBWd49oFUTm6XgyrgFAJAvAD9R5S9h35IghxF+WDuWxqO67NuyfvmqcjpF3R4Okql0uPjy7xGWqOKFWo5bhMt5wCOH87LjC+Dpu6giEiY38iIQYyWXpiLlgRndqhZX6/w==',key_name='tempest-TestNetworkBasicOps-141517795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-kkoo640r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:03:28Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=def7636a-ab83-489d-ba8d-6f3dd1ccc841,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.195 2 DEBUG nova.network.os_vif_util [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.196 2 DEBUG nova.network.os_vif_util [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.197 2 DEBUG os_vif [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d6d454c-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d6d454c-ed, col_values=(('external_ids', {'iface-id': '0d6d454c-ed95-44d0-8bd1-e20589c708d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:f1:a7', 'vm-uuid': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:33 compute-0 NetworkManager[45129]: <info>  [1759395813.2111] manager: (tap0d6d454c-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.221 2 INFO os_vif [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed')
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.289 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.290 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.290 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:ac:f1:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.291 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Using config drive
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.320 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.347 2 DEBUG nova.network.neutron [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updated VIF entry in instance network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.348 2 DEBUG nova.network.neutron [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.388 2 DEBUG oslo_concurrency.lockutils [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.742 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Creating config drive at /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.748 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3lrv261 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.916 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3lrv261" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.956 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:03:33 compute-0 nova_compute[260603]: 2025-10-02 09:03:33.961 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1813651229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.167 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.169 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Deleting local config drive /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config because it was imported into RBD.
Oct 02 09:03:34 compute-0 kernel: tap0d6d454c-ed: entered promiscuous mode
Oct 02 09:03:34 compute-0 NetworkManager[45129]: <info>  [1759395814.2414] manager: (tap0d6d454c-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/586)
Oct 02 09:03:34 compute-0 ovn_controller[152344]: 2025-10-02T09:03:34Z|01450|binding|INFO|Claiming lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 for this chassis.
Oct 02 09:03:34 compute-0 ovn_controller[152344]: 2025-10-02T09:03:34Z|01451|binding|INFO|0d6d454c-ed95-44d0-8bd1-e20589c708d1: Claiming fa:16:3e:ac:f1:a7 10.100.0.5
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.253 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:f1:a7 10.100.0.5'], port_security=['fa:16:3e:ac:f1:a7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b93b7300-a114-4509-a5d5-f258c80e5fdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d6d454c-ed95-44d0-8bd1-e20589c708d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.254 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 in datapath 2c605495-2750-431a-94c8-fc1511dea80b bound to our chassis
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.255 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b
Oct 02 09:03:34 compute-0 ovn_controller[152344]: 2025-10-02T09:03:34Z|01452|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 ovn-installed in OVS
Oct 02 09:03:34 compute-0 ovn_controller[152344]: 2025-10-02T09:03:34Z|01453|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 up in Southbound
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.279 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d11bce-fac2-450a-bde5-ca62f07f803c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:34 compute-0 systemd-machined[214636]: New machine qemu-169-instance-00000087.
Oct 02 09:03:34 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000087.
Oct 02 09:03:34 compute-0 systemd-udevd[407729]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.315 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c943c5e2-d2e2-4aec-bc37-a1d50d416877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.320 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1b12ba88-1313-4b08-8db8-454caeda75b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:34 compute-0 NetworkManager[45129]: <info>  [1759395814.3319] device (tap0d6d454c-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:03:34 compute-0 NetworkManager[45129]: <info>  [1759395814.3337] device (tap0d6d454c-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.357 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[706a1e86-d3a4-4f84-955f-a5b3cda135e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.385 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c48e5c4-4490-4d80-8a6e-fd06530a5a8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 36735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407738, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.409 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d94b364e-12e9-4ace-b014-320cd499ff93]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407740, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667611, 'tstamp': 667611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407740, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.411 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.416 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.417 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.418 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.419 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.497 2 DEBUG nova.compute.manager [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.497 2 DEBUG oslo_concurrency.lockutils [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.498 2 DEBUG oslo_concurrency.lockutils [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.498 2 DEBUG oslo_concurrency.lockutils [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:34 compute-0 nova_compute[260603]: 2025-10-02 09:03:34.499 2 DEBUG nova.compute.manager [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Processing event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.842 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:35 compute-0 ceph-mon[74477]: pgmap v2563: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.156 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395815.155862, def7636a-ab83-489d-ba8d-6f3dd1ccc841 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.156 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] VM Started (Lifecycle Event)
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.158 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.161 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.164 2 INFO nova.virt.libvirt.driver [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance spawned successfully.
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.164 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.181 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.186 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.189 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.189 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.190 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.190 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.190 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.191 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.220 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.220 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395815.1560867, def7636a-ab83-489d-ba8d-6f3dd1ccc841 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.220 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] VM Paused (Lifecycle Event)
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.249 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.251 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395815.160564, def7636a-ab83-489d-ba8d-6f3dd1ccc841 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.251 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] VM Resumed (Lifecycle Event)
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.262 2 INFO nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Took 6.70 seconds to spawn the instance on the hypervisor.
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.262 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.274 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.276 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.303 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.324 2 INFO nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Took 7.77 seconds to build instance.
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.339 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:35 compute-0 nova_compute[260603]: 2025-10-02 09:03:35.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 09:03:36 compute-0 nova_compute[260603]: 2025-10-02 09:03:36.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:36 compute-0 nova_compute[260603]: 2025-10-02 09:03:36.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:03:36 compute-0 nova_compute[260603]: 2025-10-02 09:03:36.558 2 DEBUG nova.compute.manager [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:36 compute-0 nova_compute[260603]: 2025-10-02 09:03:36.558 2 DEBUG oslo_concurrency.lockutils [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:36 compute-0 nova_compute[260603]: 2025-10-02 09:03:36.559 2 DEBUG oslo_concurrency.lockutils [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:36 compute-0 nova_compute[260603]: 2025-10-02 09:03:36.559 2 DEBUG oslo_concurrency.lockutils [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:36 compute-0 nova_compute[260603]: 2025-10-02 09:03:36.559 2 DEBUG nova.compute.manager [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:03:36 compute-0 nova_compute[260603]: 2025-10-02 09:03:36.559 2 WARNING nova.compute.manager [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state active and task_state None.
Oct 02 09:03:37 compute-0 podman[407785]: 2025-10-02 09:03:37.028457191 +0000 UTC m=+0.078894563 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 02 09:03:37 compute-0 podman[407784]: 2025-10-02 09:03:37.104185954 +0000 UTC m=+0.151258031 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:03:37 compute-0 ceph-mon[74477]: pgmap v2564: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 09:03:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 968 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Oct 02 09:03:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:38 compute-0 nova_compute[260603]: 2025-10-02 09:03:38.199 2 DEBUG nova.compute.manager [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:38 compute-0 nova_compute[260603]: 2025-10-02 09:03:38.199 2 DEBUG nova.compute.manager [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing instance network info cache due to event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:03:38 compute-0 nova_compute[260603]: 2025-10-02 09:03:38.199 2 DEBUG oslo_concurrency.lockutils [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:38 compute-0 nova_compute[260603]: 2025-10-02 09:03:38.200 2 DEBUG oslo_concurrency.lockutils [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:38 compute-0 nova_compute[260603]: 2025-10-02 09:03:38.200 2 DEBUG nova.network.neutron [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:03:38 compute-0 nova_compute[260603]: 2025-10-02 09:03:38.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011072414045712052 of space, bias 1.0, pg target 0.33217242137136155 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:03:39 compute-0 ceph-mon[74477]: pgmap v2565: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 968 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Oct 02 09:03:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 02 09:03:40 compute-0 nova_compute[260603]: 2025-10-02 09:03:40.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:41 compute-0 ceph-mon[74477]: pgmap v2566: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 02 09:03:41 compute-0 nova_compute[260603]: 2025-10-02 09:03:41.741 2 DEBUG nova.network.neutron [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updated VIF entry in instance network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:03:41 compute-0 nova_compute[260603]: 2025-10-02 09:03:41.742 2 DEBUG nova.network.neutron [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:03:41 compute-0 nova_compute[260603]: 2025-10-02 09:03:41.770 2 DEBUG oslo_concurrency.lockutils [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:03:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 02 09:03:42 compute-0 podman[407829]: 2025-10-02 09:03:42.019729489 +0000 UTC m=+0.086587813 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:03:42 compute-0 podman[407830]: 2025-10-02 09:03:42.049105001 +0000 UTC m=+0.102960520 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:03:43 compute-0 ceph-mon[74477]: pgmap v2567: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 02 09:03:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:43 compute-0 nova_compute[260603]: 2025-10-02 09:03:43.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 02 09:03:45 compute-0 ceph-mon[74477]: pgmap v2568: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 02 09:03:45 compute-0 nova_compute[260603]: 2025-10-02 09:03:45.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Oct 02 09:03:47 compute-0 ceph-mon[74477]: pgmap v2569: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Oct 02 09:03:47 compute-0 ovn_controller[152344]: 2025-10-02T09:03:47Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:f1:a7 10.100.0.5
Oct 02 09:03:47 compute-0 ovn_controller[152344]: 2025-10-02T09:03:47Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:f1:a7 10.100.0.5
Oct 02 09:03:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 183 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 124 op/s
Oct 02 09:03:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:48 compute-0 nova_compute[260603]: 2025-10-02 09:03:48.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:48 compute-0 sudo[407871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:48 compute-0 sudo[407871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:48 compute-0 sudo[407871]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:48 compute-0 sudo[407896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:03:48 compute-0 sudo[407896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:48 compute-0 sudo[407896]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:48 compute-0 sudo[407921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:48 compute-0 sudo[407921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:48 compute-0 sudo[407921]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:48 compute-0 sudo[407946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:03:48 compute-0 sudo[407946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:49 compute-0 ceph-mon[74477]: pgmap v2570: 305 pgs: 305 active+clean; 183 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 124 op/s
Oct 02 09:03:49 compute-0 sudo[407946]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:03:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:03:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:03:49 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:03:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:03:49 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:03:49 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d335dfbe-1458-4455-b080-b4a14a577106 does not exist
Oct 02 09:03:49 compute-0 nova_compute[260603]: 2025-10-02 09:03:49.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:49 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4298d917-b6ba-410f-bc8a-4e8aa2d6356b does not exist
Oct 02 09:03:49 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8dabad9f-79d7-45e4-a3d1-84eb324876b9 does not exist
Oct 02 09:03:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:03:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:03:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:03:49 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:03:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:03:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:03:49 compute-0 sudo[408002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:49 compute-0 sudo[408002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:49 compute-0 sudo[408002]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:49 compute-0 sudo[408027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:03:49 compute-0 sudo[408027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:49 compute-0 sudo[408027]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:49 compute-0 sudo[408052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:49 compute-0 sudo[408052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:49 compute-0 sudo[408052]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:49 compute-0 sudo[408077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:03:49 compute-0 sudo[408077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 197 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 02 09:03:50 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:03:50 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:03:50 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:03:50 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:03:50 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:03:50 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:03:50 compute-0 podman[408145]: 2025-10-02 09:03:50.255922285 +0000 UTC m=+0.073037081 container create 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:03:50 compute-0 systemd[1]: Started libpod-conmon-9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0.scope.
Oct 02 09:03:50 compute-0 podman[408145]: 2025-10-02 09:03:50.225421617 +0000 UTC m=+0.042536493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:03:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:03:50 compute-0 podman[408145]: 2025-10-02 09:03:50.362082943 +0000 UTC m=+0.179197819 container init 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 09:03:50 compute-0 podman[408145]: 2025-10-02 09:03:50.369935837 +0000 UTC m=+0.187050663 container start 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:03:50 compute-0 podman[408145]: 2025-10-02 09:03:50.373351624 +0000 UTC m=+0.190466510 container attach 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 09:03:50 compute-0 agitated_varahamihira[408161]: 167 167
Oct 02 09:03:50 compute-0 systemd[1]: libpod-9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0.scope: Deactivated successfully.
Oct 02 09:03:50 compute-0 conmon[408161]: conmon 9782357870df519fe52d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0.scope/container/memory.events
Oct 02 09:03:50 compute-0 podman[408145]: 2025-10-02 09:03:50.379740223 +0000 UTC m=+0.196855019 container died 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0140082edceed8d6cc8881be0a910587a627da7119c12a3173fbb3c63e22012-merged.mount: Deactivated successfully.
Oct 02 09:03:50 compute-0 podman[408145]: 2025-10-02 09:03:50.43052047 +0000 UTC m=+0.247635296 container remove 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:03:50 compute-0 systemd[1]: libpod-conmon-9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0.scope: Deactivated successfully.
Oct 02 09:03:50 compute-0 nova_compute[260603]: 2025-10-02 09:03:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:50 compute-0 nova_compute[260603]: 2025-10-02 09:03:50.525 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:03:50 compute-0 nova_compute[260603]: 2025-10-02 09:03:50.525 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:03:50 compute-0 podman[408185]: 2025-10-02 09:03:50.689954922 +0000 UTC m=+0.055779964 container create db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 09:03:50 compute-0 nova_compute[260603]: 2025-10-02 09:03:50.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:50 compute-0 systemd[1]: Started libpod-conmon-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope.
Oct 02 09:03:50 compute-0 podman[408185]: 2025-10-02 09:03:50.663325514 +0000 UTC m=+0.029150636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:03:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:50 compute-0 podman[408185]: 2025-10-02 09:03:50.818298921 +0000 UTC m=+0.184123983 container init db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:03:50 compute-0 podman[408185]: 2025-10-02 09:03:50.82470112 +0000 UTC m=+0.190526142 container start db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:03:50 compute-0 podman[408185]: 2025-10-02 09:03:50.82828034 +0000 UTC m=+0.194105382 container attach db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:03:51 compute-0 ceph-mon[74477]: pgmap v2571: 305 pgs: 305 active+clean; 197 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 02 09:03:51 compute-0 nova_compute[260603]: 2025-10-02 09:03:51.730 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:51 compute-0 nova_compute[260603]: 2025-10-02 09:03:51.731 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:51 compute-0 nova_compute[260603]: 2025-10-02 09:03:51.731 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:03:51 compute-0 nova_compute[260603]: 2025-10-02 09:03:51.731 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:03:51 compute-0 loving_sanderson[408202]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:03:51 compute-0 loving_sanderson[408202]: --> relative data size: 1.0
Oct 02 09:03:51 compute-0 loving_sanderson[408202]: --> All data devices are unavailable
Oct 02 09:03:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 197 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:03:51 compute-0 systemd[1]: libpod-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope: Deactivated successfully.
Oct 02 09:03:51 compute-0 systemd[1]: libpod-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope: Consumed 1.051s CPU time.
Oct 02 09:03:51 compute-0 conmon[408202]: conmon db8ffdf647b7b7b66029 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope/container/memory.events
Oct 02 09:03:51 compute-0 podman[408185]: 2025-10-02 09:03:51.943325592 +0000 UTC m=+1.309150604 container died db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 09:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda-merged.mount: Deactivated successfully.
Oct 02 09:03:52 compute-0 podman[408185]: 2025-10-02 09:03:52.005878506 +0000 UTC m=+1.371703528 container remove db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:03:52 compute-0 systemd[1]: libpod-conmon-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope: Deactivated successfully.
Oct 02 09:03:52 compute-0 sudo[408077]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:52 compute-0 sudo[408245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:52 compute-0 sudo[408245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:52 compute-0 sudo[408245]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:52 compute-0 sudo[408270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:03:52 compute-0 sudo[408270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:52 compute-0 sudo[408270]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:52 compute-0 sudo[408295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:52 compute-0 sudo[408295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:52 compute-0 sudo[408295]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:52 compute-0 sudo[408320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:03:52 compute-0 sudo[408320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:52 compute-0 podman[408386]: 2025-10-02 09:03:52.746503521 +0000 UTC m=+0.051154571 container create a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:03:52 compute-0 systemd[1]: Started libpod-conmon-a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d.scope.
Oct 02 09:03:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:03:52 compute-0 podman[408386]: 2025-10-02 09:03:52.821282234 +0000 UTC m=+0.125933334 container init a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:03:52 compute-0 podman[408386]: 2025-10-02 09:03:52.731513686 +0000 UTC m=+0.036164756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:03:52 compute-0 podman[408386]: 2025-10-02 09:03:52.827685334 +0000 UTC m=+0.132336394 container start a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:03:52 compute-0 podman[408386]: 2025-10-02 09:03:52.831049639 +0000 UTC m=+0.135700699 container attach a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:03:52 compute-0 cranky_agnesi[408402]: 167 167
Oct 02 09:03:52 compute-0 systemd[1]: libpod-a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d.scope: Deactivated successfully.
Oct 02 09:03:52 compute-0 podman[408386]: 2025-10-02 09:03:52.832862565 +0000 UTC m=+0.137513605 container died a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca265ae51332b021f1effa7d92e4798d73a78a27c8f03ab0ae3cf94580c8d917-merged.mount: Deactivated successfully.
Oct 02 09:03:52 compute-0 podman[408386]: 2025-10-02 09:03:52.870966209 +0000 UTC m=+0.175617259 container remove a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 02 09:03:52 compute-0 systemd[1]: libpod-conmon-a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d.scope: Deactivated successfully.
Oct 02 09:03:53 compute-0 podman[408426]: 2025-10-02 09:03:53.121928078 +0000 UTC m=+0.055807636 container create 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 09:03:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:53 compute-0 systemd[1]: Started libpod-conmon-45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726.scope.
Oct 02 09:03:53 compute-0 ceph-mon[74477]: pgmap v2572: 305 pgs: 305 active+clean; 197 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:03:53 compute-0 podman[408426]: 2025-10-02 09:03:53.10397297 +0000 UTC m=+0.037852548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:03:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:53 compute-0 podman[408426]: 2025-10-02 09:03:53.245112636 +0000 UTC m=+0.178992244 container init 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:03:53 compute-0 nova_compute[260603]: 2025-10-02 09:03:53.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:53 compute-0 podman[408426]: 2025-10-02 09:03:53.309673052 +0000 UTC m=+0.243552610 container start 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:03:53 compute-0 podman[408426]: 2025-10-02 09:03:53.313385737 +0000 UTC m=+0.247265395 container attach 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:03:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:03:53 compute-0 nova_compute[260603]: 2025-10-02 09:03:53.998 2 INFO nova.compute.manager [None req-76439550-94e3-4153-b92a-8620b5da26f3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Get console output
Oct 02 09:03:54 compute-0 nova_compute[260603]: 2025-10-02 09:03:54.007 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 09:03:54 compute-0 magical_feynman[408442]: {
Oct 02 09:03:54 compute-0 magical_feynman[408442]:     "0": [
Oct 02 09:03:54 compute-0 magical_feynman[408442]:         {
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "devices": [
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "/dev/loop3"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             ],
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_name": "ceph_lv0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_size": "21470642176",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "name": "ceph_lv0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "tags": {
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cluster_name": "ceph",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.crush_device_class": "",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.encrypted": "0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osd_id": "0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.type": "block",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.vdo": "0"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             },
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "type": "block",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "vg_name": "ceph_vg0"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:         }
Oct 02 09:03:54 compute-0 magical_feynman[408442]:     ],
Oct 02 09:03:54 compute-0 magical_feynman[408442]:     "1": [
Oct 02 09:03:54 compute-0 magical_feynman[408442]:         {
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "devices": [
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "/dev/loop4"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             ],
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_name": "ceph_lv1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_size": "21470642176",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "name": "ceph_lv1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "tags": {
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cluster_name": "ceph",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.crush_device_class": "",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.encrypted": "0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osd_id": "1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.type": "block",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.vdo": "0"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             },
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "type": "block",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "vg_name": "ceph_vg1"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:         }
Oct 02 09:03:54 compute-0 magical_feynman[408442]:     ],
Oct 02 09:03:54 compute-0 magical_feynman[408442]:     "2": [
Oct 02 09:03:54 compute-0 magical_feynman[408442]:         {
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "devices": [
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "/dev/loop5"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             ],
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_name": "ceph_lv2",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_size": "21470642176",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "name": "ceph_lv2",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "tags": {
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.cluster_name": "ceph",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.crush_device_class": "",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.encrypted": "0",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osd_id": "2",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.type": "block",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:                 "ceph.vdo": "0"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             },
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "type": "block",
Oct 02 09:03:54 compute-0 magical_feynman[408442]:             "vg_name": "ceph_vg2"
Oct 02 09:03:54 compute-0 magical_feynman[408442]:         }
Oct 02 09:03:54 compute-0 magical_feynman[408442]:     ]
Oct 02 09:03:54 compute-0 magical_feynman[408442]: }
Oct 02 09:03:54 compute-0 systemd[1]: libpod-45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726.scope: Deactivated successfully.
Oct 02 09:03:54 compute-0 podman[408426]: 2025-10-02 09:03:54.087963168 +0000 UTC m=+1.021842766 container died 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93-merged.mount: Deactivated successfully.
Oct 02 09:03:54 compute-0 podman[408426]: 2025-10-02 09:03:54.150606494 +0000 UTC m=+1.084486062 container remove 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 09:03:54 compute-0 systemd[1]: libpod-conmon-45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726.scope: Deactivated successfully.
Oct 02 09:03:54 compute-0 sudo[408320]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:54 compute-0 sudo[408465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:54 compute-0 sudo[408465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:54 compute-0 sudo[408465]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:54 compute-0 sudo[408490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:03:54 compute-0 sudo[408490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:54 compute-0 sudo[408490]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:54 compute-0 sudo[408515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:54 compute-0 sudo[408515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:54 compute-0 sudo[408515]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:54 compute-0 sudo[408540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:03:54 compute-0 sudo[408540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:54 compute-0 nova_compute[260603]: 2025-10-02 09:03:54.651 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:03:54 compute-0 nova_compute[260603]: 2025-10-02 09:03:54.673 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:03:54 compute-0 nova_compute[260603]: 2025-10-02 09:03:54.673 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:03:54 compute-0 nova_compute[260603]: 2025-10-02 09:03:54.673 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:54 compute-0 podman[408605]: 2025-10-02 09:03:54.799442858 +0000 UTC m=+0.045608938 container create 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 09:03:54 compute-0 systemd[1]: Started libpod-conmon-66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d.scope.
Oct 02 09:03:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:03:54 compute-0 podman[408605]: 2025-10-02 09:03:54.781324144 +0000 UTC m=+0.027490284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:03:54 compute-0 podman[408605]: 2025-10-02 09:03:54.884119349 +0000 UTC m=+0.130285459 container init 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:03:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:54.888 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:03:54 compute-0 nova_compute[260603]: 2025-10-02 09:03:54.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:54.890 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:03:54 compute-0 podman[408605]: 2025-10-02 09:03:54.892359135 +0000 UTC m=+0.138525215 container start 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:03:54 compute-0 podman[408605]: 2025-10-02 09:03:54.896308008 +0000 UTC m=+0.142474128 container attach 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:03:54 compute-0 stoic_blackwell[408621]: 167 167
Oct 02 09:03:54 compute-0 systemd[1]: libpod-66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d.scope: Deactivated successfully.
Oct 02 09:03:54 compute-0 conmon[408621]: conmon 66a7d05d08837afd4642 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d.scope/container/memory.events
Oct 02 09:03:54 compute-0 podman[408605]: 2025-10-02 09:03:54.900386184 +0000 UTC m=+0.146552294 container died 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 09:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4c462ff857141b74263b060707af1c6f8a9462bfae72cf8d6d3ee0c53c5babf-merged.mount: Deactivated successfully.
Oct 02 09:03:54 compute-0 podman[408605]: 2025-10-02 09:03:54.946439466 +0000 UTC m=+0.192605576 container remove 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:03:54 compute-0 systemd[1]: libpod-conmon-66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d.scope: Deactivated successfully.
Oct 02 09:03:55 compute-0 nova_compute[260603]: 2025-10-02 09:03:55.062 2 DEBUG nova.compute.manager [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:55 compute-0 nova_compute[260603]: 2025-10-02 09:03:55.064 2 DEBUG nova.compute.manager [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:03:55 compute-0 nova_compute[260603]: 2025-10-02 09:03:55.065 2 DEBUG oslo_concurrency.lockutils [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:55 compute-0 nova_compute[260603]: 2025-10-02 09:03:55.065 2 DEBUG oslo_concurrency.lockutils [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:55 compute-0 nova_compute[260603]: 2025-10-02 09:03:55.067 2 DEBUG nova.network.neutron [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:03:55 compute-0 podman[408645]: 2025-10-02 09:03:55.175110201 +0000 UTC m=+0.049394255 container create b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:03:55 compute-0 ceph-mon[74477]: pgmap v2573: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:03:55 compute-0 systemd[1]: Started libpod-conmon-b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324.scope.
Oct 02 09:03:55 compute-0 podman[408645]: 2025-10-02 09:03:55.151063035 +0000 UTC m=+0.025347099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:03:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:03:55 compute-0 podman[408645]: 2025-10-02 09:03:55.279150685 +0000 UTC m=+0.153434709 container init b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:03:55 compute-0 podman[408645]: 2025-10-02 09:03:55.299824508 +0000 UTC m=+0.174108532 container start b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:03:55 compute-0 podman[408645]: 2025-10-02 09:03:55.302865792 +0000 UTC m=+0.177149816 container attach b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 09:03:55 compute-0 nova_compute[260603]: 2025-10-02 09:03:55.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:03:56 compute-0 nova_compute[260603]: 2025-10-02 09:03:56.135 2 INFO nova.compute.manager [None req-37e81269-4f54-40c4-a885-c1bf2a5d477d ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Get console output
Oct 02 09:03:56 compute-0 nova_compute[260603]: 2025-10-02 09:03:56.144 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 09:03:56 compute-0 focused_volhard[408662]: {
Oct 02 09:03:56 compute-0 focused_volhard[408662]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "osd_id": 2,
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "type": "bluestore"
Oct 02 09:03:56 compute-0 focused_volhard[408662]:     },
Oct 02 09:03:56 compute-0 focused_volhard[408662]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "osd_id": 1,
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "type": "bluestore"
Oct 02 09:03:56 compute-0 focused_volhard[408662]:     },
Oct 02 09:03:56 compute-0 focused_volhard[408662]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "osd_id": 0,
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:03:56 compute-0 focused_volhard[408662]:         "type": "bluestore"
Oct 02 09:03:56 compute-0 focused_volhard[408662]:     }
Oct 02 09:03:56 compute-0 focused_volhard[408662]: }
Oct 02 09:03:56 compute-0 systemd[1]: libpod-b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324.scope: Deactivated successfully.
Oct 02 09:03:56 compute-0 systemd[1]: libpod-b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324.scope: Consumed 1.039s CPU time.
Oct 02 09:03:56 compute-0 podman[408645]: 2025-10-02 09:03:56.335951396 +0000 UTC m=+1.210235460 container died b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607-merged.mount: Deactivated successfully.
Oct 02 09:03:56 compute-0 podman[408645]: 2025-10-02 09:03:56.399914743 +0000 UTC m=+1.274198787 container remove b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 09:03:56 compute-0 systemd[1]: libpod-conmon-b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324.scope: Deactivated successfully.
Oct 02 09:03:56 compute-0 sudo[408540]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:03:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:03:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.461669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836461847, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 682, "num_deletes": 251, "total_data_size": 791559, "memory_usage": 803736, "flush_reason": "Manual Compaction"}
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836468427, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 783850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53712, "largest_seqno": 54393, "table_properties": {"data_size": 780282, "index_size": 1412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8201, "raw_average_key_size": 19, "raw_value_size": 773124, "raw_average_value_size": 1827, "num_data_blocks": 64, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395783, "oldest_key_time": 1759395783, "file_creation_time": 1759395836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 6687 microseconds, and 3100 cpu microseconds.
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.468494) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 783850 bytes OK
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.468519) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.470622) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.470637) EVENT_LOG_v1 {"time_micros": 1759395836470632, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.470654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 787974, prev total WAL file size 808561, number of live WAL files 2.
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.473206) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(765KB)], [125(8994KB)]
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836473238, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 9994335, "oldest_snapshot_seqno": -1}
Oct 02 09:03:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:03:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a02c48ef-4d18-4d3d-9710-958a51fb301a does not exist
Oct 02 09:03:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 925549ba-77b0-4f5c-bb79-6a4989a7c5a5 does not exist
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7287 keys, 8318868 bytes, temperature: kUnknown
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836517889, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8318868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8272997, "index_size": 26558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 190797, "raw_average_key_size": 26, "raw_value_size": 8145586, "raw_average_value_size": 1117, "num_data_blocks": 1028, "num_entries": 7287, "num_filter_entries": 7287, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:03:56 compute-0 nova_compute[260603]: 2025-10-02 09:03:56.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.518179) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8318868 bytes
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.520552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.2 rd, 185.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.8 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(23.4) write-amplify(10.6) OK, records in: 7800, records dropped: 513 output_compression: NoCompression
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.520573) EVENT_LOG_v1 {"time_micros": 1759395836520564, "job": 76, "event": "compaction_finished", "compaction_time_micros": 44772, "compaction_time_cpu_micros": 24613, "output_level": 6, "num_output_files": 1, "total_output_size": 8318868, "num_input_records": 7800, "num_output_records": 7287, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836520849, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836522844, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.473095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:03:56 compute-0 sudo[408709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:03:56 compute-0 sudo[408709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:56 compute-0 sudo[408709]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:56 compute-0 sudo[408734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:03:56 compute-0 sudo[408734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:03:56 compute-0 sudo[408734]: pam_unix(sudo:session): session closed for user root
Oct 02 09:03:56 compute-0 nova_compute[260603]: 2025-10-02 09:03:56.735 2 DEBUG nova.network.neutron [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:03:56 compute-0 nova_compute[260603]: 2025-10-02 09:03:56.736 2 DEBUG nova.network.neutron [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:03:56 compute-0 nova_compute[260603]: 2025-10-02 09:03:56.756 2 DEBUG oslo_concurrency.lockutils [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.150 2 DEBUG nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.150 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.151 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.151 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.152 2 DEBUG nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.152 2 WARNING nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.153 2 DEBUG nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.153 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.154 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.154 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.155 2 DEBUG nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:03:57 compute-0 nova_compute[260603]: 2025-10-02 09:03:57.155 2 WARNING nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.
Oct 02 09:03:57 compute-0 ceph-mon[74477]: pgmap v2574: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:03:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:03:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:03:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:03:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:03:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:03:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:03:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:03:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:03:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.064 2 DEBUG nova.compute.manager [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.065 2 DEBUG nova.compute.manager [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.065 2 DEBUG oslo_concurrency.lockutils [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.066 2 DEBUG oslo_concurrency.lockutils [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.066 2 DEBUG nova.network.neutron [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:03:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.279 2 INFO nova.compute.manager [None req-0f53f6e5-4200-43b5-a718-32d2783e3aa1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Get console output
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.287 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:03:58 compute-0 nova_compute[260603]: 2025-10-02 09:03:58.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:03:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051790108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.029 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.131 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.132 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.135 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.135 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:03:59 compute-0 ceph-mon[74477]: pgmap v2575: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:03:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2051790108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.256 2 DEBUG nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.257 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.257 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.257 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.258 2 DEBUG nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.258 2 WARNING nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.258 2 DEBUG nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.258 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.259 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.259 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.259 2 DEBUG nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.259 2 WARNING nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.337 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.338 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3170MB free_disk=59.897216796875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.338 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.339 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.439 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.440 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.440 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.440 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.441 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.442 2 INFO nova.compute.manager [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Terminating instance
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.443 2 DEBUG nova.compute.manager [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.454 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.455 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance def7636a-ab83-489d-ba8d-6f3dd1ccc841 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.455 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.455 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:03:59 compute-0 kernel: tap0d6d454c-ed (unregistering): left promiscuous mode
Oct 02 09:03:59 compute-0 NetworkManager[45129]: <info>  [1759395839.5046] device (tap0d6d454c-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01454|binding|INFO|Releasing lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 from this chassis (sb_readonly=0)
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01455|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 down in Southbound
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01456|binding|INFO|Removing iface tap0d6d454c-ed ovn-installed in OVS
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.531 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:f1:a7 10.100.0.5'], port_security=['fa:16:3e:ac:f1:a7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b93b7300-a114-4509-a5d5-f258c80e5fdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d6d454c-ed95-44d0-8bd1-e20589c708d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.533 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 in datapath 2c605495-2750-431a-94c8-fc1511dea80b unbound from our chassis
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.535 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 02 09:03:59 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Consumed 13.109s CPU time.
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.560 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[246ab13d-fbbb-4270-a2c6-339cde2335cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 systemd-machined[214636]: Machine qemu-169-instance-00000087 terminated.
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.597 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.603 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[937b64d9-14e9-4f34-941c-3a2136c4ec5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.608 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5216203e-3247-419e-94cf-e5e271a2b890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.646 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0054107-ac9e-44a8-834f-35c6eb72e3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 kernel: tap0d6d454c-ed: entered promiscuous mode
Oct 02 09:03:59 compute-0 NetworkManager[45129]: <info>  [1759395839.6650] manager: (tap0d6d454c-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01457|binding|INFO|Claiming lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 for this chassis.
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01458|binding|INFO|0d6d454c-ed95-44d0-8bd1-e20589c708d1: Claiming fa:16:3e:ac:f1:a7 10.100.0.5
Oct 02 09:03:59 compute-0 kernel: tap0d6d454c-ed (unregistering): left promiscuous mode
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.674 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:f1:a7 10.100.0.5'], port_security=['fa:16:3e:ac:f1:a7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b93b7300-a114-4509-a5d5-f258c80e5fdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d6d454c-ed95-44d0-8bd1-e20589c708d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.675 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83eb457f-abf8-4e98-8a1e-a237ceb908da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 36735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408796, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.693 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5cdf3f-47e1-4890-ba51-a7675fa41617]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408799, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667611, 'tstamp': 667611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408799, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.695 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01459|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 ovn-installed in OVS
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01460|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 up in Southbound
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01461|binding|INFO|Releasing lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 from this chassis (sb_readonly=1)
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01462|if_status|INFO|Dropped 2 log messages in last 576 seconds (most recently, 576 seconds ago) due to excessive rate
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01463|if_status|INFO|Not setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 down as sb is readonly
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01464|binding|INFO|Removing iface tap0d6d454c-ed ovn-installed in OVS
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01465|binding|INFO|Releasing lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 from this chassis (sb_readonly=0)
Oct 02 09:03:59 compute-0 ovn_controller[152344]: 2025-10-02T09:03:59Z|01466|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 down in Southbound
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.713 2 INFO nova.virt.libvirt.driver [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance destroyed successfully.
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.714 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:f1:a7 10.100.0.5'], port_security=['fa:16:3e:ac:f1:a7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b93b7300-a114-4509-a5d5-f258c80e5fdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d6d454c-ed95-44d0-8bd1-e20589c708d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.714 2 DEBUG nova.objects.instance [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid def7636a-ab83-489d-ba8d-6f3dd1ccc841 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.725 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.726 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.726 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.727 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.728 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 in datapath 2c605495-2750-431a-94c8-fc1511dea80b unbound from our chassis
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.729 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.731 2 DEBUG nova.virt.libvirt.vif [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-556721783',display_name='tempest-TestNetworkBasicOps-server-556721783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-556721783',id=135,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBWd49oFUTm6XgyrgFAJAvAD9R5S9h35IghxF+WDuWxqO67NuyfvmqcjpF3R4Okql0uPjy7xGWqOKFWo5bhMt5wCOH87LjC+Dpu6giEiY38iIQYyWXpiLlgRndqhZX6/w==',key_name='tempest-TestNetworkBasicOps-141517795',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:03:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-kkoo640r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:03:35Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=def7636a-ab83-489d-ba8d-6f3dd1ccc841,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.732 2 DEBUG nova.network.os_vif_util [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.733 2 DEBUG nova.network.os_vif_util [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.733 2 DEBUG os_vif [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d6d454c-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.747 2 INFO os_vif [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed')
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.748 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2caa98c-64be-440d-b541-e234bb0796fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.786 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[49405ad4-f7cd-4856-b42d-2c8982f01946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.790 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3aefb1bb-a76b-401e-9115-e43b70fd720e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.820 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ca944b13-aa55-4fc2-a85a-4bb4640a5af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.848 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc3cb0e-fc27-4359-af5a-47f2e306e147]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 36735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408844, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.868 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[721b79e8-1666-4d5d-9e0d-2dc53dc42575]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408845, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667611, 'tstamp': 667611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408845, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.870 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.872 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.873 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.873 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.873 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.874 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 in datapath 2c605495-2750-431a-94c8-fc1511dea80b unbound from our chassis
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.875 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.890 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b97d7350-03c2-4205-bfb1-6bb0c2c99999]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 730 KiB/s wr, 14 op/s
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.928 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4513f1-ff45-4a57-b083-cf04ec11c7e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.931 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e3688d-c619-4f45-8109-78b7dcd22721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.959 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[484c1158-5b9b-4611-8d89-377463f75e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.975 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c36adf33-7a71-404c-a54c-5fdcc914239a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 36735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408852, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.990 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4ebb8c-a1af-4d12-b30c-e7c8c79c373d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408853, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667611, 'tstamp': 667611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408853, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.991 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 nova_compute[260603]: 2025-10-02 09:03:59.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.995 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:03:59 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.995 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:04:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:04:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3658447094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.059 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.065 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.093 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.109 2 INFO nova.virt.libvirt.driver [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Deleting instance files /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841_del
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.110 2 INFO nova.virt.libvirt.driver [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Deletion of /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841_del complete
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.158 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.159 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.180 2 INFO nova.compute.manager [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.181 2 DEBUG oslo.service.loopingcall [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.181 2 DEBUG nova.compute.manager [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.181 2 DEBUG nova.network.neutron [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.202 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.203 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.203 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.204 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.204 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.204 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.205 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.205 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.205 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.205 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.206 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.206 2 WARNING nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state active and task_state deleting.
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.206 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.207 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.207 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.207 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.208 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.208 2 WARNING nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state active and task_state deleting.
Oct 02 09:04:00 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3658447094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.303 2 DEBUG nova.network.neutron [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.304 2 DEBUG nova.network.neutron [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.320 2 DEBUG oslo_concurrency.lockutils [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.950 2 DEBUG nova.network.neutron [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:00 compute-0 nova_compute[260603]: 2025-10-02 09:04:00.969 2 INFO nova.compute.manager [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Took 0.79 seconds to deallocate network for instance.
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.014 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.015 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.084 2 DEBUG oslo_concurrency.processutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:01 compute-0 ceph-mon[74477]: pgmap v2576: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 730 KiB/s wr, 14 op/s
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.359 2 DEBUG nova.compute.manager [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.359 2 DEBUG nova.compute.manager [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing instance network info cache due to event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.360 2 DEBUG oslo_concurrency.lockutils [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.360 2 DEBUG oslo_concurrency.lockutils [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.360 2 DEBUG nova.network.neutron [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:04:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:04:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114879543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.560 2 DEBUG oslo_concurrency.processutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.565 2 DEBUG nova.compute.provider_tree [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.633 2 DEBUG nova.scheduler.client.report [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.708 2 DEBUG nova.network.neutron [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.805 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 17 KiB/s wr, 5 op/s
Oct 02 09:04:01 compute-0 nova_compute[260603]: 2025-10-02 09:04:01.942 2 INFO nova.scheduler.client.report [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance def7636a-ab83-489d-ba8d-6f3dd1ccc841
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.123 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.154 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.180 2 DEBUG nova.network.neutron [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2114879543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.274 2 DEBUG oslo_concurrency.lockutils [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.275 2 DEBUG nova.compute.manager [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-deleted-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.356 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.356 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.357 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.357 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.358 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.358 2 WARNING nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state deleted and task_state None.
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.358 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.359 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.359 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.360 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.360 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.361 2 WARNING nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state deleted and task_state None.
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.361 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.362 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.362 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.363 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.363 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:02 compute-0 nova_compute[260603]: 2025-10-02 09:04:02.364 2 WARNING nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state deleted and task_state None.
Oct 02 09:04:02 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:02.892 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:03 compute-0 ceph-mon[74477]: pgmap v2577: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 17 KiB/s wr, 5 op/s
Oct 02 09:04:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 121 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 25 KiB/s wr, 34 op/s
Oct 02 09:04:04 compute-0 nova_compute[260603]: 2025-10-02 09:04:04.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.116 2 DEBUG nova.compute.manager [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.117 2 DEBUG nova.compute.manager [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.118 2 DEBUG oslo_concurrency.lockutils [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.118 2 DEBUG oslo_concurrency.lockutils [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.118 2 DEBUG nova.network.neutron [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.167 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.168 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.168 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.169 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.169 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.170 2 INFO nova.compute.manager [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Terminating instance
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.172 2 DEBUG nova.compute.manager [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:04:05 compute-0 kernel: tap3df4c898-fd (unregistering): left promiscuous mode
Oct 02 09:04:05 compute-0 NetworkManager[45129]: <info>  [1759395845.2455] device (tap3df4c898-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:04:05 compute-0 ceph-mon[74477]: pgmap v2578: 305 pgs: 305 active+clean; 121 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 25 KiB/s wr, 34 op/s
Oct 02 09:04:05 compute-0 ovn_controller[152344]: 2025-10-02T09:04:05Z|01467|binding|INFO|Releasing lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 from this chassis (sb_readonly=0)
Oct 02 09:04:05 compute-0 ovn_controller[152344]: 2025-10-02T09:04:05Z|01468|binding|INFO|Setting lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 down in Southbound
Oct 02 09:04:05 compute-0 ovn_controller[152344]: 2025-10-02T09:04:05Z|01469|binding|INFO|Removing iface tap3df4c898-fd ovn-installed in OVS
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.264 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:95:1a 10.100.0.12'], port_security=['fa:16:3e:b0:95:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '64577e3d-aa56-4fa9-a1b5-dc76a7a80754', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3f414f43-ca38-4bce-aa83-1fdd3cd738fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3df4c898-fd96-4b0f-90ee-add24ca56aa2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.266 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 in datapath 2c605495-2750-431a-94c8-fc1511dea80b unbound from our chassis
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.268 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2c605495-2750-431a-94c8-fc1511dea80b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.269 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[76dbd92f-1420-48a4-894d-e758ac560bcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.269 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b namespace which is not needed anymore
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct 02 09:04:05 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Consumed 13.586s CPU time.
Oct 02 09:04:05 compute-0 systemd-machined[214636]: Machine qemu-168-instance-00000086 terminated.
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.417 2 INFO nova.virt.libvirt.driver [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance destroyed successfully.
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.417 2 DEBUG nova.objects.instance [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.431 2 DEBUG nova.virt.libvirt.vif [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:03:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2044507795',display_name='tempest-TestNetworkBasicOps-server-2044507795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2044507795',id=134,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJyVGuJYWkz5cUrKkan7kCe4MbiRPaa7v1Q6d5xWchgY/tX4JduFQB6JZ0q369VSitON6EJsRLVHtNMGsTz7PTKCbeKQtDY6sbIs7RX5gGPDqTs/0LJrpZ68VxyA10mrYQ==',key_name='tempest-TestNetworkBasicOps-2118371598',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:03:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-oalw9fqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:03:18Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=64577e3d-aa56-4fa9-a1b5-dc76a7a80754,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.432 2 DEBUG nova.network.os_vif_util [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.432 2 DEBUG nova.network.os_vif_util [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.433 2 DEBUG os_vif [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3df4c898-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.445 2 INFO os_vif [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd')
Oct 02 09:04:05 compute-0 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [NOTICE]   (407387) : haproxy version is 2.8.14-c23fe91
Oct 02 09:04:05 compute-0 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [NOTICE]   (407387) : path to executable is /usr/sbin/haproxy
Oct 02 09:04:05 compute-0 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [WARNING]  (407387) : Exiting Master process...
Oct 02 09:04:05 compute-0 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [WARNING]  (407387) : Exiting Master process...
Oct 02 09:04:05 compute-0 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [ALERT]    (407387) : Current worker (407389) exited with code 143 (Terminated)
Oct 02 09:04:05 compute-0 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [WARNING]  (407387) : All workers exited. Exiting... (0)
Oct 02 09:04:05 compute-0 systemd[1]: libpod-620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43.scope: Deactivated successfully.
Oct 02 09:04:05 compute-0 podman[408902]: 2025-10-02 09:04:05.458235719 +0000 UTC m=+0.065293050 container died 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 09:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43-userdata-shm.mount: Deactivated successfully.
Oct 02 09:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-addf8c19743aa1800a38e7a2c27551ad430b760bd20734d7536da7cb23fa4ebc-merged.mount: Deactivated successfully.
Oct 02 09:04:05 compute-0 podman[408902]: 2025-10-02 09:04:05.497019894 +0000 UTC m=+0.104077225 container cleanup 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 09:04:05 compute-0 systemd[1]: libpod-conmon-620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43.scope: Deactivated successfully.
Oct 02 09:04:05 compute-0 podman[408962]: 2025-10-02 09:04:05.575442841 +0000 UTC m=+0.046532727 container remove 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cbda5f44-b7e1-46b0-850a-c192134244f8]: (4, ('Thu Oct  2 09:04:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b (620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43)\n620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43\nThu Oct  2 09:04:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b (620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43)\n620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.588 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eeef59f1-0037-4e6d-8af5-350a5f859d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.590 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 kernel: tap2c605495-20: left promiscuous mode
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.624 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fd9248-6804-4a8a-bc17-fa9dbea7fb38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.651 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed13a5b0-77d9-47e2-a01f-171e98085474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.652 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de9253f7-6ffe-44ed-8c11-deee87ae088d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.668 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d12e57e1-bb59-4d5d-9121-9302f4a07b15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667588, 'reachable_time': 41444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408979, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d2c605495\x2d2750\x2d431a\x2d94c8\x2dfc1511dea80b.mount: Deactivated successfully.
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.672 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:04:05 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.672 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf03fda-1ca5-4b7f-a5fb-d21e6de36bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.792 2 INFO nova.virt.libvirt.driver [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Deleting instance files /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754_del
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.793 2 INFO nova.virt.libvirt.driver [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Deletion of /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754_del complete
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.866 2 INFO nova.compute.manager [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.866 2 DEBUG oslo.service.loopingcall [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.867 2 DEBUG nova.compute.manager [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:04:05 compute-0 nova_compute[260603]: 2025-10-02 09:04:05.867 2 DEBUG nova.network.neutron [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:04:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 121 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Oct 02 09:04:06 compute-0 nova_compute[260603]: 2025-10-02 09:04:06.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:06 compute-0 nova_compute[260603]: 2025-10-02 09:04:06.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:04:06 compute-0 nova_compute[260603]: 2025-10-02 09:04:06.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.108 2 DEBUG nova.network.neutron [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.129 2 INFO nova.compute.manager [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Took 1.26 seconds to deallocate network for instance.
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.165 2 DEBUG nova.network.neutron [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.166 2 DEBUG nova.network.neutron [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.193 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.194 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.197 2 DEBUG oslo_concurrency.lockutils [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.215 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.216 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.216 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.217 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.217 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.218 2 WARNING nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state deleted and task_state None.
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.218 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.219 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.219 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.219 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.220 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.220 2 WARNING nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state deleted and task_state None.
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.221 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-deleted-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.221 2 INFO nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Neutron deleted interface 3df4c898-fd96-4b0f-90ee-add24ca56aa2; detaching it from the instance and deleting it from the info cache
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.221 2 DEBUG nova.network.neutron [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.261 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Detach interface failed, port_id=3df4c898-fd96-4b0f-90ee-add24ca56aa2, reason: Instance 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.272 2 DEBUG oslo_concurrency.processutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:07 compute-0 ceph-mon[74477]: pgmap v2579: 305 pgs: 305 active+clean; 121 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:04:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322879297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.743 2 DEBUG oslo_concurrency.processutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.752 2 DEBUG nova.compute.provider_tree [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.779 2 DEBUG nova.scheduler.client.report [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.816 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.853 2 INFO nova.scheduler.client.report [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 64577e3d-aa56-4fa9-a1b5-dc76a7a80754
Oct 02 09:04:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 64 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct 02 09:04:07 compute-0 nova_compute[260603]: 2025-10-02 09:04:07.934 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:08 compute-0 podman[409003]: 2025-10-02 09:04:08.045637094 +0000 UTC m=+0.090016057 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:04:08 compute-0 podman[409002]: 2025-10-02 09:04:08.080654133 +0000 UTC m=+0.133555981 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 09:04:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1322879297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:04:09 compute-0 ceph-mon[74477]: pgmap v2580: 305 pgs: 305 active+clean; 64 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct 02 09:04:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.3 KiB/s wr, 56 op/s
Oct 02 09:04:10 compute-0 nova_compute[260603]: 2025-10-02 09:04:10.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:10 compute-0 nova_compute[260603]: 2025-10-02 09:04:10.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:11 compute-0 nova_compute[260603]: 2025-10-02 09:04:11.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:11 compute-0 nova_compute[260603]: 2025-10-02 09:04:11.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:11 compute-0 ceph-mon[74477]: pgmap v2581: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.3 KiB/s wr, 56 op/s
Oct 02 09:04:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 02 09:04:13 compute-0 podman[409048]: 2025-10-02 09:04:13.016629122 +0000 UTC m=+0.080494582 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:04:13 compute-0 podman[409047]: 2025-10-02 09:04:13.030300128 +0000 UTC m=+0.092880888 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 09:04:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:13 compute-0 ceph-mon[74477]: pgmap v2582: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 02 09:04:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 02 09:04:14 compute-0 nova_compute[260603]: 2025-10-02 09:04:14.707 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395839.702803, def7636a-ab83-489d-ba8d-6f3dd1ccc841 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:04:14 compute-0 nova_compute[260603]: 2025-10-02 09:04:14.708 2 INFO nova.compute.manager [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] VM Stopped (Lifecycle Event)
Oct 02 09:04:14 compute-0 nova_compute[260603]: 2025-10-02 09:04:14.730 2 DEBUG nova.compute.manager [None req-21e9fb51-1d39-4308-a62e-30a6086d94e8 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:04:15 compute-0 ceph-mon[74477]: pgmap v2583: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 02 09:04:15 compute-0 nova_compute[260603]: 2025-10-02 09:04:15.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:15 compute-0 nova_compute[260603]: 2025-10-02 09:04:15.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:04:16 compute-0 nova_compute[260603]: 2025-10-02 09:04:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:17 compute-0 ceph-mon[74477]: pgmap v2584: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:04:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:04:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:19 compute-0 ceph-mon[74477]: pgmap v2585: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:04:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 341 B/s wr, 12 op/s
Oct 02 09:04:20 compute-0 nova_compute[260603]: 2025-10-02 09:04:20.415 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395845.4146574, 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:04:20 compute-0 nova_compute[260603]: 2025-10-02 09:04:20.416 2 INFO nova.compute.manager [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] VM Stopped (Lifecycle Event)
Oct 02 09:04:20 compute-0 nova_compute[260603]: 2025-10-02 09:04:20.441 2 DEBUG nova.compute.manager [None req-1f32a34e-c674-4f41-9705-c61ab9c62821 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:04:20 compute-0 nova_compute[260603]: 2025-10-02 09:04:20.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:20 compute-0 nova_compute[260603]: 2025-10-02 09:04:20.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:21 compute-0 ceph-mon[74477]: pgmap v2586: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 341 B/s wr, 12 op/s
Oct 02 09:04:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:04:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1884056596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:04:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:04:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1884056596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:04:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1884056596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:04:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1884056596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:04:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:23 compute-0 ceph-mon[74477]: pgmap v2587: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:24 compute-0 nova_compute[260603]: 2025-10-02 09:04:24.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:24 compute-0 nova_compute[260603]: 2025-10-02 09:04:24.540 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:04:25 compute-0 ceph-mon[74477]: pgmap v2588: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:25 compute-0 nova_compute[260603]: 2025-10-02 09:04:25.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:25 compute-0 nova_compute[260603]: 2025-10-02 09:04:25.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:26 compute-0 ceph-mon[74477]: pgmap v2589: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:04:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:04:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:04:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:04:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:04:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:04:28
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:04:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:04:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:04:29 compute-0 ceph-mon[74477]: pgmap v2590: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.555 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.556 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.572 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.648 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.649 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.658 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.659 2 INFO nova.compute.claims [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.776 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:30 compute-0 nova_compute[260603]: 2025-10-02 09:04:30.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:31 compute-0 ceph-mon[74477]: pgmap v2591: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:04:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/13869599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.230 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.237 2 DEBUG nova.compute.provider_tree [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.253 2 DEBUG nova.scheduler.client.report [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.277 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.278 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.325 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.326 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.346 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.365 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.460 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.462 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.462 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Creating image(s)
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.494 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.525 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.549 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.553 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.593 2 DEBUG nova.policy [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.641 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
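The probe above shows Nova inspecting the cached base image with `qemu-img info`, wrapped in `oslo_concurrency.prlimit` to cap the child's address space (1 GiB) and CPU time (30 s) so a malformed image cannot wedge the compute service. A minimal sketch of how that argv is assembled — `build_qemu_img_info_cmd` is a hypothetical helper, not Nova's actual code; the command shape is taken from the log line itself:

```python
def build_qemu_img_info_cmd(path, mem_limit=1073741824, cpu_limit=30):
    """Build the prlimit-wrapped `qemu-img info` argv seen in the log.

    Hypothetical helper; the real invocation is constructed inside
    nova.virt.images and executed via oslo_concurrency.processutils.
    """
    return [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        f"--as={mem_limit}",   # cap address space (1 GiB in the log)
        f"--cpu={cpu_limit}",  # cap CPU seconds
        "--",
        "env", "LC_ALL=C", "LANG=C",  # force a stable locale for parsing
        "qemu-img", "info", path,
        "--force-share",       # read metadata without taking the image lock
        "--output=json",       # machine-readable output for Nova to parse
    ]

cmd = build_qemu_img_info_cmd(
    "/var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236")
```

The JSON output is what lets Nova learn the virtual size and format of the base image before cloning it.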
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.641 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.642 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.642 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.665 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.669 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4e056573-a7f9-40a3-b57a-8415af28183c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:31 compute-0 nova_compute[260603]: 2025-10-02 09:04:31.981 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4e056573-a7f9-40a3-b57a-8415af28183c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:32 compute-0 nova_compute[260603]: 2025-10-02 09:04:32.067 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
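The two entries above are the Ceph-backed disk creation pair: Nova shells out to `rbd import` to copy the cached base image into the `vms` pool, then resizes the resulting RBD image to the flavor's root disk (1073741824 bytes, i.e. 1 GiB, matching `root_gb=1` for `m1.nano`). A sketch of the command construction under those assumptions — `build_rbd_import_cmd` is a hypothetical helper; the resize itself is done through the librbd Python bindings in `nova.storage.rbd_utils`, not the CLI:

```python
def build_rbd_import_cmd(base_path, image_name, pool="vms",
                         ceph_id="openstack", conf="/etc/ceph/ceph.conf"):
    """Build the `rbd import` argv from the log (hypothetical helper).

    --image-format=2 selects the RBD v2 format required for cloning,
    snapshots, and layering.
    """
    return ["rbd", "import", "--pool", pool, base_path, image_name,
            "--image-format=2", "--id", ceph_id, "--conf", conf]

# Target size after import: the flavor's root_gb expressed in bytes,
# matching the "resizing ... to 1073741824" entry above.
new_size_bytes = 1 * 1024 ** 3
```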
Oct 02 09:04:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/13869599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:04:32 compute-0 nova_compute[260603]: 2025-10-02 09:04:32.178 2 DEBUG nova.objects.instance [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e056573-a7f9-40a3-b57a-8415af28183c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:04:32 compute-0 nova_compute[260603]: 2025-10-02 09:04:32.196 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:04:32 compute-0 nova_compute[260603]: 2025-10-02 09:04:32.197 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Ensure instance console log exists: /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:04:32 compute-0 nova_compute[260603]: 2025-10-02 09:04:32.198 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:32 compute-0 nova_compute[260603]: 2025-10-02 09:04:32.198 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:32 compute-0 nova_compute[260603]: 2025-10-02 09:04:32.199 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:33 compute-0 ceph-mon[74477]: pgmap v2592: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:04:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 09:04:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:35 compute-0 ceph-mon[74477]: pgmap v2593: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 09:04:35 compute-0 nova_compute[260603]: 2025-10-02 09:04:35.529 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Successfully created port: 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:04:35 compute-0 nova_compute[260603]: 2025-10-02 09:04:35.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:35 compute-0 nova_compute[260603]: 2025-10-02 09:04:35.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 09:04:36 compute-0 nova_compute[260603]: 2025-10-02 09:04:36.536 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:36 compute-0 nova_compute[260603]: 2025-10-02 09:04:36.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:04:37 compute-0 ceph-mon[74477]: pgmap v2594: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 09:04:37 compute-0 nova_compute[260603]: 2025-10-02 09:04:37.809 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Successfully updated port: 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:04:37 compute-0 nova_compute[260603]: 2025-10-02 09:04:37.825 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:04:37 compute-0 nova_compute[260603]: 2025-10-02 09:04:37.825 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:04:37 compute-0 nova_compute[260603]: 2025-10-02 09:04:37.825 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:04:37 compute-0 nova_compute[260603]: 2025-10-02 09:04:37.913 2 DEBUG nova.compute.manager [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:37 compute-0 nova_compute[260603]: 2025-10-02 09:04:37.914 2 DEBUG nova.compute.manager [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing instance network info cache due to event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:04:37 compute-0 nova_compute[260603]: 2025-10-02 09:04:37.914 2 DEBUG oslo_concurrency.lockutils [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:04:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:04:37 compute-0 nova_compute[260603]: 2025-10-02 09:04:37.963 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:04:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:38 compute-0 podman[409276]: 2025-10-02 09:04:38.996297074 +0000 UTC m=+0.057006752 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:04:39 compute-0 podman[409275]: 2025-10-02 09:04:39.075543978 +0000 UTC m=+0.134869553 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:04:39 compute-0 ceph-mon[74477]: pgmap v2595: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:04:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:04:39 compute-0 nova_compute[260603]: 2025-10-02 09:04:39.960 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:39 compute-0 nova_compute[260603]: 2025-10-02 09:04:39.985 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:04:39 compute-0 nova_compute[260603]: 2025-10-02 09:04:39.985 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance network_info: |[{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:04:39 compute-0 nova_compute[260603]: 2025-10-02 09:04:39.985 2 DEBUG oslo_concurrency.lockutils [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:04:39 compute-0 nova_compute[260603]: 2025-10-02 09:04:39.986 2 DEBUG nova.network.neutron [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:04:39 compute-0 nova_compute[260603]: 2025-10-02 09:04:39.988 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start _get_guest_xml network_info=[{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:04:39 compute-0 nova_compute[260603]: 2025-10-02 09:04:39.992 2 WARNING nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:39.998 2 DEBUG nova.virt.libvirt.host [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.000 2 DEBUG nova.virt.libvirt.host [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.017 2 DEBUG nova.virt.libvirt.host [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.018 2 DEBUG nova.virt.libvirt.host [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.018 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.018 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.019 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.019 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.019 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.019 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.021 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.023 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:04:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207960740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.548 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.571 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.575 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:40 compute-0 nova_compute[260603]: 2025-10-02 09:04:40.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:04:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2930881053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.011 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.014 2 DEBUG nova.virt.libvirt.vif [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-702449656',display_name='tempest-TestNetworkBasicOps-server-702449656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-702449656',id=136,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG4yVL3kWMKssiMMTxBkYCYMGOyIrMgKWW6mSphQLUGjYPZbZn8kAoyGfeCty+GAJO7M0ajY8H+P8bZ7xOYgSuU5Lwh2F7EvM3DqGuRUQe2gvgkzvhd/Nxhr2daBHDjEw==',key_name='tempest-TestNetworkBasicOps-1401796894',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-aolvfudx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:04:31Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=4e056573-a7f9-40a3-b57a-8415af28183c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.015 2 DEBUG nova.network.os_vif_util [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.017 2 DEBUG nova.network.os_vif_util [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.020 2 DEBUG nova.objects.instance [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e056573-a7f9-40a3-b57a-8415af28183c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.045 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <uuid>4e056573-a7f9-40a3-b57a-8415af28183c</uuid>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <name>instance-00000088</name>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <nova:name>tempest-TestNetworkBasicOps-server-702449656</nova:name>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:04:39</nova:creationTime>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <nova:port uuid="3426f15c-bff7-478f-a7d7-2fd7499af1c4">
Oct 02 09:04:41 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <system>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <entry name="serial">4e056573-a7f9-40a3-b57a-8415af28183c</entry>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <entry name="uuid">4e056573-a7f9-40a3-b57a-8415af28183c</entry>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </system>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <os>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   </os>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <features>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   </features>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/4e056573-a7f9-40a3-b57a-8415af28183c_disk">
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       </source>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/4e056573-a7f9-40a3-b57a-8415af28183c_disk.config">
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       </source>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:04:41 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:bf:3b:0c"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <target dev="tap3426f15c-bf"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/console.log" append="off"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <video>
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </video>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:04:41 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:04:41 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:04:41 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:04:41 compute-0 nova_compute[260603]: </domain>
Oct 02 09:04:41 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.047 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Preparing to wait for external event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.048 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.048 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.049 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.050 2 DEBUG nova.virt.libvirt.vif [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-702449656',display_name='tempest-TestNetworkBasicOps-server-702449656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-702449656',id=136,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG4yVL3kWMKssiMMTxBkYCYMGOyIrMgKWW6mSphQLUGjYPZbZn8kAoyGfeCty+GAJO7M0ajY8H+P8bZ7xOYgSuU5Lwh2F7EvM3DqGuRUQe2gvgkzvhd/Nxhr2daBHDjEw==',key_name='tempest-TestNetworkBasicOps-1401796894',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-aolvfudx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:04:31Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=4e056573-a7f9-40a3-b57a-8415af28183c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.051 2 DEBUG nova.network.os_vif_util [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.052 2 DEBUG nova.network.os_vif_util [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.053 2 DEBUG os_vif [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.064 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3426f15c-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.065 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3426f15c-bf, col_values=(('external_ids', {'iface-id': '3426f15c-bff7-478f-a7d7-2fd7499af1c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:3b:0c', 'vm-uuid': '4e056573-a7f9-40a3-b57a-8415af28183c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:41 compute-0 NetworkManager[45129]: <info>  [1759395881.0691] manager: (tap3426f15c-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.075 2 INFO os_vif [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf')
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.140 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.141 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.142 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:bf:3b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.143 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Using config drive
Oct 02 09:04:41 compute-0 ceph-mon[74477]: pgmap v2596: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:04:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2207960740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:04:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2930881053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.177 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.913 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Creating config drive at /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.918 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphzinsskj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.958 2 DEBUG nova.network.neutron [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updated VIF entry in instance network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.959 2 DEBUG nova.network.neutron [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:41 compute-0 nova_compute[260603]: 2025-10-02 09:04:41.975 2 DEBUG oslo_concurrency.lockutils [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.068 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphzinsskj" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.106 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.112 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.311 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.313 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Deleting local config drive /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config because it was imported into RBD.
Oct 02 09:04:42 compute-0 kernel: tap3426f15c-bf: entered promiscuous mode
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:42 compute-0 ovn_controller[152344]: 2025-10-02T09:04:42Z|01470|binding|INFO|Claiming lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 for this chassis.
Oct 02 09:04:42 compute-0 ovn_controller[152344]: 2025-10-02T09:04:42Z|01471|binding|INFO|3426f15c-bff7-478f-a7d7-2fd7499af1c4: Claiming fa:16:3e:bf:3b:0c 10.100.0.6
Oct 02 09:04:42 compute-0 NetworkManager[45129]: <info>  [1759395882.3850] manager: (tap3426f15c-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/589)
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.415 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:3b:0c 10.100.0.6'], port_security=['fa:16:3e:bf:3b:0c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e056573-a7f9-40a3-b57a-8415af28183c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b56304ae-559d-4697-b965-787fd568f6ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56d2bc0b-a112-4501-852b-a30e94c83df4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8bddaa3-c9e7-4b8d-b560-6df44a76261b, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3426f15c-bff7-478f-a7d7-2fd7499af1c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.416 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 in datapath b56304ae-559d-4697-b965-787fd568f6ea bound to our chassis
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.417 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b56304ae-559d-4697-b965-787fd568f6ea
Oct 02 09:04:42 compute-0 systemd-machined[214636]: New machine qemu-170-instance-00000088.
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.430 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8800f1-f50f-469c-a3a8-4fea82bbac95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.431 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb56304ae-51 in ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.433 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb56304ae-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b04bb4ad-353e-4075-9ced-322458b04666]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.434 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18e7784f-c061-421b-9792-92127825e797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.445 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe6637d-082e-4400-8c0e-2864fbd66c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b9c5cf-f01a-4652-abc1-850544eabaef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-00000088.
Oct 02 09:04:42 compute-0 ovn_controller[152344]: 2025-10-02T09:04:42Z|01472|binding|INFO|Setting lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 ovn-installed in OVS
Oct 02 09:04:42 compute-0 ovn_controller[152344]: 2025-10-02T09:04:42Z|01473|binding|INFO|Setting lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 up in Southbound
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:42 compute-0 systemd-udevd[409461]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:04:42 compute-0 NetworkManager[45129]: <info>  [1759395882.5009] device (tap3426f15c-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:04:42 compute-0 NetworkManager[45129]: <info>  [1759395882.5019] device (tap3426f15c-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.505 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[815553ef-d3d3-4a4b-8d4b-2dafffd56354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 systemd-udevd[409465]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d9aa6c-81af-4610-b890-3b70a128ada6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 NetworkManager[45129]: <info>  [1759395882.5107] manager: (tapb56304ae-50): new Veth device (/org/freedesktop/NetworkManager/Devices/590)
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.534 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e16344-0fad-401c-b6fa-0af492ede544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.538 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3861cc3c-ea70-45a1-9937-a542d5c9cf03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 NetworkManager[45129]: <info>  [1759395882.5586] device (tapb56304ae-50): carrier: link connected
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.564 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4c89c72b-8fa5-4e00-9797-f85b594602c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.581 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[514b9cfb-05b4-49bc-ab08-a772f56661b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb56304ae-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:db:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676116, 'reachable_time': 42425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409490, 'error': None, 'target': 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.595 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd42bf83-820e-4ca7-b7be-87c4f85a71d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:db41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676116, 'tstamp': 676116}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409491, 'error': None, 'target': 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.613 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[133e4f50-f855-4c35-bd87-c13ce5f25764]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb56304ae-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:db:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676116, 'reachable_time': 42425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 409492, 'error': None, 'target': 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.649 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac5d415-3d25-402b-bed4-5488f698de61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.738 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d25625-53e1-4472-9eee-6f1c73c4b4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.740 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb56304ae-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.740 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.741 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb56304ae-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:42 compute-0 NetworkManager[45129]: <info>  [1759395882.7449] manager: (tapb56304ae-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Oct 02 09:04:42 compute-0 kernel: tapb56304ae-50: entered promiscuous mode
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.749 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb56304ae-50, col_values=(('external_ids', {'iface-id': '4b059aed-66ee-4478-ba1c-eaee8b0f3c46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:42 compute-0 ovn_controller[152344]: 2025-10-02T09:04:42Z|01474|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.779 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b56304ae-559d-4697-b965-787fd568f6ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b56304ae-559d-4697-b965-787fd568f6ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.780 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22962bf9-70ec-45cb-ae12-cfffde1f4fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.781 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-b56304ae-559d-4697-b965-787fd568f6ea
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/b56304ae-559d-4697-b965-787fd568f6ea.pid.haproxy
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID b56304ae-559d-4697-b965-787fd568f6ea
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:04:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.782 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'env', 'PROCESS_TAG=haproxy-b56304ae-559d-4697-b965-787fd568f6ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b56304ae-559d-4697-b965-787fd568f6ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.907 2 DEBUG nova.compute.manager [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.907 2 DEBUG oslo_concurrency.lockutils [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.908 2 DEBUG oslo_concurrency.lockutils [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.908 2 DEBUG oslo_concurrency.lockutils [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:42 compute-0 nova_compute[260603]: 2025-10-02 09:04:42.908 2 DEBUG nova.compute.manager [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Processing event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:04:43 compute-0 ceph-mon[74477]: pgmap v2597: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:04:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:43 compute-0 podman[409566]: 2025-10-02 09:04:43.181661849 +0000 UTC m=+0.071109071 container create 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:04:43 compute-0 podman[409566]: 2025-10-02 09:04:43.148020314 +0000 UTC m=+0.037467556 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:04:43 compute-0 systemd[1]: Started libpod-conmon-9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4.scope.
Oct 02 09:04:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b747b7aa890dff406452ac20375c7d0d240a41a3a839599776ae767a20c998/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:04:43 compute-0 podman[409566]: 2025-10-02 09:04:43.307186079 +0000 UTC m=+0.196633321 container init 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 09:04:43 compute-0 podman[409582]: 2025-10-02 09:04:43.309013517 +0000 UTC m=+0.075304101 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 09:04:43 compute-0 podman[409566]: 2025-10-02 09:04:43.312628559 +0000 UTC m=+0.202075781 container start 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 09:04:43 compute-0 podman[409583]: 2025-10-02 09:04:43.327692287 +0000 UTC m=+0.083858298 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct 02 09:04:43 compute-0 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [NOTICE]   (409625) : New worker (409628) forked
Oct 02 09:04:43 compute-0 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [NOTICE]   (409625) : Loading success.
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.510 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.510 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395883.5092514, 4e056573-a7f9-40a3-b57a-8415af28183c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.510 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] VM Started (Lifecycle Event)
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.519 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.522 2 INFO nova.virt.libvirt.driver [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance spawned successfully.
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.523 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.540 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.546 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.550 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.550 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.550 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.551 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.551 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.552 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.582 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.583 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395883.509438, 4e056573-a7f9-40a3-b57a-8415af28183c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.583 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] VM Paused (Lifecycle Event)
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.628 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.631 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395883.515296, 4e056573-a7f9-40a3-b57a-8415af28183c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.631 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] VM Resumed (Lifecycle Event)
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.655 2 INFO nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Took 12.19 seconds to spawn the instance on the hypervisor.
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.655 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.686 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.689 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.717 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.736 2 INFO nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Took 13.12 seconds to build instance.
Oct 02 09:04:43 compute-0 nova_compute[260603]: 2025-10-02 09:04:43.756 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:04:45 compute-0 nova_compute[260603]: 2025-10-02 09:04:45.014 2 DEBUG nova.compute.manager [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:45 compute-0 nova_compute[260603]: 2025-10-02 09:04:45.015 2 DEBUG oslo_concurrency.lockutils [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:04:45 compute-0 nova_compute[260603]: 2025-10-02 09:04:45.015 2 DEBUG oslo_concurrency.lockutils [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:04:45 compute-0 nova_compute[260603]: 2025-10-02 09:04:45.015 2 DEBUG oslo_concurrency.lockutils [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:04:45 compute-0 nova_compute[260603]: 2025-10-02 09:04:45.015 2 DEBUG nova.compute.manager [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] No waiting events found dispatching network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:04:45 compute-0 nova_compute[260603]: 2025-10-02 09:04:45.016 2 WARNING nova.compute.manager [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received unexpected event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 for instance with vm_state active and task_state None.
Oct 02 09:04:45 compute-0 ceph-mon[74477]: pgmap v2598: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:04:45 compute-0 nova_compute[260603]: 2025-10-02 09:04:45.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 3 op/s
Oct 02 09:04:46 compute-0 nova_compute[260603]: 2025-10-02 09:04:46.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:47 compute-0 nova_compute[260603]: 2025-10-02 09:04:47.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:47 compute-0 NetworkManager[45129]: <info>  [1759395887.0434] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Oct 02 09:04:47 compute-0 NetworkManager[45129]: <info>  [1759395887.0443] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Oct 02 09:04:47 compute-0 ovn_controller[152344]: 2025-10-02T09:04:47Z|01475|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct 02 09:04:47 compute-0 nova_compute[260603]: 2025-10-02 09:04:47.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:47 compute-0 ovn_controller[152344]: 2025-10-02T09:04:47Z|01476|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct 02 09:04:47 compute-0 nova_compute[260603]: 2025-10-02 09:04:47.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:47 compute-0 ceph-mon[74477]: pgmap v2599: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 3 op/s
Oct 02 09:04:47 compute-0 nova_compute[260603]: 2025-10-02 09:04:47.888 2 DEBUG nova.compute.manager [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:04:47 compute-0 nova_compute[260603]: 2025-10-02 09:04:47.889 2 DEBUG nova.compute.manager [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing instance network info cache due to event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:04:47 compute-0 nova_compute[260603]: 2025-10-02 09:04:47.890 2 DEBUG oslo_concurrency.lockutils [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:04:47 compute-0 nova_compute[260603]: 2025-10-02 09:04:47.890 2 DEBUG oslo_concurrency.lockutils [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:04:47 compute-0 nova_compute[260603]: 2025-10-02 09:04:47.890 2 DEBUG nova.network.neutron [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:04:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 59 op/s
Oct 02 09:04:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:49 compute-0 ceph-mon[74477]: pgmap v2600: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 59 op/s
Oct 02 09:04:49 compute-0 nova_compute[260603]: 2025-10-02 09:04:49.265 2 DEBUG nova.network.neutron [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updated VIF entry in instance network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:04:49 compute-0 nova_compute[260603]: 2025-10-02 09:04:49.265 2 DEBUG nova.network.neutron [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:04:49 compute-0 nova_compute[260603]: 2025-10-02 09:04:49.282 2 DEBUG oslo_concurrency.lockutils [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:04:49 compute-0 nova_compute[260603]: 2025-10-02 09:04:49.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:04:50 compute-0 nova_compute[260603]: 2025-10-02 09:04:50.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:51 compute-0 nova_compute[260603]: 2025-10-02 09:04:51.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:51 compute-0 ceph-mon[74477]: pgmap v2601: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:04:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:04:52 compute-0 nova_compute[260603]: 2025-10-02 09:04:52.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:52 compute-0 nova_compute[260603]: 2025-10-02 09:04:52.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:04:52 compute-0 nova_compute[260603]: 2025-10-02 09:04:52.544 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:04:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:53 compute-0 ceph-mon[74477]: pgmap v2602: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:04:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:04:54 compute-0 nova_compute[260603]: 2025-10-02 09:04:54.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:55 compute-0 ceph-mon[74477]: pgmap v2603: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:04:55 compute-0 nova_compute[260603]: 2025-10-02 09:04:55.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:55 compute-0 ovn_controller[152344]: 2025-10-02T09:04:55Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:3b:0c 10.100.0.6
Oct 02 09:04:55 compute-0 ovn_controller[152344]: 2025-10-02T09:04:55Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:3b:0c 10.100.0.6
Oct 02 09:04:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:04:56 compute-0 nova_compute[260603]: 2025-10-02 09:04:56.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:04:56 compute-0 sudo[409640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:04:56 compute-0 sudo[409640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:04:56 compute-0 sudo[409640]: pam_unix(sudo:session): session closed for user root
Oct 02 09:04:56 compute-0 sudo[409665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:04:56 compute-0 sudo[409665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:04:56 compute-0 sudo[409665]: pam_unix(sudo:session): session closed for user root
Oct 02 09:04:56 compute-0 sudo[409690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:04:56 compute-0 sudo[409690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:04:56 compute-0 sudo[409690]: pam_unix(sudo:session): session closed for user root
Oct 02 09:04:57 compute-0 sudo[409715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:04:57 compute-0 sudo[409715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:04:57 compute-0 ceph-mon[74477]: pgmap v2604: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:04:57 compute-0 sudo[409715]: pam_unix(sudo:session): session closed for user root
Oct 02 09:04:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:04:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:04:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:04:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:04:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:04:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 818c2aa8-767e-4aa7-b08b-31675c452b17 does not exist
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8c016e4c-6202-4f54-84a4-a5e186ef02f5 does not exist
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ff51f9e2-63ae-4324-b1bc-ce4ea160b49e does not exist
Oct 02 09:04:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:04:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:04:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:04:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:04:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:04:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:04:57 compute-0 sudo[409771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:04:57 compute-0 sudo[409771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:04:57 compute-0 sudo[409771]: pam_unix(sudo:session): session closed for user root
Oct 02 09:04:57 compute-0 sudo[409796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:04:57 compute-0 sudo[409796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:04:57 compute-0 sudo[409796]: pam_unix(sudo:session): session closed for user root
Oct 02 09:04:57 compute-0 sudo[409821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:04:57 compute-0 sudo[409821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:04:57 compute-0 sudo[409821]: pam_unix(sudo:session): session closed for user root
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 117 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:04:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:04:57 compute-0 sudo[409846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:04:57 compute-0 sudo[409846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:04:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:04:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:04:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:04:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:04:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:04:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:04:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:04:58 compute-0 podman[409911]: 2025-10-02 09:04:58.310461469 +0000 UTC m=+0.037087283 container create 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:04:58 compute-0 systemd[1]: Started libpod-conmon-83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6.scope.
Oct 02 09:04:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:04:58 compute-0 podman[409911]: 2025-10-02 09:04:58.295536835 +0000 UTC m=+0.022162669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:04:58 compute-0 podman[409911]: 2025-10-02 09:04:58.398599048 +0000 UTC m=+0.125224872 container init 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:04:58 compute-0 podman[409911]: 2025-10-02 09:04:58.404503591 +0000 UTC m=+0.131129405 container start 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:04:58 compute-0 podman[409911]: 2025-10-02 09:04:58.407960599 +0000 UTC m=+0.134586423 container attach 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:04:58 compute-0 flamboyant_swanson[409928]: 167 167
Oct 02 09:04:58 compute-0 systemd[1]: libpod-83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6.scope: Deactivated successfully.
Oct 02 09:04:58 compute-0 podman[409911]: 2025-10-02 09:04:58.411412816 +0000 UTC m=+0.138038630 container died 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab80f4e03f87f3eeee57bed41bda5149c58eb507322234902e476c77c7b94b5a-merged.mount: Deactivated successfully.
Oct 02 09:04:58 compute-0 podman[409911]: 2025-10-02 09:04:58.448352264 +0000 UTC m=+0.174978078 container remove 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:04:58 compute-0 systemd[1]: libpod-conmon-83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6.scope: Deactivated successfully.
Oct 02 09:04:58 compute-0 nova_compute[260603]: 2025-10-02 09:04:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:58 compute-0 nova_compute[260603]: 2025-10-02 09:04:58.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:04:58 compute-0 podman[409952]: 2025-10-02 09:04:58.628676138 +0000 UTC m=+0.047022053 container create 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:04:58 compute-0 systemd[1]: Started libpod-conmon-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope.
Oct 02 09:04:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:04:58 compute-0 podman[409952]: 2025-10-02 09:04:58.607129609 +0000 UTC m=+0.025475554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:04:58 compute-0 podman[409952]: 2025-10-02 09:04:58.718089977 +0000 UTC m=+0.136435892 container init 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:04:58 compute-0 podman[409952]: 2025-10-02 09:04:58.728916012 +0000 UTC m=+0.147261917 container start 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 09:04:58 compute-0 podman[409952]: 2025-10-02 09:04:58.739927695 +0000 UTC m=+0.158273590 container attach 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:04:59 compute-0 ceph-mon[74477]: pgmap v2605: 305 pgs: 305 active+clean; 117 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Oct 02 09:04:59 compute-0 wizardly_jepsen[409968]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:04:59 compute-0 wizardly_jepsen[409968]: --> relative data size: 1.0
Oct 02 09:04:59 compute-0 wizardly_jepsen[409968]: --> All data devices are unavailable
Oct 02 09:04:59 compute-0 systemd[1]: libpod-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope: Deactivated successfully.
Oct 02 09:04:59 compute-0 systemd[1]: libpod-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope: Consumed 1.025s CPU time.
Oct 02 09:04:59 compute-0 conmon[409968]: conmon 2bd00fba6e4f636cf4c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope/container/memory.events
Oct 02 09:04:59 compute-0 podman[409952]: 2025-10-02 09:04:59.810288408 +0000 UTC m=+1.228634323 container died 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:04:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 09:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2-merged.mount: Deactivated successfully.
Oct 02 09:05:00 compute-0 podman[409952]: 2025-10-02 09:05:00.010492958 +0000 UTC m=+1.428838863 container remove 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 09:05:00 compute-0 systemd[1]: libpod-conmon-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope: Deactivated successfully.
Oct 02 09:05:00 compute-0 sudo[409846]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:00 compute-0 sudo[410011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:05:00 compute-0 sudo[410011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:00 compute-0 sudo[410011]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:00 compute-0 sudo[410036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:05:00 compute-0 sudo[410036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:00 compute-0 sudo[410036]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:00 compute-0 sudo[410061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:05:00 compute-0 sudo[410061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:00 compute-0 sudo[410061]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:00 compute-0 sudo[410086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:05:00 compute-0 sudo[410086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:00 compute-0 nova_compute[260603]: 2025-10-02 09:05:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:00 compute-0 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:00 compute-0 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:00 compute-0 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:00 compute-0 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:05:00 compute-0 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:00 compute-0 podman[410152]: 2025-10-02 09:05:00.557495958 +0000 UTC m=+0.032817392 container create 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 09:05:00 compute-0 systemd[1]: Started libpod-conmon-978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5.scope.
Oct 02 09:05:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:05:00 compute-0 podman[410152]: 2025-10-02 09:05:00.543944026 +0000 UTC m=+0.019265480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:05:00 compute-0 podman[410152]: 2025-10-02 09:05:00.640740734 +0000 UTC m=+0.116062188 container init 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:05:00 compute-0 podman[410152]: 2025-10-02 09:05:00.646307168 +0000 UTC m=+0.121628602 container start 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 09:05:00 compute-0 crazy_zhukovsky[410169]: 167 167
Oct 02 09:05:00 compute-0 systemd[1]: libpod-978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5.scope: Deactivated successfully.
Oct 02 09:05:00 compute-0 podman[410152]: 2025-10-02 09:05:00.651459568 +0000 UTC m=+0.126781022 container attach 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:05:00 compute-0 podman[410152]: 2025-10-02 09:05:00.652114768 +0000 UTC m=+0.127436202 container died 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4040110297a0a830c907d0b46d0890b5ebf2470712133555eede93812bfb1bed-merged.mount: Deactivated successfully.
Oct 02 09:05:00 compute-0 podman[410152]: 2025-10-02 09:05:00.70688965 +0000 UTC m=+0.182211084 container remove 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:05:00 compute-0 systemd[1]: libpod-conmon-978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5.scope: Deactivated successfully.
Oct 02 09:05:00 compute-0 nova_compute[260603]: 2025-10-02 09:05:00.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:00 compute-0 podman[410211]: 2025-10-02 09:05:00.880893687 +0000 UTC m=+0.052404389 container create 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:05:00 compute-0 systemd[1]: Started libpod-conmon-93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446.scope.
Oct 02 09:05:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:05:00 compute-0 podman[410211]: 2025-10-02 09:05:00.856408037 +0000 UTC m=+0.027918769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:00 compute-0 podman[410211]: 2025-10-02 09:05:00.967914592 +0000 UTC m=+0.139425304 container init 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:05:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:05:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038396416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:05:00 compute-0 podman[410211]: 2025-10-02 09:05:00.977442588 +0000 UTC m=+0.148953290 container start 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 09:05:00 compute-0 podman[410211]: 2025-10-02 09:05:00.982205556 +0000 UTC m=+0.153716278 container attach 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:05:00 compute-0 nova_compute[260603]: 2025-10-02 09:05:00.990 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.065 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.065 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.202 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.203 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3432MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.203 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.203 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.269 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 4e056573-a7f9-40a3-b57a-8415af28183c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.269 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.270 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:05:01 compute-0 ceph-mon[74477]: pgmap v2606: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 09:05:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3038396416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.289 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.304 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.304 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.317 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.335 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.362 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:01 compute-0 vibrant_wu[410227]: {
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:     "0": [
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:         {
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "devices": [
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "/dev/loop3"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             ],
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_name": "ceph_lv0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_size": "21470642176",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "name": "ceph_lv0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "tags": {
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cluster_name": "ceph",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.crush_device_class": "",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.encrypted": "0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osd_id": "0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.type": "block",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.vdo": "0"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             },
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "type": "block",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "vg_name": "ceph_vg0"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:         }
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:     ],
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:     "1": [
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:         {
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "devices": [
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "/dev/loop4"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             ],
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_name": "ceph_lv1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_size": "21470642176",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "name": "ceph_lv1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "tags": {
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cluster_name": "ceph",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.crush_device_class": "",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.encrypted": "0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osd_id": "1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.type": "block",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.vdo": "0"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             },
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "type": "block",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "vg_name": "ceph_vg1"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:         }
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:     ],
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:     "2": [
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:         {
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "devices": [
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "/dev/loop5"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             ],
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_name": "ceph_lv2",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_size": "21470642176",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "name": "ceph_lv2",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "tags": {
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.cluster_name": "ceph",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.crush_device_class": "",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.encrypted": "0",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osd_id": "2",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.type": "block",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:                 "ceph.vdo": "0"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             },
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "type": "block",
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:             "vg_name": "ceph_vg2"
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:         }
Oct 02 09:05:01 compute-0 vibrant_wu[410227]:     ]
Oct 02 09:05:01 compute-0 vibrant_wu[410227]: }
Oct 02 09:05:01 compute-0 systemd[1]: libpod-93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446.scope: Deactivated successfully.
Oct 02 09:05:01 compute-0 podman[410211]: 2025-10-02 09:05:01.762203066 +0000 UTC m=+0.933713778 container died 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:05:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:05:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476907137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9-merged.mount: Deactivated successfully.
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.792 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.800 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:05:01 compute-0 podman[410211]: 2025-10-02 09:05:01.815058538 +0000 UTC m=+0.986569240 container remove 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.819 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:05:01 compute-0 systemd[1]: libpod-conmon-93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446.scope: Deactivated successfully.
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.845 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:05:01 compute-0 nova_compute[260603]: 2025-10-02 09:05:01.845 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:01 compute-0 sudo[410086]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:01 compute-0 sudo[410272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:05:01 compute-0 sudo[410272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:01 compute-0 sudo[410272]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 09:05:01 compute-0 sudo[410297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:05:01 compute-0 sudo[410297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:01 compute-0 sudo[410297]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:02 compute-0 sudo[410322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:05:02 compute-0 sudo[410322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:02 compute-0 sudo[410322]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:02 compute-0 sudo[410347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:05:02 compute-0 sudo[410347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3476907137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:05:02 compute-0 podman[410413]: 2025-10-02 09:05:02.461317022 +0000 UTC m=+0.043529394 container create c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 09:05:02 compute-0 systemd[1]: Started libpod-conmon-c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c.scope.
Oct 02 09:05:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:05:02 compute-0 podman[410413]: 2025-10-02 09:05:02.445072827 +0000 UTC m=+0.027285209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:05:02 compute-0 podman[410413]: 2025-10-02 09:05:02.545808998 +0000 UTC m=+0.128021460 container init c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 09:05:02 compute-0 podman[410413]: 2025-10-02 09:05:02.553650701 +0000 UTC m=+0.135863063 container start c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 09:05:02 compute-0 podman[410413]: 2025-10-02 09:05:02.556247012 +0000 UTC m=+0.138459404 container attach c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:05:02 compute-0 reverent_fermi[410429]: 167 167
Oct 02 09:05:02 compute-0 systemd[1]: libpod-c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c.scope: Deactivated successfully.
Oct 02 09:05:02 compute-0 podman[410413]: 2025-10-02 09:05:02.56070947 +0000 UTC m=+0.142921832 container died c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9618d3f43999ea77ad0fb3b2d4022b7f9fc7cf7fdd42e69a4905256c93173ac-merged.mount: Deactivated successfully.
Oct 02 09:05:02 compute-0 podman[410413]: 2025-10-02 09:05:02.601957933 +0000 UTC m=+0.184170305 container remove c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:05:02 compute-0 systemd[1]: libpod-conmon-c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c.scope: Deactivated successfully.
Oct 02 09:05:02 compute-0 nova_compute[260603]: 2025-10-02 09:05:02.729 2 INFO nova.compute.manager [None req-01c3bff4-d70b-4745-b768-42beb3af0ca8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Get console output
Oct 02 09:05:02 compute-0 nova_compute[260603]: 2025-10-02 09:05:02.737 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 09:05:02 compute-0 podman[410455]: 2025-10-02 09:05:02.757868818 +0000 UTC m=+0.038326603 container create d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:05:02 compute-0 systemd[1]: Started libpod-conmon-d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81.scope.
Oct 02 09:05:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:02 compute-0 podman[410455]: 2025-10-02 09:05:02.743854532 +0000 UTC m=+0.024312327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:05:02 compute-0 podman[410455]: 2025-10-02 09:05:02.843773247 +0000 UTC m=+0.124231062 container init d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:05:02 compute-0 podman[410455]: 2025-10-02 09:05:02.850256498 +0000 UTC m=+0.130714283 container start d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:05:02 compute-0 podman[410455]: 2025-10-02 09:05:02.871686124 +0000 UTC m=+0.152143939 container attach d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 09:05:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:03 compute-0 ceph-mon[74477]: pgmap v2607: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 09:05:03 compute-0 boring_curie[410471]: {
Oct 02 09:05:03 compute-0 boring_curie[410471]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "osd_id": 2,
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "type": "bluestore"
Oct 02 09:05:03 compute-0 boring_curie[410471]:     },
Oct 02 09:05:03 compute-0 boring_curie[410471]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "osd_id": 1,
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "type": "bluestore"
Oct 02 09:05:03 compute-0 boring_curie[410471]:     },
Oct 02 09:05:03 compute-0 boring_curie[410471]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "osd_id": 0,
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:05:03 compute-0 boring_curie[410471]:         "type": "bluestore"
Oct 02 09:05:03 compute-0 boring_curie[410471]:     }
Oct 02 09:05:03 compute-0 boring_curie[410471]: }
Oct 02 09:05:03 compute-0 systemd[1]: libpod-d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81.scope: Deactivated successfully.
Oct 02 09:05:03 compute-0 podman[410455]: 2025-10-02 09:05:03.832439901 +0000 UTC m=+1.112897696 container died d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:05:03 compute-0 nova_compute[260603]: 2025-10-02 09:05:03.840 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072-merged.mount: Deactivated successfully.
Oct 02 09:05:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:05:04 compute-0 podman[410455]: 2025-10-02 09:05:04.164371805 +0000 UTC m=+1.444829580 container remove d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 09:05:04 compute-0 systemd[1]: libpod-conmon-d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81.scope: Deactivated successfully.
Oct 02 09:05:04 compute-0 sudo[410347]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:05:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:05:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:05:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:05:04 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 56502a37-5e4b-486c-bd1f-99ea82239a15 does not exist
Oct 02 09:05:04 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 60c556e1-f2c4-4408-9d3d-f57d5f8f9a15 does not exist
Oct 02 09:05:04 compute-0 sudo[410518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:05:04 compute-0 sudo[410518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:04 compute-0 sudo[410518]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:04 compute-0 sudo[410543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:05:04 compute-0 sudo[410543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:05:04 compute-0 sudo[410543]: pam_unix(sudo:session): session closed for user root
Oct 02 09:05:04 compute-0 ovn_controller[152344]: 2025-10-02T09:05:04Z|01477|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct 02 09:05:04 compute-0 nova_compute[260603]: 2025-10-02 09:05:04.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:04 compute-0 ovn_controller[152344]: 2025-10-02T09:05:04Z|01478|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct 02 09:05:04 compute-0 nova_compute[260603]: 2025-10-02 09:05:04.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:05 compute-0 ceph-mon[74477]: pgmap v2608: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:05:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:05:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:05:05 compute-0 nova_compute[260603]: 2025-10-02 09:05:05.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:05:05 compute-0 nova_compute[260603]: 2025-10-02 09:05:05.984 2 INFO nova.compute.manager [None req-bc712684-999b-4858-8cd5-0b9ac2599d88 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Get console output
Oct 02 09:05:05 compute-0 nova_compute[260603]: 2025-10-02 09:05:05.989 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 09:05:06 compute-0 nova_compute[260603]: 2025-10-02 09:05:06.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:07 compute-0 nova_compute[260603]: 2025-10-02 09:05:07.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:07 compute-0 NetworkManager[45129]: <info>  [1759395907.0860] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Oct 02 09:05:07 compute-0 NetworkManager[45129]: <info>  [1759395907.0871] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Oct 02 09:05:07 compute-0 ovn_controller[152344]: 2025-10-02T09:05:07Z|01479|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct 02 09:05:07 compute-0 nova_compute[260603]: 2025-10-02 09:05:07.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:07 compute-0 nova_compute[260603]: 2025-10-02 09:05:07.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:07 compute-0 ceph-mon[74477]: pgmap v2609: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:05:07 compute-0 nova_compute[260603]: 2025-10-02 09:05:07.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:07.326 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:05:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:07.327 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:05:07 compute-0 nova_compute[260603]: 2025-10-02 09:05:07.421 2 INFO nova.compute.manager [None req-f3225b62-afa3-48a8-aa36-b14dd795f3e6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Get console output
Oct 02 09:05:07 compute-0 nova_compute[260603]: 2025-10-02 09:05:07.425 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 09:05:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.012 2 DEBUG nova.compute.manager [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.012 2 DEBUG nova.compute.manager [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing instance network info cache due to event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.013 2 DEBUG oslo_concurrency.lockutils [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.013 2 DEBUG oslo_concurrency.lockutils [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.014 2 DEBUG nova.network.neutron [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.076 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.077 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.078 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.078 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.080 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.083 2 INFO nova.compute.manager [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Terminating instance
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.085 2 DEBUG nova.compute.manager [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:05:08 compute-0 kernel: tap3426f15c-bf (unregistering): left promiscuous mode
Oct 02 09:05:08 compute-0 NetworkManager[45129]: <info>  [1759395908.1634] device (tap3426f15c-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:05:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:08 compute-0 ovn_controller[152344]: 2025-10-02T09:05:08Z|01480|binding|INFO|Releasing lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 from this chassis (sb_readonly=0)
Oct 02 09:05:08 compute-0 ovn_controller[152344]: 2025-10-02T09:05:08Z|01481|binding|INFO|Setting lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 down in Southbound
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:08 compute-0 ovn_controller[152344]: 2025-10-02T09:05:08Z|01482|binding|INFO|Removing iface tap3426f15c-bf ovn-installed in OVS
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.187 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:3b:0c 10.100.0.6'], port_security=['fa:16:3e:bf:3b:0c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e056573-a7f9-40a3-b57a-8415af28183c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b56304ae-559d-4697-b965-787fd568f6ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56d2bc0b-a112-4501-852b-a30e94c83df4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8bddaa3-c9e7-4b8d-b560-6df44a76261b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3426f15c-bff7-478f-a7d7-2fd7499af1c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.189 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 in datapath b56304ae-559d-4697-b965-787fd568f6ea unbound from our chassis
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.190 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b56304ae-559d-4697-b965-787fd568f6ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6ccfd1-d0e1-4f6e-b55e-a3e24d8d70da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.193 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea namespace which is not needed anymore
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:08 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct 02 09:05:08 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Consumed 12.513s CPU time.
Oct 02 09:05:08 compute-0 systemd-machined[214636]: Machine qemu-170-instance-00000088 terminated.
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.320 2 INFO nova.virt.libvirt.driver [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance destroyed successfully.
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.321 2 DEBUG nova.objects.instance [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 4e056573-a7f9-40a3-b57a-8415af28183c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:05:08 compute-0 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [NOTICE]   (409625) : haproxy version is 2.8.14-c23fe91
Oct 02 09:05:08 compute-0 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [NOTICE]   (409625) : path to executable is /usr/sbin/haproxy
Oct 02 09:05:08 compute-0 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [WARNING]  (409625) : Exiting Master process...
Oct 02 09:05:08 compute-0 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [ALERT]    (409625) : Current worker (409628) exited with code 143 (Terminated)
Oct 02 09:05:08 compute-0 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [WARNING]  (409625) : All workers exited. Exiting... (0)
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.335 2 DEBUG nova.virt.libvirt.vif [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-702449656',display_name='tempest-TestNetworkBasicOps-server-702449656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-702449656',id=136,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG4yVL3kWMKssiMMTxBkYCYMGOyIrMgKWW6mSphQLUGjYPZbZn8kAoyGfeCty+GAJO7M0ajY8H+P8bZ7xOYgSuU5Lwh2F7EvM3DqGuRUQe2gvgkzvhd/Nxhr2daBHDjEw==',key_name='tempest-TestNetworkBasicOps-1401796894',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:04:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-aolvfudx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:04:43Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=4e056573-a7f9-40a3-b57a-8415af28183c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.336 2 DEBUG nova.network.os_vif_util [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:05:08 compute-0 systemd[1]: libpod-9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4.scope: Deactivated successfully.
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.340 2 DEBUG nova.network.os_vif_util [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.341 2 DEBUG os_vif [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3426f15c-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:08 compute-0 podman[410594]: 2025-10-02 09:05:08.345775655 +0000 UTC m=+0.059281392 container died 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.349 2 INFO os_vif [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf')
Oct 02 09:05:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4-userdata-shm.mount: Deactivated successfully.
Oct 02 09:05:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b747b7aa890dff406452ac20375c7d0d240a41a3a839599776ae767a20c998-merged.mount: Deactivated successfully.
Oct 02 09:05:08 compute-0 podman[410594]: 2025-10-02 09:05:08.400720464 +0000 UTC m=+0.114226201 container cleanup 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:05:08 compute-0 systemd[1]: libpod-conmon-9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4.scope: Deactivated successfully.
Oct 02 09:05:08 compute-0 podman[410652]: 2025-10-02 09:05:08.464596779 +0000 UTC m=+0.044328779 container remove 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.470 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29012875-bcce-4301-9a57-7d31a8e77d99]: (4, ('Thu Oct  2 09:05:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea (9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4)\n9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4\nThu Oct  2 09:05:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea (9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4)\n9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.472 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39f80d3d-0f93-4e96-9d1b-857d38a2d75f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.473 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb56304ae-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:08 compute-0 kernel: tapb56304ae-50: left promiscuous mode
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.493 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c29291c2-65fb-469e-bfb2-126aba3b0c6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.528 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cbda0e84-1cfe-40d3-b34d-58694bd615e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.529 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1f47c573-5554-4ac6-b9a3-9c1910f45d9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.544 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f4eaa2-eb18-48ef-b411-4d13ff3e0d67]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676110, 'reachable_time': 31686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410667, 'error': None, 'target': 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:08 compute-0 systemd[1]: run-netns-ovnmeta\x2db56304ae\x2d559d\x2d4697\x2db965\x2d787fd568f6ea.mount: Deactivated successfully.
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.548 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:05:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.548 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccd5481-5f10-4291-b8b8-a3cfa17d5f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.772 2 INFO nova.virt.libvirt.driver [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Deleting instance files /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c_del
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.773 2 INFO nova.virt.libvirt.driver [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Deletion of /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c_del complete
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.854 2 INFO nova.compute.manager [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.855 2 DEBUG oslo.service.loopingcall [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.856 2 DEBUG nova.compute.manager [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:05:08 compute-0 nova_compute[260603]: 2025-10-02 09:05:08.856 2 DEBUG nova.network.neutron [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:05:09 compute-0 ceph-mon[74477]: pgmap v2610: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:05:09 compute-0 nova_compute[260603]: 2025-10-02 09:05:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 314 KiB/s wr, 15 op/s
Oct 02 09:05:09 compute-0 nova_compute[260603]: 2025-10-02 09:05:09.951 2 DEBUG nova.network.neutron [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:05:09 compute-0 nova_compute[260603]: 2025-10-02 09:05:09.968 2 INFO nova.compute.manager [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Took 1.11 seconds to deallocate network for instance.
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.009 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.010 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:10 compute-0 podman[410669]: 2025-10-02 09:05:10.049917184 +0000 UTC m=+0.121336561 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 09:05:10 compute-0 podman[410670]: 2025-10-02 09:05:10.051967088 +0000 UTC m=+0.112057644 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.069 2 DEBUG oslo_concurrency.processutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.132 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-unplugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.133 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.134 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.135 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.136 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] No waiting events found dispatching network-vif-unplugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.136 2 WARNING nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received unexpected event network-vif-unplugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 for instance with vm_state deleted and task_state None.
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.137 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.137 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.138 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.138 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.139 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] No waiting events found dispatching network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.139 2 WARNING nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received unexpected event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 for instance with vm_state deleted and task_state None.
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.140 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-deleted-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:05:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266333853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.537 2 DEBUG oslo_concurrency.processutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.544 2 DEBUG nova.compute.provider_tree [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.559 2 DEBUG nova.scheduler.client.report [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.584 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.611 2 INFO nova.scheduler.client.report [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 4e056573-a7f9-40a3-b57a-8415af28183c
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.674 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.676 2 DEBUG nova.network.neutron [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updated VIF entry in instance network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.676 2 DEBUG nova.network.neutron [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.695 2 DEBUG oslo_concurrency.lockutils [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:05:10 compute-0 nova_compute[260603]: 2025-10-02 09:05:10.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:11 compute-0 ceph-mon[74477]: pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 314 KiB/s wr, 15 op/s
Oct 02 09:05:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3266333853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:05:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Oct 02 09:05:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:13 compute-0 ceph-mon[74477]: pgmap v2612: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Oct 02 09:05:13 compute-0 nova_compute[260603]: 2025-10-02 09:05:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct 02 09:05:13 compute-0 podman[410736]: 2025-10-02 09:05:13.983935268 +0000 UTC m=+0.050912934 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:05:14 compute-0 podman[410735]: 2025-10-02 09:05:14.018935025 +0000 UTC m=+0.090989319 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:05:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:14.331 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:15 compute-0 ceph-mon[74477]: pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct 02 09:05:15 compute-0 nova_compute[260603]: 2025-10-02 09:05:15.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:17 compute-0 ceph-mon[74477]: pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:18 compute-0 nova_compute[260603]: 2025-10-02 09:05:18.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:19 compute-0 nova_compute[260603]: 2025-10-02 09:05:19.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:19 compute-0 nova_compute[260603]: 2025-10-02 09:05:19.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:19 compute-0 ceph-mon[74477]: pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:20 compute-0 ceph-mon[74477]: pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:20 compute-0 nova_compute[260603]: 2025-10-02 09:05:20.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:05:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1588677404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:05:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:05:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1588677404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:05:23 compute-0 ceph-mon[74477]: pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1588677404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:05:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1588677404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:05:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:23 compute-0 nova_compute[260603]: 2025-10-02 09:05:23.319 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395908.3182437, 4e056573-a7f9-40a3-b57a-8415af28183c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:05:23 compute-0 nova_compute[260603]: 2025-10-02 09:05:23.320 2 INFO nova.compute.manager [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] VM Stopped (Lifecycle Event)
Oct 02 09:05:23 compute-0 nova_compute[260603]: 2025-10-02 09:05:23.342 2 DEBUG nova.compute.manager [None req-f9e99111-bc52-418f-8c72-a1d122eb949b - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:05:23 compute-0 nova_compute[260603]: 2025-10-02 09:05:23.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:25 compute-0 ceph-mon[74477]: pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct 02 09:05:25 compute-0 nova_compute[260603]: 2025-10-02 09:05:25.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:27 compute-0 ceph-mon[74477]: pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:05:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:05:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:05:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:05:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:05:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:05:28
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'images', 'default.rgw.control', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:05:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:28 compute-0 nova_compute[260603]: 2025-10-02 09:05:28.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:05:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:05:29 compute-0 ceph-mon[74477]: pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:30 compute-0 nova_compute[260603]: 2025-10-02 09:05:30.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:31 compute-0 ceph-mon[74477]: pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:33 compute-0 nova_compute[260603]: 2025-10-02 09:05:33.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:33 compute-0 ceph-mon[74477]: pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:35 compute-0 ceph-mon[74477]: pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:35 compute-0 nova_compute[260603]: 2025-10-02 09:05:35.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:36 compute-0 nova_compute[260603]: 2025-10-02 09:05:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:36 compute-0 nova_compute[260603]: 2025-10-02 09:05:36.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:05:37 compute-0 ceph-mon[74477]: pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:38 compute-0 nova_compute[260603]: 2025-10-02 09:05:38.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:38.447 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:67:c2 10.100.0.2 2001:db8::f816:3eff:fee2:67c2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee2:67c2/64', 'neutron:device_id': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1fb74237-5dc8-49ee-a35b-4801dc5960b2) old=Port_Binding(mac=['fa:16:3e:e2:67:c2 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:05:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:38.448 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1fb74237-5dc8-49ee-a35b-4801dc5960b2 in datapath 4d710978-7032-4293-a883-5a767163ed11 updated
Oct 02 09:05:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:38.449 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d710978-7032-4293-a883-5a767163ed11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:05:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:38.451 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b552220-86b8-49fb-84c6-42a2615ae1cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:38 compute-0 ceph-mon[74477]: pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:05:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:40 compute-0 nova_compute[260603]: 2025-10-02 09:05:40.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:40 compute-0 podman[410777]: 2025-10-02 09:05:40.98580244 +0000 UTC m=+0.046582579 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 02 09:05:41 compute-0 ceph-mon[74477]: pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:41 compute-0 podman[410776]: 2025-10-02 09:05:41.022552911 +0000 UTC m=+0.085608071 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 09:05:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:43 compute-0 ceph-mon[74477]: pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:43 compute-0 nova_compute[260603]: 2025-10-02 09:05:43.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.070 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.071 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.086 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.168 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.169 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.181 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.181 2 INFO nova.compute.claims [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.302 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:05:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/251730679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.763 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.769 2 DEBUG nova.compute.provider_tree [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.786 2 DEBUG nova.scheduler.client.report [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.814 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.815 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.861 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.861 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.884 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:05:44 compute-0 nova_compute[260603]: 2025-10-02 09:05:44.905 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.007 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.008 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.009 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Creating image(s)
Oct 02 09:05:45 compute-0 podman[410841]: 2025-10-02 09:05:45.014280897 +0000 UTC m=+0.079001275 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 09:05:45 compute-0 podman[410840]: 2025-10-02 09:05:45.015564438 +0000 UTC m=+0.082757733 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.034 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:05:45 compute-0 ceph-mon[74477]: pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/251730679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.060 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.092 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.097 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.150 2 DEBUG nova.policy [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.196 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.196 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.197 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.198 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.221 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.225 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 88de4189-44e3-48fe-aa38-c33334b314b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.491 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 88de4189-44e3-48fe-aa38-c33334b314b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.536 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.615 2 DEBUG nova.objects.instance [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 88de4189-44e3-48fe-aa38-c33334b314b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.636 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.636 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Ensure instance console log exists: /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.637 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.637 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.637 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.748 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Successfully created port: 8d7734cf-6636-4070-a868-e4d1d2cfab65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:05:45 compute-0 nova_compute[260603]: 2025-10-02 09:05:45.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:46 compute-0 nova_compute[260603]: 2025-10-02 09:05:46.895 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Successfully updated port: 8d7734cf-6636-4070-a868-e4d1d2cfab65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:05:46 compute-0 nova_compute[260603]: 2025-10-02 09:05:46.944 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:05:46 compute-0 nova_compute[260603]: 2025-10-02 09:05:46.945 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:05:46 compute-0 nova_compute[260603]: 2025-10-02 09:05:46.945 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:05:46 compute-0 nova_compute[260603]: 2025-10-02 09:05:46.998 2 DEBUG nova.compute.manager [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:05:46 compute-0 nova_compute[260603]: 2025-10-02 09:05:46.999 2 DEBUG nova.compute.manager [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing instance network info cache due to event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:05:47 compute-0 nova_compute[260603]: 2025-10-02 09:05:46.999 2 DEBUG oslo_concurrency.lockutils [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:05:47 compute-0 ceph-mon[74477]: pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:05:47 compute-0 nova_compute[260603]: 2025-10-02 09:05:47.078 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:05:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 72 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.046 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.069 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.070 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance network_info: |[{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.072 2 DEBUG oslo_concurrency.lockutils [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.072 2 DEBUG nova.network.neutron [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.079 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start _get_guest_xml network_info=[{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.085 2 WARNING nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.091 2 DEBUG nova.virt.libvirt.host [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.092 2 DEBUG nova.virt.libvirt.host [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.096 2 DEBUG nova.virt.libvirt.host [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.097 2 DEBUG nova.virt.libvirt.host [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.097 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.097 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.102 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:05:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3374293255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.578 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.601 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:05:48 compute-0 nova_compute[260603]: 2025-10-02 09:05:48.607 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:05:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965104233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:05:49 compute-0 ceph-mon[74477]: pgmap v2630: 305 pgs: 305 active+clean; 72 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Oct 02 09:05:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3374293255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:05:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2965104233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.060 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.062 2 DEBUG nova.virt.libvirt.vif [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1987487733',display_name='tempest-TestGettingAddress-server-1987487733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1987487733',id=137,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-qbro3vsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:05:44Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=88de4189-44e3-48fe-aa38-c33334b314b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.063 2 DEBUG nova.network.os_vif_util [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.064 2 DEBUG nova.network.os_vif_util [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.066 2 DEBUG nova.objects.instance [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88de4189-44e3-48fe-aa38-c33334b314b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.085 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <uuid>88de4189-44e3-48fe-aa38-c33334b314b5</uuid>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <name>instance-00000089</name>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-1987487733</nova:name>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:05:48</nova:creationTime>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <nova:port uuid="8d7734cf-6636-4070-a868-e4d1d2cfab65">
Oct 02 09:05:49 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fed3:2da6" ipVersion="6"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <system>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <entry name="serial">88de4189-44e3-48fe-aa38-c33334b314b5</entry>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <entry name="uuid">88de4189-44e3-48fe-aa38-c33334b314b5</entry>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </system>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <os>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   </os>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <features>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   </features>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/88de4189-44e3-48fe-aa38-c33334b314b5_disk">
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       </source>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/88de4189-44e3-48fe-aa38-c33334b314b5_disk.config">
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       </source>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:05:49 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:d3:2d:a6"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <target dev="tap8d7734cf-66"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/console.log" append="off"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <video>
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </video>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:05:49 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:05:49 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:05:49 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:05:49 compute-0 nova_compute[260603]: </domain>
Oct 02 09:05:49 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.086 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Preparing to wait for external event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.086 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.087 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.087 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.087 2 DEBUG nova.virt.libvirt.vif [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1987487733',display_name='tempest-TestGettingAddress-server-1987487733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1987487733',id=137,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-qbro3vsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:05:44Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=88de4189-44e3-48fe-aa38-c33334b314b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.088 2 DEBUG nova.network.os_vif_util [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.088 2 DEBUG nova.network.os_vif_util [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.089 2 DEBUG os_vif [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.092 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d7734cf-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d7734cf-66, col_values=(('external_ids', {'iface-id': '8d7734cf-6636-4070-a868-e4d1d2cfab65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:2d:a6', 'vm-uuid': '88de4189-44e3-48fe-aa38-c33334b314b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:49 compute-0 NetworkManager[45129]: <info>  [1759395949.0951] manager: (tap8d7734cf-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/596)
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.100 2 INFO os_vif [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66')
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.154 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.154 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.155 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:d3:2d:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.155 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Using config drive
Oct 02 09:05:49 compute-0 nova_compute[260603]: 2025-10-02 09:05:49.179 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:05:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:05:50 compute-0 nova_compute[260603]: 2025-10-02 09:05:50.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:50 compute-0 nova_compute[260603]: 2025-10-02 09:05:50.735 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Creating config drive at /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config
Oct 02 09:05:50 compute-0 nova_compute[260603]: 2025-10-02 09:05:50.743 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwtoqqs7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:50 compute-0 nova_compute[260603]: 2025-10-02 09:05:50.895 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwtoqqs7" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:50 compute-0 nova_compute[260603]: 2025-10-02 09:05:50.930 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:05:50 compute-0 nova_compute[260603]: 2025-10-02 09:05:50.934 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:05:50 compute-0 nova_compute[260603]: 2025-10-02 09:05:50.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 ceph-mon[74477]: pgmap v2631: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.072 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.073 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Deleting local config drive /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config because it was imported into RBD.
Oct 02 09:05:51 compute-0 kernel: tap8d7734cf-66: entered promiscuous mode
Oct 02 09:05:51 compute-0 NetworkManager[45129]: <info>  [1759395951.1252] manager: (tap8d7734cf-66): new Tun device (/org/freedesktop/NetworkManager/Devices/597)
Oct 02 09:05:51 compute-0 ovn_controller[152344]: 2025-10-02T09:05:51Z|01483|binding|INFO|Claiming lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 for this chassis.
Oct 02 09:05:51 compute-0 ovn_controller[152344]: 2025-10-02T09:05:51Z|01484|binding|INFO|8d7734cf-6636-4070-a868-e4d1d2cfab65: Claiming fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 systemd-udevd[411182]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.158 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6'], port_security=['fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fed3:2da6/64', 'neutron:device_id': '88de4189-44e3-48fe-aa38-c33334b314b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f998d1a-9f03-4830-9263-e8f19e5bb79e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8d7734cf-6636-4070-a868-e4d1d2cfab65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.158 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7734cf-6636-4070-a868-e4d1d2cfab65 in datapath 4d710978-7032-4293-a883-5a767163ed11 bound to our chassis
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.159 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d710978-7032-4293-a883-5a767163ed11
Oct 02 09:05:51 compute-0 NetworkManager[45129]: <info>  [1759395951.1631] device (tap8d7734cf-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:05:51 compute-0 NetworkManager[45129]: <info>  [1759395951.1639] device (tap8d7734cf-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:05:51 compute-0 systemd-machined[214636]: New machine qemu-171-instance-00000089.
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.170 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7987df72-eeee-4100-9113-3b59ae2df46e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.171 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d710978-71 in ovnmeta-4d710978-7032-4293-a883-5a767163ed11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.173 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d710978-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e1f9e3-8140-412d-a198-dfd6b8c1f891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa33417f-9249-4cc0-ae51-ad7fb9d323bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.186 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8176f6-e0f8-4967-ade0-bd0aa5ba1dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_controller[152344]: 2025-10-02T09:05:51Z|01485|binding|INFO|Setting lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 ovn-installed in OVS
Oct 02 09:05:51 compute-0 ovn_controller[152344]: 2025-10-02T09:05:51Z|01486|binding|INFO|Setting lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 up in Southbound
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-00000089.
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.210 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b6b6ee-6645-4943-8fb8-327861a178fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.238 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7f6612-b6bf-45ff-a8ce-6b4937f8a919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.242 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0137a932-5a85-4e9d-849e-3192c0039526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 NetworkManager[45129]: <info>  [1759395951.2439] manager: (tap4d710978-70): new Veth device (/org/freedesktop/NetworkManager/Devices/598)
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.280 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[038be9f2-e9e0-4957-be88-fd2f42bcb473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.282 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c54d0341-65fc-46d0-808b-2461932c0fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 NetworkManager[45129]: <info>  [1759395951.3052] device (tap4d710978-70): carrier: link connected
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.312 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0d8635-83e4-475c-8f33-0d381b9828bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.330 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[693897b2-925a-467a-bb8c-1f91aa3e4ed0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d710978-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:67:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682990, 'reachable_time': 19213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411216, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.347 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2715b6-38b6-42f2-a117-482b2599d4d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:67c2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682990, 'tstamp': 682990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411217, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.365 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c541e48d-b400-4d9d-b76a-a710bd422157]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d710978-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:67:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682990, 'reachable_time': 19213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411218, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc4982f-cf80-4066-8d4d-db64a8c3d97c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[451d690c-138e-41d4-8697-a1d58e932148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.462 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d710978-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.462 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.463 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d710978-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:51 compute-0 kernel: tap4d710978-70: entered promiscuous mode
Oct 02 09:05:51 compute-0 NetworkManager[45129]: <info>  [1759395951.4657] manager: (tap4d710978-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.467 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d710978-70, col_values=(('external_ids', {'iface-id': '1fb74237-5dc8-49ee-a35b-4801dc5960b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 ovn_controller[152344]: 2025-10-02T09:05:51Z|01487|binding|INFO|Releasing lport 1fb74237-5dc8-49ee-a35b-4801dc5960b2 from this chassis (sb_readonly=0)
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.470 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d710978-7032-4293-a883-5a767163ed11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d710978-7032-4293-a883-5a767163ed11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.471 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1457a177-ce6b-4af7-9657-0fa8e0174a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.472 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-4d710978-7032-4293-a883-5a767163ed11
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/4d710978-7032-4293-a883-5a767163ed11.pid.haproxy
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 4d710978-7032-4293-a883-5a767163ed11
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:05:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.474 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'env', 'PROCESS_TAG=haproxy-4d710978-7032-4293-a883-5a767163ed11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d710978-7032-4293-a883-5a767163ed11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:51 compute-0 podman[411292]: 2025-10-02 09:05:51.827970569 +0000 UTC m=+0.054873087 container create 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:05:51 compute-0 systemd[1]: Started libpod-conmon-217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661.scope.
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.881 2 DEBUG nova.compute.manager [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.883 2 DEBUG oslo_concurrency.lockutils [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.884 2 DEBUG oslo_concurrency.lockutils [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.884 2 DEBUG oslo_concurrency.lockutils [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:51 compute-0 nova_compute[260603]: 2025-10-02 09:05:51.885 2 DEBUG nova.compute.manager [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Processing event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:05:51 compute-0 podman[411292]: 2025-10-02 09:05:51.796430989 +0000 UTC m=+0.023333597 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:05:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40017d254e3a7ef326f21d5b61e3393056686d0000a01e658c3264ac29a6b4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:05:51 compute-0 podman[411292]: 2025-10-02 09:05:51.927625425 +0000 UTC m=+0.154527953 container init 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:05:51 compute-0 podman[411292]: 2025-10-02 09:05:51.935847351 +0000 UTC m=+0.162749869 container start 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 09:05:51 compute-0 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [NOTICE]   (411311) : New worker (411313) forked
Oct 02 09:05:51 compute-0 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [NOTICE]   (411311) : Loading success.
Oct 02 09:05:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.096 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.097 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395952.095616, 88de4189-44e3-48fe-aa38-c33334b314b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.097 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] VM Started (Lifecycle Event)
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.102 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.106 2 INFO nova.virt.libvirt.driver [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance spawned successfully.
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.106 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.120 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.124 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.135 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.135 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.136 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.136 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.137 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.137 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.140 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.140 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395952.0965545, 88de4189-44e3-48fe-aa38-c33334b314b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.141 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] VM Paused (Lifecycle Event)
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.167 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.170 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395952.101426, 88de4189-44e3-48fe-aa38-c33334b314b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.171 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] VM Resumed (Lifecycle Event)
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.189 2 INFO nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Took 7.18 seconds to spawn the instance on the hypervisor.
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.189 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.190 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.196 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.232 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.248 2 INFO nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Took 8.11 seconds to build instance.
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.262 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.791 2 DEBUG nova.network.neutron [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updated VIF entry in instance network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.791 2 DEBUG nova.network.neutron [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:05:52 compute-0 nova_compute[260603]: 2025-10-02 09:05:52.806 2 DEBUG oslo_concurrency.lockutils [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:05:53 compute-0 ceph-mon[74477]: pgmap v2632: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:05:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.737 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.737 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.738 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.738 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 88de4189-44e3-48fe-aa38-c33334b314b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:05:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.973 2 DEBUG nova.compute.manager [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.973 2 DEBUG oslo_concurrency.lockutils [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.974 2 DEBUG oslo_concurrency.lockutils [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.974 2 DEBUG oslo_concurrency.lockutils [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.974 2 DEBUG nova.compute.manager [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] No waiting events found dispatching network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:05:53 compute-0 nova_compute[260603]: 2025-10-02 09:05:53.974 2 WARNING nova.compute.manager [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received unexpected event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 for instance with vm_state active and task_state None.
Oct 02 09:05:54 compute-0 nova_compute[260603]: 2025-10-02 09:05:54.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:55 compute-0 ceph-mon[74477]: pgmap v2633: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 02 09:05:55 compute-0 nova_compute[260603]: 2025-10-02 09:05:55.087 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:05:55 compute-0 nova_compute[260603]: 2025-10-02 09:05:55.111 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:05:55 compute-0 nova_compute[260603]: 2025-10-02 09:05:55.112 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:05:55 compute-0 NetworkManager[45129]: <info>  [1759395955.6941] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Oct 02 09:05:55 compute-0 NetworkManager[45129]: <info>  [1759395955.6949] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Oct 02 09:05:55 compute-0 nova_compute[260603]: 2025-10-02 09:05:55.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:55 compute-0 ovn_controller[152344]: 2025-10-02T09:05:55Z|01488|binding|INFO|Releasing lport 1fb74237-5dc8-49ee-a35b-4801dc5960b2 from this chassis (sb_readonly=0)
Oct 02 09:05:55 compute-0 nova_compute[260603]: 2025-10-02 09:05:55.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:55 compute-0 ovn_controller[152344]: 2025-10-02T09:05:55Z|01489|binding|INFO|Releasing lport 1fb74237-5dc8-49ee-a35b-4801dc5960b2 from this chassis (sb_readonly=0)
Oct 02 09:05:55 compute-0 nova_compute[260603]: 2025-10-02 09:05:55.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 02 09:05:56 compute-0 nova_compute[260603]: 2025-10-02 09:05:56.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:56 compute-0 nova_compute[260603]: 2025-10-02 09:05:56.895 2 DEBUG nova.compute.manager [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:05:56 compute-0 nova_compute[260603]: 2025-10-02 09:05:56.895 2 DEBUG nova.compute.manager [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing instance network info cache due to event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:05:56 compute-0 nova_compute[260603]: 2025-10-02 09:05:56.896 2 DEBUG oslo_concurrency.lockutils [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:05:56 compute-0 nova_compute[260603]: 2025-10-02 09:05:56.896 2 DEBUG oslo_concurrency.lockutils [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:05:56 compute-0 nova_compute[260603]: 2025-10-02 09:05:56.896 2 DEBUG nova.network.neutron [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:05:57 compute-0 ceph-mon[74477]: pgmap v2634: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 02 09:05:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:05:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:05:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:05:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:05:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:05:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:05:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:05:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:05:59 compute-0 nova_compute[260603]: 2025-10-02 09:05:59.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:05:59 compute-0 ceph-mon[74477]: pgmap v2635: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:05:59 compute-0 nova_compute[260603]: 2025-10-02 09:05:59.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:59 compute-0 nova_compute[260603]: 2025-10-02 09:05:59.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:05:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 351 KiB/s wr, 87 op/s
Oct 02 09:06:00 compute-0 nova_compute[260603]: 2025-10-02 09:06:00.754 2 DEBUG nova.network.neutron [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updated VIF entry in instance network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:06:00 compute-0 nova_compute[260603]: 2025-10-02 09:06:00.755 2 DEBUG nova.network.neutron [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:00 compute-0 nova_compute[260603]: 2025-10-02 09:06:00.775 2 DEBUG oslo_concurrency.lockutils [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:06:00 compute-0 nova_compute[260603]: 2025-10-02 09:06:00.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:01 compute-0 ceph-mon[74477]: pgmap v2636: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 351 KiB/s wr, 87 op/s
Oct 02 09:06:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:06:02 compute-0 nova_compute[260603]: 2025-10-02 09:06:02.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:06:02 compute-0 nova_compute[260603]: 2025-10-02 09:06:02.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:02 compute-0 nova_compute[260603]: 2025-10-02 09:06:02.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:02 compute-0 nova_compute[260603]: 2025-10-02 09:06:02.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:02 compute-0 nova_compute[260603]: 2025-10-02 09:06:02.546 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:06:02 compute-0 nova_compute[260603]: 2025-10-02 09:06:02.546 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:06:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939563937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:03 compute-0 nova_compute[260603]: 2025-10-02 09:06:03.020 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:03 compute-0 nova_compute[260603]: 2025-10-02 09:06:03.097 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:06:03 compute-0 nova_compute[260603]: 2025-10-02 09:06:03.098 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:06:03 compute-0 ceph-mon[74477]: pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:06:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1939563937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:03 compute-0 nova_compute[260603]: 2025-10-02 09:06:03.254 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:06:03 compute-0 nova_compute[260603]: 2025-10-02 09:06:03.256 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3447MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:06:03 compute-0 nova_compute[260603]: 2025-10-02 09:06:03.257 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:03 compute-0 nova_compute[260603]: 2025-10-02 09:06:03.257 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:03 compute-0 ovn_controller[152344]: 2025-10-02T09:06:03Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:2d:a6 10.100.0.11
Oct 02 09:06:03 compute-0 ovn_controller[152344]: 2025-10-02T09:06:03Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:2d:a6 10.100.0.11
Oct 02 09:06:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 94 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 677 KiB/s wr, 96 op/s
Oct 02 09:06:04 compute-0 nova_compute[260603]: 2025-10-02 09:06:04.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:04 compute-0 nova_compute[260603]: 2025-10-02 09:06:04.185 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 88de4189-44e3-48fe-aa38-c33334b314b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:06:04 compute-0 nova_compute[260603]: 2025-10-02 09:06:04.186 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:06:04 compute-0 nova_compute[260603]: 2025-10-02 09:06:04.186 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:06:04 compute-0 sudo[411346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:04 compute-0 sudo[411346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:04 compute-0 sudo[411346]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:04 compute-0 sudo[411371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:06:04 compute-0 sudo[411371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:04 compute-0 sudo[411371]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:04 compute-0 sudo[411396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:04 compute-0 sudo[411396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:04 compute-0 sudo[411396]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:04 compute-0 sudo[411421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:06:04 compute-0 sudo[411421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:04 compute-0 nova_compute[260603]: 2025-10-02 09:06:04.985 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:05 compute-0 ceph-mon[74477]: pgmap v2638: 305 pgs: 305 active+clean; 94 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 677 KiB/s wr, 96 op/s
Oct 02 09:06:05 compute-0 sudo[411421]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:06:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:06:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:06:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:06:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:06:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:06:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 875d9811-5edb-40fa-aa87-bec749d34134 does not exist
Oct 02 09:06:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a6e7a63e-3254-4ae5-9405-4a34a8318539 does not exist
Oct 02 09:06:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev eb17e1f1-9105-4496-8e43-33a8db5780ab does not exist
Oct 02 09:06:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:06:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:06:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:06:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:06:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:06:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:06:05 compute-0 sudo[411497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:05 compute-0 sudo[411497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:05 compute-0 sudo[411497]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:05 compute-0 sudo[411522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:06:05 compute-0 sudo[411522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:05 compute-0 sudo[411522]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:06:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183541134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:05 compute-0 sudo[411547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:05 compute-0 sudo[411547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:05 compute-0 sudo[411547]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:05 compute-0 nova_compute[260603]: 2025-10-02 09:06:05.425 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:05 compute-0 nova_compute[260603]: 2025-10-02 09:06:05.432 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:06:05 compute-0 nova_compute[260603]: 2025-10-02 09:06:05.454 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:06:05 compute-0 sudo[411574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:06:05 compute-0 sudo[411574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:05 compute-0 nova_compute[260603]: 2025-10-02 09:06:05.481 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:06:05 compute-0 nova_compute[260603]: 2025-10-02 09:06:05.482 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:05 compute-0 podman[411637]: 2025-10-02 09:06:05.806807602 +0000 UTC m=+0.036302319 container create fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:06:05 compute-0 systemd[1]: Started libpod-conmon-fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665.scope.
Oct 02 09:06:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:06:05 compute-0 podman[411637]: 2025-10-02 09:06:05.883765424 +0000 UTC m=+0.113260161 container init fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:06:05 compute-0 podman[411637]: 2025-10-02 09:06:05.789296398 +0000 UTC m=+0.018791135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:06:05 compute-0 podman[411637]: 2025-10-02 09:06:05.890169942 +0000 UTC m=+0.119664649 container start fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:06:05 compute-0 podman[411637]: 2025-10-02 09:06:05.892786834 +0000 UTC m=+0.122281551 container attach fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 02 09:06:05 compute-0 modest_zhukovsky[411653]: 167 167
Oct 02 09:06:05 compute-0 systemd[1]: libpod-fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665.scope: Deactivated successfully.
Oct 02 09:06:05 compute-0 conmon[411653]: conmon fede4b5732310de836a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665.scope/container/memory.events
Oct 02 09:06:05 compute-0 podman[411637]: 2025-10-02 09:06:05.895936422 +0000 UTC m=+0.125431139 container died fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2fd0139ab503f896729f91775f4266da5f50286da552416622bf91ce3e09dbf-merged.mount: Deactivated successfully.
Oct 02 09:06:05 compute-0 podman[411637]: 2025-10-02 09:06:05.931317071 +0000 UTC m=+0.160811788 container remove fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 09:06:05 compute-0 systemd[1]: libpod-conmon-fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665.scope: Deactivated successfully.
Oct 02 09:06:05 compute-0 nova_compute[260603]: 2025-10-02 09:06:05.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 94 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 665 KiB/s wr, 71 op/s
Oct 02 09:06:06 compute-0 podman[411678]: 2025-10-02 09:06:06.12276195 +0000 UTC m=+0.045578317 container create de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:06:06 compute-0 systemd[1]: Started libpod-conmon-de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09.scope.
Oct 02 09:06:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:06:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:06:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:06:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:06:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:06:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:06:06 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4183541134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:06 compute-0 podman[411678]: 2025-10-02 09:06:06.106289038 +0000 UTC m=+0.029105425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:06:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:06 compute-0 podman[411678]: 2025-10-02 09:06:06.249942893 +0000 UTC m=+0.172759350 container init de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:06:06 compute-0 podman[411678]: 2025-10-02 09:06:06.257940211 +0000 UTC m=+0.180756608 container start de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:06:06 compute-0 podman[411678]: 2025-10-02 09:06:06.263099922 +0000 UTC m=+0.185916319 container attach de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:06:07 compute-0 fervent_payne[411694]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:06:07 compute-0 fervent_payne[411694]: --> relative data size: 1.0
Oct 02 09:06:07 compute-0 fervent_payne[411694]: --> All data devices are unavailable
Oct 02 09:06:07 compute-0 systemd[1]: libpod-de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09.scope: Deactivated successfully.
Oct 02 09:06:07 compute-0 podman[411678]: 2025-10-02 09:06:07.423069109 +0000 UTC m=+1.345885486 container died de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 09:06:07 compute-0 systemd[1]: libpod-de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09.scope: Consumed 1.108s CPU time.
Oct 02 09:06:07 compute-0 ceph-mon[74477]: pgmap v2639: 305 pgs: 305 active+clean; 94 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 665 KiB/s wr, 71 op/s
Oct 02 09:06:07 compute-0 nova_compute[260603]: 2025-10-02 09:06:07.478 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be-merged.mount: Deactivated successfully.
Oct 02 09:06:07 compute-0 podman[411678]: 2025-10-02 09:06:07.718660894 +0000 UTC m=+1.641477261 container remove de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 09:06:07 compute-0 systemd[1]: libpod-conmon-de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09.scope: Deactivated successfully.
Oct 02 09:06:07 compute-0 sudo[411574]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:07 compute-0 sudo[411735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:07 compute-0 sudo[411735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:07 compute-0 sudo[411735]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:07 compute-0 sudo[411760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:06:07 compute-0 sudo[411760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:07 compute-0 sudo[411760]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 116 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 02 09:06:08 compute-0 sudo[411785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:08 compute-0 sudo[411785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:08 compute-0 sudo[411785]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:08 compute-0 sudo[411810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:06:08 compute-0 sudo[411810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:08 compute-0 podman[411878]: 2025-10-02 09:06:08.613126 +0000 UTC m=+0.044907136 container create 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 09:06:08 compute-0 systemd[1]: Started libpod-conmon-9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88.scope.
Oct 02 09:06:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:06:08 compute-0 podman[411878]: 2025-10-02 09:06:08.593683175 +0000 UTC m=+0.025464341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:06:08 compute-0 podman[411878]: 2025-10-02 09:06:08.693941732 +0000 UTC m=+0.125722898 container init 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:06:08 compute-0 podman[411878]: 2025-10-02 09:06:08.701461235 +0000 UTC m=+0.133242371 container start 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:06:08 compute-0 podman[411878]: 2025-10-02 09:06:08.704968875 +0000 UTC m=+0.136750011 container attach 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 09:06:08 compute-0 agitated_brahmagupta[411894]: 167 167
Oct 02 09:06:08 compute-0 systemd[1]: libpod-9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88.scope: Deactivated successfully.
Oct 02 09:06:08 compute-0 podman[411878]: 2025-10-02 09:06:08.707102531 +0000 UTC m=+0.138883667 container died 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-586769ccf336b029e89699e437000c9f63b92eddeba544d7e0fb09a2c4243c70-merged.mount: Deactivated successfully.
Oct 02 09:06:08 compute-0 podman[411878]: 2025-10-02 09:06:08.74861452 +0000 UTC m=+0.180395686 container remove 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:06:08 compute-0 systemd[1]: libpod-conmon-9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88.scope: Deactivated successfully.
Oct 02 09:06:08 compute-0 podman[411918]: 2025-10-02 09:06:08.927738477 +0000 UTC m=+0.040925993 container create e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 02 09:06:08 compute-0 systemd[1]: Started libpod-conmon-e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3.scope.
Oct 02 09:06:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:08 compute-0 podman[411918]: 2025-10-02 09:06:08.993562862 +0000 UTC m=+0.106750408 container init e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:06:09 compute-0 podman[411918]: 2025-10-02 09:06:09.000500028 +0000 UTC m=+0.113687564 container start e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:06:09 compute-0 podman[411918]: 2025-10-02 09:06:08.908391455 +0000 UTC m=+0.021578981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:06:09 compute-0 podman[411918]: 2025-10-02 09:06:09.003742718 +0000 UTC m=+0.116930234 container attach e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:06:09 compute-0 nova_compute[260603]: 2025-10-02 09:06:09.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:09 compute-0 ceph-mon[74477]: pgmap v2640: 305 pgs: 305 active+clean; 116 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 02 09:06:09 compute-0 nova_compute[260603]: 2025-10-02 09:06:09.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]: {
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:     "0": [
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:         {
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "devices": [
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "/dev/loop3"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             ],
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_name": "ceph_lv0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_size": "21470642176",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "name": "ceph_lv0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "tags": {
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cluster_name": "ceph",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.crush_device_class": "",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.encrypted": "0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osd_id": "0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.type": "block",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.vdo": "0"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             },
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "type": "block",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "vg_name": "ceph_vg0"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:         }
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:     ],
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:     "1": [
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:         {
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "devices": [
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "/dev/loop4"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             ],
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_name": "ceph_lv1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_size": "21470642176",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "name": "ceph_lv1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "tags": {
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cluster_name": "ceph",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.crush_device_class": "",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.encrypted": "0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osd_id": "1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.type": "block",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.vdo": "0"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             },
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "type": "block",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "vg_name": "ceph_vg1"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:         }
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:     ],
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:     "2": [
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:         {
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "devices": [
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "/dev/loop5"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             ],
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_name": "ceph_lv2",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_size": "21470642176",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "name": "ceph_lv2",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "tags": {
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.cluster_name": "ceph",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.crush_device_class": "",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.encrypted": "0",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osd_id": "2",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.type": "block",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:                 "ceph.vdo": "0"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             },
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "type": "block",
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:             "vg_name": "ceph_vg2"
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:         }
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]:     ]
Oct 02 09:06:09 compute-0 unruffled_hofstadter[411935]: }
Oct 02 09:06:09 compute-0 systemd[1]: libpod-e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3.scope: Deactivated successfully.
Oct 02 09:06:09 compute-0 conmon[411935]: conmon e3f807d88903815013bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3.scope/container/memory.events
Oct 02 09:06:09 compute-0 podman[411918]: 2025-10-02 09:06:09.787474424 +0000 UTC m=+0.900661970 container died e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f-merged.mount: Deactivated successfully.
Oct 02 09:06:09 compute-0 podman[411918]: 2025-10-02 09:06:09.865878991 +0000 UTC m=+0.979066517 container remove e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 09:06:09 compute-0 systemd[1]: libpod-conmon-e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3.scope: Deactivated successfully.
Oct 02 09:06:09 compute-0 sudo[411810]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:09 compute-0 sudo[411959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:09 compute-0 sudo[411959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:09 compute-0 sudo[411959]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:06:10 compute-0 sudo[411984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:06:10 compute-0 sudo[411984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:10 compute-0 sudo[411984]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:10 compute-0 sudo[412009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:10 compute-0 sudo[412009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:10 compute-0 sudo[412009]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:10 compute-0 sudo[412034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:06:10 compute-0 sudo[412034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:10 compute-0 podman[412103]: 2025-10-02 09:06:10.449085704 +0000 UTC m=+0.035252076 container create a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:06:10 compute-0 systemd[1]: Started libpod-conmon-a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f.scope.
Oct 02 09:06:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:06:10 compute-0 podman[412103]: 2025-10-02 09:06:10.529342418 +0000 UTC m=+0.115508810 container init a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:06:10 compute-0 podman[412103]: 2025-10-02 09:06:10.434007246 +0000 UTC m=+0.020173628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:06:10 compute-0 podman[412103]: 2025-10-02 09:06:10.537206443 +0000 UTC m=+0.123372855 container start a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:06:10 compute-0 dreamy_banach[412119]: 167 167
Oct 02 09:06:10 compute-0 systemd[1]: libpod-a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f.scope: Deactivated successfully.
Oct 02 09:06:10 compute-0 podman[412103]: 2025-10-02 09:06:10.557442131 +0000 UTC m=+0.143608523 container attach a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 09:06:10 compute-0 podman[412103]: 2025-10-02 09:06:10.557888985 +0000 UTC m=+0.144055357 container died a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:06:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e64017814e4b0002ce2a1836426459a3b7d938e525caa37f6d21310ae91af749-merged.mount: Deactivated successfully.
Oct 02 09:06:10 compute-0 podman[412103]: 2025-10-02 09:06:10.630860052 +0000 UTC m=+0.217026424 container remove a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 02 09:06:10 compute-0 systemd[1]: libpod-conmon-a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f.scope: Deactivated successfully.
Oct 02 09:06:10 compute-0 podman[412142]: 2025-10-02 09:06:10.887923882 +0000 UTC m=+0.089979968 container create b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:06:10 compute-0 podman[412142]: 2025-10-02 09:06:10.845452832 +0000 UTC m=+0.047508988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:06:10 compute-0 systemd[1]: Started libpod-conmon-b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258.scope.
Oct 02 09:06:10 compute-0 nova_compute[260603]: 2025-10-02 09:06:10.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:06:11 compute-0 podman[412142]: 2025-10-02 09:06:11.005849686 +0000 UTC m=+0.207905852 container init b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:06:11 compute-0 podman[412142]: 2025-10-02 09:06:11.020045217 +0000 UTC m=+0.222101293 container start b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:06:11 compute-0 podman[412142]: 2025-10-02 09:06:11.024597668 +0000 UTC m=+0.226653734 container attach b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:06:11 compute-0 ceph-mon[74477]: pgmap v2641: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:06:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:06:12 compute-0 podman[412183]: 2025-10-02 09:06:12.029683402 +0000 UTC m=+0.077255862 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:06:12 compute-0 busy_banzai[412158]: {
Oct 02 09:06:12 compute-0 busy_banzai[412158]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "osd_id": 2,
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "type": "bluestore"
Oct 02 09:06:12 compute-0 busy_banzai[412158]:     },
Oct 02 09:06:12 compute-0 busy_banzai[412158]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "osd_id": 1,
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "type": "bluestore"
Oct 02 09:06:12 compute-0 busy_banzai[412158]:     },
Oct 02 09:06:12 compute-0 busy_banzai[412158]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "osd_id": 0,
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:06:12 compute-0 busy_banzai[412158]:         "type": "bluestore"
Oct 02 09:06:12 compute-0 busy_banzai[412158]:     }
Oct 02 09:06:12 compute-0 busy_banzai[412158]: }
Oct 02 09:06:12 compute-0 systemd[1]: libpod-b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258.scope: Deactivated successfully.
Oct 02 09:06:12 compute-0 podman[412142]: 2025-10-02 09:06:12.071145971 +0000 UTC m=+1.273202037 container died b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:06:12 compute-0 systemd[1]: libpod-b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258.scope: Consumed 1.054s CPU time.
Oct 02 09:06:12 compute-0 podman[412180]: 2025-10-02 09:06:12.078179089 +0000 UTC m=+0.136337067 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 09:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620-merged.mount: Deactivated successfully.
Oct 02 09:06:12 compute-0 podman[412142]: 2025-10-02 09:06:12.144351436 +0000 UTC m=+1.346407492 container remove b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:06:12 compute-0 systemd[1]: libpod-conmon-b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258.scope: Deactivated successfully.
Oct 02 09:06:12 compute-0 sudo[412034]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:06:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:06:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:06:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:06:12 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1dfd4a3a-6e8f-4f23-bcf2-9e2dbd36ac43 does not exist
Oct 02 09:06:12 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6972d3a0-e438-46d5-aa56-0c153c8901c6 does not exist
Oct 02 09:06:12 compute-0 sudo[412247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:06:12 compute-0 sudo[412247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:12 compute-0 sudo[412247]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:12 compute-0 sudo[412272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:06:12 compute-0 sudo[412272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:06:12 compute-0 sudo[412272]: pam_unix(sudo:session): session closed for user root
Oct 02 09:06:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:13 compute-0 ceph-mon[74477]: pgmap v2642: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:06:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:06:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:06:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:06:14 compute-0 nova_compute[260603]: 2025-10-02 09:06:14.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:15 compute-0 ceph-mon[74477]: pgmap v2643: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.594 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.594 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.619 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.738 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.739 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.752 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.752 2 INFO nova.compute.claims [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.906 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:15 compute-0 nova_compute[260603]: 2025-10-02 09:06:15.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 09:06:16 compute-0 podman[412297]: 2025-10-02 09:06:16.009636062 +0000 UTC m=+0.068842750 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:06:16 compute-0 podman[412299]: 2025-10-02 09:06:16.020366326 +0000 UTC m=+0.064787365 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:06:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:06:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514338891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.380 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.390 2 DEBUG nova.compute.provider_tree [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:06:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3514338891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.422 2 DEBUG nova.scheduler.client.report [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.447 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.448 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.491 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.492 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.509 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.531 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.660 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.661 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.662 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Creating image(s)
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.690 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.712 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.730 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.733 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.773 2 DEBUG nova.policy [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.807 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.808 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.808 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.809 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.829 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:06:16 compute-0 nova_compute[260603]: 2025-10-02 09:06:16.833 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:17 compute-0 nova_compute[260603]: 2025-10-02 09:06:17.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:17.490 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:06:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:17.492 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:06:17 compute-0 ceph-mon[74477]: pgmap v2644: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 09:06:17 compute-0 nova_compute[260603]: 2025-10-02 09:06:17.842 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Successfully created port: 84190bf6-548d-4a11-83b3-0e6be88619c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:06:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 09:06:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:18 compute-0 nova_compute[260603]: 2025-10-02 09:06:18.314 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:18 compute-0 nova_compute[260603]: 2025-10-02 09:06:18.397 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:06:18 compute-0 ceph-mon[74477]: pgmap v2645: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 09:06:18 compute-0 nova_compute[260603]: 2025-10-02 09:06:18.697 2 DEBUG nova.objects.instance [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid a021c5bb-f6b0-4434-bf26-81f294f0fe00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:06:18 compute-0 nova_compute[260603]: 2025-10-02 09:06:18.717 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:06:18 compute-0 nova_compute[260603]: 2025-10-02 09:06:18.718 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Ensure instance console log exists: /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:06:18 compute-0 nova_compute[260603]: 2025-10-02 09:06:18.718 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:18 compute-0 nova_compute[260603]: 2025-10-02 09:06:18.719 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:18 compute-0 nova_compute[260603]: 2025-10-02 09:06:18.719 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.041 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Successfully updated port: 84190bf6-548d-4a11-83b3-0e6be88619c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.062 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.062 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.062 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.154 2 DEBUG nova.compute.manager [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.155 2 DEBUG nova.compute.manager [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing instance network info cache due to event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.155 2 DEBUG oslo_concurrency.lockutils [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:06:19 compute-0 nova_compute[260603]: 2025-10-02 09:06:19.247 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:06:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 144 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1011 KiB/s wr, 16 op/s
Oct 02 09:06:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:20.496 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.539 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.569 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.570 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance network_info: |[{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.570 2 DEBUG oslo_concurrency.lockutils [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.570 2 DEBUG nova.network.neutron [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.574 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start _get_guest_xml network_info=[{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.579 2 WARNING nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.585 2 DEBUG nova.virt.libvirt.host [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.586 2 DEBUG nova.virt.libvirt.host [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.590 2 DEBUG nova.virt.libvirt.host [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.590 2 DEBUG nova.virt.libvirt.host [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.590 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.591 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.591 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.591 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.592 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.592 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.592 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.593 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.593 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.593 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.593 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.594 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.597 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:20 compute-0 nova_compute[260603]: 2025-10-02 09:06:20.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:06:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630912268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.025 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:21 compute-0 ceph-mon[74477]: pgmap v2646: 305 pgs: 305 active+clean; 144 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1011 KiB/s wr, 16 op/s
Oct 02 09:06:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1630912268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.065 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.070 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:06:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470849595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.500 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.502 2 DEBUG nova.virt.libvirt.vif [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-420309713',display_name='tempest-TestGettingAddress-server-420309713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-420309713',id=138,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-sen5rcgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:06:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=a021c5bb-f6b0-4434-bf26-81f294f0fe00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.503 2 DEBUG nova.network.os_vif_util [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.504 2 DEBUG nova.network.os_vif_util [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.505 2 DEBUG nova.objects.instance [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid a021c5bb-f6b0-4434-bf26-81f294f0fe00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.523 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <uuid>a021c5bb-f6b0-4434-bf26-81f294f0fe00</uuid>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <name>instance-0000008a</name>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-420309713</nova:name>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:06:20</nova:creationTime>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <nova:port uuid="84190bf6-548d-4a11-83b3-0e6be88619c6">
Oct 02 09:06:21 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe32:cb0b" ipVersion="6"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <system>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <entry name="serial">a021c5bb-f6b0-4434-bf26-81f294f0fe00</entry>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <entry name="uuid">a021c5bb-f6b0-4434-bf26-81f294f0fe00</entry>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </system>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <os>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   </os>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <features>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   </features>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk">
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       </source>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config">
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       </source>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:06:21 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:32:cb:0b"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <target dev="tap84190bf6-54"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/console.log" append="off"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <video>
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </video>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:06:21 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:06:21 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:06:21 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:06:21 compute-0 nova_compute[260603]: </domain>
Oct 02 09:06:21 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.524 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Preparing to wait for external event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.525 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.525 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.525 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.526 2 DEBUG nova.virt.libvirt.vif [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-420309713',display_name='tempest-TestGettingAddress-server-420309713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-420309713',id=138,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-sen5rcgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:06:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=a021c5bb-f6b0-4434-bf26-81f294f0fe00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.526 2 DEBUG nova.network.os_vif_util [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.528 2 DEBUG nova.network.os_vif_util [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.528 2 DEBUG os_vif [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84190bf6-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84190bf6-54, col_values=(('external_ids', {'iface-id': '84190bf6-548d-4a11-83b3-0e6be88619c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:cb:0b', 'vm-uuid': 'a021c5bb-f6b0-4434-bf26-81f294f0fe00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:21 compute-0 NetworkManager[45129]: <info>  [1759395981.5366] manager: (tap84190bf6-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.543 2 INFO os_vif [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54')
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.600 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.602 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.602 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:32:cb:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.603 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Using config drive
Oct 02 09:06:21 compute-0 nova_compute[260603]: 2025-10-02 09:06:21.627 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:06:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 144 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 979 KiB/s wr, 5 op/s
Oct 02 09:06:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/470849595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:06:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:06:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2177048171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:06:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:06:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2177048171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:06:22 compute-0 nova_compute[260603]: 2025-10-02 09:06:22.717 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Creating config drive at /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config
Oct 02 09:06:22 compute-0 nova_compute[260603]: 2025-10-02 09:06:22.726 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gn16f8l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:22 compute-0 nova_compute[260603]: 2025-10-02 09:06:22.895 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gn16f8l" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:22 compute-0 nova_compute[260603]: 2025-10-02 09:06:22.928 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:06:22 compute-0 nova_compute[260603]: 2025-10-02 09:06:22.931 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:23 compute-0 ceph-mon[74477]: pgmap v2647: 305 pgs: 305 active+clean; 144 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 979 KiB/s wr, 5 op/s
Oct 02 09:06:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2177048171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:06:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2177048171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.126 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.127 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Deleting local config drive /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config because it was imported into RBD.
Oct 02 09:06:23 compute-0 kernel: tap84190bf6-54: entered promiscuous mode
Oct 02 09:06:23 compute-0 NetworkManager[45129]: <info>  [1759395983.1814] manager: (tap84190bf6-54): new Tun device (/org/freedesktop/NetworkManager/Devices/603)
Oct 02 09:06:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:23 compute-0 ovn_controller[152344]: 2025-10-02T09:06:23Z|01490|binding|INFO|Claiming lport 84190bf6-548d-4a11-83b3-0e6be88619c6 for this chassis.
Oct 02 09:06:23 compute-0 ovn_controller[152344]: 2025-10-02T09:06:23Z|01491|binding|INFO|84190bf6-548d-4a11-83b3-0e6be88619c6: Claiming fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b
Oct 02 09:06:23 compute-0 systemd-udevd[412659]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.233 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b'], port_security=['fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe32:cb0b/64', 'neutron:device_id': 'a021c5bb-f6b0-4434-bf26-81f294f0fe00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f998d1a-9f03-4830-9263-e8f19e5bb79e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=84190bf6-548d-4a11-83b3-0e6be88619c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.234 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 84190bf6-548d-4a11-83b3-0e6be88619c6 in datapath 4d710978-7032-4293-a883-5a767163ed11 bound to our chassis
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.235 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d710978-7032-4293-a883-5a767163ed11
Oct 02 09:06:23 compute-0 NetworkManager[45129]: <info>  [1759395983.2390] device (tap84190bf6-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:06:23 compute-0 NetworkManager[45129]: <info>  [1759395983.2402] device (tap84190bf6-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:06:23 compute-0 ovn_controller[152344]: 2025-10-02T09:06:23Z|01492|binding|INFO|Setting lport 84190bf6-548d-4a11-83b3-0e6be88619c6 ovn-installed in OVS
Oct 02 09:06:23 compute-0 ovn_controller[152344]: 2025-10-02T09:06:23Z|01493|binding|INFO|Setting lport 84190bf6-548d-4a11-83b3-0e6be88619c6 up in Southbound
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.250 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1e97e7-337a-4e4a-88e6-0061977c72db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:23 compute-0 systemd-machined[214636]: New machine qemu-172-instance-0000008a.
Oct 02 09:06:23 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008a.
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.280 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1594b3-36f6-4502-999c-52263307ee0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.283 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b69ffb27-ea46-44aa-a735-d5fa8d58f6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.313 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c6671382-65ca-4030-901b-15b0620e2490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.329 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd06330-c9aa-4940-af7a-8d5db0594459]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d710978-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:67:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682990, 'reachable_time': 19213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412676, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.342 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3d9765-bc9e-4246-b87b-ee866792577e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d710978-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683002, 'tstamp': 683002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412677, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d710978-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683005, 'tstamp': 683005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412677, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.344 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d710978-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.347 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d710978-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.347 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d710978-70, col_values=(('external_ids', {'iface-id': '1fb74237-5dc8-49ee-a35b-4801dc5960b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:06:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.981 2 DEBUG nova.compute.manager [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.981 2 DEBUG oslo_concurrency.lockutils [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.982 2 DEBUG oslo_concurrency.lockutils [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.982 2 DEBUG oslo_concurrency.lockutils [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:23 compute-0 nova_compute[260603]: 2025-10-02 09:06:23.982 2 DEBUG nova.compute.manager [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Processing event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.094 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395984.0933611, a021c5bb-f6b0-4434-bf26-81f294f0fe00 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.094 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] VM Started (Lifecycle Event)
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.100 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.109 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.119 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.121 2 INFO nova.virt.libvirt.driver [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance spawned successfully.
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.122 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.132 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.150 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.152 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.153 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.153 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.154 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.155 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.161 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.162 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395984.0935256, a021c5bb-f6b0-4434-bf26-81f294f0fe00 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.162 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] VM Paused (Lifecycle Event)
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.198 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.203 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395984.1035948, a021c5bb-f6b0-4434-bf26-81f294f0fe00 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.203 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] VM Resumed (Lifecycle Event)
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.227 2 INFO nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Took 7.57 seconds to spawn the instance on the hypervisor.
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.227 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.236 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.247 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.285 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.317 2 INFO nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Took 8.63 seconds to build instance.
Oct 02 09:06:24 compute-0 nova_compute[260603]: 2025-10-02 09:06:24.333 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:25 compute-0 ceph-mon[74477]: pgmap v2648: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:06:25 compute-0 nova_compute[260603]: 2025-10-02 09:06:25.357 2 DEBUG nova.network.neutron [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updated VIF entry in instance network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:06:25 compute-0 nova_compute[260603]: 2025-10-02 09:06:25.357 2 DEBUG nova.network.neutron [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:25 compute-0 nova_compute[260603]: 2025-10-02 09:06:25.383 2 DEBUG oslo_concurrency.lockutils [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:06:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:06:25 compute-0 nova_compute[260603]: 2025-10-02 09:06:25.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:26 compute-0 nova_compute[260603]: 2025-10-02 09:06:26.063 2 DEBUG nova.compute.manager [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:26 compute-0 nova_compute[260603]: 2025-10-02 09:06:26.064 2 DEBUG oslo_concurrency.lockutils [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:26 compute-0 nova_compute[260603]: 2025-10-02 09:06:26.065 2 DEBUG oslo_concurrency.lockutils [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:26 compute-0 nova_compute[260603]: 2025-10-02 09:06:26.066 2 DEBUG oslo_concurrency.lockutils [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:26 compute-0 nova_compute[260603]: 2025-10-02 09:06:26.066 2 DEBUG nova.compute.manager [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] No waiting events found dispatching network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:06:26 compute-0 nova_compute[260603]: 2025-10-02 09:06:26.067 2 WARNING nova.compute.manager [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received unexpected event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 for instance with vm_state active and task_state None.
Oct 02 09:06:26 compute-0 nova_compute[260603]: 2025-10-02 09:06:26.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:27 compute-0 ceph-mon[74477]: pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:06:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:06:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:06:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:06:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:06:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:06:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:06:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:06:28
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:06:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:06:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:06:29 compute-0 ceph-mon[74477]: pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 02 09:06:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:06:30 compute-0 nova_compute[260603]: 2025-10-02 09:06:30.015 2 DEBUG nova.compute.manager [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:30 compute-0 nova_compute[260603]: 2025-10-02 09:06:30.017 2 DEBUG nova.compute.manager [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing instance network info cache due to event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:06:30 compute-0 nova_compute[260603]: 2025-10-02 09:06:30.017 2 DEBUG oslo_concurrency.lockutils [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:06:30 compute-0 nova_compute[260603]: 2025-10-02 09:06:30.018 2 DEBUG oslo_concurrency.lockutils [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:06:30 compute-0 nova_compute[260603]: 2025-10-02 09:06:30.019 2 DEBUG nova.network.neutron [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:06:30 compute-0 nova_compute[260603]: 2025-10-02 09:06:30.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:31 compute-0 ceph-mon[74477]: pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:06:31 compute-0 nova_compute[260603]: 2025-10-02 09:06:31.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 863 KiB/s wr, 96 op/s
Oct 02 09:06:32 compute-0 nova_compute[260603]: 2025-10-02 09:06:32.140 2 DEBUG nova.network.neutron [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updated VIF entry in instance network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:06:32 compute-0 nova_compute[260603]: 2025-10-02 09:06:32.141 2 DEBUG nova.network.neutron [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:32 compute-0 nova_compute[260603]: 2025-10-02 09:06:32.160 2 DEBUG oslo_concurrency.lockutils [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:06:33 compute-0 ceph-mon[74477]: pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 863 KiB/s wr, 96 op/s
Oct 02 09:06:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 863 KiB/s wr, 96 op/s
Oct 02 09:06:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:34.845 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:34.845 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:34.846 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:35 compute-0 ceph-mon[74477]: pgmap v2653: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 863 KiB/s wr, 96 op/s
Oct 02 09:06:35 compute-0 ovn_controller[152344]: 2025-10-02T09:06:35Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:cb:0b 10.100.0.4
Oct 02 09:06:35 compute-0 ovn_controller[152344]: 2025-10-02T09:06:35Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:cb:0b 10.100.0.4
Oct 02 09:06:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:06:35 compute-0 nova_compute[260603]: 2025-10-02 09:06:35.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:36 compute-0 nova_compute[260603]: 2025-10-02 09:06:36.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:37 compute-0 ceph-mon[74477]: pgmap v2654: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:06:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 187 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 875 KiB/s wr, 113 op/s
Oct 02 09:06:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:38 compute-0 nova_compute[260603]: 2025-10-02 09:06:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:06:38 compute-0 nova_compute[260603]: 2025-10-02 09:06:38.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012710510679169108 of space, bias 1.0, pg target 0.38131532037507326 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:06:39 compute-0 ceph-mon[74477]: pgmap v2655: 305 pgs: 305 active+clean; 187 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 875 KiB/s wr, 113 op/s
Oct 02 09:06:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 09:06:40 compute-0 nova_compute[260603]: 2025-10-02 09:06:40.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:41 compute-0 ceph-mon[74477]: pgmap v2656: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 09:06:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:06:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1344 writes, 5845 keys, 1344 commit groups, 1.0 writes per commit group, ingest: 8.68 MB, 0.01 MB/s
                                           Interval WAL: 1344 writes, 1344 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    111.5      0.59              0.25        38    0.016       0      0       0.0       0.0
                                             L6      1/0    7.93 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    168.0    141.2      2.16              1.08        37    0.058    224K    20K       0.0       0.0
                                            Sum      1/0    7.93 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6    132.0    134.9      2.75              1.33        75    0.037    224K    20K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.5    157.7    157.1      0.27              0.17         8    0.033     31K   2001       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    168.0    141.2      2.16              1.08        37    0.058    224K    20K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    112.4      0.59              0.25        37    0.016       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.064, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.36 GB read, 0.08 MB/s read, 2.8 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 40.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000338 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2654,39.32 MB,12.9333%) FilterBlock(76,635.67 KB,0.204202%) IndexBlock(76,1.04 MB,0.341832%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 09:06:41 compute-0 nova_compute[260603]: 2025-10-02 09:06:41.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 09:06:43 compute-0 podman[412722]: 2025-10-02 09:06:43.004899992 +0000 UTC m=+0.056983422 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 09:06:43 compute-0 podman[412721]: 2025-10-02 09:06:43.052142891 +0000 UTC m=+0.106219392 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:06:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:43 compute-0 ceph-mon[74477]: pgmap v2657: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 09:06:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 09:06:45 compute-0 ceph-mon[74477]: pgmap v2658: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 09:06:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 09:06:45 compute-0 nova_compute[260603]: 2025-10-02 09:06:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:46 compute-0 nova_compute[260603]: 2025-10-02 09:06:46.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:46 compute-0 podman[412766]: 2025-10-02 09:06:46.985727869 +0000 UTC m=+0.056576138 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 09:06:47 compute-0 podman[412767]: 2025-10-02 09:06:47.01181503 +0000 UTC m=+0.068375015 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:06:47 compute-0 ceph-mon[74477]: pgmap v2659: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.548 2 DEBUG nova.compute.manager [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.548 2 DEBUG nova.compute.manager [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing instance network info cache due to event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.549 2 DEBUG oslo_concurrency.lockutils [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.549 2 DEBUG oslo_concurrency.lockutils [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.549 2 DEBUG nova.network.neutron [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.630 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.631 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.631 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.632 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.632 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.633 2 INFO nova.compute.manager [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Terminating instance
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.634 2 DEBUG nova.compute.manager [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:06:47 compute-0 kernel: tap84190bf6-54 (unregistering): left promiscuous mode
Oct 02 09:06:47 compute-0 NetworkManager[45129]: <info>  [1759396007.6928] device (tap84190bf6-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:06:47 compute-0 ovn_controller[152344]: 2025-10-02T09:06:47Z|01494|binding|INFO|Releasing lport 84190bf6-548d-4a11-83b3-0e6be88619c6 from this chassis (sb_readonly=0)
Oct 02 09:06:47 compute-0 ovn_controller[152344]: 2025-10-02T09:06:47Z|01495|binding|INFO|Setting lport 84190bf6-548d-4a11-83b3-0e6be88619c6 down in Southbound
Oct 02 09:06:47 compute-0 ovn_controller[152344]: 2025-10-02T09:06:47Z|01496|binding|INFO|Removing iface tap84190bf6-54 ovn-installed in OVS
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.709 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b'], port_security=['fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe32:cb0b/64', 'neutron:device_id': 'a021c5bb-f6b0-4434-bf26-81f294f0fe00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f998d1a-9f03-4830-9263-e8f19e5bb79e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=84190bf6-548d-4a11-83b3-0e6be88619c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.710 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 84190bf6-548d-4a11-83b3-0e6be88619c6 in datapath 4d710978-7032-4293-a883-5a767163ed11 unbound from our chassis
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.711 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d710978-7032-4293-a883-5a767163ed11
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.729 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9386a98-629b-42a1-8eb6-eb4818b6f56a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:47 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct 02 09:06:47 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Consumed 12.989s CPU time.
Oct 02 09:06:47 compute-0 systemd-machined[214636]: Machine qemu-172-instance-0000008a terminated.
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.761 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f195c8d-fe45-442a-bf2c-366c1bf75a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.764 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ecc20b-38ef-4d82-8e0c-33770291f211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.797 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5db08a93-a412-4b02-a845-3feccc6f4d02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.819 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8872b7dd-da8e-4771-b679-623c4273f543]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d710978-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:67:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682990, 'reachable_time': 19213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412816, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.839 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01544148-17e2-4e9e-b70d-2279e96ac3c7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d710978-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683002, 'tstamp': 683002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412817, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d710978-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683005, 'tstamp': 683005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412817, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.841 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d710978-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.849 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d710978-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.849 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d710978-70, col_values=(('external_ids', {'iface-id': '1fb74237-5dc8-49ee-a35b-4801dc5960b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.875 2 INFO nova.virt.libvirt.driver [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance destroyed successfully.
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.876 2 DEBUG nova.objects.instance [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid a021c5bb-f6b0-4434-bf26-81f294f0fe00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.889 2 DEBUG nova.virt.libvirt.vif [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-420309713',display_name='tempest-TestGettingAddress-server-420309713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-420309713',id=138,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-sen5rcgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:06:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=a021c5bb-f6b0-4434-bf26-81f294f0fe00,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.890 2 DEBUG nova.network.os_vif_util [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.891 2 DEBUG nova.network.os_vif_util [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.891 2 DEBUG os_vif [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84190bf6-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:06:47 compute-0 nova_compute[260603]: 2025-10-02 09:06:47.898 2 INFO os_vif [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54')
Oct 02 09:06:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:06:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:48 compute-0 nova_compute[260603]: 2025-10-02 09:06:48.211 2 INFO nova.virt.libvirt.driver [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Deleting instance files /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00_del
Oct 02 09:06:48 compute-0 nova_compute[260603]: 2025-10-02 09:06:48.212 2 INFO nova.virt.libvirt.driver [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Deletion of /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00_del complete
Oct 02 09:06:48 compute-0 nova_compute[260603]: 2025-10-02 09:06:48.257 2 INFO nova.compute.manager [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Took 0.62 seconds to destroy the instance on the hypervisor.
Oct 02 09:06:48 compute-0 nova_compute[260603]: 2025-10-02 09:06:48.257 2 DEBUG oslo.service.loopingcall [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:06:48 compute-0 nova_compute[260603]: 2025-10-02 09:06:48.257 2 DEBUG nova.compute.manager [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:06:48 compute-0 nova_compute[260603]: 2025-10-02 09:06:48.258 2 DEBUG nova.network.neutron [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.067 2 DEBUG nova.network.neutron [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.092 2 INFO nova.compute.manager [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Took 0.83 seconds to deallocate network for instance.
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.142 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.143 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.203 2 DEBUG nova.network.neutron [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updated VIF entry in instance network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.204 2 DEBUG nova.network.neutron [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.218 2 DEBUG oslo_concurrency.processutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.252 2 DEBUG oslo_concurrency.lockutils [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:06:49 compute-0 ceph-mon[74477]: pgmap v2660: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:06:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:06:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950101900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.629 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-unplugged-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.629 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.630 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.630 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.630 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] No waiting events found dispatching network-vif-unplugged-84190bf6-548d-4a11-83b3-0e6be88619c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.631 2 WARNING nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received unexpected event network-vif-unplugged-84190bf6-548d-4a11-83b3-0e6be88619c6 for instance with vm_state deleted and task_state None.
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.631 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.631 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.631 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.632 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.632 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] No waiting events found dispatching network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.632 2 WARNING nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received unexpected event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 for instance with vm_state deleted and task_state None.
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.632 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-deleted-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.633 2 INFO nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Neutron deleted interface 84190bf6-548d-4a11-83b3-0e6be88619c6; detaching it from the instance and deleting it from the info cache
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.633 2 DEBUG nova.network.neutron [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.640 2 DEBUG oslo_concurrency.processutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.647 2 DEBUG nova.compute.provider_tree [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.659 2 DEBUG nova.scheduler.client.report [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.664 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Detach interface failed, port_id=84190bf6-548d-4a11-83b3-0e6be88619c6, reason: Instance a021c5bb-f6b0-4434-bf26-81f294f0fe00 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.684 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.703 2 INFO nova.scheduler.client.report [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance a021c5bb-f6b0-4434-bf26-81f294f0fe00
Oct 02 09:06:49 compute-0 nova_compute[260603]: 2025-10-02 09:06:49.757 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 1.3 MiB/s wr, 32 op/s
Oct 02 09:06:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1950101900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:50 compute-0 nova_compute[260603]: 2025-10-02 09:06:50.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.409 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.409 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.410 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.410 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.410 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.411 2 INFO nova.compute.manager [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Terminating instance
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.412 2 DEBUG nova.compute.manager [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:06:51 compute-0 ceph-mon[74477]: pgmap v2661: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 1.3 MiB/s wr, 32 op/s
Oct 02 09:06:51 compute-0 kernel: tap8d7734cf-66 (unregistering): left promiscuous mode
Oct 02 09:06:51 compute-0 NetworkManager[45129]: <info>  [1759396011.6825] device (tap8d7734cf-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:51 compute-0 ovn_controller[152344]: 2025-10-02T09:06:51Z|01497|binding|INFO|Releasing lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 from this chassis (sb_readonly=0)
Oct 02 09:06:51 compute-0 ovn_controller[152344]: 2025-10-02T09:06:51Z|01498|binding|INFO|Setting lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 down in Southbound
Oct 02 09:06:51 compute-0 ovn_controller[152344]: 2025-10-02T09:06:51Z|01499|binding|INFO|Removing iface tap8d7734cf-66 ovn-installed in OVS
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.701 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6'], port_security=['fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fed3:2da6/64', 'neutron:device_id': '88de4189-44e3-48fe-aa38-c33334b314b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f998d1a-9f03-4830-9263-e8f19e5bb79e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8d7734cf-6636-4070-a868-e4d1d2cfab65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:06:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.703 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7734cf-6636-4070-a868-e4d1d2cfab65 in datapath 4d710978-7032-4293-a883-5a767163ed11 unbound from our chassis
Oct 02 09:06:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.703 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d710978-7032-4293-a883-5a767163ed11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:06:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.705 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fadcc7a-98d3-4b55-9d5e-ee004452b021]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.705 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d710978-7032-4293-a883-5a767163ed11 namespace which is not needed anymore
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.713 2 DEBUG nova.compute.manager [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.714 2 DEBUG nova.compute.manager [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing instance network info cache due to event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.714 2 DEBUG oslo_concurrency.lockutils [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.714 2 DEBUG oslo_concurrency.lockutils [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.714 2 DEBUG nova.network.neutron [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:06:51 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct 02 09:06:51 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Consumed 14.701s CPU time.
Oct 02 09:06:51 compute-0 systemd-machined[214636]: Machine qemu-171-instance-00000089 terminated.
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.856 2 INFO nova.virt.libvirt.driver [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance destroyed successfully.
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.856 2 DEBUG nova.objects.instance [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 88de4189-44e3-48fe-aa38-c33334b314b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.874 2 DEBUG nova.virt.libvirt.vif [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1987487733',display_name='tempest-TestGettingAddress-server-1987487733',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1987487733',id=137,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:05:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-qbro3vsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:05:52Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=88de4189-44e3-48fe-aa38-c33334b314b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.875 2 DEBUG nova.network.os_vif_util [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.876 2 DEBUG nova.network.os_vif_util [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.876 2 DEBUG os_vif [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.878 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d7734cf-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:51 compute-0 nova_compute[260603]: 2025-10-02 09:06:51.885 2 INFO os_vif [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66')
Oct 02 09:06:51 compute-0 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [NOTICE]   (411311) : haproxy version is 2.8.14-c23fe91
Oct 02 09:06:51 compute-0 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [NOTICE]   (411311) : path to executable is /usr/sbin/haproxy
Oct 02 09:06:51 compute-0 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [WARNING]  (411311) : Exiting Master process...
Oct 02 09:06:51 compute-0 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [ALERT]    (411311) : Current worker (411313) exited with code 143 (Terminated)
Oct 02 09:06:51 compute-0 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [WARNING]  (411311) : All workers exited. Exiting... (0)
Oct 02 09:06:51 compute-0 systemd[1]: libpod-217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661.scope: Deactivated successfully.
Oct 02 09:06:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 09:06:51 compute-0 podman[412895]: 2025-10-02 09:06:51.987364092 +0000 UTC m=+0.175384253 container died 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661-userdata-shm.mount: Deactivated successfully.
Oct 02 09:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40017d254e3a7ef326f21d5b61e3393056686d0000a01e658c3264ac29a6b4b-merged.mount: Deactivated successfully.
Oct 02 09:06:52 compute-0 podman[412895]: 2025-10-02 09:06:52.498828845 +0000 UTC m=+0.686849036 container cleanup 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 09:06:52 compute-0 systemd[1]: libpod-conmon-217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661.scope: Deactivated successfully.
Oct 02 09:06:52 compute-0 nova_compute[260603]: 2025-10-02 09:06:52.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:06:52 compute-0 ceph-mon[74477]: pgmap v2662: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 09:06:52 compute-0 podman[412953]: 2025-10-02 09:06:52.738956028 +0000 UTC m=+0.211425202 container remove 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.749 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[faab7c0d-5d33-44bc-9d98-ed4b484043ad]: (4, ('Thu Oct  2 09:06:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11 (217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661)\n217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661\nThu Oct  2 09:06:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11 (217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661)\n217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3150beaa-e291-4bea-8a57-9d95c6c488e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.752 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d710978-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:06:52 compute-0 nova_compute[260603]: 2025-10-02 09:06:52.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:52 compute-0 kernel: tap4d710978-70: left promiscuous mode
Oct 02 09:06:52 compute-0 nova_compute[260603]: 2025-10-02 09:06:52.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.776 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c18ae29-2347-40d1-a6f8-de9786955977]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.798 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a097d80-2867-47fa-918e-e610b8ec546f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.799 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8e8ba1-e8a5-4b05-91c1-031cc9f878d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.826 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8bb3e6-c898-441d-a11b-61c59460281e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682983, 'reachable_time': 42999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412970, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d710978\x2d7032\x2d4293\x2da883\x2d5a767163ed11.mount: Deactivated successfully.
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.831 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d710978-7032-4293-a883-5a767163ed11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:06:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.832 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f77991-598d-40ec-a4a7-c17e90b870f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.056 2 DEBUG nova.network.neutron [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updated VIF entry in instance network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.056 2 DEBUG nova.network.neutron [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.077 2 DEBUG oslo_concurrency.lockutils [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:06:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.805 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-unplugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.805 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.805 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.805 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] No waiting events found dispatching network-vif-unplugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-unplugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.807 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] No waiting events found dispatching network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:06:53 compute-0 nova_compute[260603]: 2025-10-02 09:06:53.807 2 WARNING nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received unexpected event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 for instance with vm_state active and task_state deleting.
Oct 02 09:06:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 107 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct 02 09:06:54 compute-0 nova_compute[260603]: 2025-10-02 09:06:54.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:06:54 compute-0 nova_compute[260603]: 2025-10-02 09:06:54.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:06:54 compute-0 nova_compute[260603]: 2025-10-02 09:06:54.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:06:54 compute-0 nova_compute[260603]: 2025-10-02 09:06:54.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 09:06:54 compute-0 nova_compute[260603]: 2025-10-02 09:06:54.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:06:55 compute-0 ceph-mon[74477]: pgmap v2663: 305 pgs: 305 active+clean; 107 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct 02 09:06:55 compute-0 nova_compute[260603]: 2025-10-02 09:06:55.271 2 INFO nova.virt.libvirt.driver [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Deleting instance files /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5_del
Oct 02 09:06:55 compute-0 nova_compute[260603]: 2025-10-02 09:06:55.272 2 INFO nova.virt.libvirt.driver [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Deletion of /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5_del complete
Oct 02 09:06:55 compute-0 nova_compute[260603]: 2025-10-02 09:06:55.395 2 INFO nova.compute.manager [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Took 3.98 seconds to destroy the instance on the hypervisor.
Oct 02 09:06:55 compute-0 nova_compute[260603]: 2025-10-02 09:06:55.396 2 DEBUG oslo.service.loopingcall [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:06:55 compute-0 nova_compute[260603]: 2025-10-02 09:06:55.396 2 DEBUG nova.compute.manager [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:06:55 compute-0 nova_compute[260603]: 2025-10-02 09:06:55.396 2 DEBUG nova.network.neutron [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:06:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 107 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 9.2 KiB/s wr, 36 op/s
Oct 02 09:06:55 compute-0 nova_compute[260603]: 2025-10-02 09:06:55.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.102 2 DEBUG nova.network.neutron [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.148 2 INFO nova.compute.manager [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Took 0.75 seconds to deallocate network for instance.
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.185 2 DEBUG nova.compute.manager [req-81ec2e55-c926-476e-adf6-878702e49dd4 req-37550955-00e2-4146-8fed-fe86acfd7612 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-deleted-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.211 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.212 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.293 2 DEBUG oslo_concurrency.processutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:06:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:06:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4140111897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.772 2 DEBUG oslo_concurrency.processutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.783 2 DEBUG nova.compute.provider_tree [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.823 2 DEBUG nova.scheduler.client.report [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.878 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:06:56 compute-0 nova_compute[260603]: 2025-10-02 09:06:56.965 2 INFO nova.scheduler.client.report [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 88de4189-44e3-48fe-aa38-c33334b314b5
Oct 02 09:06:57 compute-0 nova_compute[260603]: 2025-10-02 09:06:57.134 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:06:57 compute-0 ceph-mon[74477]: pgmap v2664: 305 pgs: 305 active+clean; 107 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 9.2 KiB/s wr, 36 op/s
Oct 02 09:06:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4140111897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:06:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:06:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:06:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:06:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:06:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:06:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:06:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 52 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 10 KiB/s wr, 55 op/s
Oct 02 09:06:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:06:58 compute-0 ceph-mon[74477]: pgmap v2665: 305 pgs: 305 active+clean; 52 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 10 KiB/s wr, 55 op/s
Oct 02 09:06:58 compute-0 nova_compute[260603]: 2025-10-02 09:06:58.657 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:06:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Oct 02 09:07:00 compute-0 nova_compute[260603]: 2025-10-02 09:07:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:00 compute-0 nova_compute[260603]: 2025-10-02 09:07:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:00 compute-0 nova_compute[260603]: 2025-10-02 09:07:00.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:01 compute-0 ceph-mon[74477]: pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Oct 02 09:07:01 compute-0 nova_compute[260603]: 2025-10-02 09:07:01.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 50 op/s
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.579 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.579 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.873 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396007.8730283, a021c5bb-f6b0-4434-bf26-81f294f0fe00 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.874 2 INFO nova.compute.manager [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] VM Stopped (Lifecycle Event)
Oct 02 09:07:02 compute-0 nova_compute[260603]: 2025-10-02 09:07:02.920 2 DEBUG nova.compute.manager [None req-438e03ff-6f83-4b4e-82ec-a404a3f7a9cf - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:07:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:07:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108144706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.031 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.186 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.187 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3610MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.187 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.188 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:03 compute-0 ceph-mon[74477]: pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 50 op/s
Oct 02 09:07:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/108144706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.254 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.254 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.270 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:07:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2619791510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.710 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.716 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.745 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.811 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:07:03 compute-0 nova_compute[260603]: 2025-10-02 09:07:03.812 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 50 op/s
Oct 02 09:07:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2619791510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:07:05 compute-0 ceph-mon[74477]: pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 50 op/s
Oct 02 09:07:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Oct 02 09:07:05 compute-0 nova_compute[260603]: 2025-10-02 09:07:05.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:06 compute-0 ceph-mon[74477]: pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Oct 02 09:07:06 compute-0 nova_compute[260603]: 2025-10-02 09:07:06.808 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:06 compute-0 nova_compute[260603]: 2025-10-02 09:07:06.854 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396011.8534527, 88de4189-44e3-48fe-aa38-c33334b314b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:07:06 compute-0 nova_compute[260603]: 2025-10-02 09:07:06.854 2 INFO nova.compute.manager [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] VM Stopped (Lifecycle Event)
Oct 02 09:07:06 compute-0 nova_compute[260603]: 2025-10-02 09:07:06.872 2 DEBUG nova.compute.manager [None req-e7036673-c503-469f-8cfa-f067de1ce14f - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:07:06 compute-0 nova_compute[260603]: 2025-10-02 09:07:06.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Oct 02 09:07:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:09 compute-0 ceph-mon[74477]: pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Oct 02 09:07:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 938 B/s rd, 170 B/s wr, 1 op/s
Oct 02 09:07:10 compute-0 nova_compute[260603]: 2025-10-02 09:07:10.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:11 compute-0 ceph-mon[74477]: pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 938 B/s rd, 170 B/s wr, 1 op/s
Oct 02 09:07:11 compute-0 nova_compute[260603]: 2025-10-02 09:07:11.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:11 compute-0 nova_compute[260603]: 2025-10-02 09:07:11.550 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:11 compute-0 nova_compute[260603]: 2025-10-02 09:07:11.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:12 compute-0 sudo[413040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:12 compute-0 sudo[413040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:12 compute-0 sudo[413040]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:12 compute-0 sudo[413065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:07:12 compute-0 sudo[413065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:12 compute-0 sudo[413065]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:12 compute-0 sudo[413090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:12 compute-0 sudo[413090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:12 compute-0 sudo[413090]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:12 compute-0 sudo[413115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:07:12 compute-0 sudo[413115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:13 compute-0 ceph-mon[74477]: pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:13 compute-0 sudo[413115]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:07:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:07:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:07:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:07:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:07:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:07:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c9505a8d-364c-4984-a9c9-7aa22f5062ef does not exist
Oct 02 09:07:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8ea607fe-7849-4884-8621-035754a6bfd6 does not exist
Oct 02 09:07:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 33b07dea-7539-484c-9d12-c18a1e7f66be does not exist
Oct 02 09:07:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:07:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:07:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:07:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:07:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:07:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:07:13 compute-0 sudo[413171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:13 compute-0 sudo[413171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:13 compute-0 sudo[413171]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:13 compute-0 sudo[413203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:07:13 compute-0 sudo[413203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:13 compute-0 sudo[413203]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:13 compute-0 podman[413196]: 2025-10-02 09:07:13.520338196 +0000 UTC m=+0.082327410 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 09:07:13 compute-0 sudo[413255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:13 compute-0 sudo[413255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:13 compute-0 sudo[413255]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:13 compute-0 podman[413195]: 2025-10-02 09:07:13.556863591 +0000 UTC m=+0.107858593 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:07:13 compute-0 sudo[413289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:07:13 compute-0 sudo[413289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:13 compute-0 podman[413355]: 2025-10-02 09:07:13.964701114 +0000 UTC m=+0.055692511 container create 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:07:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:14 compute-0 systemd[1]: Started libpod-conmon-22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71.scope.
Oct 02 09:07:14 compute-0 podman[413355]: 2025-10-02 09:07:13.935453836 +0000 UTC m=+0.026445283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:07:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:07:14 compute-0 podman[413355]: 2025-10-02 09:07:14.071500183 +0000 UTC m=+0.162491560 container init 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:07:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:07:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:07:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:07:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:07:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:07:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:07:14 compute-0 podman[413355]: 2025-10-02 09:07:14.084986262 +0000 UTC m=+0.175977619 container start 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:07:14 compute-0 podman[413355]: 2025-10-02 09:07:14.091216696 +0000 UTC m=+0.182208103 container attach 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:07:14 compute-0 nice_brown[413371]: 167 167
Oct 02 09:07:14 compute-0 systemd[1]: libpod-22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71.scope: Deactivated successfully.
Oct 02 09:07:14 compute-0 conmon[413371]: conmon 22d14a50149e947d89c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71.scope/container/memory.events
Oct 02 09:07:14 compute-0 podman[413355]: 2025-10-02 09:07:14.094927512 +0000 UTC m=+0.185918909 container died 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 09:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df60a9b8fc40f1b30df43c4d86ad93d84cf5e67ae61cdb44ae992d20a2cdbce-merged.mount: Deactivated successfully.
Oct 02 09:07:14 compute-0 podman[413355]: 2025-10-02 09:07:14.148368582 +0000 UTC m=+0.239359969 container remove 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:07:14 compute-0 systemd[1]: libpod-conmon-22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71.scope: Deactivated successfully.
Oct 02 09:07:14 compute-0 podman[413395]: 2025-10-02 09:07:14.355061645 +0000 UTC m=+0.043688808 container create e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:07:14 compute-0 systemd[1]: Started libpod-conmon-e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1.scope.
Oct 02 09:07:14 compute-0 podman[413395]: 2025-10-02 09:07:14.338500971 +0000 UTC m=+0.027128114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:07:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:14 compute-0 podman[413395]: 2025-10-02 09:07:14.461008598 +0000 UTC m=+0.149635821 container init e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 09:07:14 compute-0 podman[413395]: 2025-10-02 09:07:14.475091275 +0000 UTC m=+0.163718408 container start e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:07:14 compute-0 podman[413395]: 2025-10-02 09:07:14.478102079 +0000 UTC m=+0.166729232 container attach e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:07:15 compute-0 ceph-mon[74477]: pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:15 compute-0 thirsty_fermat[413411]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:07:15 compute-0 thirsty_fermat[413411]: --> relative data size: 1.0
Oct 02 09:07:15 compute-0 thirsty_fermat[413411]: --> All data devices are unavailable
Oct 02 09:07:15 compute-0 systemd[1]: libpod-e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1.scope: Deactivated successfully.
Oct 02 09:07:15 compute-0 podman[413440]: 2025-10-02 09:07:15.535848849 +0000 UTC m=+0.028383193 container died e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6-merged.mount: Deactivated successfully.
Oct 02 09:07:15 compute-0 podman[413440]: 2025-10-02 09:07:15.600600581 +0000 UTC m=+0.093134915 container remove e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:07:15 compute-0 systemd[1]: libpod-conmon-e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1.scope: Deactivated successfully.
Oct 02 09:07:15 compute-0 sudo[413289]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:15 compute-0 sudo[413455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:15 compute-0 sudo[413455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:15 compute-0 sudo[413455]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:15 compute-0 sudo[413480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:07:15 compute-0 sudo[413480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:15 compute-0 sudo[413480]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:15 compute-0 sudo[413505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:15 compute-0 sudo[413505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:15 compute-0 sudo[413505]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:15 compute-0 sudo[413530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:07:15 compute-0 sudo[413530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:16 compute-0 nova_compute[260603]: 2025-10-02 09:07:15.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:16 compute-0 podman[413596]: 2025-10-02 09:07:16.277378992 +0000 UTC m=+0.062979197 container create a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:07:16 compute-0 systemd[1]: Started libpod-conmon-a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa.scope.
Oct 02 09:07:16 compute-0 podman[413596]: 2025-10-02 09:07:16.255619306 +0000 UTC m=+0.041219491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:07:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:07:16 compute-0 podman[413596]: 2025-10-02 09:07:16.395373999 +0000 UTC m=+0.180974214 container init a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 09:07:16 compute-0 podman[413596]: 2025-10-02 09:07:16.405944608 +0000 UTC m=+0.191544813 container start a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:07:16 compute-0 fervent_brattain[413612]: 167 167
Oct 02 09:07:16 compute-0 systemd[1]: libpod-a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa.scope: Deactivated successfully.
Oct 02 09:07:16 compute-0 conmon[413612]: conmon a944f4081f9d4d00b7c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa.scope/container/memory.events
Oct 02 09:07:16 compute-0 podman[413596]: 2025-10-02 09:07:16.415259318 +0000 UTC m=+0.200859533 container attach a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:07:16 compute-0 podman[413596]: 2025-10-02 09:07:16.416599939 +0000 UTC m=+0.202200104 container died a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 09:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-602fa73cd6703b527709bff6586a02c71414f09837ff88f3e982299f23304e4d-merged.mount: Deactivated successfully.
Oct 02 09:07:16 compute-0 podman[413596]: 2025-10-02 09:07:16.461293488 +0000 UTC m=+0.246893653 container remove a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:07:16 compute-0 systemd[1]: libpod-conmon-a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa.scope: Deactivated successfully.
Oct 02 09:07:16 compute-0 podman[413634]: 2025-10-02 09:07:16.628989729 +0000 UTC m=+0.053935227 container create 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:07:16 compute-0 systemd[1]: Started libpod-conmon-519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d.scope.
Oct 02 09:07:16 compute-0 podman[413634]: 2025-10-02 09:07:16.605644014 +0000 UTC m=+0.030589492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:07:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:16 compute-0 podman[413634]: 2025-10-02 09:07:16.739615187 +0000 UTC m=+0.164560655 container init 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:07:16 compute-0 podman[413634]: 2025-10-02 09:07:16.748050379 +0000 UTC m=+0.172995837 container start 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:07:16 compute-0 podman[413634]: 2025-10-02 09:07:16.752865819 +0000 UTC m=+0.177811307 container attach 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:07:16 compute-0 nova_compute[260603]: 2025-10-02 09:07:16.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:17 compute-0 ceph-mon[74477]: pgmap v2674: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]: {
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:     "0": [
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:         {
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "devices": [
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "/dev/loop3"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             ],
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_name": "ceph_lv0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_size": "21470642176",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "name": "ceph_lv0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "tags": {
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cluster_name": "ceph",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.crush_device_class": "",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.encrypted": "0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osd_id": "0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.type": "block",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.vdo": "0"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             },
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "type": "block",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "vg_name": "ceph_vg0"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:         }
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:     ],
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:     "1": [
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:         {
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "devices": [
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "/dev/loop4"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             ],
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_name": "ceph_lv1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_size": "21470642176",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "name": "ceph_lv1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "tags": {
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cluster_name": "ceph",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.crush_device_class": "",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.encrypted": "0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osd_id": "1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.type": "block",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.vdo": "0"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             },
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "type": "block",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "vg_name": "ceph_vg1"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:         }
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:     ],
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:     "2": [
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:         {
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "devices": [
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "/dev/loop5"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             ],
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_name": "ceph_lv2",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_size": "21470642176",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "name": "ceph_lv2",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "tags": {
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.cluster_name": "ceph",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.crush_device_class": "",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.encrypted": "0",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osd_id": "2",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.type": "block",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:                 "ceph.vdo": "0"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             },
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "type": "block",
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:             "vg_name": "ceph_vg2"
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:         }
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]:     ]
Oct 02 09:07:17 compute-0 quizzical_hoover[413651]: }
Oct 02 09:07:17 compute-0 systemd[1]: libpod-519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d.scope: Deactivated successfully.
Oct 02 09:07:17 compute-0 podman[413634]: 2025-10-02 09:07:17.60629686 +0000 UTC m=+1.031242328 container died 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:07:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b-merged.mount: Deactivated successfully.
Oct 02 09:07:17 compute-0 podman[413634]: 2025-10-02 09:07:17.678108872 +0000 UTC m=+1.103054330 container remove 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:07:17 compute-0 systemd[1]: libpod-conmon-519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d.scope: Deactivated successfully.
Oct 02 09:07:17 compute-0 sudo[413530]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:17 compute-0 podman[413672]: 2025-10-02 09:07:17.757706575 +0000 UTC m=+0.099858484 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 09:07:17 compute-0 podman[413661]: 2025-10-02 09:07:17.758070216 +0000 UTC m=+0.096914892 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:07:17 compute-0 sudo[413711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:17 compute-0 sudo[413711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:17 compute-0 sudo[413711]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:17 compute-0 sudo[413736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:07:17 compute-0 sudo[413736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:17 compute-0 sudo[413736]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:17 compute-0 sudo[413761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:17 compute-0 sudo[413761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:17 compute-0 sudo[413761]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:18 compute-0 sudo[413786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:07:18 compute-0 sudo[413786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:18 compute-0 podman[413852]: 2025-10-02 09:07:18.487683849 +0000 UTC m=+0.121639831 container create 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:07:18 compute-0 podman[413852]: 2025-10-02 09:07:18.403470323 +0000 UTC m=+0.037426315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:07:18 compute-0 systemd[1]: Started libpod-conmon-8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab.scope.
Oct 02 09:07:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:07:18 compute-0 podman[413852]: 2025-10-02 09:07:18.584024393 +0000 UTC m=+0.217980445 container init 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 02 09:07:18 compute-0 podman[413852]: 2025-10-02 09:07:18.593320092 +0000 UTC m=+0.227276104 container start 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:07:18 compute-0 pedantic_tu[413869]: 167 167
Oct 02 09:07:18 compute-0 systemd[1]: libpod-8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab.scope: Deactivated successfully.
Oct 02 09:07:18 compute-0 podman[413852]: 2025-10-02 09:07:18.602709274 +0000 UTC m=+0.236665256 container attach 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:07:18 compute-0 podman[413852]: 2025-10-02 09:07:18.603979714 +0000 UTC m=+0.237935726 container died 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 09:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecf1f9c1fb4efea777fe93973e501d5e4bfec1cd7aa1274c5284f87ea024a309-merged.mount: Deactivated successfully.
Oct 02 09:07:18 compute-0 podman[413852]: 2025-10-02 09:07:18.709203044 +0000 UTC m=+0.343159026 container remove 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:07:18 compute-0 systemd[1]: libpod-conmon-8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab.scope: Deactivated successfully.
Oct 02 09:07:18 compute-0 podman[413893]: 2025-10-02 09:07:18.938163208 +0000 UTC m=+0.075560118 container create 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 09:07:18 compute-0 podman[413893]: 2025-10-02 09:07:18.899548429 +0000 UTC m=+0.036945419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:07:19 compute-0 systemd[1]: Started libpod-conmon-078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d.scope.
Oct 02 09:07:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:19 compute-0 podman[413893]: 2025-10-02 09:07:19.073791184 +0000 UTC m=+0.211188124 container init 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:07:19 compute-0 podman[413893]: 2025-10-02 09:07:19.08172094 +0000 UTC m=+0.219117840 container start 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 09:07:19 compute-0 podman[413893]: 2025-10-02 09:07:19.08494667 +0000 UTC m=+0.222343570 container attach 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:07:19 compute-0 ceph-mon[74477]: pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]: {
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "osd_id": 2,
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "type": "bluestore"
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:     },
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "osd_id": 1,
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "type": "bluestore"
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:     },
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "osd_id": 0,
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:         "type": "bluestore"
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]:     }
Oct 02 09:07:20 compute-0 nervous_lederberg[413909]: }
Oct 02 09:07:20 compute-0 systemd[1]: libpod-078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d.scope: Deactivated successfully.
Oct 02 09:07:20 compute-0 systemd[1]: libpod-078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d.scope: Consumed 1.017s CPU time.
Oct 02 09:07:20 compute-0 podman[413893]: 2025-10-02 09:07:20.093459131 +0000 UTC m=+1.230856071 container died 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:07:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:20.116 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:07:20 compute-0 nova_compute[260603]: 2025-10-02 09:07:20.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:20 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:20.119 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:07:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41-merged.mount: Deactivated successfully.
Oct 02 09:07:20 compute-0 podman[413893]: 2025-10-02 09:07:20.152002199 +0000 UTC m=+1.289399099 container remove 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:07:20 compute-0 systemd[1]: libpod-conmon-078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d.scope: Deactivated successfully.
Oct 02 09:07:20 compute-0 sudo[413786]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:07:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:07:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:07:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:07:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e3250fec-268f-4232-bffe-17f296eec8eb does not exist
Oct 02 09:07:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ba9cae1f-efd4-483e-9416-3ef110f745bc does not exist
Oct 02 09:07:20 compute-0 sudo[413955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:07:20 compute-0 sudo[413955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:20 compute-0 sudo[413955]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:20 compute-0 sudo[413980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:07:20 compute-0 sudo[413980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:07:20 compute-0 sudo[413980]: pam_unix(sudo:session): session closed for user root
Oct 02 09:07:21 compute-0 nova_compute[260603]: 2025-10-02 09:07:20.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:21 compute-0 ceph-mon[74477]: pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:07:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:07:21 compute-0 nova_compute[260603]: 2025-10-02 09:07:21.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:07:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198808913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:07:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:07:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198808913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:07:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3198808913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:07:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3198808913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:07:22 compute-0 nova_compute[260603]: 2025-10-02 09:07:22.363 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:22 compute-0 nova_compute[260603]: 2025-10-02 09:07:22.363 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:22 compute-0 nova_compute[260603]: 2025-10-02 09:07:22.398 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:07:22 compute-0 nova_compute[260603]: 2025-10-02 09:07:22.466 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:22 compute-0 nova_compute[260603]: 2025-10-02 09:07:22.466 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:22 compute-0 nova_compute[260603]: 2025-10-02 09:07:22.473 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:07:22 compute-0 nova_compute[260603]: 2025-10-02 09:07:22.474 2 INFO nova.compute.claims [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:07:22 compute-0 nova_compute[260603]: 2025-10-02 09:07:22.715 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:07:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190906338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.132 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.137 2 DEBUG nova.compute.provider_tree [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.154 2 DEBUG nova.scheduler.client.report [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.173 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.174 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:07:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.218 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.219 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.239 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.257 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:07:23 compute-0 ceph-mon[74477]: pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1190906338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.359 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.361 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.362 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Creating image(s)
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.393 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.427 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.457 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.461 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.503 2 DEBUG nova.policy [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.536 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.537 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.538 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.538 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.571 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.576 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 086604c0-28d5-41d4-995c-17db322b3ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:23 compute-0 nova_compute[260603]: 2025-10-02 09:07:23.929 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 086604c0-28d5-41d4-995c-17db322b3ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:24 compute-0 nova_compute[260603]: 2025-10-02 09:07:24.018 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:07:24 compute-0 nova_compute[260603]: 2025-10-02 09:07:24.145 2 DEBUG nova.objects.instance [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 086604c0-28d5-41d4-995c-17db322b3ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:07:24 compute-0 nova_compute[260603]: 2025-10-02 09:07:24.168 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:07:24 compute-0 nova_compute[260603]: 2025-10-02 09:07:24.168 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Ensure instance console log exists: /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:07:24 compute-0 nova_compute[260603]: 2025-10-02 09:07:24.169 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:24 compute-0 nova_compute[260603]: 2025-10-02 09:07:24.169 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:24 compute-0 nova_compute[260603]: 2025-10-02 09:07:24.169 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:24 compute-0 nova_compute[260603]: 2025-10-02 09:07:24.887 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Successfully created port: 1bf113e0-562b-45fb-9b97-aa76d5dac283 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:07:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:25.121 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:25 compute-0 ceph-mon[74477]: pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:25 compute-0 nova_compute[260603]: 2025-10-02 09:07:25.399 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Successfully created port: 97679643-cd71-4857-a615-c21d643d15c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:07:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.021 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Successfully updated port: 1bf113e0-562b-45fb-9b97-aa76d5dac283 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.146 2 DEBUG nova.compute.manager [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.147 2 DEBUG nova.compute.manager [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing instance network info cache due to event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.147 2 DEBUG oslo_concurrency.lockutils [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.147 2 DEBUG oslo_concurrency.lockutils [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.147 2 DEBUG nova.network.neutron [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.370 2 DEBUG nova.network.neutron [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:26 compute-0 nova_compute[260603]: 2025-10-02 09:07:26.987 2 DEBUG nova.network.neutron [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:07:27 compute-0 nova_compute[260603]: 2025-10-02 09:07:27.001 2 DEBUG oslo_concurrency.lockutils [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:07:27 compute-0 nova_compute[260603]: 2025-10-02 09:07:27.074 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Successfully updated port: 97679643-cd71-4857-a615-c21d643d15c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:07:27 compute-0 nova_compute[260603]: 2025-10-02 09:07:27.122 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:07:27 compute-0 nova_compute[260603]: 2025-10-02 09:07:27.122 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:07:27 compute-0 nova_compute[260603]: 2025-10-02 09:07:27.122 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:07:27 compute-0 nova_compute[260603]: 2025-10-02 09:07:27.242 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:07:27 compute-0 ceph-mon[74477]: pgmap v2679: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:07:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:07:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:07:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:07:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:07:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:07:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 71 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:07:28
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:07:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:28 compute-0 nova_compute[260603]: 2025-10-02 09:07:28.217 2 DEBUG nova.compute.manager [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-changed-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:07:28 compute-0 nova_compute[260603]: 2025-10-02 09:07:28.218 2 DEBUG nova.compute.manager [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing instance network info cache due to event network-changed-97679643-cd71-4857-a615-c21d643d15c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:07:28 compute-0 nova_compute[260603]: 2025-10-02 09:07:28.218 2 DEBUG oslo_concurrency.lockutils [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:07:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:07:29 compute-0 ceph-mon[74477]: pgmap v2680: 305 pgs: 305 active+clean; 71 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Oct 02 09:07:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:07:30 compute-0 ceph-mon[74477]: pgmap v2681: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:07:31 compute-0 nova_compute[260603]: 2025-10-02 09:07:31.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:31 compute-0 nova_compute[260603]: 2025-10-02 09:07:31.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.509 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.550 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.551 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance network_info: |[{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.551 2 DEBUG oslo_concurrency.lockutils [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.552 2 DEBUG nova.network.neutron [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing network info cache for port 97679643-cd71-4857-a615-c21d643d15c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.558 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start _get_guest_xml network_info=[{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.564 2 WARNING nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.571 2 DEBUG nova.virt.libvirt.host [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.572 2 DEBUG nova.virt.libvirt.host [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.576 2 DEBUG nova.virt.libvirt.host [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.577 2 DEBUG nova.virt.libvirt.host [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.577 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.578 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.579 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.579 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.580 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.580 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.581 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.581 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.581 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.582 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.582 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.583 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:07:32 compute-0 nova_compute[260603]: 2025-10-02 09:07:32.588 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:07:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100699432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.011 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.044 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.050 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:33 compute-0 ceph-mon[74477]: pgmap v2682: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:07:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4100699432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:07:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:07:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284669158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.514 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.516 2 DEBUG nova.virt.libvirt.vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:07:23Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.516 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.517 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.518 2 DEBUG nova.virt.libvirt.vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:07:23Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.518 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.519 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.520 2 DEBUG nova.objects.instance [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 086604c0-28d5-41d4-995c-17db322b3ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.536 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <uuid>086604c0-28d5-41d4-995c-17db322b3ded</uuid>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <name>instance-0000008b</name>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-1892205724</nova:name>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:07:32</nova:creationTime>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:port uuid="1bf113e0-562b-45fb-9b97-aa76d5dac283">
Oct 02 09:07:33 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <nova:port uuid="97679643-cd71-4857-a615-c21d643d15c2">
Oct 02 09:07:33 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9d:b136" ipVersion="6"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <system>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <entry name="serial">086604c0-28d5-41d4-995c-17db322b3ded</entry>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <entry name="uuid">086604c0-28d5-41d4-995c-17db322b3ded</entry>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </system>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <os>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   </os>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <features>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   </features>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/086604c0-28d5-41d4-995c-17db322b3ded_disk">
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       </source>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/086604c0-28d5-41d4-995c-17db322b3ded_disk.config">
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       </source>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:07:33 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:fc:e1:b8"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <target dev="tap1bf113e0-56"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:9d:b1:36"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <target dev="tap97679643-cd"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/console.log" append="off"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <video>
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </video>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:07:33 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:07:33 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:07:33 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:07:33 compute-0 nova_compute[260603]: </domain>
Oct 02 09:07:33 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.538 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Preparing to wait for external event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.538 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.539 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.539 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.539 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Preparing to wait for external event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.539 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.540 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.540 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.541 2 DEBUG nova.virt.libvirt.vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:07:23Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.541 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.542 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.543 2 DEBUG os_vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bf113e0-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bf113e0-56, col_values=(('external_ids', {'iface-id': '1bf113e0-562b-45fb-9b97-aa76d5dac283', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:e1:b8', 'vm-uuid': '086604c0-28d5-41d4-995c-17db322b3ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:33 compute-0 NetworkManager[45129]: <info>  [1759396053.5514] manager: (tap1bf113e0-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.561 2 INFO os_vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56')
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.562 2 DEBUG nova.virt.libvirt.vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:07:23Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.562 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.563 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.563 2 DEBUG os_vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.564 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.564 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97679643-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97679643-cd, col_values=(('external_ids', {'iface-id': '97679643-cd71-4857-a615-c21d643d15c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:b1:36', 'vm-uuid': '086604c0-28d5-41d4-995c-17db322b3ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:33 compute-0 NetworkManager[45129]: <info>  [1759396053.5675] manager: (tap97679643-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.575 2 INFO os_vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd')
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.637 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.637 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.638 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:fc:e1:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.638 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:9d:b1:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.639 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Using config drive
Oct 02 09:07:33 compute-0 nova_compute[260603]: 2025-10-02 09:07:33.668 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.001 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Creating config drive at /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.006 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8s4gwmh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:07:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1284669158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.169 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8s4gwmh" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.192 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.195 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config 086604c0-28d5-41d4-995c-17db322b3ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.339 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config 086604c0-28d5-41d4-995c-17db322b3ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.341 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Deleting local config drive /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config because it was imported into RBD.
Oct 02 09:07:34 compute-0 kernel: tap1bf113e0-56: entered promiscuous mode
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.3933] manager: (tap1bf113e0-56): new Tun device (/org/freedesktop/NetworkManager/Devices/606)
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01500|binding|INFO|Claiming lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 for this chassis.
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01501|binding|INFO|1bf113e0-562b-45fb-9b97-aa76d5dac283: Claiming fa:16:3e:fc:e1:b8 10.100.0.7
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.4078] manager: (tap97679643-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Oct 02 09:07:34 compute-0 kernel: tap97679643-cd: entered promiscuous mode
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.417 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:e1:b8 10.100.0.7'], port_security=['fa:16:3e:fc:e1:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '086604c0-28d5-41d4-995c-17db322b3ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8531ee7b-e9fa-4aeb-a901-a54a8597544d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=1bf113e0-562b-45fb-9b97-aa76d5dac283) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.419 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 1bf113e0-562b-45fb-9b97-aa76d5dac283 in datapath 0cd32ebe-6aa8-4400-8c00-a3546d677f2c bound to our chassis
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.420 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cd32ebe-6aa8-4400-8c00-a3546d677f2c
Oct 02 09:07:34 compute-0 systemd-udevd[414333]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:07:34 compute-0 systemd-udevd[414332]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f9b1cf-addd-462b-86af-b69f8c336fda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.434 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cd32ebe-61 in ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.436 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cd32ebe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[264435ea-06fa-41df-93cd-fca80705e84d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.437 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78f0cea0-7e3f-49ca-8b43-f0ccf517bc2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.4403] device (tap97679643-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.4418] device (tap97679643-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.4476] device (tap1bf113e0-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.4489] device (tap1bf113e0-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:07:34 compute-0 systemd-machined[214636]: New machine qemu-173-instance-0000008b.
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.454 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[da0af809-938a-43b1-841f-3be5f0662cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01502|binding|INFO|Claiming lport 97679643-cd71-4857-a615-c21d643d15c2 for this chassis.
Oct 02 09:07:34 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008b.
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01503|binding|INFO|97679643-cd71-4857-a615-c21d643d15c2: Claiming fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.487 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe34197-7d3a-40ab-88bd-b049876db650]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01504|binding|INFO|Setting lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 ovn-installed in OVS
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01505|binding|INFO|Setting lport 97679643-cd71-4857-a615-c21d643d15c2 ovn-installed in OVS
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01506|binding|INFO|Setting lport 97679643-cd71-4857-a615-c21d643d15c2 up in Southbound
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01507|binding|INFO|Setting lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 up in Southbound
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.511 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136'], port_security=['fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:b136/64', 'neutron:device_id': '086604c0-28d5-41d4-995c-17db322b3ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d94def-f0a5-4beb-85d6-bc3ad333488d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=97679643-cd71-4857-a615-c21d643d15c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.522 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3600b49d-d5b6-4862-a4f4-1274b1744284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.527 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[23614e40-88cc-41cc-9b41-09de7d4fff13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.5297] manager: (tap0cd32ebe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/608)
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.566 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c55bf3-e585-4e92-bd1d-78598aeb3a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.569 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3d8f61-5b9b-45cf-8ceb-2f89effcb703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.5921] device (tap0cd32ebe-60): carrier: link connected
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.601 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d40cefd-9b80-4389-a1bc-16106104c1b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.614 2 DEBUG nova.network.neutron [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updated VIF entry in instance network info cache for port 97679643-cd71-4857-a615-c21d643d15c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.614 2 DEBUG nova.network.neutron [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7cb7584-374a-4a8f-b84a-b4bc1aae2e82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cd32ebe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:64:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693319, 'reachable_time': 33516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414369, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.646 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e90b06c-a38e-4b6e-ae79-ab09348069b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:64c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693319, 'tstamp': 693319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414370, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.670 2 DEBUG oslo_concurrency.lockutils [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.668 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5bf68b-08de-4353-a820-3bb165362e20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cd32ebe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:64:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693319, 'reachable_time': 33516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414371, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.709 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[683dbdba-3dd8-42ac-b84d-782f2d7a719f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.785 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09196310-f5a3-4166-a118-c04ea09e0529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.788 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cd32ebe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.788 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.789 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cd32ebe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:34 compute-0 kernel: tap0cd32ebe-60: entered promiscuous mode
Oct 02 09:07:34 compute-0 NetworkManager[45129]: <info>  [1759396054.7928] manager: (tap0cd32ebe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cd32ebe-60, col_values=(('external_ids', {'iface-id': '251c2ade-6e56-457d-a6d2-c79238c2f10d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:34 compute-0 ovn_controller[152344]: 2025-10-02T09:07:34Z|01508|binding|INFO|Releasing lport 251c2ade-6e56-457d-a6d2-c79238c2f10d from this chassis (sb_readonly=0)
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.798 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cd32ebe-6aa8-4400-8c00-a3546d677f2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cd32ebe-6aa8-4400-8c00-a3546d677f2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.799 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fe74a79c-cffc-4429-8ef2-480ea911288a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.800 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-0cd32ebe-6aa8-4400-8c00-a3546d677f2c
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/0cd32ebe-6aa8-4400-8c00-a3546d677f2c.pid.haproxy
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 0cd32ebe-6aa8-4400-8c00-a3546d677f2c
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.802 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'env', 'PROCESS_TAG=haproxy-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cd32ebe-6aa8-4400-8c00-a3546d677f2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.813 2 DEBUG nova.compute.manager [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.813 2 DEBUG oslo_concurrency.lockutils [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.814 2 DEBUG oslo_concurrency.lockutils [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.814 2 DEBUG oslo_concurrency.lockutils [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.814 2 DEBUG nova.compute.manager [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Processing event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.819 2 DEBUG nova.compute.manager [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.820 2 DEBUG oslo_concurrency.lockutils [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.820 2 DEBUG oslo_concurrency.lockutils [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.820 2 DEBUG oslo_concurrency.lockutils [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:34 compute-0 nova_compute[260603]: 2025-10-02 09:07:34.821 2 DEBUG nova.compute.manager [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Processing event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.845 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.846 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.847 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:35 compute-0 ceph-mon[74477]: pgmap v2683: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:07:35 compute-0 podman[414446]: 2025-10-02 09:07:35.229962498 +0000 UTC m=+0.064553257 container create fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 09:07:35 compute-0 systemd[1]: Started libpod-conmon-fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df.scope.
Oct 02 09:07:35 compute-0 podman[414446]: 2025-10-02 09:07:35.19428811 +0000 UTC m=+0.028878859 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:07:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df163648e30c438ee7956b6dd528661349b15b516a55e59776780490c120fbcc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:35 compute-0 podman[414446]: 2025-10-02 09:07:35.33362561 +0000 UTC m=+0.168216359 container init fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 09:07:35 compute-0 podman[414446]: 2025-10-02 09:07:35.3449148 +0000 UTC m=+0.179505529 container start fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.364 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396055.363428, 086604c0-28d5-41d4-995c-17db322b3ded => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.365 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] VM Started (Lifecycle Event)
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.367 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:07:35 compute-0 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [NOTICE]   (414465) : New worker (414467) forked
Oct 02 09:07:35 compute-0 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [NOTICE]   (414465) : Loading success.
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.373 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.378 2 INFO nova.virt.libvirt.driver [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance spawned successfully.
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.378 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.397 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.401 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 97679643-cd71-4857-a615-c21d643d15c2 in datapath 3b4c5c4f-7410-4ce4-9e83-46e3156b929c unbound from our chassis
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.401 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.403 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b4c5c4f-7410-4ce4-9e83-46e3156b929c
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de872a1f-f0d2-4072-92a6-99ef2dc31900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.417 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b4c5c4f-71 in ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.420 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b4c5c4f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.420 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[23cff393-80d1-4b88-922b-7fecae9c071a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.421 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44b81c19-f69e-43d1-a68a-e6224d3c67f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.422 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.422 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.423 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.424 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.425 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.425 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.429 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.429 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396055.3637102, 086604c0-28d5-41d4-995c-17db322b3ded => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.429 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] VM Paused (Lifecycle Event)
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.437 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[38bbfc6a-eb61-4164-9d06-db59a4702a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.469 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86c9befa-9c73-4c67-86dc-7715042429f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.494 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.503 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf36a40-7e57-4545-9df3-499793e94a14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.505 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396055.3720748, 086604c0-28d5-41d4-995c-17db322b3ded => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.505 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] VM Resumed (Lifecycle Event)
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.515 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c15d01-f212-4e6d-97ba-a6882ead5de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 systemd-udevd[414354]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:07:35 compute-0 NetworkManager[45129]: <info>  [1759396055.5226] manager: (tap3b4c5c4f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/610)
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.560 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.563 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.574 2 INFO nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Took 12.21 seconds to spawn the instance on the hypervisor.
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.574 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.579 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f34fe1-922e-4038-b6f0-3c415ec6347a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.583 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[96e43c85-ba2d-4b9a-97bb-ac3f6de48ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.587 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:07:35 compute-0 NetworkManager[45129]: <info>  [1759396055.6205] device (tap3b4c5c4f-70): carrier: link connected
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.630 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[78a9a571-aefe-4d7a-a66d-61caa3d0077b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.644 2 INFO nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Took 13.20 seconds to build instance.
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.658 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[88f43043-2cab-44ad-9d6f-914295cdad33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b4c5c4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:e1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693422, 'reachable_time': 34478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414486, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.672 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.677 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[179dbf13-75bd-4646-ab35-21eb25c26afc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:e118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693422, 'tstamp': 693422}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414487, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.696 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c63354-924c-4f2e-905f-0c8847246ec9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b4c5c4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:e1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693422, 'reachable_time': 34478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414488, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.727 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c7363c-87c5-4347-a3af-8d2ffbc1c08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.768 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bed96f02-033e-4293-876b-1f5d3d8dbae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4c5c4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.770 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.770 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b4c5c4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:35 compute-0 kernel: tap3b4c5c4f-70: entered promiscuous mode
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:35 compute-0 NetworkManager[45129]: <info>  [1759396055.7772] manager: (tap3b4c5c4f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.778 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b4c5c4f-70, col_values=(('external_ids', {'iface-id': 'f16ced3a-20be-47b1-aeb4-8904dff1366f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:35 compute-0 ovn_controller[152344]: 2025-10-02T09:07:35Z|01509|binding|INFO|Releasing lport f16ced3a-20be-47b1-aeb4-8904dff1366f from this chassis (sb_readonly=0)
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.783 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b4c5c4f-7410-4ce4-9e83-46e3156b929c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b4c5c4f-7410-4ce4-9e83-46e3156b929c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.784 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75a10915-ec6f-414a-86d2-041909eceb04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.784 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-3b4c5c4f-7410-4ce4-9e83-46e3156b929c
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/3b4c5c4f-7410-4ce4-9e83-46e3156b929c.pid.haproxy
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 3b4c5c4f-7410-4ce4-9e83-46e3156b929c
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:07:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.785 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'env', 'PROCESS_TAG=haproxy-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b4c5c4f-7410-4ce4-9e83-46e3156b929c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:07:35 compute-0 nova_compute[260603]: 2025-10-02 09:07:35.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:07:36 compute-0 nova_compute[260603]: 2025-10-02 09:07:36.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:36 compute-0 podman[414518]: 2025-10-02 09:07:36.235647661 +0000 UTC m=+0.051913154 container create e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:07:36 compute-0 systemd[1]: Started libpod-conmon-e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea.scope.
Oct 02 09:07:36 compute-0 podman[414518]: 2025-10-02 09:07:36.208339872 +0000 UTC m=+0.024605385 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:07:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da359b613ffa5d68b88a20f5d2354085bb5af6508d378872b99a7f638a446a90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:07:36 compute-0 podman[414518]: 2025-10-02 09:07:36.322316395 +0000 UTC m=+0.138581918 container init e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:07:36 compute-0 podman[414518]: 2025-10-02 09:07:36.329471016 +0000 UTC m=+0.145736509 container start e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 09:07:36 compute-0 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [NOTICE]   (414538) : New worker (414540) forked
Oct 02 09:07:36 compute-0 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [NOTICE]   (414538) : Loading success.
Oct 02 09:07:36 compute-0 nova_compute[260603]: 2025-10-02 09:07:36.933 2 DEBUG nova.compute.manager [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:07:36 compute-0 nova_compute[260603]: 2025-10-02 09:07:36.935 2 DEBUG oslo_concurrency.lockutils [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:36 compute-0 nova_compute[260603]: 2025-10-02 09:07:36.936 2 DEBUG oslo_concurrency.lockutils [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:36 compute-0 nova_compute[260603]: 2025-10-02 09:07:36.937 2 DEBUG oslo_concurrency.lockutils [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:36 compute-0 nova_compute[260603]: 2025-10-02 09:07:36.937 2 DEBUG nova.compute.manager [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:07:36 compute-0 nova_compute[260603]: 2025-10-02 09:07:36.938 2 WARNING nova.compute.manager [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received unexpected event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 for instance with vm_state active and task_state None.
Oct 02 09:07:37 compute-0 nova_compute[260603]: 2025-10-02 09:07:37.017 2 DEBUG nova.compute.manager [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:07:37 compute-0 nova_compute[260603]: 2025-10-02 09:07:37.017 2 DEBUG oslo_concurrency.lockutils [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:07:37 compute-0 nova_compute[260603]: 2025-10-02 09:07:37.018 2 DEBUG oslo_concurrency.lockutils [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:07:37 compute-0 nova_compute[260603]: 2025-10-02 09:07:37.018 2 DEBUG oslo_concurrency.lockutils [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:07:37 compute-0 nova_compute[260603]: 2025-10-02 09:07:37.018 2 DEBUG nova.compute.manager [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:07:37 compute-0 nova_compute[260603]: 2025-10-02 09:07:37.019 2 WARNING nova.compute.manager [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received unexpected event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 for instance with vm_state active and task_state None.
Oct 02 09:07:37 compute-0 ceph-mon[74477]: pgmap v2684: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:07:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 09:07:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:38 compute-0 sshd-session[414549]: Invalid user admin from 78.128.112.74 port 48626
Oct 02 09:07:38 compute-0 nova_compute[260603]: 2025-10-02 09:07:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:38 compute-0 nova_compute[260603]: 2025-10-02 09:07:38.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:07:38 compute-0 nova_compute[260603]: 2025-10-02 09:07:38.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:38 compute-0 sshd-session[414549]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 09:07:38 compute-0 sshd-session[414549]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003482863067330859 of space, bias 1.0, pg target 0.10448589201992577 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:07:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:07:39 compute-0 ceph-mon[74477]: pgmap v2685: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 02 09:07:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 628 KiB/s wr, 74 op/s
Oct 02 09:07:40 compute-0 sshd-session[414549]: Failed password for invalid user admin from 78.128.112.74 port 48626 ssh2
Oct 02 09:07:40 compute-0 NetworkManager[45129]: <info>  [1759396060.8730] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Oct 02 09:07:40 compute-0 NetworkManager[45129]: <info>  [1759396060.8746] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/613)
Oct 02 09:07:40 compute-0 nova_compute[260603]: 2025-10-02 09:07:40.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:40 compute-0 ovn_controller[152344]: 2025-10-02T09:07:40Z|01510|binding|INFO|Releasing lport 251c2ade-6e56-457d-a6d2-c79238c2f10d from this chassis (sb_readonly=0)
Oct 02 09:07:40 compute-0 ovn_controller[152344]: 2025-10-02T09:07:40Z|01511|binding|INFO|Releasing lport f16ced3a-20be-47b1-aeb4-8904dff1366f from this chassis (sb_readonly=0)
Oct 02 09:07:40 compute-0 ovn_controller[152344]: 2025-10-02T09:07:40Z|01512|binding|INFO|Releasing lport 251c2ade-6e56-457d-a6d2-c79238c2f10d from this chassis (sb_readonly=0)
Oct 02 09:07:40 compute-0 ovn_controller[152344]: 2025-10-02T09:07:40Z|01513|binding|INFO|Releasing lport f16ced3a-20be-47b1-aeb4-8904dff1366f from this chassis (sb_readonly=0)
Oct 02 09:07:40 compute-0 nova_compute[260603]: 2025-10-02 09:07:40.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:40 compute-0 nova_compute[260603]: 2025-10-02 09:07:40.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:41 compute-0 nova_compute[260603]: 2025-10-02 09:07:41.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:41 compute-0 ceph-mon[74477]: pgmap v2686: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 628 KiB/s wr, 74 op/s
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.127940) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061128007, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3393546, "memory_usage": 3449888, "flush_reason": "Manual Compaction"}
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061147649, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3328059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54394, "largest_seqno": 56447, "table_properties": {"data_size": 3318701, "index_size": 5915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18897, "raw_average_key_size": 20, "raw_value_size": 3300113, "raw_average_value_size": 3518, "num_data_blocks": 262, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395836, "oldest_key_time": 1759395836, "file_creation_time": 1759396061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 19758 microseconds, and 12529 cpu microseconds.
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.147707) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3328059 bytes OK
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.147729) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.149539) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.149553) EVENT_LOG_v1 {"time_micros": 1759396061149549, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.149570) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3384941, prev total WAL file size 3384941, number of live WAL files 2.
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.150453) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3250KB)], [128(8123KB)]
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061150495, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11646927, "oldest_snapshot_seqno": -1}
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7711 keys, 9932464 bytes, temperature: kUnknown
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061225619, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9932464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9882203, "index_size": 29860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19333, "raw_key_size": 200267, "raw_average_key_size": 25, "raw_value_size": 9745786, "raw_average_value_size": 1263, "num_data_blocks": 1165, "num_entries": 7711, "num_filter_entries": 7711, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.226253) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9932464 bytes
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.228440) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.1 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 8225, records dropped: 514 output_compression: NoCompression
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.228469) EVENT_LOG_v1 {"time_micros": 1759396061228455, "job": 78, "event": "compaction_finished", "compaction_time_micros": 75562, "compaction_time_cpu_micros": 44258, "output_level": 6, "num_output_files": 1, "total_output_size": 9932464, "num_input_records": 8225, "num_output_records": 7711, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061230284, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061233510, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.150274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:07:41 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:07:41 compute-0 sshd-session[414549]: Connection closed by invalid user admin 78.128.112.74 port 48626 [preauth]
Oct 02 09:07:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:07:42 compute-0 nova_compute[260603]: 2025-10-02 09:07:42.052 2 DEBUG nova.compute.manager [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:07:42 compute-0 nova_compute[260603]: 2025-10-02 09:07:42.052 2 DEBUG nova.compute.manager [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing instance network info cache due to event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:07:42 compute-0 nova_compute[260603]: 2025-10-02 09:07:42.052 2 DEBUG oslo_concurrency.lockutils [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:07:42 compute-0 nova_compute[260603]: 2025-10-02 09:07:42.053 2 DEBUG oslo_concurrency.lockutils [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:07:42 compute-0 nova_compute[260603]: 2025-10-02 09:07:42.053 2 DEBUG nova.network.neutron [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:07:43 compute-0 ceph-mon[74477]: pgmap v2687: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:07:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:43 compute-0 nova_compute[260603]: 2025-10-02 09:07:43.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:43 compute-0 podman[414554]: 2025-10-02 09:07:43.985523125 +0000 UTC m=+0.049360696 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:07:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:07:44 compute-0 nova_compute[260603]: 2025-10-02 09:07:44.016 2 DEBUG nova.network.neutron [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updated VIF entry in instance network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:07:44 compute-0 nova_compute[260603]: 2025-10-02 09:07:44.017 2 DEBUG nova.network.neutron [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:07:44 compute-0 podman[414553]: 2025-10-02 09:07:44.027908691 +0000 UTC m=+0.091593537 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 09:07:44 compute-0 nova_compute[260603]: 2025-10-02 09:07:44.042 2 DEBUG oslo_concurrency.lockutils [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:07:45 compute-0 ceph-mon[74477]: pgmap v2688: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:07:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:07:46 compute-0 nova_compute[260603]: 2025-10-02 09:07:46.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:46 compute-0 ovn_controller[152344]: 2025-10-02T09:07:46Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:e1:b8 10.100.0.7
Oct 02 09:07:46 compute-0 ovn_controller[152344]: 2025-10-02T09:07:46Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:e1:b8 10.100.0.7
Oct 02 09:07:47 compute-0 ceph-mon[74477]: pgmap v2689: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:07:47 compute-0 podman[414602]: 2025-10-02 09:07:47.990350937 +0000 UTC m=+0.053641658 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 02 09:07:47 compute-0 podman[414601]: 2025-10-02 09:07:47.992904996 +0000 UTC m=+0.060873362 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 09:07:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 117 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 120 op/s
Oct 02 09:07:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:48 compute-0 nova_compute[260603]: 2025-10-02 09:07:48.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:49 compute-0 ceph-mon[74477]: pgmap v2690: 305 pgs: 305 active+clean; 117 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 120 op/s
Oct 02 09:07:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 09:07:51 compute-0 nova_compute[260603]: 2025-10-02 09:07:51.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:51 compute-0 ceph-mon[74477]: pgmap v2691: 305 pgs: 305 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 09:07:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 09:07:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:53 compute-0 ceph-mon[74477]: pgmap v2692: 305 pgs: 305 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 09:07:53 compute-0 nova_compute[260603]: 2025-10-02 09:07:53.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:53 compute-0 nova_compute[260603]: 2025-10-02 09:07:53.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:07:55 compute-0 ceph-mon[74477]: pgmap v2693: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:07:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:07:56 compute-0 nova_compute[260603]: 2025-10-02 09:07:56.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:56 compute-0 nova_compute[260603]: 2025-10-02 09:07:56.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:56 compute-0 nova_compute[260603]: 2025-10-02 09:07:56.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:07:56 compute-0 nova_compute[260603]: 2025-10-02 09:07:56.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:07:56 compute-0 nova_compute[260603]: 2025-10-02 09:07:56.842 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:07:56 compute-0 nova_compute[260603]: 2025-10-02 09:07:56.843 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:07:56 compute-0 nova_compute[260603]: 2025-10-02 09:07:56.843 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:07:56 compute-0 nova_compute[260603]: 2025-10-02 09:07:56.844 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 086604c0-28d5-41d4-995c-17db322b3ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:07:57 compute-0 ceph-mon[74477]: pgmap v2694: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:07:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:07:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:07:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:07:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:07:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:07:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:07:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:07:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:07:58 compute-0 nova_compute[260603]: 2025-10-02 09:07:58.545 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:07:58 compute-0 nova_compute[260603]: 2025-10-02 09:07:58.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:07:58 compute-0 nova_compute[260603]: 2025-10-02 09:07:58.717 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:07:58 compute-0 nova_compute[260603]: 2025-10-02 09:07:58.718 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:07:58 compute-0 nova_compute[260603]: 2025-10-02 09:07:58.718 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:07:59 compute-0 ceph-mon[74477]: pgmap v2695: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:08:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 290 KiB/s wr, 17 op/s
Oct 02 09:08:00 compute-0 nova_compute[260603]: 2025-10-02 09:08:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:00 compute-0 nova_compute[260603]: 2025-10-02 09:08:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:01 compute-0 nova_compute[260603]: 2025-10-02 09:08:01.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:01 compute-0 ceph-mon[74477]: pgmap v2696: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 290 KiB/s wr, 17 op/s
Oct 02 09:08:01 compute-0 nova_compute[260603]: 2025-10-02 09:08:01.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.143 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.144 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.183 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.247 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.248 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.257 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.258 2 INFO nova.compute.claims [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.396 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:08:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552982764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.893 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.899 2 DEBUG nova.compute.provider_tree [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.920 2 DEBUG nova.scheduler.client.report [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.948 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.948 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.951 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.952 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.952 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:08:02 compute-0 nova_compute[260603]: 2025-10-02 09:08:02.952 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.035 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.036 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.053 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.070 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.152 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.154 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.154 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Creating image(s)
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.174 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.193 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.215 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:08:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.220 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.223183) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083223244, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 407, "num_deletes": 255, "total_data_size": 318079, "memory_usage": 327064, "flush_reason": "Manual Compaction"}
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083227237, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 315738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56448, "largest_seqno": 56854, "table_properties": {"data_size": 313263, "index_size": 574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5618, "raw_average_key_size": 17, "raw_value_size": 308500, "raw_average_value_size": 979, "num_data_blocks": 26, "num_entries": 315, "num_filter_entries": 315, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396062, "oldest_key_time": 1759396062, "file_creation_time": 1759396083, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 4085 microseconds, and 1445 cpu microseconds.
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.227277) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 315738 bytes OK
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.227292) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229237) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229255) EVENT_LOG_v1 {"time_micros": 1759396083229249, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 315503, prev total WAL file size 315503, number of live WAL files 2.
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229697) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323535' seq:72057594037927935, type:22 .. '6C6F676D0032353036' seq:0, type:0; will stop at (end)
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(308KB)], [131(9699KB)]
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083229831, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10248202, "oldest_snapshot_seqno": -1}
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.296 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.297 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.298 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.298 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7509 keys, 10140944 bytes, temperature: kUnknown
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083304935, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10140944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10091164, "index_size": 29891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 196983, "raw_average_key_size": 26, "raw_value_size": 9957447, "raw_average_value_size": 1326, "num_data_blocks": 1164, "num_entries": 7509, "num_filter_entries": 7509, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396083, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.305293) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10140944 bytes
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.307009) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.3 rd, 134.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(64.6) write-amplify(32.1) OK, records in: 8026, records dropped: 517 output_compression: NoCompression
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.307182) EVENT_LOG_v1 {"time_micros": 1759396083307019, "job": 80, "event": "compaction_finished", "compaction_time_micros": 75212, "compaction_time_cpu_micros": 28687, "output_level": 6, "num_output_files": 1, "total_output_size": 10140944, "num_input_records": 8026, "num_output_records": 7509, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083307383, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083309721, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:08:03 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.318 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.321 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:08:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554511929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.431 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:03 compute-0 ceph-mon[74477]: pgmap v2697: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 02 09:08:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3552982764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3554511929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.520 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.522 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.648 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.707 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.793 2 DEBUG nova.objects.instance [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a11349c-a726-40c7-83f0-95f708b3f5d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.809 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.809 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Ensure instance console log exists: /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.810 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.810 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.810 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.827 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.828 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3439MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.828 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.828 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.896 2 DEBUG nova.policy [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.909 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 086604c0-28d5-41d4-995c-17db322b3ded actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.910 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5a11349c-a726-40c7-83f0-95f708b3f5d2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.910 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.910 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:08:03 compute-0 nova_compute[260603]: 2025-10-02 09:08:03.961 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 09:08:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:08:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650775426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:04 compute-0 nova_compute[260603]: 2025-10-02 09:08:04.372 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:04 compute-0 nova_compute[260603]: 2025-10-02 09:08:04.380 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:08:04 compute-0 nova_compute[260603]: 2025-10-02 09:08:04.404 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:08:04 compute-0 nova_compute[260603]: 2025-10-02 09:08:04.434 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:08:04 compute-0 nova_compute[260603]: 2025-10-02 09:08:04.434 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3650775426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:04 compute-0 nova_compute[260603]: 2025-10-02 09:08:04.988 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Successfully created port: da8c962b-f6a3-4056-a774-9f03b36f62d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:08:05 compute-0 ceph-mon[74477]: pgmap v2698: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 09:08:05 compute-0 nova_compute[260603]: 2025-10-02 09:08:05.997 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Successfully created port: 95e5f1fa-72f5-4111-a730-1d1fb3203c6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:08:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:08:06 compute-0 nova_compute[260603]: 2025-10-02 09:08:06.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:06 compute-0 ceph-mon[74477]: pgmap v2699: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:08:07 compute-0 nova_compute[260603]: 2025-10-02 09:08:07.137 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Successfully updated port: da8c962b-f6a3-4056-a774-9f03b36f62d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:08:07 compute-0 nova_compute[260603]: 2025-10-02 09:08:07.291 2 DEBUG nova.compute.manager [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:07 compute-0 nova_compute[260603]: 2025-10-02 09:08:07.292 2 DEBUG nova.compute.manager [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing instance network info cache due to event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:08:07 compute-0 nova_compute[260603]: 2025-10-02 09:08:07.292 2 DEBUG oslo_concurrency.lockutils [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:08:07 compute-0 nova_compute[260603]: 2025-10-02 09:08:07.292 2 DEBUG oslo_concurrency.lockutils [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:08:07 compute-0 nova_compute[260603]: 2025-10-02 09:08:07.293 2 DEBUG nova.network.neutron [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:08:07 compute-0 nova_compute[260603]: 2025-10-02 09:08:07.504 2 DEBUG nova.network.neutron [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:08:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 144 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 727 KiB/s wr, 26 op/s
Oct 02 09:08:08 compute-0 nova_compute[260603]: 2025-10-02 09:08:08.065 2 DEBUG nova.network.neutron [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:08 compute-0 nova_compute[260603]: 2025-10-02 09:08:08.080 2 DEBUG oslo_concurrency.lockutils [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:08:08 compute-0 nova_compute[260603]: 2025-10-02 09:08:08.101 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Successfully updated port: 95e5f1fa-72f5-4111-a730-1d1fb3203c6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:08:08 compute-0 nova_compute[260603]: 2025-10-02 09:08:08.161 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:08:08 compute-0 nova_compute[260603]: 2025-10-02 09:08:08.162 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:08:08 compute-0 nova_compute[260603]: 2025-10-02 09:08:08.162 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:08:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:08 compute-0 nova_compute[260603]: 2025-10-02 09:08:08.306 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:08:08 compute-0 nova_compute[260603]: 2025-10-02 09:08:08.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:09 compute-0 ceph-mon[74477]: pgmap v2700: 305 pgs: 305 active+clean; 144 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 727 KiB/s wr, 26 op/s
Oct 02 09:08:09 compute-0 nova_compute[260603]: 2025-10-02 09:08:09.391 2 DEBUG nova.compute.manager [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-changed-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:09 compute-0 nova_compute[260603]: 2025-10-02 09:08:09.391 2 DEBUG nova.compute.manager [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing instance network info cache due to event network-changed-95e5f1fa-72f5-4111-a730-1d1fb3203c6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:08:09 compute-0 nova_compute[260603]: 2025-10-02 09:08:09.392 2 DEBUG oslo_concurrency.lockutils [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:08:09 compute-0 nova_compute[260603]: 2025-10-02 09:08:09.428 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.071 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.095 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.096 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance network_info: |[{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.096 2 DEBUG oslo_concurrency.lockutils [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.097 2 DEBUG nova.network.neutron [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing network info cache for port 95e5f1fa-72f5-4111-a730-1d1fb3203c6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.099 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start _get_guest_xml network_info=[{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.103 2 WARNING nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.108 2 DEBUG nova.virt.libvirt.host [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.108 2 DEBUG nova.virt.libvirt.host [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.113 2 DEBUG nova.virt.libvirt.host [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.114 2 DEBUG nova.virt.libvirt.host [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.114 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.115 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.115 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.115 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.115 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.117 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.117 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.119 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:08:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241435548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.550 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.570 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.573 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:08:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1286380769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.994 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.995 2 DEBUG nova.virt.libvirt.vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:08:03Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.996 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.996 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.997 2 DEBUG nova.virt.libvirt.vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:08:03Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.997 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.998 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:08:10 compute-0 nova_compute[260603]: 2025-10-02 09:08:10.999 2 DEBUG nova.objects.instance [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a11349c-a726-40c7-83f0-95f708b3f5d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.016 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <uuid>5a11349c-a726-40c7-83f0-95f708b3f5d2</uuid>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <name>instance-0000008c</name>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-539987703</nova:name>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:08:10</nova:creationTime>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:port uuid="da8c962b-f6a3-4056-a774-9f03b36f62d5">
Oct 02 09:08:11 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <nova:port uuid="95e5f1fa-72f5-4111-a730-1d1fb3203c6f">
Oct 02 09:08:11 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2e:fd19" ipVersion="6"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <system>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <entry name="serial">5a11349c-a726-40c7-83f0-95f708b3f5d2</entry>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <entry name="uuid">5a11349c-a726-40c7-83f0-95f708b3f5d2</entry>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </system>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <os>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   </os>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <features>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   </features>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5a11349c-a726-40c7-83f0-95f708b3f5d2_disk">
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       </source>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config">
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       </source>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:08:11 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:2d:04:b3"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <target dev="tapda8c962b-f6"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:2e:fd:19"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <target dev="tap95e5f1fa-72"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/console.log" append="off"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <video>
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </video>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:08:11 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:08:11 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:08:11 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:08:11 compute-0 nova_compute[260603]: </domain>
Oct 02 09:08:11 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.018 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Preparing to wait for external event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.019 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.019 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.019 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.019 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Preparing to wait for external event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.020 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.020 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.020 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.021 2 DEBUG nova.virt.libvirt.vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:08:03Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.021 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.022 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.022 2 DEBUG os_vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda8c962b-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda8c962b-f6, col_values=(('external_ids', {'iface-id': 'da8c962b-f6a3-4056-a774-9f03b36f62d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:04:b3', 'vm-uuid': '5a11349c-a726-40c7-83f0-95f708b3f5d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 NetworkManager[45129]: <info>  [1759396091.0303] manager: (tapda8c962b-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.038 2 INFO os_vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6')
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.039 2 DEBUG nova.virt.libvirt.vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:08:03Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.039 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.040 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.040 2 DEBUG os_vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.040 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95e5f1fa-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95e5f1fa-72, col_values=(('external_ids', {'iface-id': '95e5f1fa-72f5-4111-a730-1d1fb3203c6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:fd:19', 'vm-uuid': '5a11349c-a726-40c7-83f0-95f708b3f5d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 NetworkManager[45129]: <info>  [1759396091.0456] manager: (tap95e5f1fa-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.053 2 INFO os_vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72')
Oct 02 09:08:11 compute-0 ceph-mon[74477]: pgmap v2701: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:08:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4241435548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:08:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1286380769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.105 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.105 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.105 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:2d:04:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.105 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:2e:fd:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.106 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Using config drive
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.123 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.128 2 DEBUG nova.network.neutron [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updated VIF entry in instance network info cache for port 95e5f1fa-72f5-4111-a730-1d1fb3203c6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.128 2 DEBUG nova.network.neutron [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.152 2 DEBUG oslo_concurrency.lockutils [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.527 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Creating config drive at /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.532 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5w4hr_5y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.674 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5w4hr_5y" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.700 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.703 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.856 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.857 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Deleting local config drive /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config because it was imported into RBD.
Oct 02 09:08:11 compute-0 NetworkManager[45129]: <info>  [1759396091.9008] manager: (tapda8c962b-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Oct 02 09:08:11 compute-0 kernel: tapda8c962b-f6: entered promiscuous mode
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01514|binding|INFO|Claiming lport da8c962b-f6a3-4056-a774-9f03b36f62d5 for this chassis.
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01515|binding|INFO|da8c962b-f6a3-4056-a774-9f03b36f62d5: Claiming fa:16:3e:2d:04:b3 10.100.0.9
Oct 02 09:08:11 compute-0 systemd-udevd[415010]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:08:11 compute-0 NetworkManager[45129]: <info>  [1759396091.9440] manager: (tap95e5f1fa-72): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Oct 02 09:08:11 compute-0 systemd-udevd[415014]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01516|binding|INFO|Setting lport da8c962b-f6a3-4056-a774-9f03b36f62d5 ovn-installed in OVS
Oct 02 09:08:11 compute-0 kernel: tap95e5f1fa-72: entered promiscuous mode
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01517|binding|INFO|Setting lport da8c962b-f6a3-4056-a774-9f03b36f62d5 up in Southbound
Oct 02 09:08:11 compute-0 NetworkManager[45129]: <info>  [1759396091.9528] device (tapda8c962b-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 NetworkManager[45129]: <info>  [1759396091.9552] device (tapda8c962b-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:08:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.955 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:04:b3 10.100.0.9'], port_security=['fa:16:3e:2d:04:b3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5a11349c-a726-40c7-83f0-95f708b3f5d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8531ee7b-e9fa-4aeb-a901-a54a8597544d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=da8c962b-f6a3-4056-a774-9f03b36f62d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01518|if_status|INFO|Not updating pb chassis for 95e5f1fa-72f5-4111-a730-1d1fb3203c6f now as sb is readonly
Oct 02 09:08:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.956 162357 INFO neutron.agent.ovn.metadata.agent [-] Port da8c962b-f6a3-4056-a774-9f03b36f62d5 in datapath 0cd32ebe-6aa8-4400-8c00-a3546d677f2c bound to our chassis
Oct 02 09:08:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.958 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cd32ebe-6aa8-4400-8c00-a3546d677f2c
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01519|binding|INFO|Claiming lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f for this chassis.
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01520|binding|INFO|95e5f1fa-72f5-4111-a730-1d1fb3203c6f: Claiming fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19
Oct 02 09:08:11 compute-0 NetworkManager[45129]: <info>  [1759396091.9640] device (tap95e5f1fa-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:08:11 compute-0 NetworkManager[45129]: <info>  [1759396091.9649] device (tap95e5f1fa-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:08:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.968 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19'], port_security=['fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2e:fd19/64', 'neutron:device_id': '5a11349c-a726-40c7-83f0-95f708b3f5d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d94def-f0a5-4beb-85d6-bc3ad333488d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=95e5f1fa-72f5-4111-a730-1d1fb3203c6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01521|binding|INFO|Setting lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f ovn-installed in OVS
Oct 02 09:08:11 compute-0 ovn_controller[152344]: 2025-10-02T09:08:11Z|01522|binding|INFO|Setting lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f up in Southbound
Oct 02 09:08:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.974 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15417d41-de67-4814-a792-7179e0af2a91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:11 compute-0 nova_compute[260603]: 2025-10-02 09:08:11.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:11 compute-0 systemd-machined[214636]: New machine qemu-174-instance-0000008c.
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.004 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a876ad58-6df4-4dbd-aa97-2acf874e119b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008c.
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.007 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[82f59c5d-b27d-43f9-b408-72c402bcf64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.039 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[31425227-0edb-47dd-824b-6d0d7f2e2d6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.055 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a341fe75-61d8-490a-aa50-1a1668d6fe61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cd32ebe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:64:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693319, 'reachable_time': 33516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415031, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.069 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f62868-38cf-4b55-8114-68a99a2d89d7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0cd32ebe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693334, 'tstamp': 693334}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415033, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0cd32ebe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693338, 'tstamp': 693338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415033, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.070 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cd32ebe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.073 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cd32ebe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.073 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.073 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cd32ebe-60, col_values=(('external_ids', {'iface-id': '251c2ade-6e56-457d-a6d2-c79238c2f10d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.073 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.074 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 95e5f1fa-72f5-4111-a730-1d1fb3203c6f in datapath 3b4c5c4f-7410-4ce4-9e83-46e3156b929c unbound from our chassis
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.075 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b4c5c4f-7410-4ce4-9e83-46e3156b929c
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.091 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1198bb9c-e929-4663-964c-c9de54936432]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.126 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d4beea-37c8-4f38-af8f-8d5a766682cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.129 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e707ab0f-0d60-4d16-83b1-9b06eb737509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.167 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4a9267-e5c4-41db-b980-6796b8a402d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[415fcba8-d685-4942-b4c5-7e242168da80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b4c5c4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:e1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1802, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1802, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693422, 'reachable_time': 34478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415039, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.205 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a99e6a-fc37-4ab8-a719-8e5f7c1515de]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3b4c5c4f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693436, 'tstamp': 693436}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415040, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.207 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4c5c4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.210 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b4c5c4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.210 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.210 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b4c5c4f-70, col_values=(('external_ids', {'iface-id': 'f16ced3a-20be-47b1-aeb4-8904dff1366f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:12 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.211 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.641 2 DEBUG nova.compute.manager [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.642 2 DEBUG oslo_concurrency.lockutils [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.642 2 DEBUG oslo_concurrency.lockutils [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.642 2 DEBUG oslo_concurrency.lockutils [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.642 2 DEBUG nova.compute.manager [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Processing event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.717 2 DEBUG nova.compute.manager [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.718 2 DEBUG oslo_concurrency.lockutils [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.718 2 DEBUG oslo_concurrency.lockutils [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.718 2 DEBUG oslo_concurrency.lockutils [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.719 2 DEBUG nova.compute.manager [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Processing event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.904 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396092.904153, 5a11349c-a726-40c7-83f0-95f708b3f5d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.905 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] VM Started (Lifecycle Event)
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.907 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.909 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.913 2 INFO nova.virt.libvirt.driver [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance spawned successfully.
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.913 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.933 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.936 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.946 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.947 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.947 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.948 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.948 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.948 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.960 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.960 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396092.9043505, 5a11349c-a726-40c7-83f0-95f708b3f5d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.960 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] VM Paused (Lifecycle Event)
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.981 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.984 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396092.9091687, 5a11349c-a726-40c7-83f0-95f708b3f5d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:08:12 compute-0 nova_compute[260603]: 2025-10-02 09:08:12.984 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] VM Resumed (Lifecycle Event)
Oct 02 09:08:13 compute-0 nova_compute[260603]: 2025-10-02 09:08:13.008 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:08:13 compute-0 nova_compute[260603]: 2025-10-02 09:08:13.012 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:08:13 compute-0 nova_compute[260603]: 2025-10-02 09:08:13.022 2 INFO nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Took 9.87 seconds to spawn the instance on the hypervisor.
Oct 02 09:08:13 compute-0 nova_compute[260603]: 2025-10-02 09:08:13.023 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:08:13 compute-0 nova_compute[260603]: 2025-10-02 09:08:13.033 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:08:13 compute-0 nova_compute[260603]: 2025-10-02 09:08:13.081 2 INFO nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Took 10.86 seconds to build instance.
Oct 02 09:08:13 compute-0 nova_compute[260603]: 2025-10-02 09:08:13.119 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:13 compute-0 ceph-mon[74477]: pgmap v2702: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:08:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.733 2 DEBUG nova.compute.manager [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.733 2 DEBUG oslo_concurrency.lockutils [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.734 2 DEBUG oslo_concurrency.lockutils [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.734 2 DEBUG oslo_concurrency.lockutils [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.734 2 DEBUG nova.compute.manager [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.734 2 WARNING nova.compute.manager [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received unexpected event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 for instance with vm_state active and task_state None.
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.795 2 DEBUG nova.compute.manager [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.796 2 DEBUG oslo_concurrency.lockutils [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.796 2 DEBUG oslo_concurrency.lockutils [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.797 2 DEBUG oslo_concurrency.lockutils [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.797 2 DEBUG nova.compute.manager [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:14 compute-0 nova_compute[260603]: 2025-10-02 09:08:14.797 2 WARNING nova.compute.manager [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received unexpected event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f for instance with vm_state active and task_state None.
Oct 02 09:08:15 compute-0 podman[415085]: 2025-10-02 09:08:14.998639213 +0000 UTC m=+0.061702918 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:08:15 compute-0 podman[415084]: 2025-10-02 09:08:15.047912145 +0000 UTC m=+0.113209809 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 09:08:15 compute-0 ceph-mon[74477]: pgmap v2703: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 09:08:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 09:08:16 compute-0 nova_compute[260603]: 2025-10-02 09:08:16.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:16 compute-0 nova_compute[260603]: 2025-10-02 09:08:16.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:17 compute-0 ceph-mon[74477]: pgmap v2704: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 09:08:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 826 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 02 09:08:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:19 compute-0 podman[415128]: 2025-10-02 09:08:19.006194331 +0000 UTC m=+0.065503396 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 09:08:19 compute-0 podman[415129]: 2025-10-02 09:08:19.026271365 +0000 UTC m=+0.085883820 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 09:08:19 compute-0 ceph-mon[74477]: pgmap v2705: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 826 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 02 09:08:19 compute-0 nova_compute[260603]: 2025-10-02 09:08:19.951 2 DEBUG nova.compute.manager [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:19 compute-0 nova_compute[260603]: 2025-10-02 09:08:19.952 2 DEBUG nova.compute.manager [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing instance network info cache due to event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:08:19 compute-0 nova_compute[260603]: 2025-10-02 09:08:19.953 2 DEBUG oslo_concurrency.lockutils [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:08:19 compute-0 nova_compute[260603]: 2025-10-02 09:08:19.953 2 DEBUG oslo_concurrency.lockutils [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:08:19 compute-0 nova_compute[260603]: 2025-10-02 09:08:19.953 2 DEBUG nova.network.neutron [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:08:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct 02 09:08:20 compute-0 sudo[415168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:20 compute-0 sudo[415168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:20 compute-0 sudo[415168]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:20 compute-0 sudo[415193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:08:20 compute-0 sudo[415193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:20 compute-0 sudo[415193]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:20 compute-0 sudo[415218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:20 compute-0 sudo[415218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:20 compute-0 sudo[415218]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:20 compute-0 sudo[415243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:08:20 compute-0 sudo[415243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:21 compute-0 nova_compute[260603]: 2025-10-02 09:08:21.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:21 compute-0 nova_compute[260603]: 2025-10-02 09:08:21.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:21 compute-0 sudo[415243]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:21 compute-0 ceph-mon[74477]: pgmap v2706: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct 02 09:08:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:08:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:08:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:08:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:08:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:08:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:08:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0777dd04-7237-40e8-b590-1c6960028b07 does not exist
Oct 02 09:08:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e20fe698-86e7-44bd-8d3b-136ad302a918 does not exist
Oct 02 09:08:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 46974bf2-9ffb-464d-926a-a239363d3523 does not exist
Oct 02 09:08:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:08:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:08:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:08:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:08:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:08:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:08:21 compute-0 sudo[415298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:21 compute-0 sudo[415298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:21 compute-0 sudo[415298]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:21 compute-0 sudo[415323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:08:21 compute-0 sudo[415323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:21 compute-0 sudo[415323]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:21 compute-0 sudo[415348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:21 compute-0 sudo[415348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:21 compute-0 sudo[415348]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:21 compute-0 sudo[415373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:08:21 compute-0 sudo[415373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:21 compute-0 podman[415439]: 2025-10-02 09:08:21.971181081 +0000 UTC m=+0.056520657 container create 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:08:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 09:08:22 compute-0 podman[415439]: 2025-10-02 09:08:21.93832664 +0000 UTC m=+0.023666266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:08:22 compute-0 systemd[1]: Started libpod-conmon-88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16.scope.
Oct 02 09:08:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:08:22 compute-0 podman[415439]: 2025-10-02 09:08:22.101115509 +0000 UTC m=+0.186455065 container init 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:08:22 compute-0 podman[415439]: 2025-10-02 09:08:22.114563656 +0000 UTC m=+0.199903192 container start 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 09:08:22 compute-0 podman[415439]: 2025-10-02 09:08:22.118012283 +0000 UTC m=+0.203351839 container attach 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 09:08:22 compute-0 wizardly_rhodes[415455]: 167 167
Oct 02 09:08:22 compute-0 systemd[1]: libpod-88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16.scope: Deactivated successfully.
Oct 02 09:08:22 compute-0 conmon[415455]: conmon 88211743a6f8fc3f6d72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16.scope/container/memory.events
Oct 02 09:08:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:08:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 41K writes, 164K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 41K writes, 15K syncs, 2.73 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2386 writes, 9534 keys, 2386 commit groups, 1.0 writes per commit group, ingest: 11.32 MB, 0.02 MB/s
                                           Interval WAL: 2386 writes, 937 syncs, 2.55 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:08:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:08:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116952037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:08:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:08:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116952037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:08:22 compute-0 podman[415460]: 2025-10-02 09:08:22.170260737 +0000 UTC m=+0.033068348 container died 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 09:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d5cf87a7c357a7be41f53ed1e7ca64791d0a68f3e20db315b894ace7b4078d7-merged.mount: Deactivated successfully.
Oct 02 09:08:22 compute-0 podman[415460]: 2025-10-02 09:08:22.208406333 +0000 UTC m=+0.071213944 container remove 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:08:22 compute-0 systemd[1]: libpod-conmon-88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16.scope: Deactivated successfully.
Oct 02 09:08:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:08:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:08:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:08:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:08:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:08:22 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:08:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/116952037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:08:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/116952037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:08:22 compute-0 podman[415482]: 2025-10-02 09:08:22.521563545 +0000 UTC m=+0.077088637 container create 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:08:22 compute-0 systemd[1]: Started libpod-conmon-92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3.scope.
Oct 02 09:08:22 compute-0 podman[415482]: 2025-10-02 09:08:22.489728985 +0000 UTC m=+0.045254147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:08:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:22 compute-0 podman[415482]: 2025-10-02 09:08:22.621284103 +0000 UTC m=+0.176809165 container init 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:08:22 compute-0 podman[415482]: 2025-10-02 09:08:22.628815887 +0000 UTC m=+0.184340949 container start 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:08:22 compute-0 podman[415482]: 2025-10-02 09:08:22.631413528 +0000 UTC m=+0.186938590 container attach 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 09:08:22 compute-0 nova_compute[260603]: 2025-10-02 09:08:22.669 2 DEBUG nova.network.neutron [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updated VIF entry in instance network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:08:22 compute-0 nova_compute[260603]: 2025-10-02 09:08:22.670 2 DEBUG nova.network.neutron [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:22 compute-0 nova_compute[260603]: 2025-10-02 09:08:22.692 2 DEBUG oslo_concurrency.lockutils [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:08:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:23 compute-0 ceph-mon[74477]: pgmap v2707: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 09:08:23 compute-0 amazing_mclaren[415499]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:08:23 compute-0 amazing_mclaren[415499]: --> relative data size: 1.0
Oct 02 09:08:23 compute-0 amazing_mclaren[415499]: --> All data devices are unavailable
Oct 02 09:08:23 compute-0 systemd[1]: libpod-92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3.scope: Deactivated successfully.
Oct 02 09:08:23 compute-0 systemd[1]: libpod-92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3.scope: Consumed 1.047s CPU time.
Oct 02 09:08:23 compute-0 podman[415482]: 2025-10-02 09:08:23.761263289 +0000 UTC m=+1.316788351 container died 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee-merged.mount: Deactivated successfully.
Oct 02 09:08:23 compute-0 podman[415482]: 2025-10-02 09:08:23.826461745 +0000 UTC m=+1.381986797 container remove 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:08:23 compute-0 systemd[1]: libpod-conmon-92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3.scope: Deactivated successfully.
Oct 02 09:08:23 compute-0 sudo[415373]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:23 compute-0 sudo[415543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:23 compute-0 sudo[415543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:23 compute-0 sudo[415543]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:24 compute-0 sudo[415568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:08:24 compute-0 sudo[415568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:24 compute-0 sudo[415568]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 79 op/s
Oct 02 09:08:24 compute-0 sudo[415593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:24 compute-0 sudo[415593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:24 compute-0 sudo[415593]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:24 compute-0 sudo[415618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:08:24 compute-0 sudo[415618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:24 compute-0 podman[415683]: 2025-10-02 09:08:24.473930376 +0000 UTC m=+0.043203184 container create 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:08:24 compute-0 systemd[1]: Started libpod-conmon-4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2.scope.
Oct 02 09:08:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:08:24 compute-0 podman[415683]: 2025-10-02 09:08:24.533898759 +0000 UTC m=+0.103171557 container init 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:08:24 compute-0 podman[415683]: 2025-10-02 09:08:24.540507925 +0000 UTC m=+0.109780723 container start 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:08:24 compute-0 elastic_hugle[415699]: 167 167
Oct 02 09:08:24 compute-0 systemd[1]: libpod-4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2.scope: Deactivated successfully.
Oct 02 09:08:24 compute-0 podman[415683]: 2025-10-02 09:08:24.547165441 +0000 UTC m=+0.116438239 container attach 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:08:24 compute-0 podman[415683]: 2025-10-02 09:08:24.547537553 +0000 UTC m=+0.116810361 container died 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 09:08:24 compute-0 podman[415683]: 2025-10-02 09:08:24.454371927 +0000 UTC m=+0.023644745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:08:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e8cd68ee7f48e7405a9c5b2f7287c9f6490771039d7010ef4ac7df5762bf0e0-merged.mount: Deactivated successfully.
Oct 02 09:08:24 compute-0 podman[415683]: 2025-10-02 09:08:24.584525452 +0000 UTC m=+0.153798250 container remove 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:08:24 compute-0 systemd[1]: libpod-conmon-4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2.scope: Deactivated successfully.
Oct 02 09:08:24 compute-0 podman[415721]: 2025-10-02 09:08:24.784330212 +0000 UTC m=+0.047694553 container create 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:08:24 compute-0 systemd[1]: Started libpod-conmon-472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb.scope.
Oct 02 09:08:24 compute-0 podman[415721]: 2025-10-02 09:08:24.761701798 +0000 UTC m=+0.025066139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:08:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:24 compute-0 podman[415721]: 2025-10-02 09:08:24.883829393 +0000 UTC m=+0.147193684 container init 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:08:24 compute-0 podman[415721]: 2025-10-02 09:08:24.891942826 +0000 UTC m=+0.155307137 container start 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 09:08:24 compute-0 podman[415721]: 2025-10-02 09:08:24.895420564 +0000 UTC m=+0.158784885 container attach 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 09:08:25 compute-0 ovn_controller[152344]: 2025-10-02T09:08:25Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:04:b3 10.100.0.9
Oct 02 09:08:25 compute-0 ovn_controller[152344]: 2025-10-02T09:08:25Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:04:b3 10.100.0.9
Oct 02 09:08:25 compute-0 ceph-mon[74477]: pgmap v2708: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 79 op/s
Oct 02 09:08:25 compute-0 funny_perlman[415737]: {
Oct 02 09:08:25 compute-0 funny_perlman[415737]:     "0": [
Oct 02 09:08:25 compute-0 funny_perlman[415737]:         {
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "devices": [
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "/dev/loop3"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             ],
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_name": "ceph_lv0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_size": "21470642176",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "name": "ceph_lv0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "tags": {
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cluster_name": "ceph",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.crush_device_class": "",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.encrypted": "0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osd_id": "0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.type": "block",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.vdo": "0"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             },
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "type": "block",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "vg_name": "ceph_vg0"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:         }
Oct 02 09:08:25 compute-0 funny_perlman[415737]:     ],
Oct 02 09:08:25 compute-0 funny_perlman[415737]:     "1": [
Oct 02 09:08:25 compute-0 funny_perlman[415737]:         {
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "devices": [
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "/dev/loop4"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             ],
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_name": "ceph_lv1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_size": "21470642176",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "name": "ceph_lv1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "tags": {
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cluster_name": "ceph",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.crush_device_class": "",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.encrypted": "0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osd_id": "1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.type": "block",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.vdo": "0"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             },
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "type": "block",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "vg_name": "ceph_vg1"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:         }
Oct 02 09:08:25 compute-0 funny_perlman[415737]:     ],
Oct 02 09:08:25 compute-0 funny_perlman[415737]:     "2": [
Oct 02 09:08:25 compute-0 funny_perlman[415737]:         {
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "devices": [
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "/dev/loop5"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             ],
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_name": "ceph_lv2",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_size": "21470642176",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "name": "ceph_lv2",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "tags": {
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.cluster_name": "ceph",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.crush_device_class": "",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.encrypted": "0",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osd_id": "2",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.type": "block",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:                 "ceph.vdo": "0"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             },
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "type": "block",
Oct 02 09:08:25 compute-0 funny_perlman[415737]:             "vg_name": "ceph_vg2"
Oct 02 09:08:25 compute-0 funny_perlman[415737]:         }
Oct 02 09:08:25 compute-0 funny_perlman[415737]:     ]
Oct 02 09:08:25 compute-0 funny_perlman[415737]: }
Oct 02 09:08:25 compute-0 systemd[1]: libpod-472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb.scope: Deactivated successfully.
Oct 02 09:08:25 compute-0 podman[415721]: 2025-10-02 09:08:25.673774742 +0000 UTC m=+0.937139073 container died 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7-merged.mount: Deactivated successfully.
Oct 02 09:08:25 compute-0 podman[415721]: 2025-10-02 09:08:25.774884154 +0000 UTC m=+1.038248455 container remove 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:08:25 compute-0 systemd[1]: libpod-conmon-472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb.scope: Deactivated successfully.
Oct 02 09:08:25 compute-0 sudo[415618]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:25 compute-0 sudo[415758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:25 compute-0 sudo[415758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:25 compute-0 sudo[415758]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:25 compute-0 sudo[415783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:08:25 compute-0 sudo[415783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:25 compute-0 sudo[415783]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:25 compute-0 sudo[415808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:25 compute-0 sudo[415808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:25 compute-0 sudo[415808]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 74 op/s
Oct 02 09:08:26 compute-0 sudo[415833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:08:26 compute-0 sudo[415833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:26 compute-0 nova_compute[260603]: 2025-10-02 09:08:26.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:26 compute-0 nova_compute[260603]: 2025-10-02 09:08:26.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:26 compute-0 podman[415899]: 2025-10-02 09:08:26.398684469 +0000 UTC m=+0.048585021 container create cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:08:26 compute-0 systemd[1]: Started libpod-conmon-cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3.scope.
Oct 02 09:08:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:08:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 172K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 16K syncs, 2.72 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2934 writes, 11K keys, 2934 commit groups, 1.0 writes per commit group, ingest: 14.99 MB, 0.02 MB/s
                                           Interval WAL: 2934 writes, 1145 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:08:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:08:26 compute-0 podman[415899]: 2025-10-02 09:08:26.377023565 +0000 UTC m=+0.026924147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:08:26 compute-0 podman[415899]: 2025-10-02 09:08:26.480228953 +0000 UTC m=+0.130129535 container init cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:08:26 compute-0 podman[415899]: 2025-10-02 09:08:26.487379285 +0000 UTC m=+0.137279817 container start cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:08:26 compute-0 podman[415899]: 2025-10-02 09:08:26.490957126 +0000 UTC m=+0.140857678 container attach cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 09:08:26 compute-0 lucid_haibt[415915]: 167 167
Oct 02 09:08:26 compute-0 systemd[1]: libpod-cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3.scope: Deactivated successfully.
Oct 02 09:08:26 compute-0 podman[415899]: 2025-10-02 09:08:26.493702221 +0000 UTC m=+0.143602753 container died cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 09:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-68c479875b13517180f87a5d6e6aa19fc157f812ddc4369c7d40f1b57932911c-merged.mount: Deactivated successfully.
Oct 02 09:08:26 compute-0 podman[415899]: 2025-10-02 09:08:26.54254958 +0000 UTC m=+0.192450122 container remove cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:08:26 compute-0 systemd[1]: libpod-conmon-cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3.scope: Deactivated successfully.
Oct 02 09:08:26 compute-0 podman[415941]: 2025-10-02 09:08:26.767914063 +0000 UTC m=+0.049474238 container create 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:08:26 compute-0 systemd[1]: Started libpod-conmon-536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69.scope.
Oct 02 09:08:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:26 compute-0 podman[415941]: 2025-10-02 09:08:26.752879336 +0000 UTC m=+0.034439531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:08:26 compute-0 podman[415941]: 2025-10-02 09:08:26.855370031 +0000 UTC m=+0.136930226 container init 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:08:26 compute-0 podman[415941]: 2025-10-02 09:08:26.861392678 +0000 UTC m=+0.142952853 container start 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 09:08:26 compute-0 podman[415941]: 2025-10-02 09:08:26.864567106 +0000 UTC m=+0.146127301 container attach 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:08:27 compute-0 ceph-mon[74477]: pgmap v2709: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 74 op/s
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]: {
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "osd_id": 2,
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "type": "bluestore"
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:     },
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "osd_id": 1,
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "type": "bluestore"
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:     },
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "osd_id": 0,
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:         "type": "bluestore"
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]:     }
Oct 02 09:08:27 compute-0 admiring_chandrasekhar[415958]: }
Oct 02 09:08:27 compute-0 systemd[1]: libpod-536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69.scope: Deactivated successfully.
Oct 02 09:08:27 compute-0 systemd[1]: libpod-536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69.scope: Consumed 1.065s CPU time.
Oct 02 09:08:27 compute-0 podman[415941]: 2025-10-02 09:08:27.923039499 +0000 UTC m=+1.204599694 container died 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a-merged.mount: Deactivated successfully.
Oct 02 09:08:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:08:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:08:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:08:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:08:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:08:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:08:27 compute-0 podman[415941]: 2025-10-02 09:08:27.978182213 +0000 UTC m=+1.259742388 container remove 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 02 09:08:27 compute-0 systemd[1]: libpod-conmon-536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69.scope: Deactivated successfully.
Oct 02 09:08:28 compute-0 sudo[415833]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:08:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:08:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 09:08:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e0ac35d5-47b1-40b9-bdce-61bd31a6cbbc does not exist
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b7868b03-5672-4794-b38d-d5f645187536 does not exist
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:08:28
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images', 'volumes']
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:08:28 compute-0 sudo[416003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:08:28 compute-0 sudo[416003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:28 compute-0 sudo[416003]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:28 compute-0 sudo[416028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:08:28 compute-0 sudo[416028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:08:28 compute-0 sudo[416028]: pam_unix(sudo:session): session closed for user root
Oct 02 09:08:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:08:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:08:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:08:29 compute-0 ceph-mon[74477]: pgmap v2710: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 02 09:08:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:08:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 02 09:08:31 compute-0 ceph-mon[74477]: pgmap v2711: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 02 09:08:31 compute-0 nova_compute[260603]: 2025-10-02 09:08:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:31 compute-0 nova_compute[260603]: 2025-10-02 09:08:31.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:08:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 34K writes, 133K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 34K writes, 12K syncs, 2.71 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2358 writes, 9236 keys, 2358 commit groups, 1.0 writes per commit group, ingest: 9.65 MB, 0.02 MB/s
                                           Interval WAL: 2358 writes, 983 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:08:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:08:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 09:08:33 compute-0 ceph-mon[74477]: pgmap v2712: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:08:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:08:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:34.846 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:34.846 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:34.847 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:35 compute-0 ceph-mon[74477]: pgmap v2713: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:08:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 02 09:08:36 compute-0 nova_compute[260603]: 2025-10-02 09:08:36.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:36 compute-0 nova_compute[260603]: 2025-10-02 09:08:36.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:37 compute-0 ceph-mon[74477]: pgmap v2714: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 02 09:08:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:08:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:38 compute-0 nova_compute[260603]: 2025-10-02 09:08:38.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:38 compute-0 nova_compute[260603]: 2025-10-02 09:08:38.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:08:38 compute-0 nova_compute[260603]: 2025-10-02 09:08:38.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:38.881 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:08:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:38.882 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:08:39 compute-0 ceph-mon[74477]: pgmap v2715: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:08:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.136 2 DEBUG nova.compute.manager [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.137 2 DEBUG nova.compute.manager [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing instance network info cache due to event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.138 2 DEBUG oslo_concurrency.lockutils [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.138 2 DEBUG oslo_concurrency.lockutils [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.138 2 DEBUG nova.network.neutron [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.250 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.251 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.251 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.251 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.251 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.252 2 INFO nova.compute.manager [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Terminating instance
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.253 2 DEBUG nova.compute.manager [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:08:39 compute-0 kernel: tapda8c962b-f6 (unregistering): left promiscuous mode
Oct 02 09:08:39 compute-0 NetworkManager[45129]: <info>  [1759396119.3068] device (tapda8c962b-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:08:39 compute-0 ovn_controller[152344]: 2025-10-02T09:08:39Z|01523|binding|INFO|Releasing lport da8c962b-f6a3-4056-a774-9f03b36f62d5 from this chassis (sb_readonly=0)
Oct 02 09:08:39 compute-0 ovn_controller[152344]: 2025-10-02T09:08:39Z|01524|binding|INFO|Setting lport da8c962b-f6a3-4056-a774-9f03b36f62d5 down in Southbound
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 ovn_controller[152344]: 2025-10-02T09:08:39Z|01525|binding|INFO|Removing iface tapda8c962b-f6 ovn-installed in OVS
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.327 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:04:b3 10.100.0.9'], port_security=['fa:16:3e:2d:04:b3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5a11349c-a726-40c7-83f0-95f708b3f5d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8531ee7b-e9fa-4aeb-a901-a54a8597544d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=da8c962b-f6a3-4056-a774-9f03b36f62d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.329 162357 INFO neutron.agent.ovn.metadata.agent [-] Port da8c962b-f6a3-4056-a774-9f03b36f62d5 in datapath 0cd32ebe-6aa8-4400-8c00-a3546d677f2c unbound from our chassis
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.330 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cd32ebe-6aa8-4400-8c00-a3546d677f2c
Oct 02 09:08:39 compute-0 kernel: tap95e5f1fa-72 (unregistering): left promiscuous mode
Oct 02 09:08:39 compute-0 NetworkManager[45129]: <info>  [1759396119.3486] device (tap95e5f1fa-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:08:39 compute-0 ovn_controller[152344]: 2025-10-02T09:08:39Z|01526|binding|INFO|Releasing lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f from this chassis (sb_readonly=0)
Oct 02 09:08:39 compute-0 ovn_controller[152344]: 2025-10-02T09:08:39Z|01527|binding|INFO|Setting lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f down in Southbound
Oct 02 09:08:39 compute-0 ovn_controller[152344]: 2025-10-02T09:08:39Z|01528|binding|INFO|Removing iface tap95e5f1fa-72 ovn-installed in OVS
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.362 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14f592f9-aa6b-4c13-ac7c-36472d701d1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.382 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19'], port_security=['fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2e:fd19/64', 'neutron:device_id': '5a11349c-a726-40c7-83f0-95f708b3f5d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d94def-f0a5-4beb-85d6-bc3ad333488d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=95e5f1fa-72f5-4111-a730-1d1fb3203c6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.404 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[391662c0-6893-48d2-8092-7f669b99192b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.409 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd2dc6f-50bb-4af9-a1fa-03e22233c79f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Consumed 12.855s CPU time.
Oct 02 09:08:39 compute-0 systemd-machined[214636]: Machine qemu-174-instance-0000008c terminated.
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.442 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1363a344-bced-4358-a0c7-1ba82612b1d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3678575-8765-4a6f-81de-fabf3fcbbbae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cd32ebe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:64:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693319, 'reachable_time': 33650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416071, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 NetworkManager[45129]: <info>  [1759396119.4888] manager: (tap95e5f1fa-72): new Tun device (/org/freedesktop/NetworkManager/Devices/618)
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.490 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86b8ed8c-84f1-4b3a-83ac-4e62b0e1cf66]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0cd32ebe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693334, 'tstamp': 693334}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416074, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0cd32ebe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693338, 'tstamp': 693338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416074, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.492 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cd32ebe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.501 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cd32ebe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.501 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.501 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cd32ebe-60, col_values=(('external_ids', {'iface-id': '251c2ade-6e56-457d-a6d2-c79238c2f10d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.502 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.503 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 95e5f1fa-72f5-4111-a730-1d1fb3203c6f in datapath 3b4c5c4f-7410-4ce4-9e83-46e3156b929c unbound from our chassis
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.505 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b4c5c4f-7410-4ce4-9e83-46e3156b929c
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.504 2 INFO nova.virt.libvirt.driver [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance destroyed successfully.
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.505 2 DEBUG nova.objects.instance [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 5a11349c-a726-40c7-83f0-95f708b3f5d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.528 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df812588-4f94-49fb-9aac-b19324a4fb94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.529 2 DEBUG nova.virt.libvirt.vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:08:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:08:13Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.530 2 DEBUG nova.network.os_vif_util [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.531 2 DEBUG nova.network.os_vif_util [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.531 2 DEBUG os_vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda8c962b-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.559 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[929aedfa-76c8-4e5b-b7e7-ee7d5f3d0972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.562 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[82c6fe64-c056-4b00-9cdf-ded421abdeef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.588 2 INFO os_vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6')
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.589 2 DEBUG nova.virt.libvirt.vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:08:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:08:13Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.590 2 DEBUG nova.network.os_vif_util [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.591 2 DEBUG nova.network.os_vif_util [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.591 2 DEBUG os_vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95e5f1fa-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.596 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6baec88f-d2d5-4b43-9b6f-f639f5c531ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.599 2 INFO os_vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72')
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.618 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[730ddf14-1a2d-495b-a923-29162d3b57fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b4c5c4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:e1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2772, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2772, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693422, 'reachable_time': 38076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416104, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d40caf7c-b967-44a5-b381-1f2f2d96caaa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3b4c5c4f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693436, 'tstamp': 693436}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416120, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.643 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4c5c4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 nova_compute[260603]: 2025-10-02 09:08:39.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.648 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b4c5c4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.648 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.649 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b4c5c4f-70, col_values=(('external_ids', {'iface-id': 'f16ced3a-20be-47b1-aeb4-8904dff1366f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.649 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:08:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.053 2 INFO nova.virt.libvirt.driver [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Deleting instance files /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2_del
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.054 2 INFO nova.virt.libvirt.driver [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Deletion of /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2_del complete
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.086 2 DEBUG nova.compute.manager [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-unplugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.087 2 DEBUG oslo_concurrency.lockutils [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.088 2 DEBUG oslo_concurrency.lockutils [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.088 2 DEBUG oslo_concurrency.lockutils [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.089 2 DEBUG nova.compute.manager [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-unplugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.090 2 DEBUG nova.compute.manager [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-unplugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.182 2 INFO nova.compute.manager [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.184 2 DEBUG oslo.service.loopingcall [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.185 2 DEBUG nova.compute.manager [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:08:40 compute-0 nova_compute[260603]: 2025-10-02 09:08:40.185 2 DEBUG nova.network.neutron [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:08:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:40.884 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:41 compute-0 ceph-mon[74477]: pgmap v2716: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.244 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-unplugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.244 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-unplugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-unplugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.246 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.246 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.246 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.246 2 WARNING nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received unexpected event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 for instance with vm_state active and task_state deleting.
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.878 2 DEBUG nova.network.neutron [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updated VIF entry in instance network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.879 2 DEBUG nova.network.neutron [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:41 compute-0 nova_compute[260603]: 2025-10-02 09:08:41.922 2 DEBUG oslo_concurrency.lockutils [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:08:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.190 2 DEBUG nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.190 2 DEBUG oslo_concurrency.lockutils [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.191 2 DEBUG oslo_concurrency.lockutils [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.191 2 DEBUG oslo_concurrency.lockutils [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.191 2 DEBUG nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.191 2 WARNING nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received unexpected event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f for instance with vm_state active and task_state deleting.
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.192 2 DEBUG nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-deleted-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.192 2 INFO nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Neutron deleted interface da8c962b-f6a3-4056-a774-9f03b36f62d5; detaching it from the instance and deleting it from the info cache
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.192 2 DEBUG nova.network.neutron [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.218 2 DEBUG nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Detach interface failed, port_id=da8c962b-f6a3-4056-a774-9f03b36f62d5, reason: Instance 5a11349c-a726-40c7-83f0-95f708b3f5d2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.377 2 DEBUG nova.network.neutron [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.410 2 INFO nova.compute.manager [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Took 2.22 seconds to deallocate network for instance.
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.471 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.472 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:42 compute-0 nova_compute[260603]: 2025-10-02 09:08:42.537 2 DEBUG oslo_concurrency.processutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:08:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534899484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:43 compute-0 nova_compute[260603]: 2025-10-02 09:08:43.017 2 DEBUG oslo_concurrency.processutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:43 compute-0 nova_compute[260603]: 2025-10-02 09:08:43.025 2 DEBUG nova.compute.provider_tree [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:08:43 compute-0 nova_compute[260603]: 2025-10-02 09:08:43.062 2 DEBUG nova.scheduler.client.report [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:08:43 compute-0 nova_compute[260603]: 2025-10-02 09:08:43.096 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:43 compute-0 nova_compute[260603]: 2025-10-02 09:08:43.148 2 INFO nova.scheduler.client.report [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 5a11349c-a726-40c7-83f0-95f708b3f5d2
Oct 02 09:08:43 compute-0 ceph-mon[74477]: pgmap v2717: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 02 09:08:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1534899484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:43 compute-0 nova_compute[260603]: 2025-10-02 09:08:43.246 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 19 KiB/s wr, 30 op/s
Oct 02 09:08:44 compute-0 nova_compute[260603]: 2025-10-02 09:08:44.308 2 DEBUG nova.compute.manager [req-fc467e5d-2eb6-4a3a-b27c-adf93d31b9db req-a170d89c-eb7d-477b-9987-8036cdbed94d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-deleted-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:44 compute-0 nova_compute[260603]: 2025-10-02 09:08:44.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:45 compute-0 ceph-mon[74477]: pgmap v2718: 305 pgs: 305 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 19 KiB/s wr, 30 op/s
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.902 2 DEBUG nova.compute.manager [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.902 2 DEBUG nova.compute.manager [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing instance network info cache due to event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.903 2 DEBUG oslo_concurrency.lockutils [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.903 2 DEBUG oslo_concurrency.lockutils [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.903 2 DEBUG nova.network.neutron [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.947 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.948 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.948 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.949 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.949 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.951 2 INFO nova.compute.manager [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Terminating instance
Oct 02 09:08:45 compute-0 nova_compute[260603]: 2025-10-02 09:08:45.952 2 DEBUG nova.compute.manager [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:08:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 7.0 KiB/s wr, 30 op/s
Oct 02 09:08:46 compute-0 kernel: tap1bf113e0-56 (unregistering): left promiscuous mode
Oct 02 09:08:46 compute-0 podman[416148]: 2025-10-02 09:08:46.067886451 +0000 UTC m=+0.119373350 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:08:46 compute-0 NetworkManager[45129]: <info>  [1759396126.0691] device (tap1bf113e0-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:08:46 compute-0 ovn_controller[152344]: 2025-10-02T09:08:46Z|01529|binding|INFO|Releasing lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 from this chassis (sb_readonly=0)
Oct 02 09:08:46 compute-0 ovn_controller[152344]: 2025-10-02T09:08:46Z|01530|binding|INFO|Setting lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 down in Southbound
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 ovn_controller[152344]: 2025-10-02T09:08:46Z|01531|binding|INFO|Removing iface tap1bf113e0-56 ovn-installed in OVS
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.088 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:e1:b8 10.100.0.7'], port_security=['fa:16:3e:fc:e1:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '086604c0-28d5-41d4-995c-17db322b3ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8531ee7b-e9fa-4aeb-a901-a54a8597544d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=1bf113e0-562b-45fb-9b97-aa76d5dac283) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.090 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 1bf113e0-562b-45fb-9b97-aa76d5dac283 in datapath 0cd32ebe-6aa8-4400-8c00-a3546d677f2c unbound from our chassis
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.091 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cd32ebe-6aa8-4400-8c00-a3546d677f2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.093 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0651d79a-5c3e-41f6-ba5d-7d71f70e7d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.094 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c namespace which is not needed anymore
Oct 02 09:08:46 compute-0 kernel: tap97679643-cd (unregistering): left promiscuous mode
Oct 02 09:08:46 compute-0 NetworkManager[45129]: <info>  [1759396126.1107] device (tap97679643-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 ovn_controller[152344]: 2025-10-02T09:08:46Z|01532|binding|INFO|Releasing lport 97679643-cd71-4857-a615-c21d643d15c2 from this chassis (sb_readonly=0)
Oct 02 09:08:46 compute-0 ovn_controller[152344]: 2025-10-02T09:08:46Z|01533|binding|INFO|Setting lport 97679643-cd71-4857-a615-c21d643d15c2 down in Southbound
Oct 02 09:08:46 compute-0 ovn_controller[152344]: 2025-10-02T09:08:46Z|01534|binding|INFO|Removing iface tap97679643-cd ovn-installed in OVS
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.130 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136'], port_security=['fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:b136/64', 'neutron:device_id': '086604c0-28d5-41d4-995c-17db322b3ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d94def-f0a5-4beb-85d6-bc3ad333488d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=97679643-cd71-4857-a615-c21d643d15c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:08:46 compute-0 podman[416147]: 2025-10-02 09:08:46.13700966 +0000 UTC m=+0.187032483 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Consumed 15.089s CPU time.
Oct 02 09:08:46 compute-0 systemd-machined[214636]: Machine qemu-173-instance-0000008b terminated.
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [NOTICE]   (414465) : haproxy version is 2.8.14-c23fe91
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [NOTICE]   (414465) : path to executable is /usr/sbin/haproxy
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [WARNING]  (414465) : Exiting Master process...
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [WARNING]  (414465) : Exiting Master process...
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [ALERT]    (414465) : Current worker (414467) exited with code 143 (Terminated)
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [WARNING]  (414465) : All workers exited. Exiting... (0)
Oct 02 09:08:46 compute-0 systemd[1]: libpod-fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df.scope: Deactivated successfully.
Oct 02 09:08:46 compute-0 podman[416222]: 2025-10-02 09:08:46.27314906 +0000 UTC m=+0.061685997 container died fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 09:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df-userdata-shm.mount: Deactivated successfully.
Oct 02 09:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-df163648e30c438ee7956b6dd528661349b15b516a55e59776780490c120fbcc-merged.mount: Deactivated successfully.
Oct 02 09:08:46 compute-0 podman[416222]: 2025-10-02 09:08:46.331169354 +0000 UTC m=+0.119706261 container cleanup fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:08:46 compute-0 systemd[1]: libpod-conmon-fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df.scope: Deactivated successfully.
Oct 02 09:08:46 compute-0 podman[416250]: 2025-10-02 09:08:46.404214473 +0000 UTC m=+0.048900200 container remove fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.416 2 INFO nova.virt.libvirt.driver [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance destroyed successfully.
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.417 2 DEBUG nova.objects.instance [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 086604c0-28d5-41d4-995c-17db322b3ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.417 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[630b7cbd-8e03-4944-81d4-6919b9462638]: (4, ('Thu Oct  2 09:08:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c (fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df)\nfc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df\nThu Oct  2 09:08:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c (fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df)\nfc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.420 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09b1e2d3-f526-41a7-96d0-f0fb66213b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.421 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cd32ebe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 kernel: tap0cd32ebe-60: left promiscuous mode
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.431 2 DEBUG nova.virt.libvirt.vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:07:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:07:35Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.432 2 DEBUG nova.network.os_vif_util [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.433 2 DEBUG nova.network.os_vif_util [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.433 2 DEBUG os_vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bf113e0-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.451 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09d66405-bc55-4d7f-a77c-c102b0e9be6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.457 2 INFO os_vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56')
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.459 2 DEBUG nova.virt.libvirt.vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:07:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:07:35Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.459 2 DEBUG nova.network.os_vif_util [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.460 2 DEBUG nova.network.os_vif_util [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.461 2 DEBUG os_vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97679643-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:08:46 compute-0 nova_compute[260603]: 2025-10-02 09:08:46.471 2 INFO os_vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd')
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.483 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4c3184-2018-4589-8f14-00dd47a02329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.485 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aafd63bd-bbb9-4d54-a3ce-fb39a79c4db5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.504 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8af5c7f0-3bf1-4e7a-b17d-ddc2c8c7b57a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693311, 'reachable_time': 40542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416298, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d0cd32ebe\x2d6aa8\x2d4400\x2d8c00\x2da3546d677f2c.mount: Deactivated successfully.
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.512 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.512 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e6ba2b-6a47-42ac-89a8-e0e6cd0016d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.513 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 97679643-cd71-4857-a615-c21d643d15c2 in datapath 3b4c5c4f-7410-4ce4-9e83-46e3156b929c unbound from our chassis
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.514 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b4c5c4f-7410-4ce4-9e83-46e3156b929c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.515 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93819e33-a064-4f38-9a61-e6dc92b1d1e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:46 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.515 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c namespace which is not needed anymore
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [NOTICE]   (414538) : haproxy version is 2.8.14-c23fe91
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [NOTICE]   (414538) : path to executable is /usr/sbin/haproxy
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [WARNING]  (414538) : Exiting Master process...
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [WARNING]  (414538) : Exiting Master process...
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [ALERT]    (414538) : Current worker (414540) exited with code 143 (Terminated)
Oct 02 09:08:46 compute-0 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [WARNING]  (414538) : All workers exited. Exiting... (0)
Oct 02 09:08:46 compute-0 systemd[1]: libpod-e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea.scope: Deactivated successfully.
Oct 02 09:08:46 compute-0 podman[416326]: 2025-10-02 09:08:46.720666438 +0000 UTC m=+0.105105468 container died e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea-userdata-shm.mount: Deactivated successfully.
Oct 02 09:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-da359b613ffa5d68b88a20f5d2354085bb5af6508d378872b99a7f638a446a90-merged.mount: Deactivated successfully.
Oct 02 09:08:46 compute-0 podman[416326]: 2025-10-02 09:08:46.92313654 +0000 UTC m=+0.307575540 container cleanup e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 09:08:46 compute-0 systemd[1]: libpod-conmon-e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea.scope: Deactivated successfully.
Oct 02 09:08:47 compute-0 podman[416358]: 2025-10-02 09:08:47.109387827 +0000 UTC m=+0.149975491 container remove e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.118 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3048d5d2-ae45-433d-99fc-283ac4ea9d76]: (4, ('Thu Oct  2 09:08:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c (e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea)\ne7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea\nThu Oct  2 09:08:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c (e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea)\ne7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.122 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8488c3d6-9169-4533-8132-b42b15557714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.123 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4c5c4f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:47 compute-0 kernel: tap3b4c5c4f-70: left promiscuous mode
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.134 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18ccf2fb-9ff3-45c4-be84-242ea90b7e74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.155 2 DEBUG nova.compute.manager [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-unplugged-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.156 2 DEBUG oslo_concurrency.lockutils [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.156 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7f099f-6e1f-40ae-804d-6383adb34b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.157 2 DEBUG oslo_concurrency.lockutils [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.157 2 DEBUG oslo_concurrency.lockutils [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.157 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3915374-d113-4d6e-a623-0f0deced2d51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.158 2 DEBUG nova.compute.manager [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-unplugged-97679643-cd71-4857-a615-c21d643d15c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:47 compute-0 nova_compute[260603]: 2025-10-02 09:08:47.158 2 DEBUG nova.compute.manager [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-unplugged-97679643-cd71-4857-a615-c21d643d15c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[57af7869-adbd-4766-920b-c44062cbc2fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693409, 'reachable_time': 25430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416373, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.187 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:08:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.187 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0583ae78-eac9-41c7-ad2a-0faced1e0a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:08:47 compute-0 ceph-mon[74477]: pgmap v2719: 305 pgs: 305 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 7.0 KiB/s wr, 30 op/s
Oct 02 09:08:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d3b4c5c4f\x2d7410\x2d4ce4\x2d9e83\x2d46e3156b929c.mount: Deactivated successfully.
Oct 02 09:08:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 7.4 KiB/s wr, 44 op/s
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.133 2 DEBUG nova.compute.manager [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-unplugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.134 2 DEBUG oslo_concurrency.lockutils [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.134 2 DEBUG oslo_concurrency.lockutils [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.135 2 DEBUG oslo_concurrency.lockutils [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.135 2 DEBUG nova.compute.manager [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-unplugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.135 2 DEBUG nova.compute.manager [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-unplugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:08:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.627 2 INFO nova.virt.libvirt.driver [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Deleting instance files /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded_del
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.627 2 INFO nova.virt.libvirt.driver [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Deletion of /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded_del complete
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.683 2 INFO nova.compute.manager [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Took 2.73 seconds to destroy the instance on the hypervisor.
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.684 2 DEBUG oslo.service.loopingcall [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.684 2 DEBUG nova.compute.manager [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:08:48 compute-0 nova_compute[260603]: 2025-10-02 09:08:48.684 2 DEBUG nova.network.neutron [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.150 2 DEBUG nova.network.neutron [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updated VIF entry in instance network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.151 2 DEBUG nova.network.neutron [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.175 2 DEBUG oslo_concurrency.lockutils [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.249 2 DEBUG nova.compute.manager [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.250 2 DEBUG oslo_concurrency.lockutils [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.250 2 DEBUG oslo_concurrency.lockutils [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.251 2 DEBUG oslo_concurrency.lockutils [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.251 2 DEBUG nova.compute.manager [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:49 compute-0 nova_compute[260603]: 2025-10-02 09:08:49.251 2 WARNING nova.compute.manager [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received unexpected event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 for instance with vm_state active and task_state deleting.
Oct 02 09:08:49 compute-0 ceph-mon[74477]: pgmap v2720: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 7.4 KiB/s wr, 44 op/s
Oct 02 09:08:50 compute-0 podman[416376]: 2025-10-02 09:08:50.009909873 +0000 UTC m=+0.062604046 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 09:08:50 compute-0 podman[416375]: 2025-10-02 09:08:50.015175007 +0000 UTC m=+0.071578265 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 09:08:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.0 KiB/s wr, 42 op/s
Oct 02 09:08:50 compute-0 nova_compute[260603]: 2025-10-02 09:08:50.220 2 DEBUG nova.compute.manager [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:50 compute-0 nova_compute[260603]: 2025-10-02 09:08:50.220 2 DEBUG oslo_concurrency.lockutils [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:50 compute-0 nova_compute[260603]: 2025-10-02 09:08:50.221 2 DEBUG oslo_concurrency.lockutils [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:50 compute-0 nova_compute[260603]: 2025-10-02 09:08:50.221 2 DEBUG oslo_concurrency.lockutils [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:50 compute-0 nova_compute[260603]: 2025-10-02 09:08:50.221 2 DEBUG nova.compute.manager [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:08:50 compute-0 nova_compute[260603]: 2025-10-02 09:08:50.221 2 WARNING nova.compute.manager [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received unexpected event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 for instance with vm_state active and task_state deleting.
Oct 02 09:08:50 compute-0 ceph-mon[74477]: pgmap v2721: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.0 KiB/s wr, 42 op/s
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.111 2 DEBUG nova.network.neutron [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.136 2 INFO nova.compute.manager [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Took 2.45 seconds to deallocate network for instance.
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.194 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.194 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.258 2 DEBUG oslo_concurrency.processutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.324 2 DEBUG nova.compute.manager [req-57b43742-2723-4f75-a579-4f566b5218db req-fba99988-2cba-4a60-8a08-e026ac7a66ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-deleted-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.325 2 DEBUG nova.compute.manager [req-57b43742-2723-4f75-a579-4f566b5218db req-fba99988-2cba-4a60-8a08-e026ac7a66ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-deleted-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:08:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1841118579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.715 2 DEBUG oslo_concurrency.processutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.722 2 DEBUG nova.compute.provider_tree [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.737 2 DEBUG nova.scheduler.client.report [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:08:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1841118579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.755 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.782 2 INFO nova.scheduler.client.report [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 086604c0-28d5-41d4-995c-17db322b3ded
Oct 02 09:08:51 compute-0 nova_compute[260603]: 2025-10-02 09:08:51.837 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:08:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.0 KiB/s wr, 42 op/s
Oct 02 09:08:52 compute-0 ceph-mon[74477]: pgmap v2722: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.0 KiB/s wr, 42 op/s
Oct 02 09:08:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Oct 02 09:08:54 compute-0 nova_compute[260603]: 2025-10-02 09:08:54.504 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396119.502605, 5a11349c-a726-40c7-83f0-95f708b3f5d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:08:54 compute-0 nova_compute[260603]: 2025-10-02 09:08:54.504 2 INFO nova.compute.manager [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] VM Stopped (Lifecycle Event)
Oct 02 09:08:54 compute-0 nova_compute[260603]: 2025-10-02 09:08:54.521 2 DEBUG nova.compute.manager [None req-65b687f8-104d-4010-8be3-2a1313b5ac2b - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:08:55 compute-0 ceph-mon[74477]: pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Oct 02 09:08:55 compute-0 nova_compute[260603]: 2025-10-02 09:08:55.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:08:56 compute-0 nova_compute[260603]: 2025-10-02 09:08:56.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:56 compute-0 nova_compute[260603]: 2025-10-02 09:08:56.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:56 compute-0 nova_compute[260603]: 2025-10-02 09:08:56.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:56 compute-0 nova_compute[260603]: 2025-10-02 09:08:56.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:08:57 compute-0 ceph-mon[74477]: pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:08:57 compute-0 nova_compute[260603]: 2025-10-02 09:08:57.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:08:57 compute-0 nova_compute[260603]: 2025-10-02 09:08:57.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:08:57 compute-0 nova_compute[260603]: 2025-10-02 09:08:57.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:08:57 compute-0 nova_compute[260603]: 2025-10-02 09:08:57.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:08:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:08:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:08:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:08:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:08:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:08:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:08:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:08:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:08:59 compute-0 ceph-mon[74477]: pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:09:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:09:00 compute-0 nova_compute[260603]: 2025-10-02 09:09:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:01 compute-0 nova_compute[260603]: 2025-10-02 09:09:01.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:01 compute-0 ceph-mon[74477]: pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:09:01 compute-0 nova_compute[260603]: 2025-10-02 09:09:01.415 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396126.413921, 086604c0-28d5-41d4-995c-17db322b3ded => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:09:01 compute-0 nova_compute[260603]: 2025-10-02 09:09:01.415 2 INFO nova.compute.manager [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] VM Stopped (Lifecycle Event)
Oct 02 09:09:01 compute-0 nova_compute[260603]: 2025-10-02 09:09:01.434 2 DEBUG nova.compute.manager [None req-74f1beda-7d5c-413a-95ae-41c1c78bc243 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:09:01 compute-0 nova_compute[260603]: 2025-10-02 09:09:01.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:01 compute-0 nova_compute[260603]: 2025-10-02 09:09:01.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:09:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:03 compute-0 ceph-mon[74477]: pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:09:03 compute-0 nova_compute[260603]: 2025-10-02 09:09:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:03 compute-0 nova_compute[260603]: 2025-10-02 09:09:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:03 compute-0 nova_compute[260603]: 2025-10-02 09:09:03.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:03 compute-0 nova_compute[260603]: 2025-10-02 09:09:03.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:03 compute-0 nova_compute[260603]: 2025-10-02 09:09:03.555 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:03 compute-0 nova_compute[260603]: 2025-10-02 09:09:03.555 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:09:03 compute-0 nova_compute[260603]: 2025-10-02 09:09:03.556 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:09:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447626246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.032 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.183 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.185 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3625MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.185 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.186 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.357 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.357 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:09:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3447626246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.499 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:09:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/566579265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.974 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:04 compute-0 nova_compute[260603]: 2025-10-02 09:09:04.980 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:09:05 compute-0 nova_compute[260603]: 2025-10-02 09:09:05.003 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:09:05 compute-0 nova_compute[260603]: 2025-10-02 09:09:05.063 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:09:05 compute-0 nova_compute[260603]: 2025-10-02 09:09:05.064 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:05 compute-0 ceph-mon[74477]: pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:09:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/566579265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:09:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:06 compute-0 nova_compute[260603]: 2025-10-02 09:09:06.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:06 compute-0 nova_compute[260603]: 2025-10-02 09:09:06.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:07 compute-0 ceph-mon[74477]: pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:09 compute-0 nova_compute[260603]: 2025-10-02 09:09:09.060 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:09 compute-0 ceph-mon[74477]: pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:11 compute-0 nova_compute[260603]: 2025-10-02 09:09:11.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:11 compute-0 nova_compute[260603]: 2025-10-02 09:09:11.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:11 compute-0 ceph-mon[74477]: pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:11 compute-0 nova_compute[260603]: 2025-10-02 09:09:11.513 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:11 compute-0 nova_compute[260603]: 2025-10-02 09:09:11.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:11 compute-0 nova_compute[260603]: 2025-10-02 09:09:11.545 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:09:11 compute-0 nova_compute[260603]: 2025-10-02 09:09:11.574 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:09:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:12 compute-0 ceph-mon[74477]: pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:13 compute-0 nova_compute[260603]: 2025-10-02 09:09:13.549 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:14.920 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:38:02 2001:db8:0:1:f816:3eff:fe57:3802 2001:db8::f816:3eff:fe57:3802'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe57:3802/64 2001:db8::f816:3eff:fe57:3802/64', 'neutron:device_id': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=136a7ea2-2365-4779-b31d-41cbfc52a20f) old=Port_Binding(mac=['fa:16:3e:57:38:02 2001:db8::f816:3eff:fe57:3802'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe57:3802/64', 'neutron:device_id': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:09:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:14.921 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 136a7ea2-2365-4779-b31d-41cbfc52a20f in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 updated
Oct 02 09:09:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:14.922 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d9c157f-cf57-4b44-8fba-d16631e22418, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:09:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:14.923 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0abeef-c538-476c-93e2-642cea4e0625]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:15 compute-0 ceph-mon[74477]: pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:16 compute-0 nova_compute[260603]: 2025-10-02 09:09:16.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:16 compute-0 nova_compute[260603]: 2025-10-02 09:09:16.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:17 compute-0 podman[416483]: 2025-10-02 09:09:17.056670491 +0000 UTC m=+0.125240342 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 09:09:17 compute-0 podman[416484]: 2025-10-02 09:09:17.07947929 +0000 UTC m=+0.134585453 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 09:09:17 compute-0 ceph-mon[74477]: pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:19 compute-0 ceph-mon[74477]: pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:20 compute-0 podman[416529]: 2025-10-02 09:09:20.992424647 +0000 UTC m=+0.063174664 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 09:09:21 compute-0 podman[416530]: 2025-10-02 09:09:21.017411883 +0000 UTC m=+0.073544557 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 02 09:09:21 compute-0 nova_compute[260603]: 2025-10-02 09:09:21.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:21 compute-0 ceph-mon[74477]: pgmap v2736: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:21 compute-0 nova_compute[260603]: 2025-10-02 09:09:21.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:09:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/867271278' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:09:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:09:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/867271278' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:09:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/867271278' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:09:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/867271278' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:09:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:23 compute-0 ceph-mon[74477]: pgmap v2737: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:25 compute-0 ceph-mon[74477]: pgmap v2738: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:25 compute-0 nova_compute[260603]: 2025-10-02 09:09:25.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.183 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.183 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.200 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.291 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.292 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.299 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.300 2 INFO nova.compute.claims [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.405 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:26 compute-0 ceph-mon[74477]: pgmap v2739: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:09:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:09:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019343218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.835 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.841 2 DEBUG nova.compute.provider_tree [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.855 2 DEBUG nova.scheduler.client.report [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.876 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.877 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.929 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.930 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.953 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:09:26 compute-0 nova_compute[260603]: 2025-10-02 09:09:26.971 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.078 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.079 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.079 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Creating image(s)
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.099 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.120 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.141 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.145 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.205 2 DEBUG nova.policy [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.247 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.248 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.248 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.248 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.269 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:09:27 compute-0 nova_compute[260603]: 2025-10-02 09:09:27.273 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3019343218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:09:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:09:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:09:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:09:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:09:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:09:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:09:28
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:09:28 compute-0 sudo[416683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:28 compute-0 sudo[416683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:28 compute-0 sudo[416683]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.282 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Successfully created port: 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:09:28 compute-0 sudo[416708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:09:28 compute-0 sudo[416708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:28 compute-0 sudo[416708]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.376 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:28 compute-0 sudo[416733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:28 compute-0 sudo[416733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:28 compute-0 sudo[416733]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.447 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:09:28 compute-0 sudo[416774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:09:28 compute-0 sudo[416774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:09:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:09:28 compute-0 ceph-mon[74477]: pgmap v2740: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.726 2 DEBUG nova.objects.instance [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.781 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.782 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Ensure instance console log exists: /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.783 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.783 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:28 compute-0 nova_compute[260603]: 2025-10-02 09:09:28.783 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:29 compute-0 sudo[416774]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:09:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:09:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:09:29 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:09:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:09:29 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:09:29 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e0c0ff83-7f65-4578-aace-f004b70aa6ec does not exist
Oct 02 09:09:29 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c461ccbb-5e4e-4b9e-a329-52978d316540 does not exist
Oct 02 09:09:29 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev eeffa823-ac44-4c47-ba54-662e2b742b0f does not exist
Oct 02 09:09:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:09:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:09:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:09:29 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:09:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:09:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:09:29 compute-0 sudo[416887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:29 compute-0 sudo[416887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:29 compute-0 sudo[416887]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:29 compute-0 sudo[416912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:09:29 compute-0 sudo[416912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:29 compute-0 sudo[416912]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:29 compute-0 sudo[416937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:29 compute-0 sudo[416937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:29 compute-0 sudo[416937]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:29 compute-0 sudo[416962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:09:29 compute-0 sudo[416962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:29 compute-0 podman[417030]: 2025-10-02 09:09:29.642006609 +0000 UTC m=+0.023733769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:09:29 compute-0 podman[417030]: 2025-10-02 09:09:29.74628606 +0000 UTC m=+0.128013210 container create 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:09:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:09:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:09:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:09:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:09:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:09:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:09:29 compute-0 systemd[1]: Started libpod-conmon-2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3.scope.
Oct 02 09:09:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:09:29 compute-0 podman[417030]: 2025-10-02 09:09:29.905541199 +0000 UTC m=+0.287268359 container init 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:09:29 compute-0 podman[417030]: 2025-10-02 09:09:29.913103154 +0000 UTC m=+0.294830324 container start 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:09:29 compute-0 systemd[1]: libpod-2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3.scope: Deactivated successfully.
Oct 02 09:09:29 compute-0 friendly_pascal[417047]: 167 167
Oct 02 09:09:29 compute-0 conmon[417047]: conmon 2824eb781c989c5d7f25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3.scope/container/memory.events
Oct 02 09:09:29 compute-0 podman[417030]: 2025-10-02 09:09:29.924521479 +0000 UTC m=+0.306248649 container attach 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:09:29 compute-0 podman[417030]: 2025-10-02 09:09:29.924944992 +0000 UTC m=+0.306672132 container died 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 09:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb4bd7bc0ebef1897f5d77ef330d7d7e22982d5fc36106ea935d52d9112cfb8e-merged.mount: Deactivated successfully.
Oct 02 09:09:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct 02 09:09:30 compute-0 podman[417030]: 2025-10-02 09:09:30.235889135 +0000 UTC m=+0.617616285 container remove 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:09:30 compute-0 systemd[1]: libpod-conmon-2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3.scope: Deactivated successfully.
Oct 02 09:09:30 compute-0 podman[417070]: 2025-10-02 09:09:30.515954298 +0000 UTC m=+0.102291760 container create 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:09:30 compute-0 podman[417070]: 2025-10-02 09:09:30.452914999 +0000 UTC m=+0.039252481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:09:30 compute-0 systemd[1]: Started libpod-conmon-3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590.scope.
Oct 02 09:09:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:30 compute-0 podman[417070]: 2025-10-02 09:09:30.649641453 +0000 UTC m=+0.235978935 container init 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:09:30 compute-0 podman[417070]: 2025-10-02 09:09:30.656671481 +0000 UTC m=+0.243008943 container start 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:09:30 compute-0 podman[417070]: 2025-10-02 09:09:30.663641428 +0000 UTC m=+0.249978910 container attach 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:09:30 compute-0 ceph-mon[74477]: pgmap v2741: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct 02 09:09:30 compute-0 nova_compute[260603]: 2025-10-02 09:09:30.950 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Successfully created port: 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:09:31 compute-0 nova_compute[260603]: 2025-10-02 09:09:31.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:31 compute-0 nova_compute[260603]: 2025-10-02 09:09:31.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:31 compute-0 determined_shamir[417086]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:09:31 compute-0 determined_shamir[417086]: --> relative data size: 1.0
Oct 02 09:09:31 compute-0 determined_shamir[417086]: --> All data devices are unavailable
Oct 02 09:09:31 compute-0 systemd[1]: libpod-3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590.scope: Deactivated successfully.
Oct 02 09:09:31 compute-0 systemd[1]: libpod-3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590.scope: Consumed 1.189s CPU time.
Oct 02 09:09:31 compute-0 podman[417115]: 2025-10-02 09:09:31.998963074 +0000 UTC m=+0.035729921 container died 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 09:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7-merged.mount: Deactivated successfully.
Oct 02 09:09:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct 02 09:09:32 compute-0 podman[417115]: 2025-10-02 09:09:32.058890526 +0000 UTC m=+0.095657353 container remove 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:09:32 compute-0 systemd[1]: libpod-conmon-3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590.scope: Deactivated successfully.
Oct 02 09:09:32 compute-0 sudo[416962]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:32 compute-0 sudo[417130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:32 compute-0 sudo[417130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:32 compute-0 sudo[417130]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:32 compute-0 sudo[417155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:09:32 compute-0 sudo[417155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:32 compute-0 sudo[417155]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:32 compute-0 sudo[417180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:32 compute-0 sudo[417180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:32 compute-0 sudo[417180]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:32 compute-0 sudo[417205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:09:32 compute-0 sudo[417205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:32 compute-0 podman[417269]: 2025-10-02 09:09:32.812702511 +0000 UTC m=+0.048563330 container create 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:09:32 compute-0 systemd[1]: Started libpod-conmon-4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014.scope.
Oct 02 09:09:32 compute-0 podman[417269]: 2025-10-02 09:09:32.791991518 +0000 UTC m=+0.027852347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:09:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:09:32 compute-0 podman[417269]: 2025-10-02 09:09:32.914367201 +0000 UTC m=+0.150228110 container init 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:09:32 compute-0 podman[417269]: 2025-10-02 09:09:32.931954658 +0000 UTC m=+0.167815467 container start 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:09:32 compute-0 podman[417269]: 2025-10-02 09:09:32.935739594 +0000 UTC m=+0.171600423 container attach 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 09:09:32 compute-0 magical_haslett[417285]: 167 167
Oct 02 09:09:32 compute-0 systemd[1]: libpod-4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014.scope: Deactivated successfully.
Oct 02 09:09:32 compute-0 podman[417269]: 2025-10-02 09:09:32.940497812 +0000 UTC m=+0.176358621 container died 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cc60bf02188e57559b284b66a31300fa69a1eed8ecdecaf369f7d2ca28cb974-merged.mount: Deactivated successfully.
Oct 02 09:09:32 compute-0 podman[417269]: 2025-10-02 09:09:32.982839508 +0000 UTC m=+0.218700347 container remove 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:09:33 compute-0 systemd[1]: libpod-conmon-4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014.scope: Deactivated successfully.
Oct 02 09:09:33 compute-0 nova_compute[260603]: 2025-10-02 09:09:33.003 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Successfully updated port: 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:09:33 compute-0 nova_compute[260603]: 2025-10-02 09:09:33.093 2 DEBUG nova.compute.manager [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:09:33 compute-0 nova_compute[260603]: 2025-10-02 09:09:33.094 2 DEBUG nova.compute.manager [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing instance network info cache due to event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:09:33 compute-0 nova_compute[260603]: 2025-10-02 09:09:33.095 2 DEBUG oslo_concurrency.lockutils [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:09:33 compute-0 nova_compute[260603]: 2025-10-02 09:09:33.095 2 DEBUG oslo_concurrency.lockutils [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:09:33 compute-0 nova_compute[260603]: 2025-10-02 09:09:33.095 2 DEBUG nova.network.neutron [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:09:33 compute-0 ceph-mon[74477]: pgmap v2742: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct 02 09:09:33 compute-0 podman[417311]: 2025-10-02 09:09:33.171938085 +0000 UTC m=+0.053659639 container create d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 09:09:33 compute-0 systemd[1]: Started libpod-conmon-d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74.scope.
Oct 02 09:09:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:33 compute-0 podman[417311]: 2025-10-02 09:09:33.149679673 +0000 UTC m=+0.031401207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:09:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:33 compute-0 podman[417311]: 2025-10-02 09:09:33.289364224 +0000 UTC m=+0.171085748 container init d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:09:33 compute-0 podman[417311]: 2025-10-02 09:09:33.297667232 +0000 UTC m=+0.179388766 container start d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:09:33 compute-0 podman[417311]: 2025-10-02 09:09:33.301896993 +0000 UTC m=+0.183618517 container attach d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:09:33 compute-0 nova_compute[260603]: 2025-10-02 09:09:33.917 2 DEBUG nova.network.neutron [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:09:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]: {
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:     "0": [
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:         {
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "devices": [
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "/dev/loop3"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             ],
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_name": "ceph_lv0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_size": "21470642176",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "name": "ceph_lv0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "tags": {
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cluster_name": "ceph",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.crush_device_class": "",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.encrypted": "0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osd_id": "0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.type": "block",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.vdo": "0"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             },
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "type": "block",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "vg_name": "ceph_vg0"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:         }
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:     ],
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:     "1": [
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:         {
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "devices": [
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "/dev/loop4"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             ],
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_name": "ceph_lv1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_size": "21470642176",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "name": "ceph_lv1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "tags": {
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cluster_name": "ceph",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.crush_device_class": "",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.encrypted": "0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osd_id": "1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.type": "block",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.vdo": "0"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             },
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "type": "block",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "vg_name": "ceph_vg1"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:         }
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:     ],
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:     "2": [
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:         {
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "devices": [
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "/dev/loop5"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             ],
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_name": "ceph_lv2",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_size": "21470642176",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "name": "ceph_lv2",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "tags": {
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.cluster_name": "ceph",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.crush_device_class": "",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.encrypted": "0",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osd_id": "2",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.type": "block",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:                 "ceph.vdo": "0"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             },
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "type": "block",
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:             "vg_name": "ceph_vg2"
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:         }
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]:     ]
Oct 02 09:09:34 compute-0 affectionate_shirley[417328]: }
Oct 02 09:09:34 compute-0 systemd[1]: libpod-d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74.scope: Deactivated successfully.
Oct 02 09:09:34 compute-0 podman[417311]: 2025-10-02 09:09:34.130525914 +0000 UTC m=+1.012247448 container died d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523-merged.mount: Deactivated successfully.
Oct 02 09:09:34 compute-0 podman[417311]: 2025-10-02 09:09:34.183383657 +0000 UTC m=+1.065105161 container remove d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:09:34 compute-0 systemd[1]: libpod-conmon-d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74.scope: Deactivated successfully.
Oct 02 09:09:34 compute-0 sudo[417205]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:34 compute-0 sudo[417348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:34 compute-0 sudo[417348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:34 compute-0 sudo[417348]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:34 compute-0 sudo[417373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:09:34 compute-0 sudo[417373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:34 compute-0 sudo[417373]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:34 compute-0 sudo[417398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:34 compute-0 sudo[417398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:34 compute-0 sudo[417398]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:34 compute-0 sudo[417423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:09:34 compute-0 sudo[417423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:34 compute-0 nova_compute[260603]: 2025-10-02 09:09:34.582 2 DEBUG nova.network.neutron [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:09:34 compute-0 nova_compute[260603]: 2025-10-02 09:09:34.603 2 DEBUG oslo_concurrency.lockutils [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:09:34 compute-0 nova_compute[260603]: 2025-10-02 09:09:34.720 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:34 compute-0 nova_compute[260603]: 2025-10-02 09:09:34.720 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:09:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:34.847 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:34.847 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:34.848 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:34 compute-0 nova_compute[260603]: 2025-10-02 09:09:34.863 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Successfully updated port: 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:09:34 compute-0 nova_compute[260603]: 2025-10-02 09:09:34.882 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:09:34 compute-0 nova_compute[260603]: 2025-10-02 09:09:34.882 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:09:34 compute-0 nova_compute[260603]: 2025-10-02 09:09:34.882 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:09:34 compute-0 podman[417489]: 2025-10-02 09:09:34.905456556 +0000 UTC m=+0.047868539 container create aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:09:34 compute-0 systemd[1]: Started libpod-conmon-aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963.scope.
Oct 02 09:09:34 compute-0 podman[417489]: 2025-10-02 09:09:34.886808166 +0000 UTC m=+0.029220169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:09:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:09:35 compute-0 podman[417489]: 2025-10-02 09:09:35.018338313 +0000 UTC m=+0.160750326 container init aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:09:35 compute-0 podman[417489]: 2025-10-02 09:09:35.031784841 +0000 UTC m=+0.174196824 container start aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:09:35 compute-0 compassionate_swartz[417505]: 167 167
Oct 02 09:09:35 compute-0 systemd[1]: libpod-aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963.scope: Deactivated successfully.
Oct 02 09:09:35 compute-0 podman[417489]: 2025-10-02 09:09:35.039102878 +0000 UTC m=+0.181514871 container attach aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 09:09:35 compute-0 podman[417489]: 2025-10-02 09:09:35.039690157 +0000 UTC m=+0.182102140 container died aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1392cf9851fe8b3cfbe2231d8997a418ca19abf0076b7dddf53ffd1829863b30-merged.mount: Deactivated successfully.
Oct 02 09:09:35 compute-0 nova_compute[260603]: 2025-10-02 09:09:35.115 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:09:35 compute-0 ceph-mon[74477]: pgmap v2743: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 02 09:09:35 compute-0 podman[417489]: 2025-10-02 09:09:35.162209364 +0000 UTC m=+0.304621377 container remove aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:09:35 compute-0 systemd[1]: libpod-conmon-aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963.scope: Deactivated successfully.
Oct 02 09:09:35 compute-0 nova_compute[260603]: 2025-10-02 09:09:35.205 2 DEBUG nova.compute.manager [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-changed-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:09:35 compute-0 nova_compute[260603]: 2025-10-02 09:09:35.207 2 DEBUG nova.compute.manager [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing instance network info cache due to event network-changed-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:09:35 compute-0 nova_compute[260603]: 2025-10-02 09:09:35.208 2 DEBUG oslo_concurrency.lockutils [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:09:35 compute-0 podman[417529]: 2025-10-02 09:09:35.372009563 +0000 UTC m=+0.057943641 container create cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 09:09:35 compute-0 systemd[1]: Started libpod-conmon-cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b.scope.
Oct 02 09:09:35 compute-0 podman[417529]: 2025-10-02 09:09:35.344021534 +0000 UTC m=+0.029955692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:09:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:35 compute-0 podman[417529]: 2025-10-02 09:09:35.46873534 +0000 UTC m=+0.154669488 container init cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:09:35 compute-0 podman[417529]: 2025-10-02 09:09:35.476551302 +0000 UTC m=+0.162485390 container start cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:09:35 compute-0 podman[417529]: 2025-10-02 09:09:35.480954969 +0000 UTC m=+0.166889067 container attach cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:09:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 02 09:09:36 compute-0 nova_compute[260603]: 2025-10-02 09:09:36.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:36 compute-0 nova_compute[260603]: 2025-10-02 09:09:36.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:36 compute-0 ovn_controller[152344]: 2025-10-02T09:09:36Z|01535|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 02 09:09:36 compute-0 admiring_keller[417546]: {
Oct 02 09:09:36 compute-0 admiring_keller[417546]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "osd_id": 2,
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "type": "bluestore"
Oct 02 09:09:36 compute-0 admiring_keller[417546]:     },
Oct 02 09:09:36 compute-0 admiring_keller[417546]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "osd_id": 1,
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "type": "bluestore"
Oct 02 09:09:36 compute-0 admiring_keller[417546]:     },
Oct 02 09:09:36 compute-0 admiring_keller[417546]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "osd_id": 0,
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:09:36 compute-0 admiring_keller[417546]:         "type": "bluestore"
Oct 02 09:09:36 compute-0 admiring_keller[417546]:     }
Oct 02 09:09:36 compute-0 admiring_keller[417546]: }
Oct 02 09:09:36 compute-0 systemd[1]: libpod-cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b.scope: Deactivated successfully.
Oct 02 09:09:36 compute-0 systemd[1]: libpod-cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b.scope: Consumed 1.083s CPU time.
Oct 02 09:09:36 compute-0 podman[417529]: 2025-10-02 09:09:36.554025116 +0000 UTC m=+1.239959204 container died cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:09:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826-merged.mount: Deactivated successfully.
Oct 02 09:09:36 compute-0 podman[417529]: 2025-10-02 09:09:36.648102539 +0000 UTC m=+1.334036607 container remove cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 09:09:36 compute-0 systemd[1]: libpod-conmon-cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b.scope: Deactivated successfully.
Oct 02 09:09:36 compute-0 sudo[417423]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:09:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:09:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:09:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:09:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a977273b-f569-4ca5-95a8-c5ceffdf8eb2 does not exist
Oct 02 09:09:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e8972721-075b-4ca8-8fde-fb32eed96006 does not exist
Oct 02 09:09:36 compute-0 sudo[417594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:09:36 compute-0 sudo[417594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:36 compute-0 sudo[417594]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:36 compute-0 sudo[417619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:09:36 compute-0 sudo[417619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:09:36 compute-0 sudo[417619]: pam_unix(sudo:session): session closed for user root
Oct 02 09:09:37 compute-0 ceph-mon[74477]: pgmap v2744: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 02 09:09:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:09:37 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:09:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.176 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:09:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.241 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.241 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance network_info: |[{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.242 2 DEBUG oslo_concurrency.lockutils [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.242 2 DEBUG nova.network.neutron [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing network info cache for port 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.247 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start _get_guest_xml network_info=[{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.253 2 WARNING nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.260 2 DEBUG nova.virt.libvirt.host [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.261 2 DEBUG nova.virt.libvirt.host [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.268 2 DEBUG nova.virt.libvirt.host [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.269 2 DEBUG nova.virt.libvirt.host [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.269 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.270 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.271 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.272 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.272 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.273 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.273 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.274 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.274 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.275 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.275 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.276 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.281 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:09:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121066416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.767 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.793 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:09:38 compute-0 nova_compute[260603]: 2025-10-02 09:09:38.797 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:09:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:09:39 compute-0 ceph-mon[74477]: pgmap v2745: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 02 09:09:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3121066416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:09:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:09:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307119254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.284 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.286 2 DEBUG nova.virt.libvirt.vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:09:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.286 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.287 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.288 2 DEBUG nova.virt.libvirt.vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:09:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.288 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.289 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.290 2 DEBUG nova.objects.instance [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.342 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <uuid>3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b</uuid>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <name>instance-0000008d</name>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-2095471568</nova:name>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:09:38</nova:creationTime>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:port uuid="2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b">
Oct 02 09:09:39 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <nova:port uuid="5bca2e0b-43ae-46f8-b3cf-7be35129b5d8">
Oct 02 09:09:39 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe00:94b1" ipVersion="6"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe00:94b1" ipVersion="6"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <system>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <entry name="serial">3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b</entry>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <entry name="uuid">3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b</entry>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </system>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <os>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   </os>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <features>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   </features>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk">
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       </source>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config">
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       </source>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:09:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:93:c0:38"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <target dev="tap2b42cc6e-a1"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:00:94:b1"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <target dev="tap5bca2e0b-43"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/console.log" append="off"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <video>
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </video>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:09:39 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:09:39 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:09:39 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:09:39 compute-0 nova_compute[260603]: </domain>
Oct 02 09:09:39 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.344 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Preparing to wait for external event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.345 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.345 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.346 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.346 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Preparing to wait for external event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.346 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.347 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.347 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.348 2 DEBUG nova.virt.libvirt.vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:09:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.349 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.350 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.351 2 DEBUG os_vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b42cc6e-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b42cc6e-a1, col_values=(('external_ids', {'iface-id': '2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:c0:38', 'vm-uuid': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:39 compute-0 NetworkManager[45129]: <info>  [1759396179.3633] manager: (tap2b42cc6e-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.370 2 INFO os_vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1')
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.371 2 DEBUG nova.virt.libvirt.vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:09:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.372 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.374 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.375 2 DEBUG os_vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bca2e0b-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5bca2e0b-43, col_values=(('external_ids', {'iface-id': '5bca2e0b-43ae-46f8-b3cf-7be35129b5d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:94:b1', 'vm-uuid': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:39 compute-0 NetworkManager[45129]: <info>  [1759396179.3860] manager: (tap5bca2e0b-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.395 2 INFO os_vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43')
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.483 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.484 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.484 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:93:c0:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.484 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:00:94:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.485 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Using config drive
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.519 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:39 compute-0 nova_compute[260603]: 2025-10-02 09:09:39.533 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:09:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.070 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Creating config drive at /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.076 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qtaaep4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1307119254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.230 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qtaaep4" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.260 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.265 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.351 2 DEBUG nova.network.neutron [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updated VIF entry in instance network info cache for port 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.353 2 DEBUG nova.network.neutron [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.386 2 DEBUG oslo_concurrency.lockutils [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.449 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.450 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Deleting local config drive /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config because it was imported into RBD.
Oct 02 09:09:40 compute-0 kernel: tap2b42cc6e-a1: entered promiscuous mode
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.5305] manager: (tap2b42cc6e-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01536|binding|INFO|Claiming lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b for this chassis.
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01537|binding|INFO|2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b: Claiming fa:16:3e:93:c0:38 10.100.0.11
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.5509] manager: (tap5bca2e0b-43): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Oct 02 09:09:40 compute-0 kernel: tap5bca2e0b-43: entered promiscuous mode
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.562 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:c0:38 10.100.0.11'], port_security=['fa:16:3e:93:c0:38 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68558f02-4047-4331-a47e-cbbee9580ea4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.564 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b in datapath d7dd05b8-70c0-4ef8-a410-57d83c307eaa bound to our chassis
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.566 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7dd05b8-70c0-4ef8-a410-57d83c307eaa
Oct 02 09:09:40 compute-0 systemd-udevd[417788]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:09:40 compute-0 systemd-udevd[417789]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a63445b8-4882-4fe7-90eb-2ada7c8d74d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.586 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7dd05b8-71 in ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.588 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7dd05b8-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.588 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3cae3488-5af5-4356-9513-70636629bdf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e55fec1-f385-4a08-b409-15e98af50de4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 systemd-machined[214636]: New machine qemu-175-instance-0000008d.
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.604 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ee428333-2bbf-4bfc-9214-5331af06416f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.6059] device (tap5bca2e0b-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.6074] device (tap5bca2e0b-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.6083] device (tap2b42cc6e-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.6094] device (tap2b42cc6e-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:09:40 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-0000008d.
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.630 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[38edd325-a808-4651-86a7-4fbfceb0c3d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01538|binding|INFO|Claiming lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 for this chassis.
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01539|binding|INFO|5bca2e0b-43ae-46f8-b3cf-7be35129b5d8: Claiming fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.638 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1'], port_security=['fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe00:94b1/64 2001:db8::f816:3eff:fe00:94b1/64', 'neutron:device_id': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01540|binding|INFO|Setting lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b ovn-installed in OVS
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01541|binding|INFO|Setting lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b up in Southbound
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01542|binding|INFO|Setting lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 ovn-installed in OVS
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01543|binding|INFO|Setting lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 up in Southbound
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.670 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c22e1a6-ba94-41d7-bb77-7b13c16e01c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.681 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d356d5d8-9c32-492f-895b-13a4b047e013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.6870] manager: (tapd7dd05b8-70): new Veth device (/org/freedesktop/NetworkManager/Devices/623)
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.721 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ac7382-2956-4f2a-81d9-27e6195fed06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.724 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1897c0d8-e591-4e54-b1b6-3d958fcbb73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.7490] device (tapd7dd05b8-70): carrier: link connected
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.758 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2db0f18-06c6-413a-ba48-6a7eaaba2268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.786 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8394eb91-1e22-43e0-80e0-d03eb3e5e49c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7dd05b8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:04:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705935, 'reachable_time': 20541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417822, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6c54bc-f66b-4209-b8cb-79a1c4e528ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:4ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705935, 'tstamp': 705935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417823, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.834 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1683b47-b4cf-49c0-9e43-b4f3598a3524]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7dd05b8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:04:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705935, 'reachable_time': 20541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 417824, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.874 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b75669b-7d7d-416f-b244-2b04153ad62a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.956 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccbe9be-cbce-4382-a78a-604f1f88ad73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.958 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7dd05b8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.958 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.959 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7dd05b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 NetworkManager[45129]: <info>  [1759396180.9621] manager: (tapd7dd05b8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Oct 02 09:09:40 compute-0 kernel: tapd7dd05b8-70: entered promiscuous mode
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.967 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7dd05b8-70, col_values=(('external_ids', {'iface-id': '93ded116-ee2f-4f81-a2c6-257136c86ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:40 compute-0 ovn_controller[152344]: 2025-10-02T09:09:40Z|01544|binding|INFO|Releasing lport 93ded116-ee2f-4f81-a2c6-257136c86ae4 from this chassis (sb_readonly=0)
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 nova_compute[260603]: 2025-10-02 09:09:40.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.996 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7dd05b8-70c0-4ef8-a410-57d83c307eaa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7dd05b8-70c0-4ef8-a410-57d83c307eaa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.997 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de113ef3-3360-46c7-bc7c-9221a8c50185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.998 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-d7dd05b8-70c0-4ef8-a410-57d83c307eaa
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/d7dd05b8-70c0-4ef8-a410-57d83c307eaa.pid.haproxy
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID d7dd05b8-70c0-4ef8-a410-57d83c307eaa
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:09:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.998 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'env', 'PROCESS_TAG=haproxy-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7dd05b8-70c0-4ef8-a410-57d83c307eaa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.171 2 DEBUG nova.compute.manager [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.172 2 DEBUG oslo_concurrency.lockutils [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.172 2 DEBUG oslo_concurrency.lockutils [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.172 2 DEBUG oslo_concurrency.lockutils [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.173 2 DEBUG nova.compute.manager [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Processing event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.182 2 DEBUG nova.compute.manager [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.183 2 DEBUG oslo_concurrency.lockutils [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.183 2 DEBUG oslo_concurrency.lockutils [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.183 2 DEBUG oslo_concurrency.lockutils [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.183 2 DEBUG nova.compute.manager [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Processing event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:09:41 compute-0 ceph-mon[74477]: pgmap v2746: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.322 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:41 compute-0 podman[417898]: 2025-10-02 09:09:41.492138382 +0000 UTC m=+0.059186091 container create f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:09:41 compute-0 podman[417898]: 2025-10-02 09:09:41.460623132 +0000 UTC m=+0.027670831 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:09:41 compute-0 systemd[1]: Started libpod-conmon-f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a.scope.
Oct 02 09:09:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85b02860fc626a52f25bc8a62daedd7cc30adfcd5cfe6c239da051b179e05c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:41 compute-0 podman[417898]: 2025-10-02 09:09:41.623638498 +0000 UTC m=+0.190686227 container init f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:09:41 compute-0 podman[417898]: 2025-10-02 09:09:41.634511306 +0000 UTC m=+0.201558975 container start f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:09:41 compute-0 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [NOTICE]   (417918) : New worker (417920) forked
Oct 02 09:09:41 compute-0 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [NOTICE]   (417918) : Loading success.
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.730 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 unbound from our chassis
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.731 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d9c157f-cf57-4b44-8fba-d16631e22418
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[77088aa3-9989-40d7-865c-f0a82f9773f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.745 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d9c157f-c1 in ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.749 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d9c157f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.749 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa13a146-2fbb-4c75-a9b9-f685148c836c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7da28982-b8de-4b17-8646-23297bb46b13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.764 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb32067-8d8e-43df-bc6e-b9bc8fe9c42d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.784 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4fc27e-b005-4d2f-ad79-7568726134a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.825 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[42d7a377-137d-417b-9a03-964da13ae197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f86fa05-82bc-4820-82ce-0db27f04c109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 systemd-udevd[417810]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:09:41 compute-0 NetworkManager[45129]: <info>  [1759396181.8355] manager: (tap6d9c157f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/625)
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.877 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c879d44-209c-41f8-945f-a389b2b619ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.881 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[613bd572-7bb0-4c40-b371-390f5bf59b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 NetworkManager[45129]: <info>  [1759396181.9164] device (tap6d9c157f-c0): carrier: link connected
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.926 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[11a21b7e-c3d8-4a1f-90f1-274607ea0303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.954 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[042300b9-201b-459a-8c0b-c499168cfcd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d9c157f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:38:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706051, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417939, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.959 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.960 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396181.958515, 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.960 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] VM Started (Lifecycle Event)
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.967 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.971 2 INFO nova.virt.libvirt.driver [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance spawned successfully.
Oct 02 09:09:41 compute-0 nova_compute[260603]: 2025-10-02 09:09:41.972 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:09:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6034eb40-4d52-43ab-bb26-39c37ddf10bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:3802'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706051, 'tstamp': 706051}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417940, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.009 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2411f0f9-1890-4eb1-b721-ad9680c90b48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d9c157f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:38:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706051, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 417941, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.014 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.015 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.015 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.015 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.016 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.016 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.020 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.053 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dc88e72a-cf0c-4e5d-905b-8b631241843d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.071 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.071 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396181.9588575, 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.072 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] VM Paused (Lifecycle Event)
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.090 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[34efbb30-8e8d-42e8-9e4a-ae571ccdf6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c157f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d9c157f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:42 compute-0 kernel: tap6d9c157f-c0: entered promiscuous mode
Oct 02 09:09:42 compute-0 NetworkManager[45129]: <info>  [1759396182.0966] manager: (tap6d9c157f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.097 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d9c157f-c0, col_values=(('external_ids', {'iface-id': '136a7ea2-2365-4779-b31d-41cbfc52a20f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:42 compute-0 ovn_controller[152344]: 2025-10-02T09:09:42Z|01545|binding|INFO|Releasing lport 136a7ea2-2365-4779-b31d-41cbfc52a20f from this chassis (sb_readonly=0)
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.105 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.108 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396181.965137, 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.108 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] VM Resumed (Lifecycle Event)
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.113 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d9c157f-cf57-4b44-8fba-d16631e22418.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d9c157f-cf57-4b44-8fba-d16631e22418.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.114 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d9c9db-df6f-46a9-9929-bdf2bea6cb52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.114 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-6d9c157f-cf57-4b44-8fba-d16631e22418
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/6d9c157f-cf57-4b44-8fba-d16631e22418.pid.haproxy
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 6d9c157f-cf57-4b44-8fba-d16631e22418
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.115 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'env', 'PROCESS_TAG=haproxy-6d9c157f-cf57-4b44-8fba-d16631e22418', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d9c157f-cf57-4b44-8fba-d16631e22418.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.121 2 INFO nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Took 15.04 seconds to spawn the instance on the hypervisor.
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.122 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.130 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.134 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.159 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.183 2 INFO nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Took 15.92 seconds to build instance.
Oct 02 09:09:42 compute-0 nova_compute[260603]: 2025-10-02 09:09:42.210 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:42 compute-0 podman[417972]: 2025-10-02 09:09:42.601175226 +0000 UTC m=+0.063599117 container create 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 09:09:42 compute-0 systemd[1]: Started libpod-conmon-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82.scope.
Oct 02 09:09:42 compute-0 podman[417972]: 2025-10-02 09:09:42.572470484 +0000 UTC m=+0.034894395 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:09:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7808816a47e7300fa580de7380ab5c8b3663ac809aa4c663ed97b18b7941cb10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:09:42 compute-0 podman[417972]: 2025-10-02 09:09:42.687420636 +0000 UTC m=+0.149844617 container init 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:09:42 compute-0 podman[417972]: 2025-10-02 09:09:42.700597496 +0000 UTC m=+0.163021427 container start 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:09:42 compute-0 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [NOTICE]   (417992) : New worker (417994) forked
Oct 02 09:09:42 compute-0 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [NOTICE]   (417992) : Loading success.
Oct 02 09:09:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.809 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:09:43 compute-0 ceph-mon[74477]: pgmap v2747: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct 02 09:09:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.270 2 DEBUG nova.compute.manager [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.271 2 DEBUG oslo_concurrency.lockutils [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.271 2 DEBUG oslo_concurrency.lockutils [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.271 2 DEBUG oslo_concurrency.lockutils [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.272 2 DEBUG nova.compute.manager [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.272 2 WARNING nova.compute.manager [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received unexpected event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b for instance with vm_state active and task_state None.
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.337 2 DEBUG nova.compute.manager [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.338 2 DEBUG oslo_concurrency.lockutils [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.338 2 DEBUG oslo_concurrency.lockutils [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.338 2 DEBUG oslo_concurrency.lockutils [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.338 2 DEBUG nova.compute.manager [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:09:43 compute-0 nova_compute[260603]: 2025-10-02 09:09:43.339 2 WARNING nova.compute.manager [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received unexpected event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 for instance with vm_state active and task_state None.
Oct 02 09:09:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 09:09:44 compute-0 nova_compute[260603]: 2025-10-02 09:09:44.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:09:44.812 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:09:45 compute-0 ceph-mon[74477]: pgmap v2748: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 09:09:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 12 KiB/s wr, 23 op/s
Oct 02 09:09:46 compute-0 nova_compute[260603]: 2025-10-02 09:09:46.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:47 compute-0 nova_compute[260603]: 2025-10-02 09:09:47.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:47 compute-0 ovn_controller[152344]: 2025-10-02T09:09:47Z|01546|binding|INFO|Releasing lport 93ded116-ee2f-4f81-a2c6-257136c86ae4 from this chassis (sb_readonly=0)
Oct 02 09:09:47 compute-0 ovn_controller[152344]: 2025-10-02T09:09:47Z|01547|binding|INFO|Releasing lport 136a7ea2-2365-4779-b31d-41cbfc52a20f from this chassis (sb_readonly=0)
Oct 02 09:09:47 compute-0 NetworkManager[45129]: <info>  [1759396187.1098] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Oct 02 09:09:47 compute-0 NetworkManager[45129]: <info>  [1759396187.1102] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Oct 02 09:09:47 compute-0 nova_compute[260603]: 2025-10-02 09:09:47.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:47 compute-0 ovn_controller[152344]: 2025-10-02T09:09:47Z|01548|binding|INFO|Releasing lport 93ded116-ee2f-4f81-a2c6-257136c86ae4 from this chassis (sb_readonly=0)
Oct 02 09:09:47 compute-0 ovn_controller[152344]: 2025-10-02T09:09:47Z|01549|binding|INFO|Releasing lport 136a7ea2-2365-4779-b31d-41cbfc52a20f from this chassis (sb_readonly=0)
Oct 02 09:09:47 compute-0 nova_compute[260603]: 2025-10-02 09:09:47.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:47 compute-0 ceph-mon[74477]: pgmap v2749: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 12 KiB/s wr, 23 op/s
Oct 02 09:09:48 compute-0 podman[418005]: 2025-10-02 09:09:48.008112911 +0000 UTC m=+0.067423916 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:09:48 compute-0 podman[418004]: 2025-10-02 09:09:48.040696114 +0000 UTC m=+0.106178011 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:09:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:09:48 compute-0 nova_compute[260603]: 2025-10-02 09:09:48.079 2 DEBUG nova.compute.manager [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:09:48 compute-0 nova_compute[260603]: 2025-10-02 09:09:48.080 2 DEBUG nova.compute.manager [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing instance network info cache due to event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:09:48 compute-0 nova_compute[260603]: 2025-10-02 09:09:48.080 2 DEBUG oslo_concurrency.lockutils [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:09:48 compute-0 nova_compute[260603]: 2025-10-02 09:09:48.081 2 DEBUG oslo_concurrency.lockutils [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:09:48 compute-0 nova_compute[260603]: 2025-10-02 09:09:48.081 2 DEBUG nova.network.neutron [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:09:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:49 compute-0 ceph-mon[74477]: pgmap v2750: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:09:49 compute-0 nova_compute[260603]: 2025-10-02 09:09:49.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:49 compute-0 nova_compute[260603]: 2025-10-02 09:09:49.631 2 DEBUG nova.network.neutron [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updated VIF entry in instance network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:09:49 compute-0 nova_compute[260603]: 2025-10-02 09:09:49.631 2 DEBUG nova.network.neutron [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:09:49 compute-0 nova_compute[260603]: 2025-10-02 09:09:49.692 2 DEBUG oslo_concurrency.lockutils [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:09:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:09:50 compute-0 nova_compute[260603]: 2025-10-02 09:09:50.972 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:51 compute-0 ceph-mon[74477]: pgmap v2751: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:09:51 compute-0 nova_compute[260603]: 2025-10-02 09:09:51.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:52 compute-0 podman[418047]: 2025-10-02 09:09:52.035276718 +0000 UTC m=+0.086094767 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 09:09:52 compute-0 podman[418048]: 2025-10-02 09:09:52.064898798 +0000 UTC m=+0.107371897 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 02 09:09:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:09:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:53 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 02 09:09:53 compute-0 ceph-mon[74477]: pgmap v2752: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:09:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 02 09:09:54 compute-0 nova_compute[260603]: 2025-10-02 09:09:54.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:54 compute-0 ceph-mon[74477]: pgmap v2753: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 02 09:09:55 compute-0 nova_compute[260603]: 2025-10-02 09:09:55.623 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:55 compute-0 ovn_controller[152344]: 2025-10-02T09:09:55Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:c0:38 10.100.0.11
Oct 02 09:09:55 compute-0 ovn_controller[152344]: 2025-10-02T09:09:55Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:c0:38 10.100.0.11
Oct 02 09:09:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 51 op/s
Oct 02 09:09:56 compute-0 nova_compute[260603]: 2025-10-02 09:09:56.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:09:57 compute-0 ceph-mon[74477]: pgmap v2754: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 51 op/s
Oct 02 09:09:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:09:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:09:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:09:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:09:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:09:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:09:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 02 09:09:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:09:58 compute-0 nova_compute[260603]: 2025-10-02 09:09:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:09:58 compute-0 nova_compute[260603]: 2025-10-02 09:09:58.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:09:58 compute-0 nova_compute[260603]: 2025-10-02 09:09:58.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:09:58 compute-0 nova_compute[260603]: 2025-10-02 09:09:58.875 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:09:58 compute-0 nova_compute[260603]: 2025-10-02 09:09:58.876 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:09:58 compute-0 nova_compute[260603]: 2025-10-02 09:09:58.876 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:09:58 compute-0 nova_compute[260603]: 2025-10-02 09:09:58.877 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:09:59 compute-0 ceph-mon[74477]: pgmap v2755: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 02 09:09:59 compute-0 nova_compute[260603]: 2025-10-02 09:09:59.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:10:01 compute-0 ceph-mon[74477]: pgmap v2756: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:10:01 compute-0 nova_compute[260603]: 2025-10-02 09:10:01.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:10:02 compute-0 nova_compute[260603]: 2025-10-02 09:10:02.442 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:02 compute-0 nova_compute[260603]: 2025-10-02 09:10:02.485 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:10:02 compute-0 nova_compute[260603]: 2025-10-02 09:10:02.485 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:10:02 compute-0 nova_compute[260603]: 2025-10-02 09:10:02.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:02 compute-0 nova_compute[260603]: 2025-10-02 09:10:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:03 compute-0 ceph-mon[74477]: pgmap v2757: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:10:03 compute-0 nova_compute[260603]: 2025-10-02 09:10:03.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:03 compute-0 nova_compute[260603]: 2025-10-02 09:10:03.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:03 compute-0 nova_compute[260603]: 2025-10-02 09:10:03.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:03 compute-0 nova_compute[260603]: 2025-10-02 09:10:03.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:03 compute-0 nova_compute[260603]: 2025-10-02 09:10:03.543 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:10:03 compute-0 nova_compute[260603]: 2025-10-02 09:10:03.544 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:10:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57172279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.085 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.167 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.167 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:10:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/57172279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.395 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.397 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3439MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.397 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.397 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.477 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.478 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.478 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.506 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.525 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.526 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.545 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.570 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:10:04 compute-0 nova_compute[260603]: 2025-10-02 09:10:04.626 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:10:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3450206284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:05 compute-0 nova_compute[260603]: 2025-10-02 09:10:05.138 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:05 compute-0 nova_compute[260603]: 2025-10-02 09:10:05.143 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:10:05 compute-0 nova_compute[260603]: 2025-10-02 09:10:05.170 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:10:05 compute-0 nova_compute[260603]: 2025-10-02 09:10:05.190 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:10:05 compute-0 nova_compute[260603]: 2025-10-02 09:10:05.191 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:05 compute-0 ceph-mon[74477]: pgmap v2758: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:10:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3450206284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.193 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.875 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.875 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.898 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.978 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.979 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.987 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:10:06 compute-0 nova_compute[260603]: 2025-10-02 09:10:06.988 2 INFO nova.compute.claims [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.105 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:07 compute-0 ceph-mon[74477]: pgmap v2759: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 09:10:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:10:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/418743664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.536 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.543 2 DEBUG nova.compute.provider_tree [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.566 2 DEBUG nova.scheduler.client.report [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.596 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.597 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.672 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.673 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.692 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.712 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.825 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.827 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.827 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Creating image(s)
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.851 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.881 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.906 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.910 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:07 compute-0 nova_compute[260603]: 2025-10-02 09:10:07.953 2 DEBUG nova.policy [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.000 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.001 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.002 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.002 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.033 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.037 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 770238ca-0d80-443f-943e-236e0cfb3606_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 09:10:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/418743664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.594 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 770238ca-0d80-443f-943e-236e0cfb3606_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.685 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.888 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Successfully created port: 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.903 2 DEBUG nova.objects.instance [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 770238ca-0d80-443f-943e-236e0cfb3606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.926 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.926 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Ensure instance console log exists: /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.927 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.928 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:08 compute-0 nova_compute[260603]: 2025-10-02 09:10:08.929 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:09 compute-0 nova_compute[260603]: 2025-10-02 09:10:09.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:09 compute-0 ceph-mon[74477]: pgmap v2760: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 09:10:09 compute-0 nova_compute[260603]: 2025-10-02 09:10:09.555 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Successfully created port: 9e457fab-8f77-47eb-a2dc-fa212b72ab38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:10:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 09:10:10 compute-0 nova_compute[260603]: 2025-10-02 09:10:10.445 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Successfully updated port: 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:10:10 compute-0 nova_compute[260603]: 2025-10-02 09:10:10.566 2 DEBUG nova.compute.manager [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:10 compute-0 nova_compute[260603]: 2025-10-02 09:10:10.566 2 DEBUG nova.compute.manager [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing instance network info cache due to event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:10:10 compute-0 nova_compute[260603]: 2025-10-02 09:10:10.567 2 DEBUG oslo_concurrency.lockutils [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:10:10 compute-0 nova_compute[260603]: 2025-10-02 09:10:10.567 2 DEBUG oslo_concurrency.lockutils [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:10:10 compute-0 nova_compute[260603]: 2025-10-02 09:10:10.567 2 DEBUG nova.network.neutron [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:10:10 compute-0 nova_compute[260603]: 2025-10-02 09:10:10.913 2 DEBUG nova.network.neutron [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:10:11 compute-0 nova_compute[260603]: 2025-10-02 09:10:11.302 2 DEBUG nova.network.neutron [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:11 compute-0 nova_compute[260603]: 2025-10-02 09:10:11.337 2 DEBUG oslo_concurrency.lockutils [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:10:11 compute-0 nova_compute[260603]: 2025-10-02 09:10:11.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:11 compute-0 ceph-mon[74477]: pgmap v2761: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 09:10:11 compute-0 nova_compute[260603]: 2025-10-02 09:10:11.929 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Successfully updated port: 9e457fab-8f77-47eb-a2dc-fa212b72ab38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:10:11 compute-0 nova_compute[260603]: 2025-10-02 09:10:11.952 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:10:11 compute-0 nova_compute[260603]: 2025-10-02 09:10:11.952 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:10:11 compute-0 nova_compute[260603]: 2025-10-02 09:10:11.952 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:10:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 09:10:12 compute-0 nova_compute[260603]: 2025-10-02 09:10:12.137 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:10:12 compute-0 nova_compute[260603]: 2025-10-02 09:10:12.675 2 DEBUG nova.compute.manager [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-changed-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:12 compute-0 nova_compute[260603]: 2025-10-02 09:10:12.676 2 DEBUG nova.compute.manager [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing instance network info cache due to event network-changed-9e457fab-8f77-47eb-a2dc-fa212b72ab38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:10:12 compute-0 nova_compute[260603]: 2025-10-02 09:10:12.676 2 DEBUG oslo_concurrency.lockutils [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:10:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:13 compute-0 ceph-mon[74477]: pgmap v2762: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 09:10:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.229 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.274 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.275 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance network_info: |[{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, 
"meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.275 2 DEBUG oslo_concurrency.lockutils [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.275 2 DEBUG nova.network.neutron [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing network info cache for port 9e457fab-8f77-47eb-a2dc-fa212b72ab38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.278 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start _get_guest_xml network_info=[{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.283 2 WARNING nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.289 2 DEBUG nova.virt.libvirt.host [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.290 2 DEBUG nova.virt.libvirt.host [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.295 2 DEBUG nova.virt.libvirt.host [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.295 2 DEBUG nova.virt.libvirt.host [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.295 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.296 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.296 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.296 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.296 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.298 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.298 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.300 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:10:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326863042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.747 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.774 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:10:14 compute-0 nova_compute[260603]: 2025-10-02 09:10:14.777 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:10:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236462984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.385 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.388 2 DEBUG nova.virt.libvirt.vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:10:07Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.389 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.391 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.393 2 DEBUG nova.virt.libvirt.vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:10:07Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.393 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.394 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.396 2 DEBUG nova.objects.instance [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 770238ca-0d80-443f-943e-236e0cfb3606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.482 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <uuid>770238ca-0d80-443f-943e-236e0cfb3606</uuid>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <name>instance-0000008e</name>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-242796341</nova:name>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:10:14</nova:creationTime>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:port uuid="451aa4f7-0b3e-4a00-b063-7584a6bbc7cc">
Oct 02 09:10:15 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <nova:port uuid="9e457fab-8f77-47eb-a2dc-fa212b72ab38">
Oct 02 09:10:15 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3d:e64" ipVersion="6"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe3d:e64" ipVersion="6"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <system>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <entry name="serial">770238ca-0d80-443f-943e-236e0cfb3606</entry>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <entry name="uuid">770238ca-0d80-443f-943e-236e0cfb3606</entry>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </system>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <os>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   </os>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <features>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   </features>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/770238ca-0d80-443f-943e-236e0cfb3606_disk">
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       </source>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/770238ca-0d80-443f-943e-236e0cfb3606_disk.config">
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       </source>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:10:15 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:a9:4e:93"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <target dev="tap451aa4f7-0b"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:3d:0e:64"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <target dev="tap9e457fab-8f"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/console.log" append="off"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <video>
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </video>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:10:15 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:10:15 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:10:15 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:10:15 compute-0 nova_compute[260603]: </domain>
Oct 02 09:10:15 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.484 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Preparing to wait for external event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.485 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.486 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.486 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.487 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Preparing to wait for external event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.487 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.487 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.487 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.488 2 DEBUG nova.virt.libvirt.vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:10:07Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.488 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.489 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.489 2 DEBUG os_vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:15 compute-0 ceph-mon[74477]: pgmap v2763: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:10:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1326863042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:10:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1236462984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap451aa4f7-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap451aa4f7-0b, col_values=(('external_ids', {'iface-id': '451aa4f7-0b3e-4a00-b063-7584a6bbc7cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:4e:93', 'vm-uuid': '770238ca-0d80-443f-943e-236e0cfb3606'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:15 compute-0 NetworkManager[45129]: <info>  [1759396215.4986] manager: (tap451aa4f7-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.508 2 INFO os_vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b')
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.509 2 DEBUG nova.virt.libvirt.vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:10:07Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.509 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.510 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.510 2 DEBUG os_vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e457fab-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e457fab-8f, col_values=(('external_ids', {'iface-id': '9e457fab-8f77-47eb-a2dc-fa212b72ab38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:0e:64', 'vm-uuid': '770238ca-0d80-443f-943e-236e0cfb3606'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:15 compute-0 NetworkManager[45129]: <info>  [1759396215.5162] manager: (tap9e457fab-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/630)
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.524 2 INFO os_vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f')
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.601 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.602 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.602 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:a9:4e:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.602 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:3d:0e:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.603 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Using config drive
Oct 02 09:10:15 compute-0 nova_compute[260603]: 2025-10-02 09:10:15.639 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:10:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:10:16 compute-0 nova_compute[260603]: 2025-10-02 09:10:16.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:16 compute-0 nova_compute[260603]: 2025-10-02 09:10:16.956 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Creating config drive at /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config
Oct 02 09:10:16 compute-0 nova_compute[260603]: 2025-10-02 09:10:16.966 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp412hh4ka execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.118 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp412hh4ka" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.141 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.145 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config 770238ca-0d80-443f-943e-236e0cfb3606_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.286 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config 770238ca-0d80-443f-943e-236e0cfb3606_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.287 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Deleting local config drive /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config because it was imported into RBD.
Oct 02 09:10:17 compute-0 NetworkManager[45129]: <info>  [1759396217.3626] manager: (tap451aa4f7-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/631)
Oct 02 09:10:17 compute-0 kernel: tap451aa4f7-0b: entered promiscuous mode
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01550|binding|INFO|Claiming lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc for this chassis.
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01551|binding|INFO|451aa4f7-0b3e-4a00-b063-7584a6bbc7cc: Claiming fa:16:3e:a9:4e:93 10.100.0.3
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.381 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:4e:93 10.100.0.3'], port_security=['fa:16:3e:a9:4e:93 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '770238ca-0d80-443f-943e-236e0cfb3606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68558f02-4047-4331-a47e-cbbee9580ea4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.382 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc in datapath d7dd05b8-70c0-4ef8-a410-57d83c307eaa bound to our chassis
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.384 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7dd05b8-70c0-4ef8-a410-57d83c307eaa
Oct 02 09:10:17 compute-0 NetworkManager[45129]: <info>  [1759396217.3904] manager: (tap9e457fab-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/632)
Oct 02 09:10:17 compute-0 kernel: tap9e457fab-8f: entered promiscuous mode
Oct 02 09:10:17 compute-0 systemd-udevd[418457]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:10:17 compute-0 systemd-udevd[418460]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01552|binding|INFO|Setting lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc ovn-installed in OVS
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01553|binding|INFO|Setting lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc up in Southbound
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01554|if_status|INFO|Dropped 3 log messages in last 126 seconds (most recently, 126 seconds ago) due to excessive rate
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01555|if_status|INFO|Not updating pb chassis for 9e457fab-8f77-47eb-a2dc-fa212b72ab38 now as sb is readonly
Oct 02 09:10:17 compute-0 NetworkManager[45129]: <info>  [1759396217.4038] device (tap451aa4f7-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:10:17 compute-0 NetworkManager[45129]: <info>  [1759396217.4050] device (tap451aa4f7-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01556|binding|INFO|Claiming lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 for this chassis.
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01557|binding|INFO|9e457fab-8f77-47eb-a2dc-fa212b72ab38: Claiming fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.408 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6ac607-4870-462a-80ba-311207e6dc85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 NetworkManager[45129]: <info>  [1759396217.4168] device (tap9e457fab-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:10:17 compute-0 NetworkManager[45129]: <info>  [1759396217.4186] device (tap9e457fab-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01558|binding|INFO|Setting lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 ovn-installed in OVS
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:17 compute-0 ovn_controller[152344]: 2025-10-02T09:10:17Z|01559|binding|INFO|Setting lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 up in Southbound
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.431 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64'], port_security=['fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3d:e64/64 2001:db8::f816:3eff:fe3d:e64/64', 'neutron:device_id': '770238ca-0d80-443f-943e-236e0cfb3606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9e457fab-8f77-47eb-a2dc-fa212b72ab38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:10:17 compute-0 systemd-machined[214636]: New machine qemu-176-instance-0000008e.
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.440 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f10a5f-61be-4e29-84cc-baf49116a044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-0000008e.
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.446 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[33ab7688-24f2-4b42-b551-4e588c4265f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.474 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[241a33a8-9754-422a-94aa-a40b8b59d97f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75a1b15e-d2a9-40f6-8855-8c1cb16ad3ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7dd05b8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:04:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705935, 'reachable_time': 20541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418472, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ceph-mon[74477]: pgmap v2764: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.507 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be627377-7775-4cf0-8133-e130f6ed8fb3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd7dd05b8-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705950, 'tstamp': 705950}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418477, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7dd05b8-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705955, 'tstamp': 705955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418477, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.510 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7dd05b8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.512 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7dd05b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.513 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.513 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7dd05b8-70, col_values=(('external_ids', {'iface-id': '93ded116-ee2f-4f81-a2c6-257136c86ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.515 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9e457fab-8f77-47eb-a2dc-fa212b72ab38 in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 unbound from our chassis
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.517 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d9c157f-cf57-4b44-8fba-d16631e22418
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.534 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4ea472-c419-4a76-a8e8-4a102ba13800]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.568 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b6368c48-2548-4351-acef-78e845f3139a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.572 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[852c669e-155f-4c1d-a6ed-d98905c438e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.599 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4597e8-5d52-40f5-97b2-9a506fc06291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.615 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d578f17e-fe76-4c18-abf9-0876d88188c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d9c157f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:38:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706051, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418484, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.628 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[963a6034-e5ba-4319-b6e7-a440fdec1eac]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d9c157f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706068, 'tstamp': 706068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418485, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.630 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c157f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:17 compute-0 nova_compute[260603]: 2025-10-02 09:10:17.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.633 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d9c157f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.634 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.634 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d9c157f-c0, col_values=(('external_ids', {'iface-id': '136a7ea2-2365-4779-b31d-41cbfc52a20f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:17 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.634 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.161 2 DEBUG nova.compute.manager [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.162 2 DEBUG oslo_concurrency.lockutils [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.162 2 DEBUG oslo_concurrency.lockutils [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.162 2 DEBUG oslo_concurrency.lockutils [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.163 2 DEBUG nova.compute.manager [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Processing event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.173 2 DEBUG nova.compute.manager [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.174 2 DEBUG oslo_concurrency.lockutils [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.174 2 DEBUG oslo_concurrency.lockutils [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.174 2 DEBUG oslo_concurrency.lockutils [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.175 2 DEBUG nova.compute.manager [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Processing event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:10:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.384 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.385 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396218.3833501, 770238ca-0d80-443f-943e-236e0cfb3606 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.385 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] VM Started (Lifecycle Event)
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.391 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.395 2 INFO nova.virt.libvirt.driver [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance spawned successfully.
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.395 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.412 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.417 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.420 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.421 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.421 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.421 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.422 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.422 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.450 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.450 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396218.38367, 770238ca-0d80-443f-943e-236e0cfb3606 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.451 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] VM Paused (Lifecycle Event)
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.485 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.488 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396218.390792, 770238ca-0d80-443f-943e-236e0cfb3606 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.488 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] VM Resumed (Lifecycle Event)
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.496 2 INFO nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Took 10.67 seconds to spawn the instance on the hypervisor.
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.497 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.509 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.512 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.540 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.561 2 INFO nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Took 11.61 seconds to build instance.
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.577 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:18 compute-0 ceph-mon[74477]: pgmap v2765: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.940 2 DEBUG nova.network.neutron [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updated VIF entry in instance network info cache for port 9e457fab-8f77-47eb-a2dc-fa212b72ab38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.941 2 DEBUG nova.network.neutron [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:18 compute-0 nova_compute[260603]: 2025-10-02 09:10:18.956 2 DEBUG oslo_concurrency.lockutils [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:10:18 compute-0 podman[418530]: 2025-10-02 09:10:18.994921664 +0000 UTC m=+0.057792908 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:10:19 compute-0 podman[418529]: 2025-10-02 09:10:19.027772244 +0000 UTC m=+0.090736381 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:10:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.254 2 DEBUG nova.compute.manager [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.255 2 DEBUG oslo_concurrency.lockutils [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.255 2 DEBUG oslo_concurrency.lockutils [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.255 2 DEBUG oslo_concurrency.lockutils [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.256 2 DEBUG nova.compute.manager [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.256 2 WARNING nova.compute.manager [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received unexpected event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc for instance with vm_state active and task_state None.
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.326 2 DEBUG nova.compute.manager [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.326 2 DEBUG oslo_concurrency.lockutils [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.327 2 DEBUG oslo_concurrency.lockutils [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.327 2 DEBUG oslo_concurrency.lockutils [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.327 2 DEBUG nova.compute.manager [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.327 2 WARNING nova.compute.manager [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received unexpected event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 for instance with vm_state active and task_state None.
Oct 02 09:10:20 compute-0 nova_compute[260603]: 2025-10-02 09:10:20.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:21 compute-0 ceph-mon[74477]: pgmap v2766: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:10:21 compute-0 nova_compute[260603]: 2025-10-02 09:10:21.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:10:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:10:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2436508400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:10:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:10:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2436508400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:10:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2436508400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:10:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2436508400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:10:22 compute-0 podman[418577]: 2025-10-02 09:10:22.990996194 +0000 UTC m=+0.053606256 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:10:22 compute-0 podman[418576]: 2025-10-02 09:10:22.997707653 +0000 UTC m=+0.061814462 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:10:23 compute-0 nova_compute[260603]: 2025-10-02 09:10:23.062 2 DEBUG nova.compute.manager [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:23 compute-0 nova_compute[260603]: 2025-10-02 09:10:23.063 2 DEBUG nova.compute.manager [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing instance network info cache due to event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:10:23 compute-0 nova_compute[260603]: 2025-10-02 09:10:23.063 2 DEBUG oslo_concurrency.lockutils [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:10:23 compute-0 nova_compute[260603]: 2025-10-02 09:10:23.063 2 DEBUG oslo_concurrency.lockutils [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:10:23 compute-0 nova_compute[260603]: 2025-10-02 09:10:23.063 2 DEBUG nova.network.neutron [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:10:23 compute-0 ceph-mon[74477]: pgmap v2767: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:10:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:10:25 compute-0 nova_compute[260603]: 2025-10-02 09:10:25.132 2 DEBUG nova.network.neutron [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updated VIF entry in instance network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:10:25 compute-0 nova_compute[260603]: 2025-10-02 09:10:25.134 2 DEBUG nova.network.neutron [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:25 compute-0 nova_compute[260603]: 2025-10-02 09:10:25.174 2 DEBUG oslo_concurrency.lockutils [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:10:25 compute-0 ceph-mon[74477]: pgmap v2768: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:10:25 compute-0 nova_compute[260603]: 2025-10-02 09:10:25.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:10:26 compute-0 nova_compute[260603]: 2025-10-02 09:10:26.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:27 compute-0 ceph-mon[74477]: pgmap v2769: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:10:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:10:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:10:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:10:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:10:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:10:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:10:28
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.rgw.root', 'images', 'default.rgw.log', '.mgr']
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:10:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:10:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:10:29 compute-0 ceph-mon[74477]: pgmap v2770: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:10:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct 02 09:10:30 compute-0 nova_compute[260603]: 2025-10-02 09:10:30.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:31 compute-0 ovn_controller[152344]: 2025-10-02T09:10:31Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:4e:93 10.100.0.3
Oct 02 09:10:31 compute-0 ovn_controller[152344]: 2025-10-02T09:10:31Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:4e:93 10.100.0.3
Oct 02 09:10:31 compute-0 ceph-mon[74477]: pgmap v2771: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct 02 09:10:31 compute-0 nova_compute[260603]: 2025-10-02 09:10:31.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct 02 09:10:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:33 compute-0 ceph-mon[74477]: pgmap v2772: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct 02 09:10:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 02 09:10:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:34.848 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:35 compute-0 ceph-mon[74477]: pgmap v2773: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 02 09:10:35 compute-0 nova_compute[260603]: 2025-10-02 09:10:35.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:10:36 compute-0 nova_compute[260603]: 2025-10-02 09:10:36.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:36 compute-0 nova_compute[260603]: 2025-10-02 09:10:36.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:36 compute-0 nova_compute[260603]: 2025-10-02 09:10:36.992 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 09:10:36 compute-0 nova_compute[260603]: 2025-10-02 09:10:36.992 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 770238ca-0d80-443f-943e-236e0cfb3606 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 09:10:36 compute-0 nova_compute[260603]: 2025-10-02 09:10:36.993 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:36 compute-0 nova_compute[260603]: 2025-10-02 09:10:36.993 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:36 compute-0 nova_compute[260603]: 2025-10-02 09:10:36.994 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:36 compute-0 nova_compute[260603]: 2025-10-02 09:10:36.994 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "770238ca-0d80-443f-943e-236e0cfb3606" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:37 compute-0 sudo[418616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:37 compute-0 sudo[418616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:37 compute-0 sudo[418616]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:37 compute-0 nova_compute[260603]: 2025-10-02 09:10:37.046 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:37 compute-0 nova_compute[260603]: 2025-10-02 09:10:37.047 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "770238ca-0d80-443f-943e-236e0cfb3606" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:37 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:10:37 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:10:37 compute-0 sudo[418641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:10:37 compute-0 sudo[418641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:37 compute-0 sudo[418641]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:37 compute-0 sudo[418667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:37 compute-0 sudo[418667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:37 compute-0 sudo[418667]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:37 compute-0 sudo[418692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 09:10:37 compute-0 sudo[418692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:37 compute-0 ceph-mon[74477]: pgmap v2774: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:10:37 compute-0 sudo[418692]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:10:37 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:10:37 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:37 compute-0 sudo[418738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:37 compute-0 sudo[418738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:37 compute-0 sudo[418738]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:37 compute-0 sudo[418764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:10:37 compute-0 sudo[418764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:37 compute-0 sudo[418764]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:37 compute-0 sudo[418789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:37 compute-0 sudo[418789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:37 compute-0 sudo[418789]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:38 compute-0 sudo[418814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:10:38 compute-0 sudo[418814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 09:10:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:38 compute-0 sudo[418814]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:38 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:38 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:38 compute-0 ceph-mon[74477]: pgmap v2775: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 09:10:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:10:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:10:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:10:38 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:10:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:10:38 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:38 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2b80ba5e-b043-412a-be65-a6d482c2cfe9 does not exist
Oct 02 09:10:38 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b8ac9fa5-69f3-42c0-b1bb-7565312e3f81 does not exist
Oct 02 09:10:38 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5a1f7398-e1da-4548-8e96-05e109d6d0e7 does not exist
Oct 02 09:10:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:10:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:10:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:10:38 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:10:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:10:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:10:38 compute-0 sudo[418870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:38 compute-0 sudo[418870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:38 compute-0 sudo[418870]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:38 compute-0 sudo[418895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:10:38 compute-0 sudo[418895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:38 compute-0 sudo[418895]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:39 compute-0 sudo[418920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:39 compute-0 sudo[418920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:39 compute-0 sudo[418920]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:10:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:10:39 compute-0 sudo[418945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:10:39 compute-0 sudo[418945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:39 compute-0 podman[419013]: 2025-10-02 09:10:39.624888587 +0000 UTC m=+0.068296983 container create 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:10:39 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:10:39 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:10:39 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:39 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:10:39 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:10:39 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:10:39 compute-0 systemd[1]: Started libpod-conmon-2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815.scope.
Oct 02 09:10:39 compute-0 podman[419013]: 2025-10-02 09:10:39.601573262 +0000 UTC m=+0.044981638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:10:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:10:39 compute-0 podman[419013]: 2025-10-02 09:10:39.736416573 +0000 UTC m=+0.179824949 container init 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:10:39 compute-0 podman[419013]: 2025-10-02 09:10:39.745153235 +0000 UTC m=+0.188561611 container start 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:10:39 compute-0 podman[419013]: 2025-10-02 09:10:39.748693704 +0000 UTC m=+0.192102100 container attach 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:10:39 compute-0 great_mclaren[419029]: 167 167
Oct 02 09:10:39 compute-0 systemd[1]: libpod-2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815.scope: Deactivated successfully.
Oct 02 09:10:39 compute-0 podman[419013]: 2025-10-02 09:10:39.75210378 +0000 UTC m=+0.195512156 container died 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 09:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf5637a8d00e7882c29ecca008609a0675692d13eca5def109f43db085f1cdb-merged.mount: Deactivated successfully.
Oct 02 09:10:39 compute-0 podman[419013]: 2025-10-02 09:10:39.800540625 +0000 UTC m=+0.243948991 container remove 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:10:39 compute-0 systemd[1]: libpod-conmon-2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815.scope: Deactivated successfully.
Oct 02 09:10:40 compute-0 podman[419052]: 2025-10-02 09:10:40.046931173 +0000 UTC m=+0.061887015 container create c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:10:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:10:40 compute-0 systemd[1]: Started libpod-conmon-c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09.scope.
Oct 02 09:10:40 compute-0 podman[419052]: 2025-10-02 09:10:40.023445143 +0000 UTC m=+0.038400975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:10:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:40 compute-0 podman[419052]: 2025-10-02 09:10:40.143376989 +0000 UTC m=+0.158332791 container init c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 09:10:40 compute-0 podman[419052]: 2025-10-02 09:10:40.160587774 +0000 UTC m=+0.175543576 container start c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:10:40 compute-0 podman[419052]: 2025-10-02 09:10:40.165148235 +0000 UTC m=+0.180104087 container attach c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 09:10:40 compute-0 nova_compute[260603]: 2025-10-02 09:10:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:40 compute-0 nova_compute[260603]: 2025-10-02 09:10:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:10:40 compute-0 nova_compute[260603]: 2025-10-02 09:10:40.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:40 compute-0 ceph-mon[74477]: pgmap v2776: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:10:41 compute-0 nice_kalam[419069]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:10:41 compute-0 nice_kalam[419069]: --> relative data size: 1.0
Oct 02 09:10:41 compute-0 nice_kalam[419069]: --> All data devices are unavailable
Oct 02 09:10:41 compute-0 systemd[1]: libpod-c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09.scope: Deactivated successfully.
Oct 02 09:10:41 compute-0 podman[419052]: 2025-10-02 09:10:41.384947572 +0000 UTC m=+1.399903384 container died c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:10:41 compute-0 systemd[1]: libpod-c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09.scope: Consumed 1.159s CPU time.
Oct 02 09:10:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076-merged.mount: Deactivated successfully.
Oct 02 09:10:41 compute-0 podman[419052]: 2025-10-02 09:10:41.446200495 +0000 UTC m=+1.461156297 container remove c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:10:41 compute-0 systemd[1]: libpod-conmon-c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09.scope: Deactivated successfully.
Oct 02 09:10:41 compute-0 sudo[418945]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:41 compute-0 nova_compute[260603]: 2025-10-02 09:10:41.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:41 compute-0 sudo[419111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:41 compute-0 sudo[419111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:41 compute-0 sudo[419111]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:41 compute-0 sudo[419136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:10:41 compute-0 sudo[419136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:41 compute-0 sudo[419136]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:41 compute-0 sudo[419161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:41 compute-0 sudo[419161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:41 compute-0 sudo[419161]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:41 compute-0 sudo[419186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:10:41 compute-0 sudo[419186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:41 compute-0 nova_compute[260603]: 2025-10-02 09:10:41.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:41.903 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:10:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:41.905 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:10:41 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:41.905 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.251 2 DEBUG nova.compute.manager [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.251 2 DEBUG nova.compute.manager [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing instance network info cache due to event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.252 2 DEBUG oslo_concurrency.lockutils [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.252 2 DEBUG oslo_concurrency.lockutils [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.252 2 DEBUG nova.network.neutron [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:10:42 compute-0 podman[419252]: 2025-10-02 09:10:42.267009473 +0000 UTC m=+0.045769743 container create 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 09:10:42 compute-0 systemd[1]: Started libpod-conmon-33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96.scope.
Oct 02 09:10:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:10:42 compute-0 podman[419252]: 2025-10-02 09:10:42.24472021 +0000 UTC m=+0.023480530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:10:42 compute-0 podman[419252]: 2025-10-02 09:10:42.340183927 +0000 UTC m=+0.118944257 container init 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 09:10:42 compute-0 podman[419252]: 2025-10-02 09:10:42.348355051 +0000 UTC m=+0.127115321 container start 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:10:42 compute-0 podman[419252]: 2025-10-02 09:10:42.352226501 +0000 UTC m=+0.130986811 container attach 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:10:42 compute-0 inspiring_boyd[419268]: 167 167
Oct 02 09:10:42 compute-0 systemd[1]: libpod-33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96.scope: Deactivated successfully.
Oct 02 09:10:42 compute-0 podman[419252]: 2025-10-02 09:10:42.356509664 +0000 UTC m=+0.135269964 container died 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.361 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.362 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.362 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.362 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.362 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.363 2 INFO nova.compute.manager [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Terminating instance
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.364 2 DEBUG nova.compute.manager [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-41df3c6436e63e14b3f6f1bc5fb0222c0b9cfd8076995751a3a500c1d19148fc-merged.mount: Deactivated successfully.
Oct 02 09:10:42 compute-0 podman[419252]: 2025-10-02 09:10:42.412572206 +0000 UTC m=+0.191332476 container remove 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:10:42 compute-0 systemd[1]: libpod-conmon-33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96.scope: Deactivated successfully.
Oct 02 09:10:42 compute-0 kernel: tap451aa4f7-0b (unregistering): left promiscuous mode
Oct 02 09:10:42 compute-0 NetworkManager[45129]: <info>  [1759396242.4347] device (tap451aa4f7-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 ovn_controller[152344]: 2025-10-02T09:10:42Z|01560|binding|INFO|Releasing lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc from this chassis (sb_readonly=0)
Oct 02 09:10:42 compute-0 ovn_controller[152344]: 2025-10-02T09:10:42Z|01561|binding|INFO|Setting lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc down in Southbound
Oct 02 09:10:42 compute-0 ovn_controller[152344]: 2025-10-02T09:10:42Z|01562|binding|INFO|Removing iface tap451aa4f7-0b ovn-installed in OVS
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 kernel: tap9e457fab-8f (unregistering): left promiscuous mode
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.472 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:4e:93 10.100.0.3'], port_security=['fa:16:3e:a9:4e:93 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '770238ca-0d80-443f-943e-236e0cfb3606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68558f02-4047-4331-a47e-cbbee9580ea4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:10:42 compute-0 NetworkManager[45129]: <info>  [1759396242.4740] device (tap9e457fab-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.474 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc in datapath d7dd05b8-70c0-4ef8-a410-57d83c307eaa unbound from our chassis
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.475 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7dd05b8-70c0-4ef8-a410-57d83c307eaa
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 ovn_controller[152344]: 2025-10-02T09:10:42Z|01563|binding|INFO|Releasing lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 from this chassis (sb_readonly=0)
Oct 02 09:10:42 compute-0 ovn_controller[152344]: 2025-10-02T09:10:42Z|01564|binding|INFO|Setting lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 down in Southbound
Oct 02 09:10:42 compute-0 ovn_controller[152344]: 2025-10-02T09:10:42Z|01565|binding|INFO|Removing iface tap9e457fab-8f ovn-installed in OVS
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.495 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9d19acec-d5ec-492b-a8fa-8ebfa89a8737]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.508 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64'], port_security=['fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3d:e64/64 2001:db8::f816:3eff:fe3d:e64/64', 'neutron:device_id': '770238ca-0d80-443f-943e-236e0cfb3606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9e457fab-8f77-47eb-a2dc-fa212b72ab38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.526 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c683b5dc-8f9c-4e7e-8dd2-7b88f49c76da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.529 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[90638446-a70d-44a3-a345-140e1100f162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct 02 09:10:42 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Consumed 13.335s CPU time.
Oct 02 09:10:42 compute-0 systemd-machined[214636]: Machine qemu-176-instance-0000008e terminated.
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.557 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e55e3a2c-600e-4ce3-8d04-4b2c565d03c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.576 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e884076-1e1f-4cdd-8369-e85549cd90a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7dd05b8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:04:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705935, 'reachable_time': 20541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419310, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 NetworkManager[45129]: <info>  [1759396242.5988] manager: (tap9e457fab-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/633)
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.598 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d34453a2-3d67-43e8-ad79-5bbf7d09bc4f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd7dd05b8-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705950, 'tstamp': 705950}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419318, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7dd05b8-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705955, 'tstamp': 705955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419318, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7dd05b8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7dd05b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7dd05b8-70, col_values=(('external_ids', {'iface-id': '93ded116-ee2f-4f81-a2c6-257136c86ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.618 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.619 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9e457fab-8f77-47eb-a2dc-fa212b72ab38 in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 unbound from our chassis
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.619 2 INFO nova.virt.libvirt.driver [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance destroyed successfully.
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.620 2 DEBUG nova.objects.instance [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 770238ca-0d80-443f-943e-236e0cfb3606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.620 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d9c157f-cf57-4b44-8fba-d16631e22418
Oct 02 09:10:42 compute-0 podman[419308]: 2025-10-02 09:10:42.625790092 +0000 UTC m=+0.053004348 container create 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.639 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7499a1-a0f5-4c9f-945f-5763ae90212a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.652 2 DEBUG nova.virt.libvirt.vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:10:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:10:18Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.653 2 DEBUG nova.network.os_vif_util [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.653 2 DEBUG nova.network.os_vif_util [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.653 2 DEBUG os_vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap451aa4f7-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 systemd[1]: Started libpod-conmon-7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d.scope.
Oct 02 09:10:42 compute-0 podman[419308]: 2025-10-02 09:10:42.602808728 +0000 UTC m=+0.030023004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.697 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cb06324c-f573-4242-aea0-df9e89f54bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.700 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[88cac130-b9e7-4111-9f6e-f16b7ce7cc35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.704 2 INFO os_vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b')
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.704 2 DEBUG nova.virt.libvirt.vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:10:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:10:18Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.705 2 DEBUG nova.network.os_vif_util [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.705 2 DEBUG nova.network.os_vif_util [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.706 2 DEBUG os_vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e457fab-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.710 2 INFO os_vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f')
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.727 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f004a05e-ef8c-4808-8f18-77f4f0e5ef1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:42 compute-0 podman[419308]: 2025-10-02 09:10:42.74736202 +0000 UTC m=+0.174576306 container init 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce8f1465-e950-4eba-9a46-57a09088dd9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d9c157f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:38:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706051, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419369, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 podman[419308]: 2025-10-02 09:10:42.756637218 +0000 UTC m=+0.183851474 container start 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:10:42 compute-0 podman[419308]: 2025-10-02 09:10:42.759944561 +0000 UTC m=+0.187158817 container attach 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.759 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5014af9-772c-4fa2-a7aa-208695d62d07]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d9c157f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706068, 'tstamp': 706068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419371, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.763 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c157f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.765 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d9c157f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d9c157f-c0, col_values=(('external_ids', {'iface-id': '136a7ea2-2365-4779-b31d-41cbfc52a20f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.795 2 DEBUG nova.compute.manager [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-unplugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG oslo_concurrency.lockutils [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG oslo_concurrency.lockutils [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG oslo_concurrency.lockutils [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG nova.compute.manager [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-unplugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:42 compute-0 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG nova.compute.manager [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-unplugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.020 2 INFO nova.virt.libvirt.driver [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Deleting instance files /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606_del
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.021 2 INFO nova.virt.libvirt.driver [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Deletion of /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606_del complete
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.088 2 INFO nova.compute.manager [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.089 2 DEBUG oslo.service.loopingcall [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.089 2 DEBUG nova.compute.manager [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.090 2 DEBUG nova.network.neutron [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:10:43 compute-0 ceph-mon[74477]: pgmap v2777: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:10:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:43 compute-0 exciting_tesla[419348]: {
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:     "0": [
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:         {
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "devices": [
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "/dev/loop3"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             ],
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_name": "ceph_lv0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_size": "21470642176",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "name": "ceph_lv0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "tags": {
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cluster_name": "ceph",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.crush_device_class": "",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.encrypted": "0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osd_id": "0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.type": "block",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.vdo": "0"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             },
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "type": "block",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "vg_name": "ceph_vg0"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:         }
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:     ],
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:     "1": [
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:         {
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "devices": [
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "/dev/loop4"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             ],
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_name": "ceph_lv1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_size": "21470642176",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "name": "ceph_lv1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "tags": {
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cluster_name": "ceph",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.crush_device_class": "",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.encrypted": "0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osd_id": "1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.type": "block",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.vdo": "0"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             },
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "type": "block",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "vg_name": "ceph_vg1"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:         }
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:     ],
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:     "2": [
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:         {
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "devices": [
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "/dev/loop5"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             ],
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_name": "ceph_lv2",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_size": "21470642176",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "name": "ceph_lv2",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "tags": {
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.cluster_name": "ceph",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.crush_device_class": "",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.encrypted": "0",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osd_id": "2",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.type": "block",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:                 "ceph.vdo": "0"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             },
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "type": "block",
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:             "vg_name": "ceph_vg2"
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:         }
Oct 02 09:10:43 compute-0 exciting_tesla[419348]:     ]
Oct 02 09:10:43 compute-0 exciting_tesla[419348]: }
Oct 02 09:10:43 compute-0 systemd[1]: libpod-7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d.scope: Deactivated successfully.
Oct 02 09:10:43 compute-0 podman[419308]: 2025-10-02 09:10:43.530573099 +0000 UTC m=+0.957787395 container died 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea-merged.mount: Deactivated successfully.
Oct 02 09:10:43 compute-0 podman[419308]: 2025-10-02 09:10:43.596716234 +0000 UTC m=+1.023930490 container remove 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 09:10:43 compute-0 systemd[1]: libpod-conmon-7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d.scope: Deactivated successfully.
Oct 02 09:10:43 compute-0 sudo[419186]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.650 2 DEBUG nova.network.neutron [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updated VIF entry in instance network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.651 2 DEBUG nova.network.neutron [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:43 compute-0 nova_compute[260603]: 2025-10-02 09:10:43.696 2 DEBUG oslo_concurrency.lockutils [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:10:43 compute-0 sudo[419392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:43 compute-0 sudo[419392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:43 compute-0 sudo[419392]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:43 compute-0 sudo[419417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:10:43 compute-0 sudo[419417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:43 compute-0 sudo[419417]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:43 compute-0 sudo[419442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:43 compute-0 sudo[419442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:43 compute-0 sudo[419442]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:43 compute-0 sudo[419467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:10:43 compute-0 sudo[419467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 178 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 09:10:44 compute-0 podman[419532]: 2025-10-02 09:10:44.333250863 +0000 UTC m=+0.069737188 container create f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 09:10:44 compute-0 podman[419532]: 2025-10-02 09:10:44.287626575 +0000 UTC m=+0.024112930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:10:44 compute-0 systemd[1]: Started libpod-conmon-f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697.scope.
Oct 02 09:10:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:10:44 compute-0 podman[419532]: 2025-10-02 09:10:44.463619864 +0000 UTC m=+0.200106189 container init f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:10:44 compute-0 podman[419532]: 2025-10-02 09:10:44.473033857 +0000 UTC m=+0.209520182 container start f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:10:44 compute-0 blissful_tesla[419548]: 167 167
Oct 02 09:10:44 compute-0 podman[419532]: 2025-10-02 09:10:44.480790218 +0000 UTC m=+0.217276573 container attach f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:10:44 compute-0 systemd[1]: libpod-f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697.scope: Deactivated successfully.
Oct 02 09:10:44 compute-0 podman[419532]: 2025-10-02 09:10:44.482856022 +0000 UTC m=+0.219342337 container died f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:10:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-120eed7c678a060ac9f6bbec7431a29e50552716ea8f82bc65f97a8a9a5335bb-merged.mount: Deactivated successfully.
Oct 02 09:10:44 compute-0 podman[419532]: 2025-10-02 09:10:44.554583511 +0000 UTC m=+0.291069816 container remove f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:10:44 compute-0 systemd[1]: libpod-conmon-f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697.scope: Deactivated successfully.
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.712 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-unplugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.713 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.714 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.714 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.714 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-unplugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.714 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-unplugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.715 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.715 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.715 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.715 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.716 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.716 2 WARNING nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received unexpected event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc for instance with vm_state active and task_state deleting.
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.775 2 DEBUG nova.network.neutron [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.805 2 INFO nova.compute.manager [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Took 1.72 seconds to deallocate network for instance.
Oct 02 09:10:44 compute-0 podman[419574]: 2025-10-02 09:10:44.72544264 +0000 UTC m=+0.026620008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:10:44 compute-0 podman[419574]: 2025-10-02 09:10:44.844702106 +0000 UTC m=+0.145879494 container create cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.860 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.861 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.906 2 DEBUG nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.906 2 DEBUG oslo_concurrency.lockutils [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.907 2 DEBUG oslo_concurrency.lockutils [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.907 2 DEBUG oslo_concurrency.lockutils [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.908 2 DEBUG nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.908 2 WARNING nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received unexpected event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 for instance with vm_state deleted and task_state None.
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.909 2 DEBUG nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-deleted-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.910 2 DEBUG nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-deleted-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:44 compute-0 systemd[1]: Started libpod-conmon-cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b.scope.
Oct 02 09:10:44 compute-0 nova_compute[260603]: 2025-10-02 09:10:44.954 2 DEBUG oslo_concurrency.processutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:10:45 compute-0 podman[419574]: 2025-10-02 09:10:45.001715256 +0000 UTC m=+0.302892684 container init cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:10:45 compute-0 podman[419574]: 2025-10-02 09:10:45.013795961 +0000 UTC m=+0.314973339 container start cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:10:45 compute-0 podman[419574]: 2025-10-02 09:10:45.052500633 +0000 UTC m=+0.353678011 container attach cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:10:45 compute-0 ceph-mon[74477]: pgmap v2778: 305 pgs: 305 active+clean; 178 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 09:10:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:10:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241820592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:45 compute-0 nova_compute[260603]: 2025-10-02 09:10:45.430 2 DEBUG oslo_concurrency.processutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:45 compute-0 nova_compute[260603]: 2025-10-02 09:10:45.438 2 DEBUG nova.compute.provider_tree [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:10:45 compute-0 nova_compute[260603]: 2025-10-02 09:10:45.457 2 DEBUG nova.scheduler.client.report [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:10:45 compute-0 nova_compute[260603]: 2025-10-02 09:10:45.483 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:45 compute-0 nova_compute[260603]: 2025-10-02 09:10:45.523 2 INFO nova.scheduler.client.report [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 770238ca-0d80-443f-943e-236e0cfb3606
Oct 02 09:10:45 compute-0 nova_compute[260603]: 2025-10-02 09:10:45.634 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:46 compute-0 funny_buck[419591]: {
Oct 02 09:10:46 compute-0 funny_buck[419591]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "osd_id": 2,
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "type": "bluestore"
Oct 02 09:10:46 compute-0 funny_buck[419591]:     },
Oct 02 09:10:46 compute-0 funny_buck[419591]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "osd_id": 1,
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "type": "bluestore"
Oct 02 09:10:46 compute-0 funny_buck[419591]:     },
Oct 02 09:10:46 compute-0 funny_buck[419591]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "osd_id": 0,
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:10:46 compute-0 funny_buck[419591]:         "type": "bluestore"
Oct 02 09:10:46 compute-0 funny_buck[419591]:     }
Oct 02 09:10:46 compute-0 funny_buck[419591]: }
Oct 02 09:10:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 178 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 11 KiB/s wr, 11 op/s
Oct 02 09:10:46 compute-0 systemd[1]: libpod-cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b.scope: Deactivated successfully.
Oct 02 09:10:46 compute-0 podman[419574]: 2025-10-02 09:10:46.121946997 +0000 UTC m=+1.423124345 container died cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 09:10:46 compute-0 systemd[1]: libpod-cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b.scope: Consumed 1.109s CPU time.
Oct 02 09:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021-merged.mount: Deactivated successfully.
Oct 02 09:10:46 compute-0 podman[419574]: 2025-10-02 09:10:46.195818694 +0000 UTC m=+1.496996042 container remove cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 09:10:46 compute-0 systemd[1]: libpod-conmon-cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b.scope: Deactivated successfully.
Oct 02 09:10:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2241820592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:46 compute-0 sudo[419467]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:10:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:10:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:46 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0c4213cd-9bbc-4461-bc5e-c44d27113d66 does not exist
Oct 02 09:10:46 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 09962792-687d-44c5-851b-828f16556e74 does not exist
Oct 02 09:10:46 compute-0 sudo[419660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:10:46 compute-0 sudo[419660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:46 compute-0 sudo[419660]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:46 compute-0 sudo[419685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:10:46 compute-0 sudo[419685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:10:46 compute-0 sudo[419685]: pam_unix(sudo:session): session closed for user root
Oct 02 09:10:46 compute-0 nova_compute[260603]: 2025-10-02 09:10:46.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:47 compute-0 ceph-mon[74477]: pgmap v2779: 305 pgs: 305 active+clean; 178 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 11 KiB/s wr, 11 op/s
Oct 02 09:10:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:10:47 compute-0 nova_compute[260603]: 2025-10-02 09:10:47.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.180 2 DEBUG nova.compute.manager [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.180 2 DEBUG nova.compute.manager [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing instance network info cache due to event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.180 2 DEBUG oslo_concurrency.lockutils [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.181 2 DEBUG oslo_concurrency.lockutils [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.181 2 DEBUG nova.network.neutron [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:10:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.507 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.508 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.508 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.509 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.509 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.510 2 INFO nova.compute.manager [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Terminating instance
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.511 2 DEBUG nova.compute.manager [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:10:48 compute-0 kernel: tap2b42cc6e-a1 (unregistering): left promiscuous mode
Oct 02 09:10:48 compute-0 NetworkManager[45129]: <info>  [1759396248.5727] device (tap2b42cc6e-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 ovn_controller[152344]: 2025-10-02T09:10:48Z|01566|binding|INFO|Releasing lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b from this chassis (sb_readonly=0)
Oct 02 09:10:48 compute-0 ovn_controller[152344]: 2025-10-02T09:10:48Z|01567|binding|INFO|Setting lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b down in Southbound
Oct 02 09:10:48 compute-0 ovn_controller[152344]: 2025-10-02T09:10:48Z|01568|binding|INFO|Removing iface tap2b42cc6e-a1 ovn-installed in OVS
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 kernel: tap5bca2e0b-43 (unregistering): left promiscuous mode
Oct 02 09:10:48 compute-0 NetworkManager[45129]: <info>  [1759396248.6339] device (tap5bca2e0b-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 ovn_controller[152344]: 2025-10-02T09:10:48Z|01569|binding|INFO|Releasing lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 from this chassis (sb_readonly=1)
Oct 02 09:10:48 compute-0 ovn_controller[152344]: 2025-10-02T09:10:48Z|01570|binding|INFO|Removing iface tap5bca2e0b-43 ovn-installed in OVS
Oct 02 09:10:48 compute-0 ovn_controller[152344]: 2025-10-02T09:10:48Z|01571|if_status|INFO|Dropped 2 log messages in last 409 seconds (most recently, 409 seconds ago) due to excessive rate
Oct 02 09:10:48 compute-0 ovn_controller[152344]: 2025-10-02T09:10:48Z|01572|if_status|INFO|Not setting lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 down as sb is readonly
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct 02 09:10:48 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Consumed 16.658s CPU time.
Oct 02 09:10:48 compute-0 ovn_controller[152344]: 2025-10-02T09:10:48Z|01573|binding|INFO|Setting lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 down in Southbound
Oct 02 09:10:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.696 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:c0:38 10.100.0.11'], port_security=['fa:16:3e:93:c0:38 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68558f02-4047-4331-a47e-cbbee9580ea4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:10:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.697 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b in datapath d7dd05b8-70c0-4ef8-a410-57d83c307eaa unbound from our chassis
Oct 02 09:10:48 compute-0 systemd-machined[214636]: Machine qemu-175-instance-0000008d terminated.
Oct 02 09:10:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.698 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7dd05b8-70c0-4ef8-a410-57d83c307eaa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:10:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1df2b0a-08ce-4271-925d-f2e783ee9876]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.700 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa namespace which is not needed anymore
Oct 02 09:10:48 compute-0 NetworkManager[45129]: <info>  [1759396248.7341] manager: (tap2b42cc6e-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/634)
Oct 02 09:10:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.734 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1'], port_security=['fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe00:94b1/64 2001:db8::f816:3eff:fe00:94b1/64', 'neutron:device_id': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 NetworkManager[45129]: <info>  [1759396248.7459] manager: (tap5bca2e0b-43): new Tun device (/org/freedesktop/NetworkManager/Devices/635)
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.768 2 INFO nova.virt.libvirt.driver [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance destroyed successfully.
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.769 2 DEBUG nova.objects.instance [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:10:48 compute-0 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [NOTICE]   (417918) : haproxy version is 2.8.14-c23fe91
Oct 02 09:10:48 compute-0 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [NOTICE]   (417918) : path to executable is /usr/sbin/haproxy
Oct 02 09:10:48 compute-0 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [WARNING]  (417918) : Exiting Master process...
Oct 02 09:10:48 compute-0 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [WARNING]  (417918) : Exiting Master process...
Oct 02 09:10:48 compute-0 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [ALERT]    (417918) : Current worker (417920) exited with code 143 (Terminated)
Oct 02 09:10:48 compute-0 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [WARNING]  (417918) : All workers exited. Exiting... (0)
Oct 02 09:10:48 compute-0 systemd[1]: libpod-f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a.scope: Deactivated successfully.
Oct 02 09:10:48 compute-0 podman[419758]: 2025-10-02 09:10:48.838237159 +0000 UTC m=+0.046847897 container died f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.907 2 DEBUG nova.virt.libvirt.vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:09:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:09:42Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.908 2 DEBUG nova.network.os_vif_util [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.909 2 DEBUG nova.network.os_vif_util [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.909 2 DEBUG os_vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b42cc6e-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.919 2 INFO os_vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1')
Oct 02 09:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c85b02860fc626a52f25bc8a62daedd7cc30adfcd5cfe6c239da051b179e05c4-merged.mount: Deactivated successfully.
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.920 2 DEBUG nova.virt.libvirt.vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:09:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:09:42Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a-userdata-shm.mount: Deactivated successfully.
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.920 2 DEBUG nova.network.os_vif_util [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.921 2 DEBUG nova.network.os_vif_util [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.921 2 DEBUG os_vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bca2e0b-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:10:48 compute-0 nova_compute[260603]: 2025-10-02 09:10:48.931 2 INFO os_vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43')
Oct 02 09:10:48 compute-0 podman[419758]: 2025-10-02 09:10:48.945086269 +0000 UTC m=+0.153696997 container cleanup f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 09:10:48 compute-0 systemd[1]: libpod-conmon-f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a.scope: Deactivated successfully.
Oct 02 09:10:49 compute-0 podman[419803]: 2025-10-02 09:10:49.0120231 +0000 UTC m=+0.044085411 container remove f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.020 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d88851-9d6f-4718-ad17-cf0d292bb785]: (4, ('Thu Oct  2 09:10:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa (f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a)\nf7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a\nThu Oct  2 09:10:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa (f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a)\nf7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.022 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8633bd35-b3ca-4a16-8b55-45c05a4d0fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.023 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7dd05b8-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:49 compute-0 kernel: tapd7dd05b8-70: left promiscuous mode
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.039 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07b76bad-6585-4f6b-a7c5-2a3102431ac0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.071 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6aa3972-dbdb-4ec1-812d-d3b083dde394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.072 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30335c91-e493-4b59-a397-93471c5a2348]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.088 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddba37bb-4d1d-4595-bfb4-a3c8f1de3979]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705926, 'reachable_time': 25298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419839, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.090 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.090 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[505898d4-2427-4b26-be23-66190386b49b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.091 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 unbound from our chassis
Oct 02 09:10:49 compute-0 systemd[1]: run-netns-ovnmeta\x2dd7dd05b8\x2d70c0\x2d4ef8\x2da410\x2d57d83c307eaa.mount: Deactivated successfully.
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.092 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d9c157f-cf57-4b44-8fba-d16631e22418, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.093 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebaf70f-5035-4765-bf8b-23dee384c532]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.093 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 namespace which is not needed anymore
Oct 02 09:10:49 compute-0 podman[419819]: 2025-10-02 09:10:49.127413476 +0000 UTC m=+0.060199043 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 09:10:49 compute-0 podman[419820]: 2025-10-02 09:10:49.150351888 +0000 UTC m=+0.090504643 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:10:49 compute-0 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [NOTICE]   (417992) : haproxy version is 2.8.14-c23fe91
Oct 02 09:10:49 compute-0 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [NOTICE]   (417992) : path to executable is /usr/sbin/haproxy
Oct 02 09:10:49 compute-0 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [WARNING]  (417992) : Exiting Master process...
Oct 02 09:10:49 compute-0 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [ALERT]    (417992) : Current worker (417994) exited with code 143 (Terminated)
Oct 02 09:10:49 compute-0 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [WARNING]  (417992) : All workers exited. Exiting... (0)
Oct 02 09:10:49 compute-0 systemd[1]: libpod-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82.scope: Deactivated successfully.
Oct 02 09:10:49 compute-0 conmon[417988]: conmon 5f0f697c21e73e116db8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82.scope/container/memory.events
Oct 02 09:10:49 compute-0 podman[419882]: 2025-10-02 09:10:49.277987714 +0000 UTC m=+0.101082402 container died 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:10:49 compute-0 ceph-mon[74477]: pgmap v2780: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.387 2 DEBUG nova.compute.manager [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-unplugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.388 2 DEBUG oslo_concurrency.lockutils [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.388 2 DEBUG oslo_concurrency.lockutils [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.388 2 DEBUG oslo_concurrency.lockutils [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.388 2 DEBUG nova.compute.manager [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-unplugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.389 2 DEBUG nova.compute.manager [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-unplugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82-userdata-shm.mount: Deactivated successfully.
Oct 02 09:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7808816a47e7300fa580de7380ab5c8b3663ac809aa4c663ed97b18b7941cb10-merged.mount: Deactivated successfully.
Oct 02 09:10:49 compute-0 podman[419882]: 2025-10-02 09:10:49.461302612 +0000 UTC m=+0.284397300 container cleanup 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:10:49 compute-0 systemd[1]: libpod-conmon-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82.scope: Deactivated successfully.
Oct 02 09:10:49 compute-0 podman[419911]: 2025-10-02 09:10:49.654503516 +0000 UTC m=+0.173421211 container remove 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.660 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe6aaaf-0e1a-4e88-ac0c-00041594fe6e]: (4, ('Thu Oct  2 09:10:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 (5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82)\n5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82\nThu Oct  2 09:10:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 (5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82)\n5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.662 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6348a6a2-9c54-4799-a429-8e3505f6ce62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.662 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c157f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:10:49 compute-0 kernel: tap6d9c157f-c0: left promiscuous mode
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb53658-9fa2-4fdc-93d3-3a04a2983fb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.753 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37a7fb59-1718-4225-8008-f65bfe8b5c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.754 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7325b538-1c6f-4c85-934e-f90a3f8ee2b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.772 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05991e30-107a-400f-89ef-17e86d3b70c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706042, 'reachable_time': 41988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419929, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.774 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:10:49 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.775 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff7fc31-67b5-4110-b760-4074ee1d7b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.869 2 INFO nova.virt.libvirt.driver [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Deleting instance files /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_del
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.870 2 INFO nova.virt.libvirt.driver [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Deletion of /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_del complete
Oct 02 09:10:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d6d9c157f\x2dcf57\x2d4b44\x2d8fba\x2dd16631e22418.mount: Deactivated successfully.
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.940 2 INFO nova.compute.manager [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Took 1.43 seconds to destroy the instance on the hypervisor.
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.940 2 DEBUG oslo.service.loopingcall [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.941 2 DEBUG nova.compute.manager [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:10:49 compute-0 nova_compute[260603]: 2025-10-02 09:10:49.941 2 DEBUG nova.network.neutron [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:10:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.387 2 DEBUG nova.network.neutron [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updated VIF entry in instance network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.388 2 DEBUG nova.network.neutron [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.422 2 DEBUG oslo_concurrency.lockutils [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.459 2 DEBUG nova.compute.manager [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-unplugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.460 2 DEBUG oslo_concurrency.lockutils [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.460 2 DEBUG oslo_concurrency.lockutils [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.461 2 DEBUG oslo_concurrency.lockutils [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.461 2 DEBUG nova.compute.manager [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-unplugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.462 2 DEBUG nova.compute.manager [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-unplugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.910 2 DEBUG nova.network.neutron [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:10:50 compute-0 nova_compute[260603]: 2025-10-02 09:10:50.944 2 INFO nova.compute.manager [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Took 1.00 seconds to deallocate network for instance.
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.002 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.002 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.069 2 DEBUG oslo_concurrency.processutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:10:51 compute-0 ceph-mon[74477]: pgmap v2781: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 09:10:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:10:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244381091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.489 2 DEBUG oslo_concurrency.processutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.499 2 DEBUG nova.compute.provider_tree [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.509 2 DEBUG nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.510 2 DEBUG oslo_concurrency.lockutils [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.511 2 DEBUG oslo_concurrency.lockutils [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.511 2 DEBUG oslo_concurrency.lockutils [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.512 2 DEBUG nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.513 2 WARNING nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received unexpected event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b for instance with vm_state deleted and task_state None.
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.513 2 DEBUG nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-deleted-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.513 2 DEBUG nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-deleted-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.527 2 DEBUG nova.scheduler.client.report [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.561 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.593 2 INFO nova.scheduler.client.report [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b
Oct 02 09:10:51 compute-0 nova_compute[260603]: 2025-10-02 09:10:51.675 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 09:10:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4244381091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:10:52 compute-0 nova_compute[260603]: 2025-10-02 09:10:52.681 2 DEBUG nova.compute.manager [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:10:52 compute-0 nova_compute[260603]: 2025-10-02 09:10:52.682 2 DEBUG oslo_concurrency.lockutils [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:10:52 compute-0 nova_compute[260603]: 2025-10-02 09:10:52.682 2 DEBUG oslo_concurrency.lockutils [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:10:52 compute-0 nova_compute[260603]: 2025-10-02 09:10:52.682 2 DEBUG oslo_concurrency.lockutils [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:10:52 compute-0 nova_compute[260603]: 2025-10-02 09:10:52.682 2 DEBUG nova.compute.manager [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:10:52 compute-0 nova_compute[260603]: 2025-10-02 09:10:52.683 2 WARNING nova.compute.manager [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received unexpected event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 for instance with vm_state deleted and task_state None.
Oct 02 09:10:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:53 compute-0 ceph-mon[74477]: pgmap v2782: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 09:10:53 compute-0 nova_compute[260603]: 2025-10-02 09:10:53.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:53 compute-0 podman[419952]: 2025-10-02 09:10:53.976483814 +0000 UTC m=+0.049130168 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:10:54 compute-0 podman[419953]: 2025-10-02 09:10:54.013637588 +0000 UTC m=+0.070192593 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 02 09:10:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct 02 09:10:54 compute-0 ceph-mon[74477]: pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct 02 09:10:55 compute-0 nova_compute[260603]: 2025-10-02 09:10:55.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 45 op/s
Oct 02 09:10:56 compute-0 nova_compute[260603]: 2025-10-02 09:10:56.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:57 compute-0 ceph-mon[74477]: pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 45 op/s
Oct 02 09:10:57 compute-0 nova_compute[260603]: 2025-10-02 09:10:57.616 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396242.6146586, 770238ca-0d80-443f-943e-236e0cfb3606 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:10:57 compute-0 nova_compute[260603]: 2025-10-02 09:10:57.617 2 INFO nova.compute.manager [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] VM Stopped (Lifecycle Event)
Oct 02 09:10:57 compute-0 nova_compute[260603]: 2025-10-02 09:10:57.650 2 DEBUG nova.compute.manager [None req-916ee98e-b9e2-42af-8c88-6431c1dce4b8 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:10:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:10:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:10:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:10:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:10:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:10:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:10:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 45 op/s
Oct 02 09:10:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:10:58 compute-0 nova_compute[260603]: 2025-10-02 09:10:58.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:10:59 compute-0 ceph-mon[74477]: pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 45 op/s
Oct 02 09:10:59 compute-0 nova_compute[260603]: 2025-10-02 09:10:59.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:10:59 compute-0 nova_compute[260603]: 2025-10-02 09:10:59.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:10:59 compute-0 nova_compute[260603]: 2025-10-02 09:10:59.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:11:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:11:00 compute-0 nova_compute[260603]: 2025-10-02 09:11:00.461 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:11:01 compute-0 nova_compute[260603]: 2025-10-02 09:11:01.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:01 compute-0 nova_compute[260603]: 2025-10-02 09:11:01.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:01 compute-0 ceph-mon[74477]: pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:11:01 compute-0 nova_compute[260603]: 2025-10-02 09:11:01.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:11:02 compute-0 nova_compute[260603]: 2025-10-02 09:11:02.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:03 compute-0 ceph-mon[74477]: pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:11:03 compute-0 nova_compute[260603]: 2025-10-02 09:11:03.758 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396248.7572656, 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:11:03 compute-0 nova_compute[260603]: 2025-10-02 09:11:03.759 2 INFO nova.compute.manager [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] VM Stopped (Lifecycle Event)
Oct 02 09:11:03 compute-0 nova_compute[260603]: 2025-10-02 09:11:03.784 2 DEBUG nova.compute.manager [None req-a02181e4-682c-4463-94b2-2e98a9a31076 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:11:03 compute-0 nova_compute[260603]: 2025-10-02 09:11:03.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:11:04 compute-0 nova_compute[260603]: 2025-10-02 09:11:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:04 compute-0 ceph-mon[74477]: pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:11:05 compute-0 nova_compute[260603]: 2025-10-02 09:11:05.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:05 compute-0 nova_compute[260603]: 2025-10-02 09:11:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:05 compute-0 nova_compute[260603]: 2025-10-02 09:11:05.576 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:05 compute-0 nova_compute[260603]: 2025-10-02 09:11:05.576 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:05 compute-0 nova_compute[260603]: 2025-10-02 09:11:05.577 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:05 compute-0 nova_compute[260603]: 2025-10-02 09:11:05.577 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:11:05 compute-0 nova_compute[260603]: 2025-10-02 09:11:05.577 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:11:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930059621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:11:06 compute-0 nova_compute[260603]: 2025-10-02 09:11:06.058 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:06 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/930059621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:11:06 compute-0 nova_compute[260603]: 2025-10-02 09:11:06.256 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:11:06 compute-0 nova_compute[260603]: 2025-10-02 09:11:06.257 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3593MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:11:06 compute-0 nova_compute[260603]: 2025-10-02 09:11:06.258 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:06 compute-0 nova_compute[260603]: 2025-10-02 09:11:06.258 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:06 compute-0 nova_compute[260603]: 2025-10-02 09:11:06.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:07 compute-0 ceph-mon[74477]: pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:07 compute-0 nova_compute[260603]: 2025-10-02 09:11:07.698 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:11:07 compute-0 nova_compute[260603]: 2025-10-02 09:11:07.699 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:11:07 compute-0 nova_compute[260603]: 2025-10-02 09:11:07.780 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:11:08 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012869170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:11:08 compute-0 nova_compute[260603]: 2025-10-02 09:11:08.259 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:08 compute-0 nova_compute[260603]: 2025-10-02 09:11:08.268 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:11:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:08 compute-0 nova_compute[260603]: 2025-10-02 09:11:08.301 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:11:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1012869170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:11:08 compute-0 nova_compute[260603]: 2025-10-02 09:11:08.352 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:11:08 compute-0 nova_compute[260603]: 2025-10-02 09:11:08.353 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:08 compute-0 nova_compute[260603]: 2025-10-02 09:11:08.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:09 compute-0 ceph-mon[74477]: pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:11 compute-0 nova_compute[260603]: 2025-10-02 09:11:11.349 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:11 compute-0 ceph-mon[74477]: pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:11 compute-0 nova_compute[260603]: 2025-10-02 09:11:11.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:13 compute-0 ceph-mon[74477]: pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:13 compute-0 nova_compute[260603]: 2025-10-02 09:11:13.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:14 compute-0 nova_compute[260603]: 2025-10-02 09:11:14.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:15 compute-0 sshd-session[420038]: Connection closed by 167.71.248.239 port 34144
Oct 02 09:11:15 compute-0 nova_compute[260603]: 2025-10-02 09:11:15.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:15 compute-0 ceph-mon[74477]: pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:16 compute-0 nova_compute[260603]: 2025-10-02 09:11:16.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:16 compute-0 ceph-mon[74477]: pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:18 compute-0 nova_compute[260603]: 2025-10-02 09:11:18.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:19 compute-0 ceph-mon[74477]: pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:20 compute-0 podman[420040]: 2025-10-02 09:11:20.014885257 +0000 UTC m=+0.069927574 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:11:20 compute-0 podman[420039]: 2025-10-02 09:11:20.053647881 +0000 UTC m=+0.116256953 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:11:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:21 compute-0 ceph-mon[74477]: pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:21 compute-0 nova_compute[260603]: 2025-10-02 09:11:21.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:11:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4269718831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:11:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:11:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4269718831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:11:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4269718831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:11:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4269718831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.693905) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396282693966, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1896, "num_deletes": 251, "total_data_size": 3051073, "memory_usage": 3101312, "flush_reason": "Manual Compaction"}
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396282769297, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2987451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56855, "largest_seqno": 58750, "table_properties": {"data_size": 2978864, "index_size": 5272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17625, "raw_average_key_size": 20, "raw_value_size": 2961754, "raw_average_value_size": 3373, "num_data_blocks": 234, "num_entries": 878, "num_filter_entries": 878, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396083, "oldest_key_time": 1759396083, "file_creation_time": 1759396282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 75482 microseconds, and 12809 cpu microseconds.
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.769383) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2987451 bytes OK
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.769415) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.844637) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.844691) EVENT_LOG_v1 {"time_micros": 1759396282844681, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.844718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3043046, prev total WAL file size 3043046, number of live WAL files 2.
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.846050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2917KB)], [134(9903KB)]
Oct 02 09:11:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396282846123, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13128395, "oldest_snapshot_seqno": -1}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7873 keys, 11409239 bytes, temperature: kUnknown
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283116619, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11409239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11355850, "index_size": 32604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 205181, "raw_average_key_size": 26, "raw_value_size": 11214542, "raw_average_value_size": 1424, "num_data_blocks": 1275, "num_entries": 7873, "num_filter_entries": 7873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.117105) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11409239 bytes
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.149868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.5 rd, 42.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 8387, records dropped: 514 output_compression: NoCompression
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.149930) EVENT_LOG_v1 {"time_micros": 1759396283149907, "job": 82, "event": "compaction_finished", "compaction_time_micros": 270706, "compaction_time_cpu_micros": 52243, "output_level": 6, "num_output_files": 1, "total_output_size": 11409239, "num_input_records": 8387, "num_output_records": 7873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283151259, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283155361, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.845889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.304032) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283304077, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 14332, "memory_usage": 19968, "flush_reason": "Manual Compaction"}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283321134, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 13852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58751, "largest_seqno": 59005, "table_properties": {"data_size": 12099, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396283, "oldest_key_time": 1759396283, "file_creation_time": 1759396283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 17221 microseconds, and 1252 cpu microseconds.
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.321247) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 13852 bytes OK
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.321280) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.342820) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.342854) EVENT_LOG_v1 {"time_micros": 1759396283342843, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.342884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 12321, prev total WAL file size 12321, number of live WAL files 2.
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.343659) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323532' seq:72057594037927935, type:22 .. '6D6772737461740032353033' seq:0, type:0; will stop at (end)
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(13KB)], [137(10MB)]
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283343731, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11423091, "oldest_snapshot_seqno": -1}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7624 keys, 8127621 bytes, temperature: kUnknown
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283432146, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8127621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8080822, "index_size": 26654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19077, "raw_key_size": 200199, "raw_average_key_size": 26, "raw_value_size": 7948729, "raw_average_value_size": 1042, "num_data_blocks": 1026, "num_entries": 7624, "num_filter_entries": 7624, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.432504) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8127621 bytes
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.467926) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.1 rd, 91.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(1411.4) write-amplify(586.7) OK, records in: 8128, records dropped: 504 output_compression: NoCompression
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.467980) EVENT_LOG_v1 {"time_micros": 1759396283467958, "job": 84, "event": "compaction_finished", "compaction_time_micros": 88499, "compaction_time_cpu_micros": 33205, "output_level": 6, "num_output_files": 1, "total_output_size": 8127621, "num_input_records": 8128, "num_output_records": 7624, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:11:23 compute-0 ceph-mon[74477]: pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283468255, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283471764, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.343463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:11:23 compute-0 nova_compute[260603]: 2025-10-02 09:11:23.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:25 compute-0 podman[420085]: 2025-10-02 09:11:25.020596924 +0000 UTC m=+0.092006560 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:11:25 compute-0 podman[420086]: 2025-10-02 09:11:25.034424743 +0000 UTC m=+0.093361341 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:11:25 compute-0 ceph-mon[74477]: pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:26.029 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:74:d6 2001:db8:0:1:f816:3eff:fe4b:74d6 2001:db8::f816:3eff:fe4b:74d6'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe4b:74d6/64 2001:db8::f816:3eff:fe4b:74d6/64', 'neutron:device_id': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=be1c87e3-582f-4bbb-a5fb-4fb837b7e882) old=Port_Binding(mac=['fa:16:3e:4b:74:d6 2001:db8::f816:3eff:fe4b:74d6'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4b:74d6/64', 'neutron:device_id': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:11:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:26.031 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port be1c87e3-582f-4bbb-a5fb-4fb837b7e882 in datapath 5d64d879-42c6-456c-a212-df00bf998997 updated
Oct 02 09:11:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:26.032 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d64d879-42c6-456c-a212-df00bf998997, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:11:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:26.034 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd6677b-8835-4443-a650-eb58fe6e96cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:26 compute-0 nova_compute[260603]: 2025-10-02 09:11:26.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:26 compute-0 ceph-mon[74477]: pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:11:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:11:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:11:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:11:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:11:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:11:28
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['images', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups']
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:11:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:11:29 compute-0 nova_compute[260603]: 2025-10-02 09:11:29.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:29 compute-0 ceph-mon[74477]: pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:31 compute-0 ceph-mon[74477]: pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:31 compute-0 nova_compute[260603]: 2025-10-02 09:11:31.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:32 compute-0 ceph-mon[74477]: pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:34 compute-0 nova_compute[260603]: 2025-10-02 09:11:34.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:35 compute-0 ceph-mon[74477]: pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:36 compute-0 nova_compute[260603]: 2025-10-02 09:11:36.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:36 compute-0 nova_compute[260603]: 2025-10-02 09:11:36.764 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:36 compute-0 nova_compute[260603]: 2025-10-02 09:11:36.765 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:36 compute-0 nova_compute[260603]: 2025-10-02 09:11:36.791 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:11:36 compute-0 nova_compute[260603]: 2025-10-02 09:11:36.922 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:36 compute-0 nova_compute[260603]: 2025-10-02 09:11:36.923 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:36 compute-0 nova_compute[260603]: 2025-10-02 09:11:36.930 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:11:36 compute-0 nova_compute[260603]: 2025-10-02 09:11:36.931 2 INFO nova.compute.claims [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.083 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:37 compute-0 ceph-mon[74477]: pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:11:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444054755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.622 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.630 2 DEBUG nova.compute.provider_tree [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.666 2 DEBUG nova.scheduler.client.report [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.728 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.730 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.822 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.823 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.877 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:11:37 compute-0 nova_compute[260603]: 2025-10-02 09:11:37.961 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:11:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.125 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.127 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.128 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Creating image(s)
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.166 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.199 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.230 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.239 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.336 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.337 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.338 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.339 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1444054755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.387 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:11:38 compute-0 nova_compute[260603]: 2025-10-02 09:11:38.391 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:39 compute-0 nova_compute[260603]: 2025-10-02 09:11:39.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:11:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:11:39 compute-0 ceph-mon[74477]: pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:39 compute-0 nova_compute[260603]: 2025-10-02 09:11:39.695 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:39 compute-0 nova_compute[260603]: 2025-10-02 09:11:39.789 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:11:39 compute-0 nova_compute[260603]: 2025-10-02 09:11:39.992 2 DEBUG nova.policy [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:11:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:40 compute-0 nova_compute[260603]: 2025-10-02 09:11:40.503 2 DEBUG nova.objects.instance [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:11:40 compute-0 nova_compute[260603]: 2025-10-02 09:11:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:40 compute-0 nova_compute[260603]: 2025-10-02 09:11:40.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:11:40 compute-0 nova_compute[260603]: 2025-10-02 09:11:40.530 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:11:40 compute-0 nova_compute[260603]: 2025-10-02 09:11:40.530 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Ensure instance console log exists: /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:11:40 compute-0 nova_compute[260603]: 2025-10-02 09:11:40.531 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:40 compute-0 nova_compute[260603]: 2025-10-02 09:11:40.532 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:40 compute-0 nova_compute[260603]: 2025-10-02 09:11:40.533 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:40 compute-0 ceph-mon[74477]: pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:41 compute-0 nova_compute[260603]: 2025-10-02 09:11:41.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:42 compute-0 nova_compute[260603]: 2025-10-02 09:11:42.751 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Successfully created port: 102411d5-80b6-47af-9293-08b07c65d541 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:11:43 compute-0 ceph-mon[74477]: pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:11:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:44.089 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:11:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:44.090 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:11:44 compute-0 nova_compute[260603]: 2025-10-02 09:11:44.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:44 compute-0 nova_compute[260603]: 2025-10-02 09:11:44.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:44 compute-0 nova_compute[260603]: 2025-10-02 09:11:44.177 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Successfully created port: d5b6295d-90a7-4d25-be69-ccd7a58621c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.039 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Successfully updated port: 102411d5-80b6-47af-9293-08b07c65d541 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.136 2 DEBUG nova.compute.manager [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-changed-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.136 2 DEBUG nova.compute.manager [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing instance network info cache due to event network-changed-102411d5-80b6-47af-9293-08b07c65d541. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.137 2 DEBUG oslo_concurrency.lockutils [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.137 2 DEBUG oslo_concurrency.lockutils [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.138 2 DEBUG nova.network.neutron [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing network info cache for port 102411d5-80b6-47af-9293-08b07c65d541 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.293 2 DEBUG nova.network.neutron [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:11:45 compute-0 ceph-mon[74477]: pgmap v2808: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.880 2 DEBUG nova.network.neutron [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:11:45 compute-0 nova_compute[260603]: 2025-10-02 09:11:45.903 2 DEBUG oslo_concurrency.lockutils [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:11:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:46 compute-0 nova_compute[260603]: 2025-10-02 09:11:46.255 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Successfully updated port: d5b6295d-90a7-4d25-be69-ccd7a58621c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:11:46 compute-0 nova_compute[260603]: 2025-10-02 09:11:46.281 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:11:46 compute-0 nova_compute[260603]: 2025-10-02 09:11:46.281 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:11:46 compute-0 nova_compute[260603]: 2025-10-02 09:11:46.281 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:11:46 compute-0 nova_compute[260603]: 2025-10-02 09:11:46.431 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:11:46 compute-0 sudo[420312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:46 compute-0 sudo[420312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:46 compute-0 sudo[420312]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:46 compute-0 nova_compute[260603]: 2025-10-02 09:11:46.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:46 compute-0 sudo[420337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:11:46 compute-0 sudo[420337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:46 compute-0 sudo[420337]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:46 compute-0 sudo[420362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:46 compute-0 sudo[420362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:46 compute-0 sudo[420362]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:46 compute-0 sudo[420387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 09:11:46 compute-0 sudo[420387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:47 compute-0 nova_compute[260603]: 2025-10-02 09:11:47.281 2 DEBUG nova.compute.manager [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-changed-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:11:47 compute-0 nova_compute[260603]: 2025-10-02 09:11:47.282 2 DEBUG nova.compute.manager [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing instance network info cache due to event network-changed-d5b6295d-90a7-4d25-be69-ccd7a58621c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:11:47 compute-0 nova_compute[260603]: 2025-10-02 09:11:47.282 2 DEBUG oslo_concurrency.lockutils [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:11:47 compute-0 podman[420486]: 2025-10-02 09:11:47.408729436 +0000 UTC m=+0.111208047 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:11:47 compute-0 ceph-mon[74477]: pgmap v2809: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:47 compute-0 podman[420486]: 2025-10-02 09:11:47.505130111 +0000 UTC m=+0.207608672 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:11:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:48 compute-0 sudo[420387]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:11:48 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:11:48 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:48 compute-0 sudo[420644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:48 compute-0 sudo[420644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:48 compute-0 sudo[420644]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:48 compute-0 sudo[420669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:11:48 compute-0 sudo[420669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:48 compute-0 sudo[420669]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:48 compute-0 sudo[420694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:48 compute-0 sudo[420694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:48 compute-0 sudo[420694]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:48 compute-0 sudo[420719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:11:48 compute-0 sudo[420719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:49 compute-0 ovn_controller[152344]: 2025-10-02T09:11:49Z|01574|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:49 compute-0 sudo[420719]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 09:11:49 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:11:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:11:49 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:11:49 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:49 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9eb06afc-c243-473a-8e5b-f054bf6c2878 does not exist
Oct 02 09:11:49 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b0af0d07-b722-40ce-98d5-3b49561aeef7 does not exist
Oct 02 09:11:49 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9c9c9cf9-e465-4633-8429-51406cb77170 does not exist
Oct 02 09:11:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:11:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:11:49 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:11:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.306 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:11:49 compute-0 sudo[420775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:49 compute-0 sudo[420775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:49 compute-0 sudo[420775]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:49 compute-0 ceph-mon[74477]: pgmap v2810: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:11:49 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:11:49 compute-0 sudo[420800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:11:49 compute-0 sudo[420800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:49 compute-0 sudo[420800]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:49 compute-0 sudo[420825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:49 compute-0 sudo[420825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:49 compute-0 sudo[420825]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:49 compute-0 sudo[420850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:11:49 compute-0 sudo[420850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.640 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.641 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance network_info: |[{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.641 2 DEBUG oslo_concurrency.lockutils [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.641 2 DEBUG nova.network.neutron [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing network info cache for port d5b6295d-90a7-4d25-be69-ccd7a58621c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.644 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start _get_guest_xml network_info=[{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.650 2 WARNING nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.658 2 DEBUG nova.virt.libvirt.host [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.659 2 DEBUG nova.virt.libvirt.host [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.662 2 DEBUG nova.virt.libvirt.host [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.663 2 DEBUG nova.virt.libvirt.host [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.663 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.664 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.664 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.664 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.665 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.665 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.665 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.665 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.666 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.666 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.666 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.666 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:11:49 compute-0 nova_compute[260603]: 2025-10-02 09:11:49.671 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:49 compute-0 podman[420917]: 2025-10-02 09:11:49.884273415 +0000 UTC m=+0.090411420 container create 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:11:49 compute-0 podman[420917]: 2025-10-02 09:11:49.814090934 +0000 UTC m=+0.020229029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:11:49 compute-0 systemd[1]: Started libpod-conmon-19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952.scope.
Oct 02 09:11:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:11:49 compute-0 podman[420917]: 2025-10-02 09:11:49.984394136 +0000 UTC m=+0.190532191 container init 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 09:11:49 compute-0 podman[420917]: 2025-10-02 09:11:49.995299625 +0000 UTC m=+0.201437630 container start 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:11:49 compute-0 podman[420917]: 2025-10-02 09:11:49.998891747 +0000 UTC m=+0.205029792 container attach 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 09:11:50 compute-0 silly_swanson[420953]: 167 167
Oct 02 09:11:50 compute-0 systemd[1]: libpod-19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952.scope: Deactivated successfully.
Oct 02 09:11:50 compute-0 conmon[420953]: conmon 19f3e03c0ed665425233 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952.scope/container/memory.events
Oct 02 09:11:50 compute-0 podman[420917]: 2025-10-02 09:11:50.007230616 +0000 UTC m=+0.213368641 container died 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:11:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-815c9ad102a4a7abe644cd4dd988d467c1322bbae353039ede8c93707ec63c30-merged.mount: Deactivated successfully.
Oct 02 09:11:50 compute-0 podman[420917]: 2025-10-02 09:11:50.055576488 +0000 UTC m=+0.261714503 container remove 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:11:50 compute-0 systemd[1]: libpod-conmon-19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952.scope: Deactivated successfully.
Oct 02 09:11:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:11:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022964571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.162 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:50 compute-0 podman[420968]: 2025-10-02 09:11:50.16695603 +0000 UTC m=+0.096677566 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.182 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:11:50 compute-0 podman[420977]: 2025-10-02 09:11:50.188648723 +0000 UTC m=+0.091129382 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.190 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:50 compute-0 podman[421040]: 2025-10-02 09:11:50.251037362 +0000 UTC m=+0.039337903 container create f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 09:11:50 compute-0 systemd[1]: Started libpod-conmon-f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045.scope.
Oct 02 09:11:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:50 compute-0 podman[421040]: 2025-10-02 09:11:50.234904281 +0000 UTC m=+0.023204842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:11:50 compute-0 podman[421040]: 2025-10-02 09:11:50.334156706 +0000 UTC m=+0.122457267 container init f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:11:50 compute-0 podman[421040]: 2025-10-02 09:11:50.345375865 +0000 UTC m=+0.133676406 container start f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:11:50 compute-0 podman[421040]: 2025-10-02 09:11:50.348603745 +0000 UTC m=+0.136904316 container attach f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:11:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1022964571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:11:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:11:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3360459198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.616 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.619 2 DEBUG nova.virt.libvirt.vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:11:38Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.620 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.622 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.624 2 DEBUG nova.virt.libvirt.vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:11:38Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.625 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.626 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.629 2 DEBUG nova.objects.instance [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.702 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <uuid>71c9f70f-5f86-4723-9e4f-a4aca14211cb</uuid>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <name>instance-0000008f</name>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-1569895686</nova:name>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:11:49</nova:creationTime>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:port uuid="102411d5-80b6-47af-9293-08b07c65d541">
Oct 02 09:11:50 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <nova:port uuid="d5b6295d-90a7-4d25-be69-ccd7a58621c6">
Oct 02 09:11:50 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe3c:6203" ipVersion="6"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3c:6203" ipVersion="6"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <system>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <entry name="serial">71c9f70f-5f86-4723-9e4f-a4aca14211cb</entry>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <entry name="uuid">71c9f70f-5f86-4723-9e4f-a4aca14211cb</entry>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </system>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <os>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   </os>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <features>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   </features>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk">
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       </source>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config">
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       </source>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:11:50 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:05:ca:02"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <target dev="tap102411d5-80"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:3c:62:03"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <target dev="tapd5b6295d-90"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/console.log" append="off"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <video>
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </video>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:11:50 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:11:50 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:11:50 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:11:50 compute-0 nova_compute[260603]: </domain>
Oct 02 09:11:50 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.713 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Preparing to wait for external event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.714 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.714 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.715 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.715 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Preparing to wait for external event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.716 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.716 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.717 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.718 2 DEBUG nova.virt.libvirt.vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:11:38Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.719 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.720 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.721 2 DEBUG os_vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.729 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap102411d5-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap102411d5-80, col_values=(('external_ids', {'iface-id': '102411d5-80b6-47af-9293-08b07c65d541', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:ca:02', 'vm-uuid': '71c9f70f-5f86-4723-9e4f-a4aca14211cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:50 compute-0 NetworkManager[45129]: <info>  [1759396310.7822] manager: (tap102411d5-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.790 2 INFO os_vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80')
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.792 2 DEBUG nova.virt.libvirt.vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:11:38Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.793 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.796 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.797 2 DEBUG os_vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.799 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5b6295d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5b6295d-90, col_values=(('external_ids', {'iface-id': 'd5b6295d-90a7-4d25-be69-ccd7a58621c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:62:03', 'vm-uuid': '71c9f70f-5f86-4723-9e4f-a4aca14211cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:50 compute-0 NetworkManager[45129]: <info>  [1759396310.8068] manager: (tapd5b6295d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.815 2 INFO os_vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90')
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.872 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.873 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.874 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:05:ca:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.874 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:3c:62:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.876 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Using config drive
Oct 02 09:11:50 compute-0 nova_compute[260603]: 2025-10-02 09:11:50.909 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:11:51 compute-0 ceph-mon[74477]: pgmap v2811: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3360459198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:11:51 compute-0 inspiring_raman[421058]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:11:51 compute-0 inspiring_raman[421058]: --> relative data size: 1.0
Oct 02 09:11:51 compute-0 inspiring_raman[421058]: --> All data devices are unavailable
Oct 02 09:11:51 compute-0 nova_compute[260603]: 2025-10-02 09:11:51.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:51 compute-0 systemd[1]: libpod-f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045.scope: Deactivated successfully.
Oct 02 09:11:51 compute-0 systemd[1]: libpod-f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045.scope: Consumed 1.156s CPU time.
Oct 02 09:11:51 compute-0 podman[421040]: 2025-10-02 09:11:51.562897169 +0000 UTC m=+1.351197710 container died f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 09:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db-merged.mount: Deactivated successfully.
Oct 02 09:11:51 compute-0 podman[421040]: 2025-10-02 09:11:51.649296684 +0000 UTC m=+1.437597225 container remove f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 09:11:51 compute-0 systemd[1]: libpod-conmon-f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045.scope: Deactivated successfully.
Oct 02 09:11:51 compute-0 sudo[420850]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:51 compute-0 sudo[421142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:51 compute-0 sudo[421142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:51 compute-0 sudo[421142]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:51 compute-0 sudo[421167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:11:51 compute-0 sudo[421167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:51 compute-0 sudo[421167]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:51 compute-0 sudo[421192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:51 compute-0 sudo[421192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:51 compute-0 sudo[421192]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:51 compute-0 sudo[421217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:11:51 compute-0 sudo[421217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:52 compute-0 podman[421283]: 2025-10-02 09:11:52.33586881 +0000 UTC m=+0.087731587 container create 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:11:52 compute-0 systemd[1]: Started libpod-conmon-1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32.scope.
Oct 02 09:11:52 compute-0 podman[421283]: 2025-10-02 09:11:52.280634074 +0000 UTC m=+0.032496881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:11:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:11:52 compute-0 podman[421283]: 2025-10-02 09:11:52.411552493 +0000 UTC m=+0.163415280 container init 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:11:52 compute-0 podman[421283]: 2025-10-02 09:11:52.42596015 +0000 UTC m=+0.177822937 container start 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:11:52 compute-0 podman[421283]: 2025-10-02 09:11:52.429570152 +0000 UTC m=+0.181432939 container attach 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:11:52 compute-0 dreamy_shirley[421300]: 167 167
Oct 02 09:11:52 compute-0 systemd[1]: libpod-1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32.scope: Deactivated successfully.
Oct 02 09:11:52 compute-0 podman[421283]: 2025-10-02 09:11:52.433967068 +0000 UTC m=+0.185829855 container died 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:11:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d9c7d5d2978d857d3e4fe4ac7c9fad088710b817d51d09605ad9408be256897-merged.mount: Deactivated successfully.
Oct 02 09:11:52 compute-0 podman[421283]: 2025-10-02 09:11:52.475742537 +0000 UTC m=+0.227605354 container remove 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:11:52 compute-0 systemd[1]: libpod-conmon-1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32.scope: Deactivated successfully.
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.546 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Creating config drive at /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.551 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2umpw0z1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.601 2 DEBUG nova.network.neutron [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated VIF entry in instance network info cache for port d5b6295d-90a7-4d25-be69-ccd7a58621c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.603 2 DEBUG nova.network.neutron [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.630 2 DEBUG oslo_concurrency.lockutils [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:11:52 compute-0 podman[421326]: 2025-10-02 09:11:52.68240539 +0000 UTC m=+0.049141519 container create 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.692 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2umpw0z1" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.721 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.725 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:11:52 compute-0 systemd[1]: Started libpod-conmon-9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439.scope.
Oct 02 09:11:52 compute-0 podman[421326]: 2025-10-02 09:11:52.661627083 +0000 UTC m=+0.028363232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:11:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:52 compute-0 podman[421326]: 2025-10-02 09:11:52.788053732 +0000 UTC m=+0.154789901 container init 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:11:52 compute-0 podman[421326]: 2025-10-02 09:11:52.794392369 +0000 UTC m=+0.161128498 container start 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 09:11:52 compute-0 podman[421326]: 2025-10-02 09:11:52.797604769 +0000 UTC m=+0.164340908 container attach 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.908 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:11:52 compute-0 nova_compute[260603]: 2025-10-02 09:11:52.909 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Deleting local config drive /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config because it was imported into RBD.
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.0016] manager: (tap102411d5-80): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Oct 02 09:11:53 compute-0 kernel: tap102411d5-80: entered promiscuous mode
Oct 02 09:11:53 compute-0 nova_compute[260603]: 2025-10-02 09:11:53.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01575|binding|INFO|Claiming lport 102411d5-80b6-47af-9293-08b07c65d541 for this chassis.
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01576|binding|INFO|102411d5-80b6-47af-9293-08b07c65d541: Claiming fa:16:3e:05:ca:02 10.100.0.6
Oct 02 09:11:53 compute-0 nova_compute[260603]: 2025-10-02 09:11:53.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:53 compute-0 kernel: tapd5b6295d-90: entered promiscuous mode
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.0358] manager: (tapd5b6295d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.041 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ca:02 10.100.0.6'], port_security=['fa:16:3e:05:ca:02 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '71c9f70f-5f86-4723-9e4f-a4aca14211cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-436e56fa-4885-4043-b091-8043a6f9f710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79454522-7e2a-40b8-ae72-355dd621c03a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=102411d5-80b6-47af-9293-08b07c65d541) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.043 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 102411d5-80b6-47af-9293-08b07c65d541 in datapath 436e56fa-4885-4043-b091-8043a6f9f710 bound to our chassis
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.044 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 436e56fa-4885-4043-b091-8043a6f9f710
Oct 02 09:11:53 compute-0 systemd-udevd[421401]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:11:53 compute-0 systemd-udevd[421402]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.058 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3e0aed-42cf-4995-be52-bf4b5718aed7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.064 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap436e56fa-41 in ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.0683] device (tapd5b6295d-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.066 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap436e56fa-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.066 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a372b83-300e-4414-b473-a2c4c2d926ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.068 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9964f8ac-300f-42bd-beb5-d7f4b160f92b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.0733] device (tapd5b6295d-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.0748] device (tap102411d5-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.0766] device (tap102411d5-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.083 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe40c21-8ec0-47bb-89ec-58cb7820cedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.091 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:53 compute-0 systemd-machined[214636]: New machine qemu-177-instance-0000008f.
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2cab7d73-0892-4410-8e5b-a89237f6af95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01577|binding|INFO|Claiming lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 for this chassis.
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01578|binding|INFO|d5b6295d-90a7-4d25-be69-ccd7a58621c6: Claiming fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203
Oct 02 09:11:53 compute-0 nova_compute[260603]: 2025-10-02 09:11:53.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.129 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203'], port_security=['fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3c:6203/64 2001:db8::f816:3eff:fe3c:6203/64', 'neutron:device_id': '71c9f70f-5f86-4723-9e4f-a4aca14211cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d5b6295d-90a7-4d25-be69-ccd7a58621c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01579|binding|INFO|Setting lport 102411d5-80b6-47af-9293-08b07c65d541 ovn-installed in OVS
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01580|binding|INFO|Setting lport 102411d5-80b6-47af-9293-08b07c65d541 up in Southbound
Oct 02 09:11:53 compute-0 nova_compute[260603]: 2025-10-02 09:11:53.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01581|binding|INFO|Setting lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 ovn-installed in OVS
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01582|binding|INFO|Setting lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 up in Southbound
Oct 02 09:11:53 compute-0 nova_compute[260603]: 2025-10-02 09:11:53.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.150 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[36697106-905b-423b-a991-3782802f2488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.1598] manager: (tap436e56fa-40): new Veth device (/org/freedesktop/NetworkManager/Devices/640)
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[601f84db-d4c5-4ecc-b549-7b5edf2060f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.210 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d24bf1d2-2ba1-496b-8a92-4d932a267b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.216 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0f578d92-bf6a-4d96-abd9-1be87108f795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.2468] device (tap436e56fa-40): carrier: link connected
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.253 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[75ba8f3d-0e6a-4752-b00a-94eb25c69ac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.275 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[70082af5-8a2e-44e4-be28-86cd1a6bbe02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap436e56fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:8d:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719184, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421438, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.295 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ea07767f-a225-49d7-93ba-d9fd6c1f7245]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:8dfb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719184, 'tstamp': 719184}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421439, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5028b5cd-bf6c-4013-9274-915e3f435a0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap436e56fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:8d:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719184, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421440, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.341 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe07ab3-845f-4b9c-a268-a8e2a0d19df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.401 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[247d8d6d-3340-4d70-9a7a-b0b5afee69b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.402 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436e56fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.403 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.403 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap436e56fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:53 compute-0 NetworkManager[45129]: <info>  [1759396313.4286] manager: (tap436e56fa-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Oct 02 09:11:53 compute-0 kernel: tap436e56fa-40: entered promiscuous mode
Oct 02 09:11:53 compute-0 nova_compute[260603]: 2025-10-02 09:11:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.434 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap436e56fa-40, col_values=(('external_ids', {'iface-id': '3c52d5f8-d941-470e-b21b-5afb7b1bf813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:53 compute-0 nova_compute[260603]: 2025-10-02 09:11:53.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:53 compute-0 ovn_controller[152344]: 2025-10-02T09:11:53Z|01583|binding|INFO|Releasing lport 3c52d5f8-d941-470e-b21b-5afb7b1bf813 from this chassis (sb_readonly=0)
Oct 02 09:11:53 compute-0 nova_compute[260603]: 2025-10-02 09:11:53.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.453 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/436e56fa-4885-4043-b091-8043a6f9f710.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/436e56fa-4885-4043-b091-8043a6f9f710.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.454 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29600729-0c3d-4e7c-9373-1b5f4e62b2ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.455 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-436e56fa-4885-4043-b091-8043a6f9f710
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/436e56fa-4885-4043-b091-8043a6f9f710.pid.haproxy
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 436e56fa-4885-4043-b091-8043a6f9f710
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:11:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.456 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'env', 'PROCESS_TAG=haproxy-436e56fa-4885-4043-b091-8043a6f9f710', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/436e56fa-4885-4043-b091-8043a6f9f710.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:11:53 compute-0 ceph-mon[74477]: pgmap v2812: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:53 compute-0 nice_fermat[421360]: {
Oct 02 09:11:53 compute-0 nice_fermat[421360]:     "0": [
Oct 02 09:11:53 compute-0 nice_fermat[421360]:         {
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "devices": [
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "/dev/loop3"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             ],
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_name": "ceph_lv0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_size": "21470642176",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "name": "ceph_lv0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "tags": {
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cluster_name": "ceph",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.crush_device_class": "",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.encrypted": "0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osd_id": "0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.type": "block",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.vdo": "0"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             },
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "type": "block",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "vg_name": "ceph_vg0"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:         }
Oct 02 09:11:53 compute-0 nice_fermat[421360]:     ],
Oct 02 09:11:53 compute-0 nice_fermat[421360]:     "1": [
Oct 02 09:11:53 compute-0 nice_fermat[421360]:         {
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "devices": [
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "/dev/loop4"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             ],
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_name": "ceph_lv1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_size": "21470642176",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "name": "ceph_lv1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "tags": {
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cluster_name": "ceph",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.crush_device_class": "",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.encrypted": "0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osd_id": "1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.type": "block",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.vdo": "0"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             },
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "type": "block",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "vg_name": "ceph_vg1"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:         }
Oct 02 09:11:53 compute-0 nice_fermat[421360]:     ],
Oct 02 09:11:53 compute-0 nice_fermat[421360]:     "2": [
Oct 02 09:11:53 compute-0 nice_fermat[421360]:         {
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "devices": [
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "/dev/loop5"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             ],
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_name": "ceph_lv2",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_size": "21470642176",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "name": "ceph_lv2",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "tags": {
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.cluster_name": "ceph",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.crush_device_class": "",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.encrypted": "0",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osd_id": "2",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.type": "block",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:                 "ceph.vdo": "0"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             },
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "type": "block",
Oct 02 09:11:53 compute-0 nice_fermat[421360]:             "vg_name": "ceph_vg2"
Oct 02 09:11:53 compute-0 nice_fermat[421360]:         }
Oct 02 09:11:53 compute-0 nice_fermat[421360]:     ]
Oct 02 09:11:53 compute-0 nice_fermat[421360]: }
Oct 02 09:11:53 compute-0 systemd[1]: libpod-9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439.scope: Deactivated successfully.
Oct 02 09:11:53 compute-0 podman[421326]: 2025-10-02 09:11:53.602006986 +0000 UTC m=+0.968743155 container died 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e-merged.mount: Deactivated successfully.
Oct 02 09:11:53 compute-0 podman[421326]: 2025-10-02 09:11:53.882366819 +0000 UTC m=+1.249102948 container remove 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:11:53 compute-0 systemd[1]: libpod-conmon-9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439.scope: Deactivated successfully.
Oct 02 09:11:53 compute-0 sudo[421217]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:54 compute-0 sudo[421531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:54 compute-0 sudo[421531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:54 compute-0 sudo[421531]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:54 compute-0 podman[421530]: 2025-10-02 09:11:54.038407858 +0000 UTC m=+0.079217573 container create ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:11:54 compute-0 systemd[1]: Started libpod-conmon-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772.scope.
Oct 02 09:11:54 compute-0 podman[421530]: 2025-10-02 09:11:53.997666812 +0000 UTC m=+0.038476557 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:11:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:11:54 compute-0 sudo[421568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ba23fbb429caa93a956634bafbbdd11b0d78f46b17b74eaebd84acb971bba0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:54 compute-0 sudo[421568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:54 compute-0 sudo[421568]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:54 compute-0 podman[421530]: 2025-10-02 09:11:54.123193383 +0000 UTC m=+0.164003128 container init ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 09:11:54 compute-0 podman[421530]: 2025-10-02 09:11:54.130802509 +0000 UTC m=+0.171612224 container start ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 09:11:54 compute-0 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [NOTICE]   (421604) : New worker (421624) forked
Oct 02 09:11:54 compute-0 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [NOTICE]   (421604) : Loading success.
Oct 02 09:11:54 compute-0 sudo[421599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:54 compute-0 sudo[421599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:54 compute-0 sudo[421599]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.190 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d5b6295d-90a7-4d25-be69-ccd7a58621c6 in datapath 5d64d879-42c6-456c-a212-df00bf998997 unbound from our chassis
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.192 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d64d879-42c6-456c-a212-df00bf998997
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.201 2 DEBUG nova.compute.manager [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.202 2 DEBUG oslo_concurrency.lockutils [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.202 2 DEBUG oslo_concurrency.lockutils [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.202 2 DEBUG oslo_concurrency.lockutils [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.203 2 DEBUG nova.compute.manager [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Processing event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.203 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9f3c5f-573e-4afe-b62e-94c1da06fccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.204 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d64d879-41 in ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.206 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d64d879-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf6a0e7-6800-403e-9033-3fd5d9dacf8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.207 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[db488d29-529d-4769-bf93-e36a9ebc72f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.218 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8505b06e-0e9e-4109-8059-dbd80ed0f686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.232 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6743103c-322c-448b-b014-e73fd269dafa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.233 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396314.232942, 71c9f70f-5f86-4723-9e4f-a4aca14211cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.233 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] VM Started (Lifecycle Event)
Oct 02 09:11:54 compute-0 sudo[421635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:11:54 compute-0 sudo[421635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.257 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4760c6-3c3a-4681-b04e-f11a328ffd3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.263 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fa26e3-a4ea-436c-9171-f7062731c220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 NetworkManager[45129]: <info>  [1759396314.2643] manager: (tap5d64d879-40): new Veth device (/org/freedesktop/NetworkManager/Devices/642)
Oct 02 09:11:54 compute-0 systemd-udevd[421426]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.271 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.283 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396314.2330513, 71c9f70f-5f86-4723-9e4f-a4aca14211cb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.284 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] VM Paused (Lifecycle Event)
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab4c00e-d5d3-48fb-abbf-ca354f79de65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.301 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a450fc24-b625-4904-ad66-dc0c3dc407c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 NetworkManager[45129]: <info>  [1759396314.3207] device (tap5d64d879-40): carrier: link connected
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.328 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfa7b62-5327-4ed7-8744-f758a388f990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.343 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72b4d70c-3128-469f-92d7-1bd545a2c833]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d64d879-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:74:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719292, 'reachable_time': 17062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421670, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[26d3415d-ef9d-4f09-83f7-db042debe901]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:74d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719292, 'tstamp': 719292}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421671, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de45972a-6560-4960-a8e5-7a91d970e1e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d64d879-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:74:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719292, 'reachable_time': 17062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421674, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.401 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[989861a6-d96b-488f-b17f-fdf9161bbdae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.429 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[376564e5-c288-4a08-b90a-46c3e3450c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d64d879-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d64d879-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:54 compute-0 kernel: tap5d64d879-40: entered promiscuous mode
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:54 compute-0 NetworkManager[45129]: <info>  [1759396314.4374] manager: (tap5d64d879-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.438 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d64d879-40, col_values=(('external_ids', {'iface-id': 'be1c87e3-582f-4bbb-a5fb-4fb837b7e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:54 compute-0 ovn_controller[152344]: 2025-10-02T09:11:54Z|01584|binding|INFO|Releasing lport be1c87e3-582f-4bbb-a5fb-4fb837b7e882 from this chassis (sb_readonly=0)
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.442 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d64d879-42c6-456c-a212-df00bf998997.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d64d879-42c6-456c-a212-df00bf998997.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.442 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9f6978-8f8c-42f5-940d-e25cd4a59ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.443 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-5d64d879-42c6-456c-a212-df00bf998997
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/5d64d879-42c6-456c-a212-df00bf998997.pid.haproxy
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 5d64d879-42c6-456c-a212-df00bf998997
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:11:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.445 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'env', 'PROCESS_TAG=haproxy-5d64d879-42c6-456c-a212-df00bf998997', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d64d879-42c6-456c-a212-df00bf998997.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.504 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.508 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:11:54 compute-0 nova_compute[260603]: 2025-10-02 09:11:54.526 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:11:54 compute-0 podman[421719]: 2025-10-02 09:11:54.590798204 +0000 UTC m=+0.043858124 container create f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:11:54 compute-0 systemd[1]: Started libpod-conmon-f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049.scope.
Oct 02 09:11:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:11:54 compute-0 podman[421719]: 2025-10-02 09:11:54.573416444 +0000 UTC m=+0.026476384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:11:54 compute-0 podman[421719]: 2025-10-02 09:11:54.680520812 +0000 UTC m=+0.133580752 container init f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:11:54 compute-0 podman[421719]: 2025-10-02 09:11:54.687551131 +0000 UTC m=+0.140611051 container start f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:11:54 compute-0 podman[421719]: 2025-10-02 09:11:54.69106467 +0000 UTC m=+0.144124610 container attach f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:11:54 compute-0 stoic_shockley[421736]: 167 167
Oct 02 09:11:54 compute-0 systemd[1]: libpod-f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049.scope: Deactivated successfully.
Oct 02 09:11:54 compute-0 podman[421719]: 2025-10-02 09:11:54.69329503 +0000 UTC m=+0.146354950 container died f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e08e236a8c643bf7d50ac3c60e625506d22a8f19a90b193a29f8f77f2a15302e-merged.mount: Deactivated successfully.
Oct 02 09:11:54 compute-0 podman[421719]: 2025-10-02 09:11:54.734178829 +0000 UTC m=+0.187238739 container remove f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:11:54 compute-0 systemd[1]: libpod-conmon-f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049.scope: Deactivated successfully.
Oct 02 09:11:54 compute-0 podman[421776]: 2025-10-02 09:11:54.822428832 +0000 UTC m=+0.048927432 container create eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 09:11:54 compute-0 systemd[1]: Started libpod-conmon-eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099.scope.
Oct 02 09:11:54 compute-0 podman[421776]: 2025-10-02 09:11:54.796168766 +0000 UTC m=+0.022667376 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:11:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89707fb13d0c4fb2e7f83ff5487b81f8ced57c0f936b53bc230c0c45bebc01b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:54 compute-0 podman[421796]: 2025-10-02 09:11:54.922170551 +0000 UTC m=+0.057895209 container create ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 09:11:54 compute-0 podman[421776]: 2025-10-02 09:11:54.926655151 +0000 UTC m=+0.153153741 container init eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 09:11:54 compute-0 podman[421776]: 2025-10-02 09:11:54.932433791 +0000 UTC m=+0.158932381 container start eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 09:11:54 compute-0 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [NOTICE]   (421815) : New worker (421819) forked
Oct 02 09:11:54 compute-0 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [NOTICE]   (421815) : Loading success.
Oct 02 09:11:54 compute-0 systemd[1]: Started libpod-conmon-ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e.scope.
Oct 02 09:11:54 compute-0 podman[421796]: 2025-10-02 09:11:54.888499185 +0000 UTC m=+0.024223933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:11:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:11:55 compute-0 podman[421796]: 2025-10-02 09:11:55.008353059 +0000 UTC m=+0.144077747 container init ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:11:55 compute-0 podman[421796]: 2025-10-02 09:11:55.016993038 +0000 UTC m=+0.152717706 container start ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:11:55 compute-0 podman[421796]: 2025-10-02 09:11:55.021589892 +0000 UTC m=+0.157314560 container attach ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:11:55 compute-0 podman[421834]: 2025-10-02 09:11:55.137090961 +0000 UTC m=+0.059096148 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:11:55 compute-0 podman[421833]: 2025-10-02 09:11:55.137405851 +0000 UTC m=+0.056328932 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:11:55 compute-0 ceph-mon[74477]: pgmap v2813: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:11:55 compute-0 nova_compute[260603]: 2025-10-02 09:11:55.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]: {
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "osd_id": 2,
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "type": "bluestore"
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:     },
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "osd_id": 1,
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "type": "bluestore"
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:     },
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "osd_id": 0,
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:         "type": "bluestore"
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]:     }
Oct 02 09:11:55 compute-0 inspiring_haibt[421827]: }
Oct 02 09:11:55 compute-0 systemd[1]: libpod-ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e.scope: Deactivated successfully.
Oct 02 09:11:55 compute-0 podman[421796]: 2025-10-02 09:11:55.995831866 +0000 UTC m=+1.131556554 container died ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98-merged.mount: Deactivated successfully.
Oct 02 09:11:56 compute-0 podman[421796]: 2025-10-02 09:11:56.044584892 +0000 UTC m=+1.180309560 container remove ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:11:56 compute-0 systemd[1]: libpod-conmon-ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e.scope: Deactivated successfully.
Oct 02 09:11:56 compute-0 sudo[421635]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:11:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:11:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2c11cc22-5886-479a-90b4-6879d4cbf1e8 does not exist
Oct 02 09:11:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 68ed0cde-6d2a-427f-a025-af4290ea7974 does not exist
Oct 02 09:11:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Oct 02 09:11:56 compute-0 sudo[421912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:11:56 compute-0 sudo[421912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:56 compute-0 sudo[421912]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:56 compute-0 sudo[421937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:11:56 compute-0 sudo[421937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:11:56 compute-0 sudo[421937]: pam_unix(sudo:session): session closed for user root
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.434 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.434 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No event matching network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 in dict_keys([('network-vif-plugged', 'd5b6295d-90a7-4d25-be69-ccd7a58621c6')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.435 2 WARNING nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received unexpected event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 for instance with vm_state building and task_state spawning.
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Processing event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.437 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.437 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.437 2 WARNING nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received unexpected event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 for instance with vm_state building and task_state spawning.
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.438 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.442 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396316.4427345, 71c9f70f-5f86-4723-9e4f-a4aca14211cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.443 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] VM Resumed (Lifecycle Event)
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.444 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.448 2 INFO nova.virt.libvirt.driver [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance spawned successfully.
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.448 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.470 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.477 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.477 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.478 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.478 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.478 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.479 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.482 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.519 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.540 2 INFO nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Took 18.41 seconds to spawn the instance on the hypervisor.
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.541 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.604 2 INFO nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Took 19.73 seconds to build instance.
Oct 02 09:11:56 compute-0 nova_compute[260603]: 2025-10-02 09:11:56.622 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:11:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:11:57 compute-0 ceph-mon[74477]: pgmap v2814: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Oct 02 09:11:57 compute-0 nova_compute[260603]: 2025-10-02 09:11:57.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:11:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:11:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:11:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:11:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:11:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:11:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:11:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct 02 09:11:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:11:59 compute-0 ceph-mon[74477]: pgmap v2815: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct 02 09:12:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct 02 09:12:00 compute-0 nova_compute[260603]: 2025-10-02 09:12:00.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:00 compute-0 nova_compute[260603]: 2025-10-02 09:12:00.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:00 compute-0 NetworkManager[45129]: <info>  [1759396320.8551] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Oct 02 09:12:00 compute-0 NetworkManager[45129]: <info>  [1759396320.8561] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Oct 02 09:12:00 compute-0 ovn_controller[152344]: 2025-10-02T09:12:00Z|01585|binding|INFO|Releasing lport be1c87e3-582f-4bbb-a5fb-4fb837b7e882 from this chassis (sb_readonly=0)
Oct 02 09:12:00 compute-0 ovn_controller[152344]: 2025-10-02T09:12:00Z|01586|binding|INFO|Releasing lport 3c52d5f8-d941-470e-b21b-5afb7b1bf813 from this chassis (sb_readonly=0)
Oct 02 09:12:00 compute-0 ovn_controller[152344]: 2025-10-02T09:12:00Z|01587|binding|INFO|Releasing lport be1c87e3-582f-4bbb-a5fb-4fb837b7e882 from this chassis (sb_readonly=0)
Oct 02 09:12:00 compute-0 ovn_controller[152344]: 2025-10-02T09:12:00Z|01588|binding|INFO|Releasing lport 3c52d5f8-d941-470e-b21b-5afb7b1bf813 from this chassis (sb_readonly=0)
Oct 02 09:12:00 compute-0 nova_compute[260603]: 2025-10-02 09:12:00.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:00 compute-0 nova_compute[260603]: 2025-10-02 09:12:00.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:01 compute-0 ceph-mon[74477]: pgmap v2816: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct 02 09:12:01 compute-0 nova_compute[260603]: 2025-10-02 09:12:01.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:12:01 compute-0 nova_compute[260603]: 2025-10-02 09:12:01.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:12:01 compute-0 nova_compute[260603]: 2025-10-02 09:12:01.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:12:01 compute-0 nova_compute[260603]: 2025-10-02 09:12:01.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:02 compute-0 nova_compute[260603]: 2025-10-02 09:12:02.000 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:12:02 compute-0 nova_compute[260603]: 2025-10-02 09:12:02.000 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:12:02 compute-0 nova_compute[260603]: 2025-10-02 09:12:02.000 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:12:02 compute-0 nova_compute[260603]: 2025-10-02 09:12:02.001 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:12:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct 02 09:12:03 compute-0 nova_compute[260603]: 2025-10-02 09:12:03.080 2 DEBUG nova.compute.manager [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-changed-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:12:03 compute-0 nova_compute[260603]: 2025-10-02 09:12:03.080 2 DEBUG nova.compute.manager [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing instance network info cache due to event network-changed-102411d5-80b6-47af-9293-08b07c65d541. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:12:03 compute-0 nova_compute[260603]: 2025-10-02 09:12:03.080 2 DEBUG oslo_concurrency.lockutils [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:12:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:03 compute-0 ceph-mon[74477]: pgmap v2817: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct 02 09:12:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:12:05 compute-0 ceph-mon[74477]: pgmap v2818: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:12:05 compute-0 nova_compute[260603]: 2025-10-02 09:12:05.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.000 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.047 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.048 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.048 2 DEBUG oslo_concurrency.lockutils [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.048 2 DEBUG nova.network.neutron [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing network info cache for port 102411d5-80b6-47af-9293-08b07c65d541 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.050 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.051 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.051 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:12:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.141 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.141 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.142 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.142 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:12:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085201974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.618 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.755 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.756 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.960 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.961 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3404MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.961 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:06 compute-0 nova_compute[260603]: 2025-10-02 09:12:06.961 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.065 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 71c9f70f-5f86-4723-9e4f-a4aca14211cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.066 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.066 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.107 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:07 compute-0 ceph-mon[74477]: pgmap v2819: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 09:12:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4085201974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:12:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:12:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3074771945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.581 2 DEBUG nova.network.neutron [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated VIF entry in instance network info cache for port 102411d5-80b6-47af-9293-08b07c65d541. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.582 2 DEBUG nova.network.neutron [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.590 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.595 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.603 2 DEBUG oslo_concurrency.lockutils [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.613 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.687 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:12:07 compute-0 nova_compute[260603]: 2025-10-02 09:12:07.688 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct 02 09:12:08 compute-0 nova_compute[260603]: 2025-10-02 09:12:08.157 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:12:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3074771945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:12:08 compute-0 nova_compute[260603]: 2025-10-02 09:12:08.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:12:08 compute-0 ovn_controller[152344]: 2025-10-02T09:12:08Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:ca:02 10.100.0.6
Oct 02 09:12:08 compute-0 ovn_controller[152344]: 2025-10-02T09:12:08Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:ca:02 10.100.0.6
Oct 02 09:12:09 compute-0 ceph-mon[74477]: pgmap v2820: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct 02 09:12:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct 02 09:12:10 compute-0 ceph-mon[74477]: pgmap v2821: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct 02 09:12:10 compute-0 nova_compute[260603]: 2025-10-02 09:12:10.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:11 compute-0 nova_compute[260603]: 2025-10-02 09:12:11.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct 02 09:12:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:13 compute-0 ceph-mon[74477]: pgmap v2822: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct 02 09:12:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 695 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 02 09:12:15 compute-0 ceph-mon[74477]: pgmap v2823: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 695 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 02 09:12:15 compute-0 nova_compute[260603]: 2025-10-02 09:12:15.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:12:16 compute-0 nova_compute[260603]: 2025-10-02 09:12:16.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:16 compute-0 ceph-mon[74477]: pgmap v2824: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:12:17 compute-0 nova_compute[260603]: 2025-10-02 09:12:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:12:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:12:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:19 compute-0 ceph-mon[74477]: pgmap v2825: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:12:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 24 op/s
Oct 02 09:12:20 compute-0 nova_compute[260603]: 2025-10-02 09:12:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:21 compute-0 podman[422009]: 2025-10-02 09:12:21.006345478 +0000 UTC m=+0.063216876 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 09:12:21 compute-0 podman[422008]: 2025-10-02 09:12:21.034951937 +0000 UTC m=+0.097754189 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:12:21 compute-0 ceph-mon[74477]: pgmap v2826: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 24 op/s
Oct 02 09:12:21 compute-0 nova_compute[260603]: 2025-10-02 09:12:21.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:12:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594647811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:12:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:12:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594647811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:12:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 24 op/s
Oct 02 09:12:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2594647811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:12:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2594647811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:12:22 compute-0 nova_compute[260603]: 2025-10-02 09:12:22.956 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:22 compute-0 nova_compute[260603]: 2025-10-02 09:12:22.957 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:22 compute-0 nova_compute[260603]: 2025-10-02 09:12:22.976 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:12:23 compute-0 nova_compute[260603]: 2025-10-02 09:12:23.089 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:23 compute-0 nova_compute[260603]: 2025-10-02 09:12:23.090 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:23 compute-0 nova_compute[260603]: 2025-10-02 09:12:23.102 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:12:23 compute-0 nova_compute[260603]: 2025-10-02 09:12:23.103 2 INFO nova.compute.claims [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:12:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:23 compute-0 ceph-mon[74477]: pgmap v2827: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 24 op/s
Oct 02 09:12:23 compute-0 nova_compute[260603]: 2025-10-02 09:12:23.511 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:12:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410144398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.003 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.008 2 DEBUG nova.compute.provider_tree [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.042 2 DEBUG nova.scheduler.client.report [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.080 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.081 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.126 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.126 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:12:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 25 op/s
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.159 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.190 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.279 2 DEBUG nova.policy [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.304 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.306 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.307 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Creating image(s)
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.338 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.369 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.399 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.403 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.505 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.505 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.506 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.506 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/410144398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.578 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:12:24 compute-0 nova_compute[260603]: 2025-10-02 09:12:24.583 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 66710b2a-3c24-45a9-a500-f29978d33f4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.194 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Successfully created port: 244a8221-32fa-4c1b-959f-5a29d4e651f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.329 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 66710b2a-3c24-45a9-a500-f29978d33f4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.384 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:12:25 compute-0 ceph-mon[74477]: pgmap v2828: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 25 op/s
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.595 2 DEBUG nova.objects.instance [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 66710b2a-3c24-45a9-a500-f29978d33f4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.610 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.611 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Ensure instance console log exists: /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.611 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.611 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.612 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.752 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Successfully created port: 8c6c65df-3056-4b48-96d3-e6dc8744f31d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:12:25 compute-0 nova_compute[260603]: 2025-10-02 09:12:25.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:25 compute-0 podman[422240]: 2025-10-02 09:12:25.995517932 +0000 UTC m=+0.058336375 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 09:12:26 compute-0 podman[422241]: 2025-10-02 09:12:26.042803201 +0000 UTC m=+0.091378821 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:12:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:12:26 compute-0 ceph-mon[74477]: pgmap v2829: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:12:26 compute-0 nova_compute[260603]: 2025-10-02 09:12:26.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.016 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Successfully updated port: 244a8221-32fa-4c1b-959f-5a29d4e651f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.149 2 DEBUG nova.compute.manager [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.149 2 DEBUG nova.compute.manager [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing instance network info cache due to event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.150 2 DEBUG oslo_concurrency.lockutils [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.150 2 DEBUG oslo_concurrency.lockutils [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.150 2 DEBUG nova.network.neutron [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.340 2 DEBUG nova.network.neutron [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.656 2 DEBUG nova.network.neutron [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:12:27 compute-0 nova_compute[260603]: 2025-10-02 09:12:27.680 2 DEBUG oslo_concurrency.lockutils [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:12:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:12:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:12:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:12:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:12:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:12:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:12:28
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta']
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:12:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:12:29 compute-0 nova_compute[260603]: 2025-10-02 09:12:29.006 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Successfully updated port: 8c6c65df-3056-4b48-96d3-e6dc8744f31d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:12:29 compute-0 nova_compute[260603]: 2025-10-02 09:12:29.023 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:12:29 compute-0 nova_compute[260603]: 2025-10-02 09:12:29.023 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:12:29 compute-0 nova_compute[260603]: 2025-10-02 09:12:29.024 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:12:29 compute-0 nova_compute[260603]: 2025-10-02 09:12:29.243 2 DEBUG nova.compute.manager [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-changed-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:12:29 compute-0 nova_compute[260603]: 2025-10-02 09:12:29.243 2 DEBUG nova.compute.manager [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing instance network info cache due to event network-changed-8c6c65df-3056-4b48-96d3-e6dc8744f31d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:12:29 compute-0 nova_compute[260603]: 2025-10-02 09:12:29.244 2 DEBUG oslo_concurrency.lockutils [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:12:29 compute-0 ceph-mon[74477]: pgmap v2830: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:30 compute-0 nova_compute[260603]: 2025-10-02 09:12:30.004 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:12:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:30 compute-0 nova_compute[260603]: 2025-10-02 09:12:30.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:31 compute-0 ceph-mon[74477]: pgmap v2831: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:31 compute-0 nova_compute[260603]: 2025-10-02 09:12:31.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:33 compute-0 ceph-mon[74477]: pgmap v2832: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.319 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.346 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.346 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance network_info: |[{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.347 2 DEBUG oslo_concurrency.lockutils [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.347 2 DEBUG nova.network.neutron [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing network info cache for port 8c6c65df-3056-4b48-96d3-e6dc8744f31d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.354 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start _get_guest_xml network_info=[{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.361 2 WARNING nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.368 2 DEBUG nova.virt.libvirt.host [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.369 2 DEBUG nova.virt.libvirt.host [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.373 2 DEBUG nova.virt.libvirt.host [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.373 2 DEBUG nova.virt.libvirt.host [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.374 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.374 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.375 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.375 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.375 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.376 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.376 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.376 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.377 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.377 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.377 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.378 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.381 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:34 compute-0 ceph-mon[74477]: pgmap v2833: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:34.850 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:34.851 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:12:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3726409983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.923 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.950 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:12:34 compute-0 nova_compute[260603]: 2025-10-02 09:12:34.955 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:12:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2097381826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.382 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.384 2 DEBUG nova.virt.libvirt.vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:12:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.384 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.385 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.386 2 DEBUG nova.virt.libvirt.vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:12:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.386 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.386 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.387 2 DEBUG nova.objects.instance [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66710b2a-3c24-45a9-a500-f29978d33f4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.485 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <uuid>66710b2a-3c24-45a9-a500-f29978d33f4f</uuid>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <name>instance-00000090</name>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-569731112</nova:name>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:12:34</nova:creationTime>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:port uuid="244a8221-32fa-4c1b-959f-5a29d4e651f1">
Oct 02 09:12:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <nova:port uuid="8c6c65df-3056-4b48-96d3-e6dc8744f31d">
Oct 02 09:12:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe2b:b850" ipVersion="6"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2b:b850" ipVersion="6"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <system>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <entry name="serial">66710b2a-3c24-45a9-a500-f29978d33f4f</entry>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <entry name="uuid">66710b2a-3c24-45a9-a500-f29978d33f4f</entry>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </system>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <os>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   </os>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <features>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   </features>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/66710b2a-3c24-45a9-a500-f29978d33f4f_disk">
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       </source>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config">
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       </source>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:12:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:10:a2:36"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <target dev="tap244a8221-32"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:2b:b8:50"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <target dev="tap8c6c65df-30"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/console.log" append="off"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <video>
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </video>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:12:35 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:12:35 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:12:35 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:12:35 compute-0 nova_compute[260603]: </domain>
Oct 02 09:12:35 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.486 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Preparing to wait for external event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.486 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Preparing to wait for external event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.488 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.488 2 DEBUG nova.virt.libvirt.vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:12:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.488 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.489 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.489 2 DEBUG os_vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap244a8221-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap244a8221-32, col_values=(('external_ids', {'iface-id': '244a8221-32fa-4c1b-959f-5a29d4e651f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:a2:36', 'vm-uuid': '66710b2a-3c24-45a9-a500-f29978d33f4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:35 compute-0 NetworkManager[45129]: <info>  [1759396355.4982] manager: (tap244a8221-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.507 2 INFO os_vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32')
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.508 2 DEBUG nova.virt.libvirt.vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:12:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.508 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.509 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.509 2 DEBUG os_vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c6c65df-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c6c65df-30, col_values=(('external_ids', {'iface-id': '8c6c65df-3056-4b48-96d3-e6dc8744f31d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:b8:50', 'vm-uuid': '66710b2a-3c24-45a9-a500-f29978d33f4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:35 compute-0 NetworkManager[45129]: <info>  [1759396355.5163] manager: (tap8c6c65df-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.522 2 INFO os_vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30')
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.668 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.669 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.669 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:10:a2:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.669 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:2b:b8:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.670 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Using config drive
Oct 02 09:12:35 compute-0 nova_compute[260603]: 2025-10-02 09:12:35.696 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:12:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3726409983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:12:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2097381826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.068 2 DEBUG nova.network.neutron [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updated VIF entry in instance network info cache for port 8c6c65df-3056-4b48-96d3-e6dc8744f31d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.068 2 DEBUG nova.network.neutron [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.130 2 DEBUG oslo_concurrency.lockutils [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:12:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.244 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Creating config drive at /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.249 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpue8bzfif execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.422 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpue8bzfif" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.466 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.471 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:12:36 compute-0 nova_compute[260603]: 2025-10-02 09:12:36.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:36 compute-0 ceph-mon[74477]: pgmap v2834: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:12:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:12:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.495 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.496 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Deleting local config drive /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config because it was imported into RBD.
Oct 02 09:12:38 compute-0 kernel: tap244a8221-32: entered promiscuous mode
Oct 02 09:12:38 compute-0 NetworkManager[45129]: <info>  [1759396358.5797] manager: (tap244a8221-32): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01589|binding|INFO|Claiming lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 for this chassis.
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01590|binding|INFO|244a8221-32fa-4c1b-959f-5a29d4e651f1: Claiming fa:16:3e:10:a2:36 10.100.0.11
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 NetworkManager[45129]: <info>  [1759396358.6112] manager: (tap8c6c65df-30): new Tun device (/org/freedesktop/NetworkManager/Devices/649)
Oct 02 09:12:38 compute-0 kernel: tap8c6c65df-30: entered promiscuous mode
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.613 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:a2:36 10.100.0.11'], port_security=['fa:16:3e:10:a2:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '66710b2a-3c24-45a9-a500-f29978d33f4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-436e56fa-4885-4043-b091-8043a6f9f710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79454522-7e2a-40b8-ae72-355dd621c03a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=244a8221-32fa-4c1b-959f-5a29d4e651f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.614 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 244a8221-32fa-4c1b-959f-5a29d4e651f1 in datapath 436e56fa-4885-4043-b091-8043a6f9f710 bound to our chassis
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.615 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 436e56fa-4885-4043-b091-8043a6f9f710
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01591|binding|INFO|Setting lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 ovn-installed in OVS
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01592|binding|INFO|Setting lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 up in Southbound
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01593|if_status|INFO|Dropped 1 log messages in last 141 seconds (most recently, 141 seconds ago) due to excessive rate
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01594|if_status|INFO|Not updating pb chassis for 8c6c65df-3056-4b48-96d3-e6dc8744f31d now as sb is readonly
Oct 02 09:12:38 compute-0 systemd-udevd[422421]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 systemd-udevd[422420]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01595|binding|INFO|Claiming lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d for this chassis.
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01596|binding|INFO|8c6c65df-3056-4b48-96d3-e6dc8744f31d: Claiming fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01597|binding|INFO|Setting lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d ovn-installed in OVS
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 NetworkManager[45129]: <info>  [1759396358.6456] device (tap244a8221-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.642 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7203507a-31fe-4cc6-ade6-cd9db4cd3e35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 NetworkManager[45129]: <info>  [1759396358.6471] device (tap244a8221-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.645 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850'], port_security=['fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe2b:b850/64 2001:db8::f816:3eff:fe2b:b850/64', 'neutron:device_id': '66710b2a-3c24-45a9-a500-f29978d33f4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8c6c65df-3056-4b48-96d3-e6dc8744f31d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:12:38 compute-0 ovn_controller[152344]: 2025-10-02T09:12:38Z|01598|binding|INFO|Setting lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d up in Southbound
Oct 02 09:12:38 compute-0 NetworkManager[45129]: <info>  [1759396358.6559] device (tap8c6c65df-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:12:38 compute-0 NetworkManager[45129]: <info>  [1759396358.6580] device (tap8c6c65df-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:12:38 compute-0 systemd-machined[214636]: New machine qemu-178-instance-00000090.
Oct 02 09:12:38 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000090.
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.686 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[87df7eff-da37-477d-b9f8-9a7ccd7d1754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.692 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[37f78e74-ad40-4757-82ee-4540d83e0995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.725 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[28f9d231-71c6-4a21-96ef-b5e0b8cfac2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6ff2d0-47da-48fb-be96-4146bacfd944]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap436e56fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:8d:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719184, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422435, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3c1f6c-f814-407f-b021-e7da7d0c0566]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap436e56fa-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719196, 'tstamp': 719196}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422439, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap436e56fa-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719199, 'tstamp': 719199}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422439, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.765 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436e56fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap436e56fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap436e56fa-40, col_values=(('external_ids', {'iface-id': '3c52d5f8-d941-470e-b21b-5afb7b1bf813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.771 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8c6c65df-3056-4b48-96d3-e6dc8744f31d in datapath 5d64d879-42c6-456c-a212-df00bf998997 unbound from our chassis
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.772 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d64d879-42c6-456c-a212-df00bf998997
Oct 02 09:12:38 compute-0 ceph-mon[74477]: pgmap v2835: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.789 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2b31fa-2100-4e9f-9866-e6e527f7b193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.822 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4a53de-29ea-4d5a-bf0d-ef14b8257660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.826 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[151a37a2-18d8-489c-a995-7cf96d53a1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.857 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5cd124-7a13-41d1-8406-02169541ee8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.876 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72d8718c-5111-4b5b-aff0-36be1259d6f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d64d879-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:74:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 2146, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 2146, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719292, 'reachable_time': 17062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422446, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c72b49ab-f3a0-4972-8125-09f3d6f741fa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d64d879-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719302, 'tstamp': 719302}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422447, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.894 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d64d879-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 nova_compute[260603]: 2025-10-02 09:12:38.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d64d879-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d64d879-40, col_values=(('external_ids', {'iface-id': 'be1c87e3-582f-4bbb-a5fb-4fb837b7e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011076865395259492 of space, bias 1.0, pg target 0.33230596185778477 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:12:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.192 2 DEBUG nova.compute.manager [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.193 2 DEBUG oslo_concurrency.lockutils [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.194 2 DEBUG oslo_concurrency.lockutils [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.194 2 DEBUG oslo_concurrency.lockutils [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.194 2 DEBUG nova.compute.manager [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Processing event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.196 2 DEBUG nova.compute.manager [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.196 2 DEBUG oslo_concurrency.lockutils [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.196 2 DEBUG oslo_concurrency.lockutils [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.197 2 DEBUG oslo_concurrency.lockutils [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.197 2 DEBUG nova.compute.manager [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Processing event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.665 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.666 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396359.6641946, 66710b2a-3c24-45a9-a500-f29978d33f4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.667 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] VM Started (Lifecycle Event)
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.671 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.677 2 INFO nova.virt.libvirt.driver [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance spawned successfully.
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.678 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.695 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.708 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.709 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.710 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.711 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.712 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.713 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.718 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.759 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.760 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396359.6642907, 66710b2a-3c24-45a9-a500-f29978d33f4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.761 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] VM Paused (Lifecycle Event)
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.785 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.789 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396359.6698976, 66710b2a-3c24-45a9-a500-f29978d33f4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.790 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] VM Resumed (Lifecycle Event)
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.795 2 INFO nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Took 15.49 seconds to spawn the instance on the hypervisor.
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.796 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.808 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.814 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.846 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.873 2 INFO nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Took 16.82 seconds to build instance.
Oct 02 09:12:39 compute-0 nova_compute[260603]: 2025-10-02 09:12:39.891 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 09:12:40 compute-0 nova_compute[260603]: 2025-10-02 09:12:40.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:40 compute-0 nova_compute[260603]: 2025-10-02 09:12:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:12:40 compute-0 nova_compute[260603]: 2025-10-02 09:12:40.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.320 2 DEBUG nova.compute.manager [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.321 2 DEBUG oslo_concurrency.lockutils [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.321 2 DEBUG oslo_concurrency.lockutils [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.321 2 DEBUG oslo_concurrency.lockutils [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.322 2 DEBUG nova.compute.manager [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.322 2 WARNING nova.compute.manager [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received unexpected event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 for instance with vm_state active and task_state None.
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.384 2 DEBUG nova.compute.manager [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.385 2 DEBUG oslo_concurrency.lockutils [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.385 2 DEBUG oslo_concurrency.lockutils [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.386 2 DEBUG oslo_concurrency.lockutils [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.386 2 DEBUG nova.compute.manager [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.386 2 WARNING nova.compute.manager [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received unexpected event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d for instance with vm_state active and task_state None.
Oct 02 09:12:41 compute-0 ceph-mon[74477]: pgmap v2836: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 09:12:41 compute-0 nova_compute[260603]: 2025-10-02 09:12:41.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 09:12:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:43 compute-0 ceph-mon[74477]: pgmap v2837: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct 02 09:12:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:12:44 compute-0 nova_compute[260603]: 2025-10-02 09:12:44.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:44.280 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:12:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:44.281 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:12:44 compute-0 ceph-mon[74477]: pgmap v2838: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:12:45 compute-0 nova_compute[260603]: 2025-10-02 09:12:45.012 2 DEBUG nova.compute.manager [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:12:45 compute-0 nova_compute[260603]: 2025-10-02 09:12:45.012 2 DEBUG nova.compute.manager [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing instance network info cache due to event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:12:45 compute-0 nova_compute[260603]: 2025-10-02 09:12:45.012 2 DEBUG oslo_concurrency.lockutils [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:12:45 compute-0 nova_compute[260603]: 2025-10-02 09:12:45.013 2 DEBUG oslo_concurrency.lockutils [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:12:45 compute-0 nova_compute[260603]: 2025-10-02 09:12:45.013 2 DEBUG nova.network.neutron [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:12:45 compute-0 nova_compute[260603]: 2025-10-02 09:12:45.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:12:46 compute-0 nova_compute[260603]: 2025-10-02 09:12:46.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:47 compute-0 nova_compute[260603]: 2025-10-02 09:12:47.012 2 DEBUG nova.network.neutron [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updated VIF entry in instance network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:12:47 compute-0 nova_compute[260603]: 2025-10-02 09:12:47.013 2 DEBUG nova.network.neutron [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:12:47 compute-0 nova_compute[260603]: 2025-10-02 09:12:47.133 2 DEBUG oslo_concurrency.lockutils [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:12:47 compute-0 ceph-mon[74477]: pgmap v2839: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:12:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:12:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:12:48.283 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:12:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:48 compute-0 ceph-mon[74477]: pgmap v2840: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:12:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct 02 09:12:50 compute-0 nova_compute[260603]: 2025-10-02 09:12:50.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:51 compute-0 ceph-mon[74477]: pgmap v2841: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct 02 09:12:51 compute-0 nova_compute[260603]: 2025-10-02 09:12:51.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:52 compute-0 podman[422494]: 2025-10-02 09:12:52.038395364 +0000 UTC m=+0.090925426 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 09:12:52 compute-0 podman[422493]: 2025-10-02 09:12:52.092435703 +0000 UTC m=+0.144380528 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:12:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct 02 09:12:52 compute-0 ceph-mon[74477]: pgmap v2842: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct 02 09:12:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 179 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 92 op/s
Oct 02 09:12:55 compute-0 ceph-mon[74477]: pgmap v2843: 305 pgs: 305 active+clean; 179 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 92 op/s
Oct 02 09:12:55 compute-0 nova_compute[260603]: 2025-10-02 09:12:55.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 179 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.3 MiB/s wr, 22 op/s
Oct 02 09:12:56 compute-0 sudo[422537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:12:56 compute-0 sudo[422537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:12:56 compute-0 sudo[422537]: pam_unix(sudo:session): session closed for user root
Oct 02 09:12:56 compute-0 podman[422561]: 2025-10-02 09:12:56.418783899 +0000 UTC m=+0.074079563 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 09:12:56 compute-0 sudo[422575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:12:56 compute-0 sudo[422575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:12:56 compute-0 podman[422562]: 2025-10-02 09:12:56.435065835 +0000 UTC m=+0.080867214 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:12:56 compute-0 sudo[422575]: pam_unix(sudo:session): session closed for user root
Oct 02 09:12:56 compute-0 sudo[422627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:12:56 compute-0 sudo[422627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:12:56 compute-0 sudo[422627]: pam_unix(sudo:session): session closed for user root
Oct 02 09:12:56 compute-0 sudo[422652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:12:56 compute-0 sudo[422652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:12:56 compute-0 ovn_controller[152344]: 2025-10-02T09:12:56Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:a2:36 10.100.0.11
Oct 02 09:12:56 compute-0 ovn_controller[152344]: 2025-10-02T09:12:56Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:a2:36 10.100.0.11
Oct 02 09:12:56 compute-0 nova_compute[260603]: 2025-10-02 09:12:56.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:12:57 compute-0 sudo[422652]: pam_unix(sudo:session): session closed for user root
Oct 02 09:12:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:12:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:12:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:12:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:12:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:12:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5fffe95c-d5bf-48c2-bb6e-83c6444ffd8b does not exist
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f5253d04-2671-4eee-ac12-7d4be5520576 does not exist
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2759a9fb-36b4-4498-b7b3-966d772f8836 does not exist
Oct 02 09:12:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:12:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:12:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:12:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:12:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:12:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:12:57 compute-0 sudo[422710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:12:57 compute-0 sudo[422710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:12:57 compute-0 ceph-mon[74477]: pgmap v2844: 305 pgs: 305 active+clean; 179 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.3 MiB/s wr, 22 op/s
Oct 02 09:12:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:12:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:12:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:12:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:12:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:12:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:12:57 compute-0 sudo[422710]: pam_unix(sudo:session): session closed for user root
Oct 02 09:12:57 compute-0 sudo[422735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:12:57 compute-0 sudo[422735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:12:57 compute-0 sudo[422735]: pam_unix(sudo:session): session closed for user root
Oct 02 09:12:57 compute-0 sudo[422760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:12:57 compute-0 sudo[422760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:12:57 compute-0 sudo[422760]: pam_unix(sudo:session): session closed for user root
Oct 02 09:12:57 compute-0 sudo[422785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:12:57 compute-0 sudo[422785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:12:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:12:58 compute-0 podman[422851]: 2025-10-02 09:12:58.105725702 +0000 UTC m=+0.074554068 container create c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 09:12:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 09:12:58 compute-0 podman[422851]: 2025-10-02 09:12:58.067765132 +0000 UTC m=+0.036593518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:12:58 compute-0 systemd[1]: Started libpod-conmon-c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c.scope.
Oct 02 09:12:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:12:58 compute-0 podman[422851]: 2025-10-02 09:12:58.266803508 +0000 UTC m=+0.235631954 container init c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:12:58 compute-0 podman[422851]: 2025-10-02 09:12:58.278598435 +0000 UTC m=+0.247426791 container start c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:12:58 compute-0 cool_swirles[422867]: 167 167
Oct 02 09:12:58 compute-0 systemd[1]: libpod-c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c.scope: Deactivated successfully.
Oct 02 09:12:58 compute-0 podman[422851]: 2025-10-02 09:12:58.320736913 +0000 UTC m=+0.289565319 container attach c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:12:58 compute-0 podman[422851]: 2025-10-02 09:12:58.322422956 +0000 UTC m=+0.291251342 container died c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 09:12:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2845016db65fc1eb9f142e76b4b6335d499a55322a8ebd2c7d93afe5ba5ded70-merged.mount: Deactivated successfully.
Oct 02 09:12:58 compute-0 podman[422851]: 2025-10-02 09:12:58.526488138 +0000 UTC m=+0.495316494 container remove c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:12:58 compute-0 systemd[1]: libpod-conmon-c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c.scope: Deactivated successfully.
Oct 02 09:12:58 compute-0 podman[422890]: 2025-10-02 09:12:58.763613317 +0000 UTC m=+0.058509650 container create 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:12:58 compute-0 podman[422890]: 2025-10-02 09:12:58.731738456 +0000 UTC m=+0.026634799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:12:58 compute-0 systemd[1]: Started libpod-conmon-661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7.scope.
Oct 02 09:12:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:12:58 compute-0 podman[422890]: 2025-10-02 09:12:58.89984585 +0000 UTC m=+0.194742253 container init 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 09:12:58 compute-0 podman[422890]: 2025-10-02 09:12:58.910087278 +0000 UTC m=+0.204983601 container start 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 09:12:58 compute-0 podman[422890]: 2025-10-02 09:12:58.919906603 +0000 UTC m=+0.214802926 container attach 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:12:59 compute-0 ceph-mon[74477]: pgmap v2845: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 09:12:59 compute-0 nova_compute[260603]: 2025-10-02 09:12:59.523 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 09:13:00 compute-0 unruffled_chandrasekhar[422906]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:13:00 compute-0 unruffled_chandrasekhar[422906]: --> relative data size: 1.0
Oct 02 09:13:00 compute-0 unruffled_chandrasekhar[422906]: --> All data devices are unavailable
Oct 02 09:13:00 compute-0 systemd[1]: libpod-661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7.scope: Deactivated successfully.
Oct 02 09:13:00 compute-0 systemd[1]: libpod-661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7.scope: Consumed 1.224s CPU time.
Oct 02 09:13:00 compute-0 podman[422890]: 2025-10-02 09:13:00.219452538 +0000 UTC m=+1.514348861 container died 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c-merged.mount: Deactivated successfully.
Oct 02 09:13:00 compute-0 podman[422890]: 2025-10-02 09:13:00.320355833 +0000 UTC m=+1.615252166 container remove 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 09:13:00 compute-0 systemd[1]: libpod-conmon-661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7.scope: Deactivated successfully.
Oct 02 09:13:00 compute-0 sudo[422785]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:00 compute-0 sudo[422946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:13:00 compute-0 sudo[422946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:00 compute-0 sudo[422946]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:00 compute-0 sudo[422971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:13:00 compute-0 sudo[422971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:00 compute-0 sudo[422971]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:00 compute-0 sudo[422996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:13:00 compute-0 nova_compute[260603]: 2025-10-02 09:13:00.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:00 compute-0 sudo[422996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:00 compute-0 sudo[422996]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:00 compute-0 sudo[423021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:13:00 compute-0 sudo[423021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:00 compute-0 podman[423085]: 2025-10-02 09:13:00.981072685 +0000 UTC m=+0.098941895 container create 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:13:01 compute-0 podman[423085]: 2025-10-02 09:13:00.914052913 +0000 UTC m=+0.031922143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:13:01 compute-0 systemd[1]: Started libpod-conmon-5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a.scope.
Oct 02 09:13:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:13:01 compute-0 podman[423085]: 2025-10-02 09:13:01.08899677 +0000 UTC m=+0.206866010 container init 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:13:01 compute-0 podman[423085]: 2025-10-02 09:13:01.095739659 +0000 UTC m=+0.213608859 container start 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 09:13:01 compute-0 cool_chatelet[423102]: 167 167
Oct 02 09:13:01 compute-0 systemd[1]: libpod-5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a.scope: Deactivated successfully.
Oct 02 09:13:01 compute-0 podman[423085]: 2025-10-02 09:13:01.127122024 +0000 UTC m=+0.244991234 container attach 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:13:01 compute-0 podman[423085]: 2025-10-02 09:13:01.127410423 +0000 UTC m=+0.245279633 container died 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 02 09:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-18287e2d3558a5d5e735e600d53a7cdedf203cd007898ef7659bfbcc927e6f38-merged.mount: Deactivated successfully.
Oct 02 09:13:01 compute-0 podman[423085]: 2025-10-02 09:13:01.324875739 +0000 UTC m=+0.442744949 container remove 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:13:01 compute-0 systemd[1]: libpod-conmon-5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a.scope: Deactivated successfully.
Oct 02 09:13:01 compute-0 nova_compute[260603]: 2025-10-02 09:13:01.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:01 compute-0 nova_compute[260603]: 2025-10-02 09:13:01.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:13:01 compute-0 nova_compute[260603]: 2025-10-02 09:13:01.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:13:01 compute-0 podman[423126]: 2025-10-02 09:13:01.486935125 +0000 UTC m=+0.021868830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:13:01 compute-0 ceph-mon[74477]: pgmap v2846: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 09:13:01 compute-0 podman[423126]: 2025-10-02 09:13:01.585576641 +0000 UTC m=+0.120510316 container create 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:13:01 compute-0 systemd[1]: Started libpod-conmon-6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e.scope.
Oct 02 09:13:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:13:01 compute-0 nova_compute[260603]: 2025-10-02 09:13:01.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:01 compute-0 podman[423126]: 2025-10-02 09:13:01.760712153 +0000 UTC m=+0.295645898 container init 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:13:01 compute-0 podman[423126]: 2025-10-02 09:13:01.768032451 +0000 UTC m=+0.302966126 container start 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:13:01 compute-0 podman[423126]: 2025-10-02 09:13:01.823297199 +0000 UTC m=+0.358230954 container attach 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:13:02 compute-0 nova_compute[260603]: 2025-10-02 09:13:02.004 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:13:02 compute-0 nova_compute[260603]: 2025-10-02 09:13:02.004 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:13:02 compute-0 nova_compute[260603]: 2025-10-02 09:13:02.004 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:13:02 compute-0 nova_compute[260603]: 2025-10-02 09:13:02.004 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:13:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 09:13:02 compute-0 wizardly_raman[423143]: {
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:     "0": [
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:         {
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "devices": [
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "/dev/loop3"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             ],
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_name": "ceph_lv0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_size": "21470642176",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "name": "ceph_lv0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "tags": {
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cluster_name": "ceph",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.crush_device_class": "",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.encrypted": "0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osd_id": "0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.type": "block",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.vdo": "0"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             },
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "type": "block",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "vg_name": "ceph_vg0"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:         }
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:     ],
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:     "1": [
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:         {
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "devices": [
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "/dev/loop4"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             ],
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_name": "ceph_lv1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_size": "21470642176",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "name": "ceph_lv1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "tags": {
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cluster_name": "ceph",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.crush_device_class": "",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.encrypted": "0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osd_id": "1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.type": "block",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.vdo": "0"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             },
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "type": "block",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "vg_name": "ceph_vg1"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:         }
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:     ],
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:     "2": [
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:         {
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "devices": [
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "/dev/loop5"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             ],
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_name": "ceph_lv2",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_size": "21470642176",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "name": "ceph_lv2",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "tags": {
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.cluster_name": "ceph",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.crush_device_class": "",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.encrypted": "0",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osd_id": "2",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.type": "block",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:                 "ceph.vdo": "0"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             },
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "type": "block",
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:             "vg_name": "ceph_vg2"
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:         }
Oct 02 09:13:02 compute-0 wizardly_raman[423143]:     ]
Oct 02 09:13:02 compute-0 wizardly_raman[423143]: }
Oct 02 09:13:02 compute-0 systemd[1]: libpod-6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e.scope: Deactivated successfully.
Oct 02 09:13:02 compute-0 conmon[423143]: conmon 6d880ec1795572fbce46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e.scope/container/memory.events
Oct 02 09:13:02 compute-0 ceph-mon[74477]: pgmap v2847: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 09:13:02 compute-0 podman[423152]: 2025-10-02 09:13:02.622469443 +0000 UTC m=+0.025956707 container died 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:13:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279-merged.mount: Deactivated successfully.
Oct 02 09:13:02 compute-0 podman[423152]: 2025-10-02 09:13:02.736661322 +0000 UTC m=+0.140148586 container remove 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:13:02 compute-0 systemd[1]: libpod-conmon-6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e.scope: Deactivated successfully.
Oct 02 09:13:02 compute-0 sudo[423021]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:02 compute-0 sudo[423167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:13:02 compute-0 sudo[423167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:02 compute-0 sudo[423167]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:02 compute-0 sudo[423192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:13:02 compute-0 sudo[423192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:02 compute-0 sudo[423192]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:02 compute-0 sudo[423217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:13:02 compute-0 sudo[423217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:02 compute-0 sudo[423217]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:03 compute-0 sudo[423242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:13:03 compute-0 sudo[423242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:03 compute-0 podman[423308]: 2025-10-02 09:13:03.370482718 +0000 UTC m=+0.021514209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:13:03 compute-0 podman[423308]: 2025-10-02 09:13:03.60348764 +0000 UTC m=+0.254519111 container create 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 09:13:03 compute-0 systemd[1]: Started libpod-conmon-7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21.scope.
Oct 02 09:13:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:13:03 compute-0 podman[423308]: 2025-10-02 09:13:03.794548807 +0000 UTC m=+0.445580318 container init 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:13:03 compute-0 podman[423308]: 2025-10-02 09:13:03.806728296 +0000 UTC m=+0.457759767 container start 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:13:03 compute-0 youthful_heisenberg[423324]: 167 167
Oct 02 09:13:03 compute-0 systemd[1]: libpod-7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21.scope: Deactivated successfully.
Oct 02 09:13:03 compute-0 podman[423308]: 2025-10-02 09:13:03.812341609 +0000 UTC m=+0.463373110 container attach 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:13:03 compute-0 podman[423308]: 2025-10-02 09:13:03.812989349 +0000 UTC m=+0.464020810 container died 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a15ec1af6dd370fbcee5a5ce84e3d16f683c902e32b33156400230131fa6226f-merged.mount: Deactivated successfully.
Oct 02 09:13:03 compute-0 podman[423308]: 2025-10-02 09:13:03.969424021 +0000 UTC m=+0.620455492 container remove 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:13:03 compute-0 systemd[1]: libpod-conmon-7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21.scope: Deactivated successfully.
Oct 02 09:13:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 02 09:13:04 compute-0 podman[423346]: 2025-10-02 09:13:04.183646518 +0000 UTC m=+0.029905460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:13:04 compute-0 podman[423346]: 2025-10-02 09:13:04.277886397 +0000 UTC m=+0.124145309 container create 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:13:04 compute-0 nova_compute[260603]: 2025-10-02 09:13:04.370 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:13:04 compute-0 nova_compute[260603]: 2025-10-02 09:13:04.387 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:13:04 compute-0 nova_compute[260603]: 2025-10-02 09:13:04.387 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:13:04 compute-0 systemd[1]: Started libpod-conmon-84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409.scope.
Oct 02 09:13:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:13:04 compute-0 podman[423346]: 2025-10-02 09:13:04.481124903 +0000 UTC m=+0.327383825 container init 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:13:04 compute-0 podman[423346]: 2025-10-02 09:13:04.487822131 +0000 UTC m=+0.334081043 container start 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:13:04 compute-0 nova_compute[260603]: 2025-10-02 09:13:04.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:04 compute-0 podman[423346]: 2025-10-02 09:13:04.549960822 +0000 UTC m=+0.396219764 container attach 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:13:05 compute-0 ceph-mon[74477]: pgmap v2848: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 02 09:13:05 compute-0 nova_compute[260603]: 2025-10-02 09:13:05.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:05 compute-0 nova_compute[260603]: 2025-10-02 09:13:05.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:05 compute-0 bold_gagarin[423362]: {
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "osd_id": 2,
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "type": "bluestore"
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:     },
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "osd_id": 1,
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "type": "bluestore"
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:     },
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "osd_id": 0,
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:         "type": "bluestore"
Oct 02 09:13:05 compute-0 bold_gagarin[423362]:     }
Oct 02 09:13:05 compute-0 bold_gagarin[423362]: }
Oct 02 09:13:05 compute-0 nova_compute[260603]: 2025-10-02 09:13:05.591 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:05 compute-0 nova_compute[260603]: 2025-10-02 09:13:05.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:05 compute-0 nova_compute[260603]: 2025-10-02 09:13:05.593 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:05 compute-0 nova_compute[260603]: 2025-10-02 09:13:05.593 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:13:05 compute-0 nova_compute[260603]: 2025-10-02 09:13:05.593 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:13:05 compute-0 systemd[1]: libpod-84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409.scope: Deactivated successfully.
Oct 02 09:13:05 compute-0 systemd[1]: libpod-84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409.scope: Consumed 1.123s CPU time.
Oct 02 09:13:05 compute-0 podman[423396]: 2025-10-02 09:13:05.692578309 +0000 UTC m=+0.044206124 container died 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 02 09:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95-merged.mount: Deactivated successfully.
Oct 02 09:13:05 compute-0 podman[423396]: 2025-10-02 09:13:05.931647839 +0000 UTC m=+0.283275564 container remove 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:13:05 compute-0 systemd[1]: libpod-conmon-84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409.scope: Deactivated successfully.
Oct 02 09:13:05 compute-0 sudo[423242]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:13:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:13:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:13:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:13:06 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c46c46df-1b76-4338-8dbe-48bb021a8ad2 does not exist
Oct 02 09:13:06 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3e44f3d1-e65c-4ca2-a43b-f8650d375477 does not exist
Oct 02 09:13:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:13:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1608232646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.102 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:13:06 compute-0 sudo[423430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:13:06 compute-0 sudo[423430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:06 compute-0 sudo[423430]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 817 KiB/s wr, 48 op/s
Oct 02 09:13:06 compute-0 sudo[423458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:13:06 compute-0 sudo[423458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:13:06 compute-0 sudo[423458]: pam_unix(sudo:session): session closed for user root
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.243 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.245 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.249 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.250 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.432 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.433 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3156MB free_disk=59.8972053527832GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.434 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.434 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.680 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 71c9f70f-5f86-4723-9e4f-a4aca14211cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.681 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 66710b2a-3c24-45a9-a500-f29978d33f4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.681 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.681 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:06 compute-0 nova_compute[260603]: 2025-10-02 09:13:06.750 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:13:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:13:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:13:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1608232646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:13:07 compute-0 ceph-mon[74477]: pgmap v2849: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 817 KiB/s wr, 48 op/s
Oct 02 09:13:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:13:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280372030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:13:07 compute-0 nova_compute[260603]: 2025-10-02 09:13:07.285 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:13:07 compute-0 nova_compute[260603]: 2025-10-02 09:13:07.294 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:13:07 compute-0 nova_compute[260603]: 2025-10-02 09:13:07.317 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:13:07 compute-0 nova_compute[260603]: 2025-10-02 09:13:07.355 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:13:07 compute-0 nova_compute[260603]: 2025-10-02 09:13:07.356 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 817 KiB/s wr, 48 op/s
Oct 02 09:13:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1280372030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:13:08 compute-0 nova_compute[260603]: 2025-10-02 09:13:08.358 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:08 compute-0 nova_compute[260603]: 2025-10-02 09:13:08.359 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:09 compute-0 ceph-mon[74477]: pgmap v2850: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 817 KiB/s wr, 48 op/s
Oct 02 09:13:09 compute-0 nova_compute[260603]: 2025-10-02 09:13:09.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:13:10 compute-0 nova_compute[260603]: 2025-10-02 09:13:10.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:11 compute-0 ceph-mon[74477]: pgmap v2851: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:13:11 compute-0 nova_compute[260603]: 2025-10-02 09:13:11.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:13:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:13 compute-0 ceph-mon[74477]: pgmap v2852: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:13:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Oct 02 09:13:14 compute-0 ovn_controller[152344]: 2025-10-02T09:13:14Z|01599|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Oct 02 09:13:14 compute-0 ceph-mon[74477]: pgmap v2853: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Oct 02 09:13:15 compute-0 nova_compute[260603]: 2025-10-02 09:13:15.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:15 compute-0 nova_compute[260603]: 2025-10-02 09:13:15.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Oct 02 09:13:16 compute-0 nova_compute[260603]: 2025-10-02 09:13:16.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:17 compute-0 ceph-mon[74477]: pgmap v2854: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Oct 02 09:13:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct 02 09:13:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:18 compute-0 nova_compute[260603]: 2025-10-02 09:13:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:19 compute-0 ceph-mon[74477]: pgmap v2855: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct 02 09:13:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct 02 09:13:20 compute-0 nova_compute[260603]: 2025-10-02 09:13:20.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:20 compute-0 ceph-mon[74477]: pgmap v2856: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.508 2 DEBUG nova.compute.manager [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.508 2 DEBUG nova.compute.manager [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing instance network info cache due to event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.509 2 DEBUG oslo_concurrency.lockutils [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.509 2 DEBUG oslo_concurrency.lockutils [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.509 2 DEBUG nova.network.neutron [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.575 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.576 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.577 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.578 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.578 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.580 2 INFO nova.compute.manager [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Terminating instance
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.583 2 DEBUG nova.compute.manager [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:13:21 compute-0 nova_compute[260603]: 2025-10-02 09:13:21.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct 02 09:13:22 compute-0 kernel: tap244a8221-32 (unregistering): left promiscuous mode
Oct 02 09:13:22 compute-0 NetworkManager[45129]: <info>  [1759396402.3218] device (tap244a8221-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:13:22 compute-0 ovn_controller[152344]: 2025-10-02T09:13:22Z|01600|binding|INFO|Releasing lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 from this chassis (sb_readonly=0)
Oct 02 09:13:22 compute-0 ovn_controller[152344]: 2025-10-02T09:13:22Z|01601|binding|INFO|Setting lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 down in Southbound
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 ovn_controller[152344]: 2025-10-02T09:13:22Z|01602|binding|INFO|Removing iface tap244a8221-32 ovn-installed in OVS
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.357 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:a2:36 10.100.0.11'], port_security=['fa:16:3e:10:a2:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '66710b2a-3c24-45a9-a500-f29978d33f4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-436e56fa-4885-4043-b091-8043a6f9f710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79454522-7e2a-40b8-ae72-355dd621c03a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=244a8221-32fa-4c1b-959f-5a29d4e651f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.360 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 244a8221-32fa-4c1b-959f-5a29d4e651f1 in datapath 436e56fa-4885-4043-b091-8043a6f9f710 unbound from our chassis
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.363 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 436e56fa-4885-4043-b091-8043a6f9f710
Oct 02 09:13:22 compute-0 kernel: tap8c6c65df-30 (unregistering): left promiscuous mode
Oct 02 09:13:22 compute-0 NetworkManager[45129]: <info>  [1759396402.3725] device (tap8c6c65df-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d12060c9-2e3f-46ee-b6ce-ff19b16f3f45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 ovn_controller[152344]: 2025-10-02T09:13:22Z|01603|binding|INFO|Releasing lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d from this chassis (sb_readonly=0)
Oct 02 09:13:22 compute-0 ovn_controller[152344]: 2025-10-02T09:13:22Z|01604|binding|INFO|Setting lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d down in Southbound
Oct 02 09:13:22 compute-0 ovn_controller[152344]: 2025-10-02T09:13:22Z|01605|binding|INFO|Removing iface tap8c6c65df-30 ovn-installed in OVS
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct 02 09:13:22 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Consumed 14.636s CPU time.
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.436 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850'], port_security=['fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe2b:b850/64 2001:db8::f816:3eff:fe2b:b850/64', 'neutron:device_id': '66710b2a-3c24-45a9-a500-f29978d33f4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8c6c65df-3056-4b48-96d3-e6dc8744f31d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:13:22 compute-0 systemd-machined[214636]: Machine qemu-178-instance-00000090 terminated.
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.451 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d5cdf5-4278-43ad-b3b3-2135b1daffa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.455 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a83459-d5f8-4388-8ecf-720a526a87d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 podman[423508]: 2025-10-02 09:13:22.473054627 +0000 UTC m=+0.115764029 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.499 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf19d383-d239-4d08-96e1-c83046596725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 podman[423505]: 2025-10-02 09:13:22.510616164 +0000 UTC m=+0.161552151 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa2dc92-09a7-41af-bc7b-31b9f7857eda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap436e56fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:8d:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719184, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423562, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.546 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47e335fd-4c73-4012-bee1-35b7af5bd692]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap436e56fa-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719196, 'tstamp': 719196}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423563, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap436e56fa-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719199, 'tstamp': 719199}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423563, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.548 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436e56fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.559 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap436e56fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.559 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.560 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap436e56fa-40, col_values=(('external_ids', {'iface-id': '3c52d5f8-d941-470e-b21b-5afb7b1bf813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.561 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.562 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8c6c65df-3056-4b48-96d3-e6dc8744f31d in datapath 5d64d879-42c6-456c-a212-df00bf998997 unbound from our chassis
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.565 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d64d879-42c6-456c-a212-df00bf998997
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3469a0fc-08ee-47e1-a5c4-088127e5e797]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 NetworkManager[45129]: <info>  [1759396402.6129] manager: (tap244a8221-32): new Tun device (/org/freedesktop/NetworkManager/Devices/650)
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.620 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac4c139-7b26-49a2-af7f-9444feaafe2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.624 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe8ce5b-18e0-49ef-90a6-533e6e03272b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.644 2 INFO nova.virt.libvirt.driver [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance destroyed successfully.
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.645 2 DEBUG nova.objects.instance [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 66710b2a-3c24-45a9-a500-f29978d33f4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.666 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4d7b8e-4dca-466b-a964-81dbb0718b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.676 2 DEBUG nova.virt.libvirt.vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:12:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:12:39Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.678 2 DEBUG nova.network.os_vif_util [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.678 2 DEBUG nova.network.os_vif_util [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.679 2 DEBUG os_vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap244a8221-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.691 2 INFO os_vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32')
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.692 2 DEBUG nova.virt.libvirt.vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:12:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:12:39Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.692 2 DEBUG nova.network.os_vif_util [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.693 2 DEBUG nova.network.os_vif_util [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.693 2 DEBUG os_vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.694 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c6c65df-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.698 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9b1b93-7372-40a9-9a4d-fc252537c909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d64d879-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:74:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3460, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3460, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719292, 'reachable_time': 17062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423587, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.702 2 INFO os_vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30')
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.724 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a9a0fb-8cf2-477a-9b9b-b17368aa567c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d64d879-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719302, 'tstamp': 719302}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423593, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.726 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d64d879-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d64d879-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.730 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d64d879-40, col_values=(('external_ids', {'iface-id': 'be1c87e3-582f-4bbb-a5fb-4fb837b7e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.730 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.787 2 DEBUG nova.compute.manager [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-unplugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.787 2 DEBUG oslo_concurrency.lockutils [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.787 2 DEBUG oslo_concurrency.lockutils [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.788 2 DEBUG oslo_concurrency.lockutils [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.788 2 DEBUG nova.compute.manager [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-unplugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:13:22 compute-0 nova_compute[260603]: 2025-10-02 09:13:22.788 2 DEBUG nova.compute.manager [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-unplugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.044 2 DEBUG nova.network.neutron [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updated VIF entry in instance network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.045 2 DEBUG nova.network.neutron [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:13:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:23.073 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:13:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:23.074 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.077 2 DEBUG oslo_concurrency.lockutils [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:13:23 compute-0 ceph-mon[74477]: pgmap v2857: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct 02 09:13:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.495 2 INFO nova.virt.libvirt.driver [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Deleting instance files /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f_del
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.496 2 INFO nova.virt.libvirt.driver [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Deletion of /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f_del complete
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.553 2 INFO nova.compute.manager [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Took 1.97 seconds to destroy the instance on the hypervisor.
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.554 2 DEBUG oslo.service.loopingcall [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.554 2 DEBUG nova.compute.manager [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.554 2 DEBUG nova.network.neutron [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.629 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-unplugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.629 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.630 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.630 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.630 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-unplugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.631 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-unplugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.631 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.632 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.632 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.632 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.633 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:13:23 compute-0 nova_compute[260603]: 2025-10-02 09:13:23.633 2 WARNING nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received unexpected event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 for instance with vm_state active and task_state deleting.
Oct 02 09:13:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 9.4 KiB/s wr, 4 op/s
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.701 2 DEBUG nova.network.neutron [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.721 2 INFO nova.compute.manager [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Took 1.17 seconds to deallocate network for instance.
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.787 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.788 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.893 2 DEBUG nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG oslo_concurrency.lockutils [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG oslo_concurrency.lockutils [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG oslo_concurrency.lockutils [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.894 2 WARNING nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received unexpected event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d for instance with vm_state deleted and task_state None.
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-deleted-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.895 2 DEBUG nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-deleted-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:24 compute-0 nova_compute[260603]: 2025-10-02 09:13:24.896 2 DEBUG oslo_concurrency.processutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:13:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:13:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601005303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:13:25 compute-0 nova_compute[260603]: 2025-10-02 09:13:25.357 2 DEBUG oslo_concurrency.processutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:13:25 compute-0 nova_compute[260603]: 2025-10-02 09:13:25.364 2 DEBUG nova.compute.provider_tree [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:13:25 compute-0 nova_compute[260603]: 2025-10-02 09:13:25.381 2 DEBUG nova.scheduler.client.report [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:13:25 compute-0 nova_compute[260603]: 2025-10-02 09:13:25.402 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:25 compute-0 nova_compute[260603]: 2025-10-02 09:13:25.429 2 INFO nova.scheduler.client.report [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 66710b2a-3c24-45a9-a500-f29978d33f4f
Oct 02 09:13:25 compute-0 ceph-mon[74477]: pgmap v2858: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 9.4 KiB/s wr, 4 op/s
Oct 02 09:13:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2601005303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:13:25 compute-0 nova_compute[260603]: 2025-10-02 09:13:25.482 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 02 09:13:26 compute-0 nova_compute[260603]: 2025-10-02 09:13:26.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 podman[423631]: 2025-10-02 09:13:27.01207611 +0000 UTC m=+0.079456689 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 02 09:13:27 compute-0 podman[423632]: 2025-10-02 09:13:27.030612787 +0000 UTC m=+0.093527448 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.462 2 DEBUG nova.compute.manager [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-changed-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.463 2 DEBUG nova.compute.manager [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing instance network info cache due to event network-changed-102411d5-80b6-47af-9293-08b07c65d541. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.463 2 DEBUG oslo_concurrency.lockutils [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.463 2 DEBUG oslo_concurrency.lockutils [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:13:27 compute-0 ceph-mon[74477]: pgmap v2859: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.463 2 DEBUG nova.network.neutron [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing network info cache for port 102411d5-80b6-47af-9293-08b07c65d541 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.524 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.524 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.524 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.524 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.525 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.526 2 INFO nova.compute.manager [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Terminating instance
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.526 2 DEBUG nova.compute.manager [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:13:27 compute-0 kernel: tap102411d5-80 (unregistering): left promiscuous mode
Oct 02 09:13:27 compute-0 NetworkManager[45129]: <info>  [1759396407.5801] device (tap102411d5-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 ovn_controller[152344]: 2025-10-02T09:13:27Z|01606|binding|INFO|Releasing lport 102411d5-80b6-47af-9293-08b07c65d541 from this chassis (sb_readonly=0)
Oct 02 09:13:27 compute-0 ovn_controller[152344]: 2025-10-02T09:13:27Z|01607|binding|INFO|Setting lport 102411d5-80b6-47af-9293-08b07c65d541 down in Southbound
Oct 02 09:13:27 compute-0 ovn_controller[152344]: 2025-10-02T09:13:27Z|01608|binding|INFO|Removing iface tap102411d5-80 ovn-installed in OVS
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.607 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ca:02 10.100.0.6'], port_security=['fa:16:3e:05:ca:02 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '71c9f70f-5f86-4723-9e4f-a4aca14211cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-436e56fa-4885-4043-b091-8043a6f9f710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79454522-7e2a-40b8-ae72-355dd621c03a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=102411d5-80b6-47af-9293-08b07c65d541) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.608 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 102411d5-80b6-47af-9293-08b07c65d541 in datapath 436e56fa-4885-4043-b091-8043a6f9f710 unbound from our chassis
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.609 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 436e56fa-4885-4043-b091-8043a6f9f710, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.610 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4199d5-32e1-4f98-9f67-0f1fcd21eb42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.611 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 namespace which is not needed anymore
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 kernel: tapd5b6295d-90 (unregistering): left promiscuous mode
Oct 02 09:13:27 compute-0 NetworkManager[45129]: <info>  [1759396407.6299] device (tapd5b6295d-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 ovn_controller[152344]: 2025-10-02T09:13:27Z|01609|binding|INFO|Releasing lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 from this chassis (sb_readonly=0)
Oct 02 09:13:27 compute-0 ovn_controller[152344]: 2025-10-02T09:13:27Z|01610|binding|INFO|Setting lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 down in Southbound
Oct 02 09:13:27 compute-0 ovn_controller[152344]: 2025-10-02T09:13:27Z|01611|binding|INFO|Removing iface tapd5b6295d-90 ovn-installed in OVS
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.648 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203'], port_security=['fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3c:6203/64 2001:db8::f816:3eff:fe3c:6203/64', 'neutron:device_id': '71c9f70f-5f86-4723-9e4f-a4aca14211cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d5b6295d-90a7-4d25-be69-ccd7a58621c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct 02 09:13:27 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 16.258s CPU time.
Oct 02 09:13:27 compute-0 systemd-machined[214636]: Machine qemu-177-instance-0000008f terminated.
Oct 02 09:13:27 compute-0 NetworkManager[45129]: <info>  [1759396407.7608] manager: (tapd5b6295d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/651)
Oct 02 09:13:27 compute-0 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [NOTICE]   (421604) : haproxy version is 2.8.14-c23fe91
Oct 02 09:13:27 compute-0 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [NOTICE]   (421604) : path to executable is /usr/sbin/haproxy
Oct 02 09:13:27 compute-0 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [WARNING]  (421604) : Exiting Master process...
Oct 02 09:13:27 compute-0 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [ALERT]    (421604) : Current worker (421624) exited with code 143 (Terminated)
Oct 02 09:13:27 compute-0 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [WARNING]  (421604) : All workers exited. Exiting... (0)
Oct 02 09:13:27 compute-0 systemd[1]: libpod-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772.scope: Deactivated successfully.
Oct 02 09:13:27 compute-0 conmon[421591]: conmon ad5b94684147e8d87723 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772.scope/container/memory.events
Oct 02 09:13:27 compute-0 podman[423701]: 2025-10-02 09:13:27.778296081 +0000 UTC m=+0.052181332 container died ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.778 2 INFO nova.virt.libvirt.driver [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance destroyed successfully.
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.779 2 DEBUG nova.objects.instance [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.799 2 DEBUG nova.virt.libvirt.vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:11:56Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.800 2 DEBUG nova.network.os_vif_util [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.800 2 DEBUG nova.network.os_vif_util [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.801 2 DEBUG os_vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap102411d5-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772-userdata-shm.mount: Deactivated successfully.
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6ba23fbb429caa93a956634bafbbdd11b0d78f46b17b74eaebd84acb971bba0-merged.mount: Deactivated successfully.
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.845 2 INFO os_vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80')
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.846 2 DEBUG nova.virt.libvirt.vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:11:56Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.847 2 DEBUG nova.network.os_vif_util [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.847 2 DEBUG nova.network.os_vif_util [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.848 2 DEBUG os_vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5b6295d-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.854 2 INFO os_vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90')
Oct 02 09:13:27 compute-0 podman[423701]: 2025-10-02 09:13:27.860113094 +0000 UTC m=+0.133998345 container cleanup ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:13:27 compute-0 systemd[1]: libpod-conmon-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772.scope: Deactivated successfully.
Oct 02 09:13:27 compute-0 podman[423759]: 2025-10-02 09:13:27.934355901 +0000 UTC m=+0.049220410 container remove ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f64b78e-9380-4880-abe5-5b80cc7fc460]: (4, ('Thu Oct  2 09:13:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 (ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772)\nad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772\nThu Oct  2 09:13:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 (ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772)\nad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.941 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85641aa3-1ef6-4d82-be6d-dc0fc1343e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.943 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436e56fa-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 kernel: tap436e56fa-40: left promiscuous mode
Oct 02 09:13:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:13:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:13:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:13:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:13:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:13:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 nova_compute[260603]: 2025-10-02 09:13:27.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.968 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[837d7d5a-c88f-4b23-805d-509f3f54e988]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.991 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa554a57-3d7a-424a-a7d3-77749cc2dd7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.992 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[535e1f0f-083a-4dbd-b927-8a4908c75b54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.008 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[34265c4f-3c13-4c82-baba-d86392ef4152]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719174, 'reachable_time': 42147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423781, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d436e56fa\x2d4885\x2d4043\x2db091\x2d8043a6f9f710.mount: Deactivated successfully.
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.010 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.010 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[600a5463-8eea-4827-9ed1-8b0dff857c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.014 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d5b6295d-90a7-4d25-be69-ccd7a58621c6 in datapath 5d64d879-42c6-456c-a212-df00bf998997 unbound from our chassis
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.015 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d64d879-42c6-456c-a212-df00bf998997, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.016 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b997a87-99d5-4062-b0c3-87f8a0edc0af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.016 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 namespace which is not needed anymore
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:13:28
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.rgw.root', 'vms', '.mgr', 'images', 'default.rgw.control', 'volumes']
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 6.9 KiB/s wr, 31 op/s
Oct 02 09:13:28 compute-0 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [NOTICE]   (421815) : haproxy version is 2.8.14-c23fe91
Oct 02 09:13:28 compute-0 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [NOTICE]   (421815) : path to executable is /usr/sbin/haproxy
Oct 02 09:13:28 compute-0 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [WARNING]  (421815) : Exiting Master process...
Oct 02 09:13:28 compute-0 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [ALERT]    (421815) : Current worker (421819) exited with code 143 (Terminated)
Oct 02 09:13:28 compute-0 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [WARNING]  (421815) : All workers exited. Exiting... (0)
Oct 02 09:13:28 compute-0 systemd[1]: libpod-eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099.scope: Deactivated successfully.
Oct 02 09:13:28 compute-0 podman[423799]: 2025-10-02 09:13:28.199416698 +0000 UTC m=+0.057733215 container died eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099-userdata-shm.mount: Deactivated successfully.
Oct 02 09:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f89707fb13d0c4fb2e7f83ff5487b81f8ced57c0f936b53bc230c0c45bebc01b-merged.mount: Deactivated successfully.
Oct 02 09:13:28 compute-0 podman[423799]: 2025-10-02 09:13:28.251216358 +0000 UTC m=+0.109532845 container cleanup eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 09:13:28 compute-0 systemd[1]: libpod-conmon-eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099.scope: Deactivated successfully.
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.299 2 DEBUG nova.compute.manager [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-unplugged-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.301 2 DEBUG oslo_concurrency.lockutils [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.301 2 DEBUG oslo_concurrency.lockutils [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.302 2 DEBUG oslo_concurrency.lockutils [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.302 2 DEBUG nova.compute.manager [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-unplugged-102411d5-80b6-47af-9293-08b07c65d541 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.302 2 DEBUG nova.compute.manager [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-unplugged-102411d5-80b6-47af-9293-08b07c65d541 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:13:28 compute-0 podman[423828]: 2025-10-02 09:13:28.338128728 +0000 UTC m=+0.056726013 container remove eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.346 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c870865-49ec-4adc-adf2-a0567c79fa28]: (4, ('Thu Oct  2 09:13:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 (eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099)\neaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099\nThu Oct  2 09:13:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 (eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099)\neaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.348 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[71a1b782-90af-4d00-9cd2-09e5aba0dd70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.349 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d64d879-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:28 compute-0 kernel: tap5d64d879-40: left promiscuous mode
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.379 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[563c5704-69f6-499e-a311-55072a100c9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.402 2 INFO nova.virt.libvirt.driver [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Deleting instance files /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb_del
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.404 2 INFO nova.virt.libvirt.driver [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Deletion of /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb_del complete
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.413 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bd4833-dbd5-416c-8400-39d9294bddfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.415 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[077af658-5bac-48c9-be60-abf4685f52fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.435 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dadc0a88-dded-49a6-8f2d-21fc55739fb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719285, 'reachable_time': 20450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423840, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.437 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:13:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.437 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e86e3a26-c8f1-45b7-9457-c6a72b2bc4cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.458 2 INFO nova.compute.manager [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.460 2 DEBUG oslo.service.loopingcall [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.460 2 DEBUG nova.compute.manager [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:13:28 compute-0 nova_compute[260603]: 2025-10-02 09:13:28.461 2 DEBUG nova.network.neutron [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:13:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:13:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d64d879\x2d42c6\x2d456c\x2da212\x2ddf00bf998997.mount: Deactivated successfully.
Oct 02 09:13:29 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:29.077 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:13:29 compute-0 ceph-mon[74477]: pgmap v2860: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 6.9 KiB/s wr, 31 op/s
Oct 02 09:13:29 compute-0 nova_compute[260603]: 2025-10-02 09:13:29.575 2 DEBUG nova.compute.manager [req-a60b5349-f487-4e2f-b02c-23b767701304 req-2b05f18c-b7f4-4193-8ce7-640a58a77f03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-deleted-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:29 compute-0 nova_compute[260603]: 2025-10-02 09:13:29.576 2 INFO nova.compute.manager [req-a60b5349-f487-4e2f-b02c-23b767701304 req-2b05f18c-b7f4-4193-8ce7-640a58a77f03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Neutron deleted interface 102411d5-80b6-47af-9293-08b07c65d541; detaching it from the instance and deleting it from the info cache
Oct 02 09:13:29 compute-0 nova_compute[260603]: 2025-10-02 09:13:29.576 2 DEBUG nova.network.neutron [req-a60b5349-f487-4e2f-b02c-23b767701304 req-2b05f18c-b7f4-4193-8ce7-640a58a77f03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:13:29 compute-0 nova_compute[260603]: 2025-10-02 09:13:29.602 2 DEBUG nova.compute.manager [req-a60b5349-f487-4e2f-b02c-23b767701304 req-2b05f18c-b7f4-4193-8ce7-640a58a77f03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Detach interface failed, port_id=102411d5-80b6-47af-9293-08b07c65d541, reason: Instance 71c9f70f-5f86-4723-9e4f-a4aca14211cb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.054 2 DEBUG nova.network.neutron [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated VIF entry in instance network info cache for port 102411d5-80b6-47af-9293-08b07c65d541. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.054 2 DEBUG nova.network.neutron [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.095 2 DEBUG oslo_concurrency.lockutils [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:13:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 5.9 KiB/s wr, 30 op/s
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.368 2 DEBUG nova.network.neutron [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.399 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.400 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.400 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.402 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.403 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.404 2 WARNING nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received unexpected event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 for instance with vm_state active and task_state deleting.
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.404 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-unplugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.405 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.406 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.406 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.407 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-unplugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.407 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-unplugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.408 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.408 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.409 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.409 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.410 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.410 2 WARNING nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received unexpected event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 for instance with vm_state active and task_state deleting.
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.413 2 INFO nova.compute.manager [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Took 1.95 seconds to deallocate network for instance.
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.597 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.598 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:30 compute-0 nova_compute[260603]: 2025-10-02 09:13:30.665 2 DEBUG oslo_concurrency.processutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:13:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:13:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654881305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:13:31 compute-0 nova_compute[260603]: 2025-10-02 09:13:31.096 2 DEBUG oslo_concurrency.processutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:13:31 compute-0 nova_compute[260603]: 2025-10-02 09:13:31.105 2 DEBUG nova.compute.provider_tree [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:13:31 compute-0 nova_compute[260603]: 2025-10-02 09:13:31.133 2 DEBUG nova.scheduler.client.report [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:13:31 compute-0 nova_compute[260603]: 2025-10-02 09:13:31.179 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:31 compute-0 nova_compute[260603]: 2025-10-02 09:13:31.205 2 INFO nova.scheduler.client.report [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 71c9f70f-5f86-4723-9e4f-a4aca14211cb
Oct 02 09:13:31 compute-0 nova_compute[260603]: 2025-10-02 09:13:31.270 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:31 compute-0 ceph-mon[74477]: pgmap v2861: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 5.9 KiB/s wr, 30 op/s
Oct 02 09:13:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/654881305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:13:31 compute-0 nova_compute[260603]: 2025-10-02 09:13:31.698 2 DEBUG nova.compute.manager [req-bbe9462f-d5c6-41cf-921a-3fca2da06c3b req-44a6f2f9-ec52-4663-bcc6-6a1344ee6bcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-deleted-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:13:31 compute-0 nova_compute[260603]: 2025-10-02 09:13:31.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 5.9 KiB/s wr, 30 op/s
Oct 02 09:13:32 compute-0 nova_compute[260603]: 2025-10-02 09:13:32.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:33 compute-0 ceph-mon[74477]: pgmap v2862: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 5.9 KiB/s wr, 30 op/s
Oct 02 09:13:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 7.1 KiB/s wr, 57 op/s
Oct 02 09:13:34 compute-0 ceph-mon[74477]: pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 7.1 KiB/s wr, 57 op/s
Oct 02 09:13:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:34.850 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:13:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:34.851 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:13:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:13:34.851 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:13:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 7.0 KiB/s wr, 54 op/s
Oct 02 09:13:36 compute-0 nova_compute[260603]: 2025-10-02 09:13:36.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:37 compute-0 ceph-mon[74477]: pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 7.0 KiB/s wr, 54 op/s
Oct 02 09:13:37 compute-0 nova_compute[260603]: 2025-10-02 09:13:37.642 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396402.641145, 66710b2a-3c24-45a9-a500-f29978d33f4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:13:37 compute-0 nova_compute[260603]: 2025-10-02 09:13:37.643 2 INFO nova.compute.manager [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] VM Stopped (Lifecycle Event)
Oct 02 09:13:37 compute-0 nova_compute[260603]: 2025-10-02 09:13:37.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:37 compute-0 nova_compute[260603]: 2025-10-02 09:13:37.674 2 DEBUG nova.compute.manager [None req-4f6af903-4ad1-49d5-9d8f-c59dae47f232 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:13:37 compute-0 nova_compute[260603]: 2025-10-02 09:13:37.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:37 compute-0 nova_compute[260603]: 2025-10-02 09:13:37.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 7.0 KiB/s wr, 54 op/s
Oct 02 09:13:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:13:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:13:39 compute-0 ceph-mon[74477]: pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 7.0 KiB/s wr, 54 op/s
Oct 02 09:13:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:13:40 compute-0 nova_compute[260603]: 2025-10-02 09:13:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:13:40 compute-0 nova_compute[260603]: 2025-10-02 09:13:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:13:41 compute-0 ceph-mon[74477]: pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:13:41 compute-0 nova_compute[260603]: 2025-10-02 09:13:41.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:13:42 compute-0 nova_compute[260603]: 2025-10-02 09:13:42.776 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396407.7744737, 71c9f70f-5f86-4723-9e4f-a4aca14211cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:13:42 compute-0 nova_compute[260603]: 2025-10-02 09:13:42.777 2 INFO nova.compute.manager [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] VM Stopped (Lifecycle Event)
Oct 02 09:13:42 compute-0 nova_compute[260603]: 2025-10-02 09:13:42.803 2 DEBUG nova.compute.manager [None req-e51d24a1-ab70-4e04-bd9f-314ef7ef0139 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:13:42 compute-0 ceph-mon[74477]: pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:13:42 compute-0 nova_compute[260603]: 2025-10-02 09:13:42.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:13:45 compute-0 ceph-mon[74477]: pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 02 09:13:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:46 compute-0 nova_compute[260603]: 2025-10-02 09:13:46.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:47 compute-0 ceph-mon[74477]: pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:47 compute-0 nova_compute[260603]: 2025-10-02 09:13:47.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:48 compute-0 ceph-mon[74477]: pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:51 compute-0 ceph-mon[74477]: pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:51 compute-0 nova_compute[260603]: 2025-10-02 09:13:51.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:52 compute-0 nova_compute[260603]: 2025-10-02 09:13:52.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:52 compute-0 podman[423865]: 2025-10-02 09:13:52.992077222 +0000 UTC m=+0.059918234 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:13:53 compute-0 podman[423864]: 2025-10-02 09:13:53.036043538 +0000 UTC m=+0.097766980 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:13:53 compute-0 ceph-mon[74477]: pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:55 compute-0 ceph-mon[74477]: pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:56 compute-0 nova_compute[260603]: 2025-10-02 09:13:56.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:57 compute-0 ceph-mon[74477]: pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:57 compute-0 nova_compute[260603]: 2025-10-02 09:13:57.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:13:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:13:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:13:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:13:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:13:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:13:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:13:58 compute-0 podman[423909]: 2025-10-02 09:13:58.013823544 +0000 UTC m=+0.057171217 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:13:58 compute-0 podman[423910]: 2025-10-02 09:13:58.040728641 +0000 UTC m=+0.070880033 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:13:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:13:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:13:59 compute-0 ceph-mon[74477]: pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:00 compute-0 nova_compute[260603]: 2025-10-02 09:14:00.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:01 compute-0 ceph-mon[74477]: pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:01 compute-0 nova_compute[260603]: 2025-10-02 09:14:01.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.003 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.003 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.027 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.119 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.119 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.129 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.129 2 INFO nova.compute.claims [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:14:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.236 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:14:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982536581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.675 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.684 2 DEBUG nova.compute.provider_tree [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.703 2 DEBUG nova.scheduler.client.report [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.730 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.731 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.772 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.773 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.800 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.818 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.902 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.903 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.903 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Creating image(s)
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.931 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:02 compute-0 nova_compute[260603]: 2025-10-02 09:14:02.970 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.006 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.010 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.063 2 DEBUG nova.policy [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.105 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.106 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.107 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.107 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.129 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.132 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7293bf39-223f-4668-bd0f-c65476fac3e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:03 compute-0 ceph-mon[74477]: pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3982536581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:14:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:14:03 compute-0 nova_compute[260603]: 2025-10-02 09:14:03.544 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.004 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7293bf39-223f-4668-bd0f-c65476fac3e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.872s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.051 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Successfully created port: 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.110 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:14:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.237 2 DEBUG nova.objects.instance [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7293bf39-223f-4668-bd0f-c65476fac3e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.254 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.255 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Ensure instance console log exists: /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.256 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.256 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.257 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:04 compute-0 nova_compute[260603]: 2025-10-02 09:14:04.520 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Successfully created port: 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:14:05 compute-0 nova_compute[260603]: 2025-10-02 09:14:05.258 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Successfully updated port: 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:14:05 compute-0 nova_compute[260603]: 2025-10-02 09:14:05.374 2 DEBUG nova.compute.manager [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:05 compute-0 nova_compute[260603]: 2025-10-02 09:14:05.375 2 DEBUG nova.compute.manager [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing instance network info cache due to event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:14:05 compute-0 nova_compute[260603]: 2025-10-02 09:14:05.376 2 DEBUG oslo_concurrency.lockutils [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:14:05 compute-0 nova_compute[260603]: 2025-10-02 09:14:05.376 2 DEBUG oslo_concurrency.lockutils [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:14:05 compute-0 nova_compute[260603]: 2025-10-02 09:14:05.377 2 DEBUG nova.network.neutron [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:14:05 compute-0 nova_compute[260603]: 2025-10-02 09:14:05.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:05 compute-0 ceph-mon[74477]: pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:05 compute-0 nova_compute[260603]: 2025-10-02 09:14:05.617 2 DEBUG nova.network.neutron [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:14:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:06 compute-0 sudo[424136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:06 compute-0 sudo[424136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:06 compute-0 sudo[424136]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:06 compute-0 sudo[424161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:14:06 compute-0 sudo[424161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:06 compute-0 sudo[424161]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:06 compute-0 sudo[424186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:06 compute-0 sudo[424186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:06 compute-0 sudo[424186]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:06 compute-0 sudo[424211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:14:06 compute-0 sudo[424211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.534 2 DEBUG nova.network.neutron [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.585 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Successfully updated port: 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.588 2 DEBUG oslo_concurrency.lockutils [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.605 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.605 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.605 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.776 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:06 compute-0 sudo[424211]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:14:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330760844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:14:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:14:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:14:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:14:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:14:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:14:06 compute-0 nova_compute[260603]: 2025-10-02 09:14:06.993 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:07 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:14:07 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 731670fc-cdc4-42e2-aa6d-3debc18dc20d does not exist
Oct 02 09:14:07 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2dcfc675-c57f-40cb-a96d-df74bfdae4f9 does not exist
Oct 02 09:14:07 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3e6ac35a-807f-4a12-9669-c2bf9a73596b does not exist
Oct 02 09:14:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:14:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:14:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:14:07 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:14:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:14:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:14:07 compute-0 sudo[424291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:07 compute-0 sudo[424291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:07 compute-0 sudo[424291]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:07 compute-0 sudo[424316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:14:07 compute-0 sudo[424316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.169 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.170 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3602MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.170 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.170 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:07 compute-0 sudo[424316]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:07 compute-0 sudo[424341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:07 compute-0 sudo[424341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:07 compute-0 sudo[424341]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.274 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7293bf39-223f-4668-bd0f-c65476fac3e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.274 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.274 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:14:07 compute-0 sudo[424366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:14:07 compute-0 sudo[424366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.316 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.472 2 DEBUG nova.compute.manager [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-changed-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.472 2 DEBUG nova.compute.manager [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing instance network info cache due to event network-changed-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.473 2 DEBUG oslo_concurrency.lockutils [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:14:07 compute-0 ceph-mon[74477]: pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:14:07 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3330760844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:14:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:14:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:14:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:14:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:14:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:14:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:14:07 compute-0 podman[424451]: 2025-10-02 09:14:07.601798572 +0000 UTC m=+0.020373505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:14:07 compute-0 podman[424451]: 2025-10-02 09:14:07.711246243 +0000 UTC m=+0.129821126 container create f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:14:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:14:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1158445354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.750 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.755 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:14:07 compute-0 systemd[1]: Started libpod-conmon-f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe.scope.
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.777 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:14:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.808 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:14:07 compute-0 nova_compute[260603]: 2025-10-02 09:14:07.808 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:07 compute-0 podman[424451]: 2025-10-02 09:14:07.842110759 +0000 UTC m=+0.260685732 container init f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 09:14:07 compute-0 podman[424451]: 2025-10-02 09:14:07.850360436 +0000 UTC m=+0.268935309 container start f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:14:07 compute-0 podman[424451]: 2025-10-02 09:14:07.85402838 +0000 UTC m=+0.272603343 container attach f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:14:07 compute-0 silly_elion[424469]: 167 167
Oct 02 09:14:07 compute-0 systemd[1]: libpod-f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe.scope: Deactivated successfully.
Oct 02 09:14:07 compute-0 podman[424451]: 2025-10-02 09:14:07.859006205 +0000 UTC m=+0.277581098 container died f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:14:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b2da1585cddbb6f6ed44b41d89977d29bbcf1a0b236bf7698effe2f903dd4e0-merged.mount: Deactivated successfully.
Oct 02 09:14:07 compute-0 podman[424451]: 2025-10-02 09:14:07.903514058 +0000 UTC m=+0.322088951 container remove f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:14:07 compute-0 systemd[1]: libpod-conmon-f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe.scope: Deactivated successfully.
Oct 02 09:14:08 compute-0 nova_compute[260603]: 2025-10-02 09:14:08.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:08 compute-0 podman[424495]: 2025-10-02 09:14:08.085350568 +0000 UTC m=+0.057390964 container create 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:14:08 compute-0 systemd[1]: Started libpod-conmon-1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087.scope.
Oct 02 09:14:08 compute-0 podman[424495]: 2025-10-02 09:14:08.056217303 +0000 UTC m=+0.028257689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:14:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:08 compute-0 podman[424495]: 2025-10-02 09:14:08.188019669 +0000 UTC m=+0.160060045 container init 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:14:08 compute-0 podman[424495]: 2025-10-02 09:14:08.199480225 +0000 UTC m=+0.171520621 container start 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:14:08 compute-0 podman[424495]: 2025-10-02 09:14:08.21060808 +0000 UTC m=+0.182648446 container attach 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:14:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1158445354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:14:09 compute-0 epic_pasteur[424511]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:14:09 compute-0 epic_pasteur[424511]: --> relative data size: 1.0
Oct 02 09:14:09 compute-0 epic_pasteur[424511]: --> All data devices are unavailable
Oct 02 09:14:09 compute-0 systemd[1]: libpod-1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087.scope: Deactivated successfully.
Oct 02 09:14:09 compute-0 podman[424495]: 2025-10-02 09:14:09.233164208 +0000 UTC m=+1.205204584 container died 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288-merged.mount: Deactivated successfully.
Oct 02 09:14:09 compute-0 podman[424495]: 2025-10-02 09:14:09.587976093 +0000 UTC m=+1.560016479 container remove 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:14:09 compute-0 systemd[1]: libpod-conmon-1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087.scope: Deactivated successfully.
Oct 02 09:14:09 compute-0 sudo[424366]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:09 compute-0 ceph-mon[74477]: pgmap v2880: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:09 compute-0 sudo[424555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:09 compute-0 sudo[424555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:09 compute-0 sudo[424555]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:09 compute-0 sudo[424580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:14:09 compute-0 sudo[424580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:09 compute-0 sudo[424580]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:09 compute-0 sudo[424605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:09 compute-0 sudo[424605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:09 compute-0 sudo[424605]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:09 compute-0 sudo[424630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:14:09 compute-0 sudo[424630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:10 compute-0 podman[424696]: 2025-10-02 09:14:10.341093117 +0000 UTC m=+0.047824437 container create 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:14:10 compute-0 systemd[1]: Started libpod-conmon-2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5.scope.
Oct 02 09:14:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:14:10 compute-0 podman[424696]: 2025-10-02 09:14:10.403380513 +0000 UTC m=+0.110111813 container init 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:14:10 compute-0 podman[424696]: 2025-10-02 09:14:10.319627781 +0000 UTC m=+0.026359091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:14:10 compute-0 podman[424696]: 2025-10-02 09:14:10.416022926 +0000 UTC m=+0.122754246 container start 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:14:10 compute-0 gallant_mclaren[424712]: 167 167
Oct 02 09:14:10 compute-0 systemd[1]: libpod-2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5.scope: Deactivated successfully.
Oct 02 09:14:10 compute-0 podman[424696]: 2025-10-02 09:14:10.423401595 +0000 UTC m=+0.130132875 container attach 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:14:10 compute-0 podman[424696]: 2025-10-02 09:14:10.42388826 +0000 UTC m=+0.130619540 container died 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-26171184e808d6476d160e3618759f3295233729047ca903bea84c4bb6d00776-merged.mount: Deactivated successfully.
Oct 02 09:14:10 compute-0 podman[424696]: 2025-10-02 09:14:10.461900441 +0000 UTC m=+0.168631721 container remove 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:14:10 compute-0 systemd[1]: libpod-conmon-2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5.scope: Deactivated successfully.
Oct 02 09:14:10 compute-0 podman[424735]: 2025-10-02 09:14:10.637561842 +0000 UTC m=+0.040940025 container create 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:14:10 compute-0 ceph-mon[74477]: pgmap v2881: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:10 compute-0 systemd[1]: Started libpod-conmon-571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4.scope.
Oct 02 09:14:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:10 compute-0 podman[424735]: 2025-10-02 09:14:10.619735937 +0000 UTC m=+0.023114110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:14:10 compute-0 podman[424735]: 2025-10-02 09:14:10.726958119 +0000 UTC m=+0.130336302 container init 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:14:10 compute-0 podman[424735]: 2025-10-02 09:14:10.744457334 +0000 UTC m=+0.147835537 container start 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 02 09:14:10 compute-0 podman[424735]: 2025-10-02 09:14:10.748676294 +0000 UTC m=+0.152054467 container attach 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:14:10 compute-0 nova_compute[260603]: 2025-10-02 09:14:10.808 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.314 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.337 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.337 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance network_info: |[{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.337 2 DEBUG oslo_concurrency.lockutils [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.338 2 DEBUG nova.network.neutron [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing network info cache for port 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.342 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start _get_guest_xml network_info=[{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.350 2 WARNING nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.361 2 DEBUG nova.virt.libvirt.host [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.362 2 DEBUG nova.virt.libvirt.host [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.365 2 DEBUG nova.virt.libvirt.host [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.366 2 DEBUG nova.virt.libvirt.host [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.366 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.366 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.367 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.367 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.367 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.369 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.369 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.372 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]: {
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:     "0": [
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:         {
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "devices": [
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "/dev/loop3"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             ],
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_name": "ceph_lv0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_size": "21470642176",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "name": "ceph_lv0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "tags": {
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cluster_name": "ceph",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.crush_device_class": "",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.encrypted": "0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osd_id": "0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.type": "block",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.vdo": "0"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             },
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "type": "block",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "vg_name": "ceph_vg0"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:         }
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:     ],
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:     "1": [
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:         {
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "devices": [
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "/dev/loop4"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             ],
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_name": "ceph_lv1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_size": "21470642176",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "name": "ceph_lv1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "tags": {
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cluster_name": "ceph",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.crush_device_class": "",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.encrypted": "0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osd_id": "1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.type": "block",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.vdo": "0"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             },
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "type": "block",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "vg_name": "ceph_vg1"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:         }
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:     ],
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:     "2": [
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:         {
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "devices": [
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "/dev/loop5"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             ],
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_name": "ceph_lv2",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_size": "21470642176",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "name": "ceph_lv2",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "tags": {
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.cluster_name": "ceph",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.crush_device_class": "",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.encrypted": "0",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osd_id": "2",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.type": "block",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:                 "ceph.vdo": "0"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             },
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "type": "block",
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:             "vg_name": "ceph_vg2"
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:         }
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]:     ]
Oct 02 09:14:11 compute-0 sweet_aryabhata[424751]: }
Oct 02 09:14:11 compute-0 systemd[1]: libpod-571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4.scope: Deactivated successfully.
Oct 02 09:14:11 compute-0 podman[424735]: 2025-10-02 09:14:11.595941274 +0000 UTC m=+0.999319467 container died 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9-merged.mount: Deactivated successfully.
Oct 02 09:14:11 compute-0 podman[424735]: 2025-10-02 09:14:11.718483943 +0000 UTC m=+1.121862126 container remove 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 02 09:14:11 compute-0 systemd[1]: libpod-conmon-571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4.scope: Deactivated successfully.
Oct 02 09:14:11 compute-0 sudo[424630]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:14:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150068212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:14:11 compute-0 sudo[424793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:11 compute-0 sudo[424793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:11 compute-0 sudo[424793]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.848 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.878 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:11 compute-0 nova_compute[260603]: 2025-10-02 09:14:11.885 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:11 compute-0 sudo[424820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:14:11 compute-0 sudo[424820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:11 compute-0 sudo[424820]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:11 compute-0 sudo[424864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:11 compute-0 sudo[424864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2150068212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:14:11 compute-0 sudo[424864]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:12 compute-0 sudo[424889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:14:12 compute-0 sudo[424889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:14:12 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/487459000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.335 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.337 2 DEBUG nova.virt.libvirt.vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.337 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.338 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.338 2 DEBUG nova.virt.libvirt.vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.339 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.339 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.340 2 DEBUG nova.objects.instance [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7293bf39-223f-4668-bd0f-c65476fac3e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:14:12 compute-0 podman[424976]: 2025-10-02 09:14:12.358730138 +0000 UTC m=+0.056412614 container create d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.361 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <uuid>7293bf39-223f-4668-bd0f-c65476fac3e4</uuid>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <name>instance-00000091</name>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-1093810609</nova:name>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:14:11</nova:creationTime>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:port uuid="17c0a9ac-d61a-433a-b3f3-154a8c467f5a">
Oct 02 09:14:12 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <nova:port uuid="6f4bc2ea-2d5e-4fbb-95ce-ada64748d460">
Oct 02 09:14:12 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fed2:6656" ipVersion="6"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <system>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <entry name="serial">7293bf39-223f-4668-bd0f-c65476fac3e4</entry>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <entry name="uuid">7293bf39-223f-4668-bd0f-c65476fac3e4</entry>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </system>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <os>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   </os>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <features>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   </features>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7293bf39-223f-4668-bd0f-c65476fac3e4_disk">
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       </source>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config">
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       </source>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:14:12 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:65:2b:06"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <target dev="tap17c0a9ac-d6"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:d2:66:56"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <target dev="tap6f4bc2ea-2d"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/console.log" append="off"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <video>
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </video>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:14:12 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:14:12 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:14:12 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:14:12 compute-0 nova_compute[260603]: </domain>
Oct 02 09:14:12 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.362 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Preparing to wait for external event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.362 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.362 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Preparing to wait for external event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.364 2 DEBUG nova.virt.libvirt.vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.365 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.365 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.365 2 DEBUG os_vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17c0a9ac-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17c0a9ac-d6, col_values=(('external_ids', {'iface-id': '17c0a9ac-d61a-433a-b3f3-154a8c467f5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:2b:06', 'vm-uuid': '7293bf39-223f-4668-bd0f-c65476fac3e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:12 compute-0 NetworkManager[45129]: <info>  [1759396452.3734] manager: (tap17c0a9ac-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.380 2 INFO os_vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6')
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.380 2 DEBUG nova.virt.libvirt.vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.381 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.381 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.381 2 DEBUG os_vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f4bc2ea-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f4bc2ea-2d, col_values=(('external_ids', {'iface-id': '6f4bc2ea-2d5e-4fbb-95ce-ada64748d460', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:66:56', 'vm-uuid': '7293bf39-223f-4668-bd0f-c65476fac3e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:12 compute-0 NetworkManager[45129]: <info>  [1759396452.3865] manager: (tap6f4bc2ea-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.393 2 INFO os_vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d')
Oct 02 09:14:12 compute-0 systemd[1]: Started libpod-conmon-d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e.scope.
Oct 02 09:14:12 compute-0 podman[424976]: 2025-10-02 09:14:12.322640157 +0000 UTC m=+0.020322653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.439 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.440 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.440 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:65:2b:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.440 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:d2:66:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.441 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Using config drive
Oct 02 09:14:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:14:12 compute-0 podman[424976]: 2025-10-02 09:14:12.464615838 +0000 UTC m=+0.162298314 container init d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.468 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:12 compute-0 podman[424976]: 2025-10-02 09:14:12.474912359 +0000 UTC m=+0.172594825 container start d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 02 09:14:12 compute-0 podman[424976]: 2025-10-02 09:14:12.478646385 +0000 UTC m=+0.176328861 container attach d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:14:12 compute-0 zealous_murdock[424998]: 167 167
Oct 02 09:14:12 compute-0 systemd[1]: libpod-d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e.scope: Deactivated successfully.
Oct 02 09:14:12 compute-0 podman[424976]: 2025-10-02 09:14:12.482657339 +0000 UTC m=+0.180339825 container died d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 09:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-caa3ff46d79bd45a9137ebc282129da18c60a5bd81ca503ede3b9f5f4dd70383-merged.mount: Deactivated successfully.
Oct 02 09:14:12 compute-0 podman[424976]: 2025-10-02 09:14:12.729444718 +0000 UTC m=+0.427127224 container remove d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:14:12 compute-0 systemd[1]: libpod-conmon-d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e.scope: Deactivated successfully.
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.916 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Creating config drive at /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config
Oct 02 09:14:12 compute-0 nova_compute[260603]: 2025-10-02 09:14:12.923 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaicx45e_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:13 compute-0 podman[425040]: 2025-10-02 09:14:12.912625831 +0000 UTC m=+0.027999091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:14:13 compute-0 podman[425040]: 2025-10-02 09:14:13.017882222 +0000 UTC m=+0.133255442 container create 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:14:13 compute-0 ceph-mon[74477]: pgmap v2882: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/487459000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:14:13 compute-0 nova_compute[260603]: 2025-10-02 09:14:13.071 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaicx45e_" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:13 compute-0 nova_compute[260603]: 2025-10-02 09:14:13.112 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:13 compute-0 systemd[1]: Started libpod-conmon-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope.
Oct 02 09:14:13 compute-0 nova_compute[260603]: 2025-10-02 09:14:13.117 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:13 compute-0 nova_compute[260603]: 2025-10-02 09:14:13.185 2 DEBUG nova.network.neutron [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updated VIF entry in instance network info cache for port 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:14:13 compute-0 nova_compute[260603]: 2025-10-02 09:14:13.187 2 DEBUG nova.network.neutron [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:14:13 compute-0 nova_compute[260603]: 2025-10-02 09:14:13.211 2 DEBUG oslo_concurrency.lockutils [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:14:13 compute-0 podman[425040]: 2025-10-02 09:14:13.231097957 +0000 UTC m=+0.346471157 container init 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:14:13 compute-0 podman[425040]: 2025-10-02 09:14:13.240685825 +0000 UTC m=+0.356059005 container start 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:14:13 compute-0 podman[425040]: 2025-10-02 09:14:13.30165213 +0000 UTC m=+0.417025400 container attach 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:14:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:13 compute-0 nova_compute[260603]: 2025-10-02 09:14:13.998 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.880s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.000 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Deleting local config drive /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config because it was imported into RBD.
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.0710] manager: (tap17c0a9ac-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Oct 02 09:14:14 compute-0 kernel: tap17c0a9ac-d6: entered promiscuous mode
Oct 02 09:14:14 compute-0 systemd-udevd[425128]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01612|binding|INFO|Claiming lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a for this chassis.
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01613|binding|INFO|17c0a9ac-d61a-433a-b3f3-154a8c467f5a: Claiming fa:16:3e:65:2b:06 10.100.0.10
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:14 compute-0 kernel: tap6f4bc2ea-2d: entered promiscuous mode
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.1312] manager: (tap6f4bc2ea-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01614|if_status|INFO|Dropped 1 log messages in last 95 seconds (most recently, 95 seconds ago) due to excessive rate
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01615|if_status|INFO|Not updating pb chassis for 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 now as sb is readonly
Oct 02 09:14:14 compute-0 systemd-udevd[425138]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.1408] device (tap17c0a9ac-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.1432] device (tap17c0a9ac-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01616|binding|INFO|Claiming lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 for this chassis.
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01617|binding|INFO|6f4bc2ea-2d5e-4fbb-95ce-ada64748d460: Claiming fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.143 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:2b:06 10.100.0.10'], port_security=['fa:16:3e:65:2b:06 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7293bf39-223f-4668-bd0f-c65476fac3e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6339cab-fbb4-4887-8953-252cca735cc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b3bab0-2229-4a60-832f-071c3bc1d0ec, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=17c0a9ac-d61a-433a-b3f3-154a8c467f5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.144 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a in datapath c6339cab-fbb4-4887-8953-252cca735cc6 bound to our chassis
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.145 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6339cab-fbb4-4887-8953-252cca735cc6
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.148 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656'], port_security=['fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed2:6656/64', 'neutron:device_id': '7293bf39-223f-4668-bd0f-c65476fac3e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f302b50b-078a-40f3-87d8-1172d81fe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82aa0caa-5e65-4ef0-b1d6-b9e910e6cadb, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.1589] device (tap6f4bc2ea-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.1602] device (tap6f4bc2ea-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.162 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29a9defd-2f40-4475-8ff4-f18853a64587]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.163 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6339cab-f1 in ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.165 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6339cab-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.165 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18bd6431-c830-4b09-aa7b-6fae4bdfba3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.167 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac3246-8e4d-44c8-80fc-18de5b1fb064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 systemd-machined[214636]: New machine qemu-179-instance-00000091.
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.178 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddfddc2-7e7b-496d-bffc-c438792bf90d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000091.
Oct 02 09:14:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.204 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b07a7ca-fa6b-4538-ba66-109a8073341b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01618|binding|INFO|Setting lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a ovn-installed in OVS
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01619|binding|INFO|Setting lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a up in Southbound
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01620|binding|INFO|Setting lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 ovn-installed in OVS
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01621|binding|INFO|Setting lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 up in Southbound
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[05f867ff-b520-4194-aad5-f1f823af135c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.2572] manager: (tapc6339cab-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/656)
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[694e9f73-daa3-4a9b-96cb-5bee1aa4699a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]: {
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "osd_id": 2,
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "type": "bluestore"
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:     },
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "osd_id": 1,
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "type": "bluestore"
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:     },
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "osd_id": 0,
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:         "type": "bluestore"
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]:     }
Oct 02 09:14:14 compute-0 kind_hodgkin[425078]: }
Oct 02 09:14:14 compute-0 systemd[1]: libpod-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope: Deactivated successfully.
Oct 02 09:14:14 compute-0 systemd[1]: libpod-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope: Consumed 1.032s CPU time.
Oct 02 09:14:14 compute-0 conmon[425078]: conmon 3faab0a072f8fd5844d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope/container/memory.events
Oct 02 09:14:14 compute-0 podman[425040]: 2025-10-02 09:14:14.293472192 +0000 UTC m=+1.408845382 container died 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.307 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[78a583b3-4d49-45e6-baec-470a0a72b39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.312 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0807b79-ea1a-406e-b6a0-86772d0e8113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.3372] device (tapc6339cab-f0): carrier: link connected
Oct 02 09:14:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a-merged.mount: Deactivated successfully.
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.346 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcefc71-9c08-46c1-b99d-156996baebbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c6593d-da97-4f2a-9a96-9599e2ae4ffc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6339cab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:e0:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733293, 'reachable_time': 28730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425195, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 podman[425040]: 2025-10-02 09:14:14.394933165 +0000 UTC m=+1.510306345 container remove 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 09:14:14 compute-0 systemd[1]: libpod-conmon-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope: Deactivated successfully.
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.398 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[596d46de-e6d7-4e5f-9c7d-84e31dd284c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:e05c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733293, 'tstamp': 733293}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425203, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e070086-3219-47bb-ba63-77dee93a969f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6339cab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:e0:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733293, 'reachable_time': 28730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 425215, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 sudo[424889]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.434 2 DEBUG nova.compute.manager [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.435 2 DEBUG oslo_concurrency.lockutils [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.435 2 DEBUG oslo_concurrency.lockutils [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.435 2 DEBUG oslo_concurrency.lockutils [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.436 2 DEBUG nova.compute.manager [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Processing event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:14:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:14:14 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:14:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.454 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0b01b260-acae-4367-b921-cd60418b1ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:14:14 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9944b4df-3e1c-4cae-9a01-763118d4c2e6 does not exist
Oct 02 09:14:14 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e914bea6-52e8-465b-8615-e97bce9eb505 does not exist
Oct 02 09:14:14 compute-0 sudo[425237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:14:14 compute-0 sudo[425237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.524 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c23f9e8f-2098-4d91-9920-25e47fe32062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 sudo[425237]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.525 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6339cab-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.526 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.526 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6339cab-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:14 compute-0 NetworkManager[45129]: <info>  [1759396454.5286] manager: (tapc6339cab-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Oct 02 09:14:14 compute-0 kernel: tapc6339cab-f0: entered promiscuous mode
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.530 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6339cab-f0, col_values=(('external_ids', {'iface-id': '698ce34e-6a9d-4a50-8426-c137ad35d6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:14 compute-0 ovn_controller[152344]: 2025-10-02T09:14:14Z|01622|binding|INFO|Releasing lport 698ce34e-6a9d-4a50-8426-c137ad35d6fb from this chassis (sb_readonly=0)
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.548 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6339cab-fbb4-4887-8953-252cca735cc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6339cab-fbb4-4887-8953-252cca735cc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.549 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b44590e-eee5-404c-936e-bcb308cca90d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.550 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-c6339cab-fbb4-4887-8953-252cca735cc6
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/c6339cab-fbb4-4887-8953-252cca735cc6.pid.haproxy
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID c6339cab-fbb4-4887-8953-252cca735cc6
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:14:14 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.551 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'env', 'PROCESS_TAG=haproxy-c6339cab-fbb4-4887-8953-252cca735cc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6339cab-fbb4-4887-8953-252cca735cc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:14:14 compute-0 sudo[425271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:14:14 compute-0 sudo[425271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:14:14 compute-0 sudo[425271]: pam_unix(sudo:session): session closed for user root
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.963 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396454.9626837, 7293bf39-223f-4668-bd0f-c65476fac3e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.965 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] VM Started (Lifecycle Event)
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.993 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:14:14 compute-0 nova_compute[260603]: 2025-10-02 09:14:14.999 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396454.962983, 7293bf39-223f-4668-bd0f-c65476fac3e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:14:15 compute-0 nova_compute[260603]: 2025-10-02 09:14:15.001 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] VM Paused (Lifecycle Event)
Oct 02 09:14:15 compute-0 podman[425322]: 2025-10-02 09:14:15.016037936 +0000 UTC m=+0.093996082 container create f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:14:15 compute-0 nova_compute[260603]: 2025-10-02 09:14:15.026 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:14:15 compute-0 nova_compute[260603]: 2025-10-02 09:14:15.031 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:14:15 compute-0 podman[425322]: 2025-10-02 09:14:14.963791922 +0000 UTC m=+0.041750108 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:14:15 compute-0 systemd[1]: Started libpod-conmon-f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63.scope.
Oct 02 09:14:15 compute-0 nova_compute[260603]: 2025-10-02 09:14:15.065 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:14:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa7e4bc0c2d4824c03116f4f6f6e684d6814bec0ec595371262415f74733fdb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:15 compute-0 podman[425322]: 2025-10-02 09:14:15.117625203 +0000 UTC m=+0.195583429 container init f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:14:15 compute-0 podman[425322]: 2025-10-02 09:14:15.122576797 +0000 UTC m=+0.200534983 container start f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 09:14:15 compute-0 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [NOTICE]   (425341) : New worker (425343) forked
Oct 02 09:14:15 compute-0 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [NOTICE]   (425341) : Loading success.
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.201 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 in datapath f302b50b-078a-40f3-87d8-1172d81fe604 unbound from our chassis
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.203 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f302b50b-078a-40f3-87d8-1172d81fe604
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.215 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65045182-21e0-40f6-9f1b-e74a225cf9ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.217 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf302b50b-01 in ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.219 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf302b50b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.219 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e08e240-52cc-45d2-b84c-04ac678cab78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.220 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08b6bd46-c286-426c-b8aa-27de1f8719e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.233 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7e7ed6-2706-4a23-bcfe-ebc36886f096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.260 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[89f302d8-e585-423b-bcc3-899da7a71e29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.293 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0a069f77-c573-4d65-9039-36f0672adf46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 NetworkManager[45129]: <info>  [1759396455.3026] manager: (tapf302b50b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.302 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47402e61-810a-494e-8d78-fe6f53e7341e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 systemd-udevd[425175]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.342 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[354d34cc-ffe9-4385-843f-32762af1eb12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.345 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5cdc61e1-dfda-4794-99b8-320fad701a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 NetworkManager[45129]: <info>  [1759396455.3735] device (tapf302b50b-00): carrier: link connected
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.380 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[add28784-fe6e-4f65-a55b-ccb2fe66a45b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.399 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b733b84-879a-41ac-a1ec-0ba8a000be0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf302b50b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:c1:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733397, 'reachable_time': 26509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425362, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d81cb9-e149-420d-a9ba-571cc058201e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:c132'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733397, 'tstamp': 733397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425363, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.434 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ea66f2-48ff-4ea3-a5ed-3fcc4902ecb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf302b50b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:c1:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733397, 'reachable_time': 26509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 425364, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[689c5fb2-4421-419f-8257-6e14dc2dc3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ceph-mon[74477]: pgmap v2883: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:15 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:14:15 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.511 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8c3cec-0430-4887-a5b8-c984ce2886cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf302b50b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.515 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf302b50b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:15 compute-0 nova_compute[260603]: 2025-10-02 09:14:15.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:15 compute-0 NetworkManager[45129]: <info>  [1759396455.5192] manager: (tapf302b50b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Oct 02 09:14:15 compute-0 kernel: tapf302b50b-00: entered promiscuous mode
Oct 02 09:14:15 compute-0 nova_compute[260603]: 2025-10-02 09:14:15.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.522 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf302b50b-00, col_values=(('external_ids', {'iface-id': '3ba778b6-61e6-4019-8a62-1bee20d3b186'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:15 compute-0 nova_compute[260603]: 2025-10-02 09:14:15.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:15 compute-0 ovn_controller[152344]: 2025-10-02T09:14:15Z|01623|binding|INFO|Releasing lport 3ba778b6-61e6-4019-8a62-1bee20d3b186 from this chassis (sb_readonly=0)
Oct 02 09:14:15 compute-0 nova_compute[260603]: 2025-10-02 09:14:15.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.553 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f302b50b-078a-40f3-87d8-1172d81fe604.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f302b50b-078a-40f3-87d8-1172d81fe604.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.554 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c0ae45-f754-4e4d-b330-2b3bc217669e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.555 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-f302b50b-078a-40f3-87d8-1172d81fe604
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/f302b50b-078a-40f3-87d8-1172d81fe604.pid.haproxy
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID f302b50b-078a-40f3-87d8-1172d81fe604
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:14:15 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.555 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'env', 'PROCESS_TAG=haproxy-f302b50b-078a-40f3-87d8-1172d81fe604', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f302b50b-078a-40f3-87d8-1172d81fe604.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:14:16 compute-0 podman[425394]: 2025-10-02 09:14:15.941261078 +0000 UTC m=+0.038566729 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:14:16 compute-0 podman[425394]: 2025-10-02 09:14:16.179936865 +0000 UTC m=+0.277242466 container create 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 09:14:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:16 compute-0 systemd[1]: Started libpod-conmon-6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64.scope.
Oct 02 09:14:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4deff7f3e4521b9208a36798e9ac94c4d2e8879828943b12e08aea6816e2d9bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:14:16 compute-0 podman[425394]: 2025-10-02 09:14:16.334659714 +0000 UTC m=+0.431965305 container init 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:14:16 compute-0 podman[425394]: 2025-10-02 09:14:16.341872728 +0000 UTC m=+0.439178289 container start 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:14:16 compute-0 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [NOTICE]   (425413) : New worker (425415) forked
Oct 02 09:14:16 compute-0 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [NOTICE]   (425413) : Loading success.
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.531 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.532 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.532 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.533 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.533 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No event matching network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a in dict_keys([('network-vif-plugged', '6f4bc2ea-2d5e-4fbb-95ce-ada64748d460')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.534 2 WARNING nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received unexpected event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a for instance with vm_state building and task_state spawning.
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.534 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.535 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.535 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.536 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.536 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Processing event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.536 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.537 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.537 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.538 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.538 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.539 2 WARNING nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received unexpected event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 for instance with vm_state building and task_state spawning.
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.540 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.545 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396456.545263, 7293bf39-223f-4668-bd0f-c65476fac3e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.546 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] VM Resumed (Lifecycle Event)
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.549 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.554 2 INFO nova.virt.libvirt.driver [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance spawned successfully.
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.555 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.567 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.578 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.588 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.589 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.590 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.591 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.592 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.593 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.616 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.675 2 INFO nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Took 13.77 seconds to spawn the instance on the hypervisor.
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.676 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.749 2 INFO nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Took 14.66 seconds to build instance.
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.765 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:16 compute-0 nova_compute[260603]: 2025-10-02 09:14:16.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:17 compute-0 nova_compute[260603]: 2025-10-02 09:14:17.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:17 compute-0 nova_compute[260603]: 2025-10-02 09:14:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:17 compute-0 nova_compute[260603]: 2025-10-02 09:14:17.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:14:17 compute-0 nova_compute[260603]: 2025-10-02 09:14:17.535 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:14:17 compute-0 ceph-mon[74477]: pgmap v2884: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 02 09:14:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:18 compute-0 ceph-mon[74477]: pgmap v2885: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 02 09:14:19 compute-0 nova_compute[260603]: 2025-10-02 09:14:19.537 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Oct 02 09:14:21 compute-0 ceph-mon[74477]: pgmap v2886: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Oct 02 09:14:21 compute-0 ovn_controller[152344]: 2025-10-02T09:14:21Z|01624|binding|INFO|Releasing lport 3ba778b6-61e6-4019-8a62-1bee20d3b186 from this chassis (sb_readonly=0)
Oct 02 09:14:21 compute-0 ovn_controller[152344]: 2025-10-02T09:14:21Z|01625|binding|INFO|Releasing lport 698ce34e-6a9d-4a50-8426-c137ad35d6fb from this chassis (sb_readonly=0)
Oct 02 09:14:21 compute-0 NetworkManager[45129]: <info>  [1759396461.5357] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Oct 02 09:14:21 compute-0 NetworkManager[45129]: <info>  [1759396461.5366] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:21 compute-0 ovn_controller[152344]: 2025-10-02T09:14:21Z|01626|binding|INFO|Releasing lport 3ba778b6-61e6-4019-8a62-1bee20d3b186 from this chassis (sb_readonly=0)
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:21 compute-0 ovn_controller[152344]: 2025-10-02T09:14:21Z|01627|binding|INFO|Releasing lport 698ce34e-6a9d-4a50-8426-c137ad35d6fb from this chassis (sb_readonly=0)
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.929 2 DEBUG nova.compute.manager [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.930 2 DEBUG nova.compute.manager [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing instance network info cache due to event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.931 2 DEBUG oslo_concurrency.lockutils [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.932 2 DEBUG oslo_concurrency.lockutils [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:14:21 compute-0 nova_compute[260603]: 2025-10-02 09:14:21.932 2 DEBUG nova.network.neutron [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:14:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:14:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/317323458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:14:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:14:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/317323458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:14:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Oct 02 09:14:22 compute-0 nova_compute[260603]: 2025-10-02 09:14:22.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/317323458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:14:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/317323458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:14:23 compute-0 nova_compute[260603]: 2025-10-02 09:14:23.314 2 DEBUG nova.network.neutron [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updated VIF entry in instance network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:14:23 compute-0 nova_compute[260603]: 2025-10-02 09:14:23.315 2 DEBUG nova.network.neutron [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:14:23 compute-0 nova_compute[260603]: 2025-10-02 09:14:23.339 2 DEBUG oslo_concurrency.lockutils [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:14:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:23 compute-0 ceph-mon[74477]: pgmap v2887: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Oct 02 09:14:24 compute-0 podman[425426]: 2025-10-02 09:14:24.027574286 +0000 UTC m=+0.072484813 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:14:24 compute-0 podman[425425]: 2025-10-02 09:14:24.027617438 +0000 UTC m=+0.085333883 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:14:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:14:24 compute-0 ceph-mon[74477]: pgmap v2888: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:14:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:14:26 compute-0 nova_compute[260603]: 2025-10-02 09:14:26.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:27 compute-0 ceph-mon[74477]: pgmap v2889: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:14:27 compute-0 nova_compute[260603]: 2025-10-02 09:14:27.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:14:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:14:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:14:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:14:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:14:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:14:28
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log']
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 696 KiB/s wr, 87 op/s
Oct 02 09:14:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:14:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:14:28 compute-0 podman[425473]: 2025-10-02 09:14:28.996768808 +0000 UTC m=+0.066583690 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 09:14:28 compute-0 podman[425472]: 2025-10-02 09:14:28.99681406 +0000 UTC m=+0.065806956 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 09:14:29 compute-0 ovn_controller[152344]: 2025-10-02T09:14:29Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:2b:06 10.100.0.10
Oct 02 09:14:29 compute-0 ovn_controller[152344]: 2025-10-02T09:14:29Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:2b:06 10.100.0.10
Oct 02 09:14:29 compute-0 ceph-mon[74477]: pgmap v2890: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 696 KiB/s wr, 87 op/s
Oct 02 09:14:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 684 KiB/s wr, 28 op/s
Oct 02 09:14:30 compute-0 ceph-mon[74477]: pgmap v2891: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 684 KiB/s wr, 28 op/s
Oct 02 09:14:31 compute-0 nova_compute[260603]: 2025-10-02 09:14:31.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 684 KiB/s wr, 28 op/s
Oct 02 09:14:32 compute-0 nova_compute[260603]: 2025-10-02 09:14:32.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:33 compute-0 ceph-mon[74477]: pgmap v2892: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 684 KiB/s wr, 28 op/s
Oct 02 09:14:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:33 compute-0 nova_compute[260603]: 2025-10-02 09:14:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 743 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Oct 02 09:14:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:34.851 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:34.852 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:34.854 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:35 compute-0 ceph-mon[74477]: pgmap v2893: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 743 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Oct 02 09:14:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:14:36 compute-0 nova_compute[260603]: 2025-10-02 09:14:36.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:37 compute-0 ceph-mon[74477]: pgmap v2894: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:14:37 compute-0 nova_compute[260603]: 2025-10-02 09:14:37.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 09:14:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:14:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:14:39 compute-0 ceph-mon[74477]: pgmap v2895: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 09:14:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 09:14:40 compute-0 nova_compute[260603]: 2025-10-02 09:14:40.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:40 compute-0 nova_compute[260603]: 2025-10-02 09:14:40.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:14:40 compute-0 nova_compute[260603]: 2025-10-02 09:14:40.541 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:14:40 compute-0 nova_compute[260603]: 2025-10-02 09:14:40.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:14:41 compute-0 ceph-mon[74477]: pgmap v2896: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 09:14:41 compute-0 nova_compute[260603]: 2025-10-02 09:14:41.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 09:14:42 compute-0 nova_compute[260603]: 2025-10-02 09:14:42.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:43 compute-0 ceph-mon[74477]: pgmap v2897: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 09:14:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 09:14:45 compute-0 nova_compute[260603]: 2025-10-02 09:14:45.087 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:45 compute-0 nova_compute[260603]: 2025-10-02 09:14:45.088 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:45 compute-0 nova_compute[260603]: 2025-10-02 09:14:45.194 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:14:45 compute-0 nova_compute[260603]: 2025-10-02 09:14:45.308 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:45 compute-0 nova_compute[260603]: 2025-10-02 09:14:45.309 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:45 compute-0 nova_compute[260603]: 2025-10-02 09:14:45.318 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:14:45 compute-0 nova_compute[260603]: 2025-10-02 09:14:45.318 2 INFO nova.compute.claims [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:14:45 compute-0 nova_compute[260603]: 2025-10-02 09:14:45.674 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:45 compute-0 ceph-mon[74477]: pgmap v2898: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 09:14:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:14:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4194959007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.112 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.122 2 DEBUG nova.compute.provider_tree [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.142 2 DEBUG nova.scheduler.client.report [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.173 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.174 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:14:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.241 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.241 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.277 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.300 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.416 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.417 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.417 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Creating image(s)
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.439 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.460 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.481 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.484 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.584 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.585 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.585 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.586 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.610 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.613 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4194959007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:14:46 compute-0 ceph-mon[74477]: pgmap v2899: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.932 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:46 compute-0 nova_compute[260603]: 2025-10-02 09:14:46.981 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:14:47 compute-0 nova_compute[260603]: 2025-10-02 09:14:47.060 2 DEBUG nova.objects.instance [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 6acf9ec4-afe0-4ef6-b857-246ad87fe800 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:14:47 compute-0 nova_compute[260603]: 2025-10-02 09:14:47.064 2 DEBUG nova.policy [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:14:47 compute-0 nova_compute[260603]: 2025-10-02 09:14:47.081 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:14:47 compute-0 nova_compute[260603]: 2025-10-02 09:14:47.081 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Ensure instance console log exists: /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:14:47 compute-0 nova_compute[260603]: 2025-10-02 09:14:47.082 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:47 compute-0 nova_compute[260603]: 2025-10-02 09:14:47.082 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:47 compute-0 nova_compute[260603]: 2025-10-02 09:14:47.082 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:47 compute-0 nova_compute[260603]: 2025-10-02 09:14:47.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:48 compute-0 nova_compute[260603]: 2025-10-02 09:14:48.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:48.072 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:14:48 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:48.073 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:14:48 compute-0 nova_compute[260603]: 2025-10-02 09:14:48.150 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Successfully created port: 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:14:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:49 compute-0 nova_compute[260603]: 2025-10-02 09:14:49.090 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Successfully created port: 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:14:49 compute-0 ceph-mon[74477]: pgmap v2900: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:50.076 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:50 compute-0 nova_compute[260603]: 2025-10-02 09:14:50.276 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Successfully updated port: 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:14:50 compute-0 nova_compute[260603]: 2025-10-02 09:14:50.370 2 DEBUG nova.compute.manager [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:50 compute-0 nova_compute[260603]: 2025-10-02 09:14:50.370 2 DEBUG nova.compute.manager [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing instance network info cache due to event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:14:50 compute-0 nova_compute[260603]: 2025-10-02 09:14:50.371 2 DEBUG oslo_concurrency.lockutils [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:14:50 compute-0 nova_compute[260603]: 2025-10-02 09:14:50.371 2 DEBUG oslo_concurrency.lockutils [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:14:50 compute-0 nova_compute[260603]: 2025-10-02 09:14:50.371 2 DEBUG nova.network.neutron [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:14:50 compute-0 nova_compute[260603]: 2025-10-02 09:14:50.583 2 DEBUG nova.network.neutron [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:14:51 compute-0 ceph-mon[74477]: pgmap v2901: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:51 compute-0 nova_compute[260603]: 2025-10-02 09:14:51.337 2 DEBUG nova.network.neutron [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:14:51 compute-0 nova_compute[260603]: 2025-10-02 09:14:51.363 2 DEBUG oslo_concurrency.lockutils [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:14:51 compute-0 nova_compute[260603]: 2025-10-02 09:14:51.418 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Successfully updated port: 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:14:51 compute-0 nova_compute[260603]: 2025-10-02 09:14:51.437 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:14:51 compute-0 nova_compute[260603]: 2025-10-02 09:14:51.438 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:14:51 compute-0 nova_compute[260603]: 2025-10-02 09:14:51.438 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:14:51 compute-0 nova_compute[260603]: 2025-10-02 09:14:51.573 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:14:51 compute-0 nova_compute[260603]: 2025-10-02 09:14:51.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:52 compute-0 nova_compute[260603]: 2025-10-02 09:14:52.444 2 DEBUG nova.compute.manager [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-changed-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:52 compute-0 nova_compute[260603]: 2025-10-02 09:14:52.444 2 DEBUG nova.compute.manager [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing instance network info cache due to event network-changed-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:14:52 compute-0 nova_compute[260603]: 2025-10-02 09:14:52.444 2 DEBUG oslo_concurrency.lockutils [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:14:52 compute-0 nova_compute[260603]: 2025-10-02 09:14:52.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.228 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.249 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.250 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance network_info: |[{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.250 2 DEBUG oslo_concurrency.lockutils [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.251 2 DEBUG nova.network.neutron [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing network info cache for port 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.253 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start _get_guest_xml network_info=[{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.258 2 WARNING nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.263 2 DEBUG nova.virt.libvirt.host [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.264 2 DEBUG nova.virt.libvirt.host [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.270 2 DEBUG nova.virt.libvirt.host [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.271 2 DEBUG nova.virt.libvirt.host [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.271 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.271 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.272 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.272 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.272 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.272 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.274 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.276 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:53 compute-0 ceph-mon[74477]: pgmap v2902: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:14:53 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440182644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.724 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.767 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:53 compute-0 nova_compute[260603]: 2025-10-02 09:14:53.772 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:14:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178415046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.230 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.232 2 DEBUG nova.virt.libvirt.vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:46Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.233 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.234 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.235 2 DEBUG nova.virt.libvirt.vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:46Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.236 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.237 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.239 2 DEBUG nova.objects.instance [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6acf9ec4-afe0-4ef6-b857-246ad87fe800 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.256 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <uuid>6acf9ec4-afe0-4ef6-b857-246ad87fe800</uuid>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <name>instance-00000092</name>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-847780468</nova:name>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:14:53</nova:creationTime>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:port uuid="7304fbbd-4ecf-4fd7-95ea-9dba30ee6456">
Oct 02 09:14:54 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <nova:port uuid="3b0dd52d-fd1e-4e15-a6b5-ef4735fde479">
Oct 02 09:14:54 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef8:8ec" ipVersion="6"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <system>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <entry name="serial">6acf9ec4-afe0-4ef6-b857-246ad87fe800</entry>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <entry name="uuid">6acf9ec4-afe0-4ef6-b857-246ad87fe800</entry>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </system>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <os>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   </os>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <features>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   </features>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk">
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       </source>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config">
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       </source>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:14:54 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:2e:55:21"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <target dev="tap7304fbbd-4e"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:f8:08:ec"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <target dev="tap3b0dd52d-fd"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/console.log" append="off"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <video>
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </video>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:14:54 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:14:54 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:14:54 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:14:54 compute-0 nova_compute[260603]: </domain>
Oct 02 09:14:54 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.258 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Preparing to wait for external event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.259 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.260 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.260 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.261 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Preparing to wait for external event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.261 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.262 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.262 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.264 2 DEBUG nova.virt.libvirt.vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:46Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.265 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.266 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.267 2 DEBUG os_vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7304fbbd-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.276 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7304fbbd-4e, col_values=(('external_ids', {'iface-id': '7304fbbd-4ecf-4fd7-95ea-9dba30ee6456', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:55:21', 'vm-uuid': '6acf9ec4-afe0-4ef6-b857-246ad87fe800'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:54 compute-0 NetworkManager[45129]: <info>  [1759396494.2808] manager: (tap7304fbbd-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.292 2 INFO os_vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e')
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.294 2 DEBUG nova.virt.libvirt.vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:46Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.294 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.296 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.297 2 DEBUG os_vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.299 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b0dd52d-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.303 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b0dd52d-fd, col_values=(('external_ids', {'iface-id': '3b0dd52d-fd1e-4e15-a6b5-ef4735fde479', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:08:ec', 'vm-uuid': '6acf9ec4-afe0-4ef6-b857-246ad87fe800'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:54 compute-0 NetworkManager[45129]: <info>  [1759396494.3062] manager: (tap3b0dd52d-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.315 2 INFO os_vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd')
Oct 02 09:14:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/440182644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:14:54 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2178415046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:14:54 compute-0 podman[425767]: 2025-10-02 09:14:54.412564375 +0000 UTC m=+0.047708374 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 09:14:54 compute-0 podman[425765]: 2025-10-02 09:14:54.453445485 +0000 UTC m=+0.087352996 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.453 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.453 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.453 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:2e:55:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.454 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:f8:08:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.454 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Using config drive
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.476 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.574 2 DEBUG nova.network.neutron [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updated VIF entry in instance network info cache for port 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.575 2 DEBUG nova.network.neutron [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.599 2 DEBUG oslo_concurrency.lockutils [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.860 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Creating config drive at /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config
Oct 02 09:14:54 compute-0 nova_compute[260603]: 2025-10-02 09:14:54.864 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpah_07gqi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:55 compute-0 nova_compute[260603]: 2025-10-02 09:14:55.029 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpah_07gqi" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:55 compute-0 nova_compute[260603]: 2025-10-02 09:14:55.052 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:14:55 compute-0 nova_compute[260603]: 2025-10-02 09:14:55.055 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:14:55 compute-0 ceph-mon[74477]: pgmap v2903: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.046 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.991s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.047 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Deleting local config drive /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config because it was imported into RBD.
Oct 02 09:14:56 compute-0 NetworkManager[45129]: <info>  [1759396496.1128] manager: (tap7304fbbd-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Oct 02 09:14:56 compute-0 kernel: tap7304fbbd-4e: entered promiscuous mode
Oct 02 09:14:56 compute-0 ovn_controller[152344]: 2025-10-02T09:14:56Z|01628|binding|INFO|Claiming lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 for this chassis.
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 ovn_controller[152344]: 2025-10-02T09:14:56Z|01629|binding|INFO|7304fbbd-4ecf-4fd7-95ea-9dba30ee6456: Claiming fa:16:3e:2e:55:21 10.100.0.13
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.173 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:55:21 10.100.0.13'], port_security=['fa:16:3e:2e:55:21 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6acf9ec4-afe0-4ef6-b857-246ad87fe800', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6339cab-fbb4-4887-8953-252cca735cc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b3bab0-2229-4a60-832f-071c3bc1d0ec, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.174 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 in datapath c6339cab-fbb4-4887-8953-252cca735cc6 bound to our chassis
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.175 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6339cab-fbb4-4887-8953-252cca735cc6
Oct 02 09:14:56 compute-0 ovn_controller[152344]: 2025-10-02T09:14:56Z|01630|binding|INFO|Setting lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 ovn-installed in OVS
Oct 02 09:14:56 compute-0 ovn_controller[152344]: 2025-10-02T09:14:56Z|01631|binding|INFO|Setting lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 up in Southbound
Oct 02 09:14:56 compute-0 NetworkManager[45129]: <info>  [1759396496.1845] manager: (tap3b0dd52d-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 systemd-udevd[425887]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:14:56 compute-0 systemd-udevd[425889]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:14:56 compute-0 kernel: tap3b0dd52d-fd: entered promiscuous mode
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.194 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[394f01c5-e997-433c-90d0-a2cc9089c305]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_controller[152344]: 2025-10-02T09:14:56Z|01632|binding|INFO|Claiming lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 for this chassis.
Oct 02 09:14:56 compute-0 ovn_controller[152344]: 2025-10-02T09:14:56Z|01633|binding|INFO|3b0dd52d-fd1e-4e15-a6b5-ef4735fde479: Claiming fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec
Oct 02 09:14:56 compute-0 NetworkManager[45129]: <info>  [1759396496.2042] device (tap7304fbbd-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:14:56 compute-0 NetworkManager[45129]: <info>  [1759396496.2058] device (tap7304fbbd-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:14:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.206 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec'], port_security=['fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:8ec/64', 'neutron:device_id': '6acf9ec4-afe0-4ef6-b857-246ad87fe800', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f302b50b-078a-40f3-87d8-1172d81fe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82aa0caa-5e65-4ef0-b1d6-b9e910e6cadb, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:14:56 compute-0 NetworkManager[45129]: <info>  [1759396496.2118] device (tap3b0dd52d-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:14:56 compute-0 NetworkManager[45129]: <info>  [1759396496.2133] device (tap3b0dd52d-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 ovn_controller[152344]: 2025-10-02T09:14:56Z|01634|binding|INFO|Setting lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 ovn-installed in OVS
Oct 02 09:14:56 compute-0 ovn_controller[152344]: 2025-10-02T09:14:56Z|01635|binding|INFO|Setting lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 up in Southbound
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 systemd-machined[214636]: New machine qemu-180-instance-00000092.
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.235 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[44f723b7-6ea6-4cd9-85bf-9a9ad7518c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.237 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e16b5a6c-312a-4f55-a01a-4c55d2e5c5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000092.
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.267 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7e75d86a-5b40-4627-8499-57189fd04573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.294 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07ede62c-455d-45d7-aca1-10b12fa18d44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6339cab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:e0:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733293, 'reachable_time': 28730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425903, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.308 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d4aa30-0528-4226-8d06-474e14305882]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6339cab-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733308, 'tstamp': 733308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425907, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6339cab-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733312, 'tstamp': 733312}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425907, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6339cab-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6339cab-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6339cab-f0, col_values=(('external_ids', {'iface-id': '698ce34e-6a9d-4a50-8426-c137ad35d6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.314 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 in datapath f302b50b-078a-40f3-87d8-1172d81fe604 unbound from our chassis
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.315 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f302b50b-078a-40f3-87d8-1172d81fe604
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.334 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d329b8ec-2735-417f-8f71-0f4268e5ad04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.364 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7b904eb6-7891-42a9-9e94-1cfdc42de2d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.367 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[829406e2-91a7-4628-9fd1-274e913dc7f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.392 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0896d6-1d35-4807-a135-77650677cb46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.408 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2629279-d0d6-4eb1-879d-4e026af79023]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf302b50b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:c1:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733397, 'reachable_time': 26509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425913, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.421 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[816cf3f9-8185-4d5a-bbb6-b9fc20b6cb31]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf302b50b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733410, 'tstamp': 733410}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425914, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.423 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf302b50b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf302b50b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf302b50b-00, col_values=(('external_ids', {'iface-id': '3ba778b6-61e6-4019-8a62-1bee20d3b186'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:14:56 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.524 2 DEBUG nova.compute.manager [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.527 2 DEBUG oslo_concurrency.lockutils [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.527 2 DEBUG oslo_concurrency.lockutils [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.528 2 DEBUG oslo_concurrency.lockutils [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.529 2 DEBUG nova.compute.manager [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Processing event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:14:56 compute-0 ceph-mon[74477]: pgmap v2904: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:14:56 compute-0 nova_compute[260603]: 2025-10-02 09:14:56.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:57 compute-0 nova_compute[260603]: 2025-10-02 09:14:57.321 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396497.3203073, 6acf9ec4-afe0-4ef6-b857-246ad87fe800 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:14:57 compute-0 nova_compute[260603]: 2025-10-02 09:14:57.321 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] VM Started (Lifecycle Event)
Oct 02 09:14:57 compute-0 nova_compute[260603]: 2025-10-02 09:14:57.352 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:14:57 compute-0 nova_compute[260603]: 2025-10-02 09:14:57.357 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396497.3207018, 6acf9ec4-afe0-4ef6-b857-246ad87fe800 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:14:57 compute-0 nova_compute[260603]: 2025-10-02 09:14:57.357 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] VM Paused (Lifecycle Event)
Oct 02 09:14:57 compute-0 nova_compute[260603]: 2025-10-02 09:14:57.376 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:14:57 compute-0 nova_compute[260603]: 2025-10-02 09:14:57.381 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:14:57 compute-0 nova_compute[260603]: 2025-10-02 09:14:57.403 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:14:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:14:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:14:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:14:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:14:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:14:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:14:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 09:14:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.602 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.602 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.602 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.602 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.603 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No event matching network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 in dict_keys([('network-vif-plugged', '3b0dd52d-fd1e-4e15-a6b5-ef4735fde479')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.603 2 WARNING nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received unexpected event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 for instance with vm_state building and task_state spawning.
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.603 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.603 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Processing event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.605 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.605 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.605 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.605 2 WARNING nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received unexpected event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 for instance with vm_state building and task_state spawning.
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.606 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.609 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396498.6085684, 6acf9ec4-afe0-4ef6-b857-246ad87fe800 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] VM Resumed (Lifecycle Event)
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.611 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.615 2 INFO nova.virt.libvirt.driver [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance spawned successfully.
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.615 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.632 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.638 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.642 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.642 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.643 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.643 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.644 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.644 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.673 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.755 2 INFO nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Took 12.34 seconds to spawn the instance on the hypervisor.
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.763 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.901 2 INFO nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Took 13.62 seconds to build instance.
Oct 02 09:14:58 compute-0 nova_compute[260603]: 2025-10-02 09:14:58.946 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:14:59 compute-0 nova_compute[260603]: 2025-10-02 09:14:59.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:14:59 compute-0 ceph-mon[74477]: pgmap v2905: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 09:14:59 compute-0 podman[425959]: 2025-10-02 09:14:59.996102938 +0000 UTC m=+0.064793485 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 09:15:00 compute-0 podman[425958]: 2025-10-02 09:15:00.000547935 +0000 UTC m=+0.068600182 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:15:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 02 09:15:01 compute-0 ceph-mon[74477]: pgmap v2906: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 02 09:15:01 compute-0 anacron[188284]: Job `cron.monthly' started
Oct 02 09:15:01 compute-0 anacron[188284]: Job `cron.monthly' terminated
Oct 02 09:15:01 compute-0 anacron[188284]: Normal exit (3 jobs run)
Oct 02 09:15:01 compute-0 nova_compute[260603]: 2025-10-02 09:15:01.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:01 compute-0 nova_compute[260603]: 2025-10-02 09:15:01.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 02 09:15:03 compute-0 nova_compute[260603]: 2025-10-02 09:15:03.338 2 DEBUG nova.compute.manager [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:03 compute-0 nova_compute[260603]: 2025-10-02 09:15:03.339 2 DEBUG nova.compute.manager [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing instance network info cache due to event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:15:03 compute-0 nova_compute[260603]: 2025-10-02 09:15:03.340 2 DEBUG oslo_concurrency.lockutils [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:15:03 compute-0 nova_compute[260603]: 2025-10-02 09:15:03.341 2 DEBUG oslo_concurrency.lockutils [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:15:03 compute-0 nova_compute[260603]: 2025-10-02 09:15:03.341 2 DEBUG nova.network.neutron [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:15:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:03 compute-0 ceph-mon[74477]: pgmap v2907: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 02 09:15:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 02 09:15:04 compute-0 nova_compute[260603]: 2025-10-02 09:15:04.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:04 compute-0 nova_compute[260603]: 2025-10-02 09:15:04.664 2 DEBUG nova.network.neutron [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updated VIF entry in instance network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:15:04 compute-0 nova_compute[260603]: 2025-10-02 09:15:04.664 2 DEBUG nova.network.neutron [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:15:04 compute-0 nova_compute[260603]: 2025-10-02 09:15:04.696 2 DEBUG oslo_concurrency.lockutils [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:15:05 compute-0 ceph-mon[74477]: pgmap v2908: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 02 09:15:05 compute-0 nova_compute[260603]: 2025-10-02 09:15:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:05 compute-0 nova_compute[260603]: 2025-10-02 09:15:05.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:15:05 compute-0 nova_compute[260603]: 2025-10-02 09:15:05.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:15:06 compute-0 nova_compute[260603]: 2025-10-02 09:15:06.019 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:15:06 compute-0 nova_compute[260603]: 2025-10-02 09:15:06.020 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:15:06 compute-0 nova_compute[260603]: 2025-10-02 09:15:06.020 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:15:06 compute-0 nova_compute[260603]: 2025-10-02 09:15:06.020 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7293bf39-223f-4668-bd0f-c65476fac3e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:15:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:15:06 compute-0 nova_compute[260603]: 2025-10-02 09:15:06.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:07 compute-0 ceph-mon[74477]: pgmap v2909: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:15:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.356 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.399 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.400 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.400 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.400 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:15:08 compute-0 nova_compute[260603]: 2025-10-02 09:15:08.554 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:15:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:15:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905090160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.017 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.138 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.138 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.141 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.142 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.369 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.371 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3190MB free_disk=59.921817779541016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.371 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.371 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:09 compute-0 ceph-mon[74477]: pgmap v2910: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:15:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3905090160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.556454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509556516, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3407360, "memory_usage": 3464976, "flush_reason": "Manual Compaction"}
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.608 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7293bf39-223f-4668-bd0f-c65476fac3e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.609 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 6acf9ec4-afe0-4ef6-b857-246ad87fe800 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.610 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.610 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.631 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509647309, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3341124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59006, "largest_seqno": 61049, "table_properties": {"data_size": 3331800, "index_size": 5881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18702, "raw_average_key_size": 20, "raw_value_size": 3313375, "raw_average_value_size": 3562, "num_data_blocks": 262, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396283, "oldest_key_time": 1759396283, "file_creation_time": 1759396509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 90917 microseconds, and 7676 cpu microseconds.
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.654 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.655 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.647378) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3341124 bytes OK
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.647398) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.655761) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.655792) EVENT_LOG_v1 {"time_micros": 1759396509655784, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.655814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3398801, prev total WAL file size 3398801, number of live WAL files 2.
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.656824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3262KB)], [140(7937KB)]
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509656873, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11468745, "oldest_snapshot_seqno": -1}
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.677 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.706 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:15:09 compute-0 nova_compute[260603]: 2025-10-02 09:15:09.772 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8040 keys, 9752794 bytes, temperature: kUnknown
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509783797, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9752794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9701576, "index_size": 29985, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20165, "raw_key_size": 209474, "raw_average_key_size": 26, "raw_value_size": 9560739, "raw_average_value_size": 1189, "num_data_blocks": 1165, "num_entries": 8040, "num_filter_entries": 8040, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.784049) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9752794 bytes
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.892923) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.3 rd, 76.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.8 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 8554, records dropped: 514 output_compression: NoCompression
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.892971) EVENT_LOG_v1 {"time_micros": 1759396509892953, "job": 86, "event": "compaction_finished", "compaction_time_micros": 127003, "compaction_time_cpu_micros": 28620, "output_level": 6, "num_output_files": 1, "total_output_size": 9752794, "num_input_records": 8554, "num_output_records": 8040, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509894187, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509896583, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.656710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:09 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct 02 09:15:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:15:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039907010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:15:10 compute-0 nova_compute[260603]: 2025-10-02 09:15:10.235 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:15:10 compute-0 nova_compute[260603]: 2025-10-02 09:15:10.245 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:15:10 compute-0 nova_compute[260603]: 2025-10-02 09:15:10.273 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:15:10 compute-0 nova_compute[260603]: 2025-10-02 09:15:10.304 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:15:10 compute-0 nova_compute[260603]: 2025-10-02 09:15:10.304 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4039907010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:15:11 compute-0 ceph-mon[74477]: pgmap v2911: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct 02 09:15:11 compute-0 nova_compute[260603]: 2025-10-02 09:15:11.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:12 compute-0 ovn_controller[152344]: 2025-10-02T09:15:12Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:55:21 10.100.0.13
Oct 02 09:15:12 compute-0 ovn_controller[152344]: 2025-10-02T09:15:12Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:55:21 10.100.0.13
Oct 02 09:15:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct 02 09:15:12 compute-0 ceph-mon[74477]: pgmap v2912: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct 02 09:15:13 compute-0 nova_compute[260603]: 2025-10-02 09:15:13.304 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:13 compute-0 nova_compute[260603]: 2025-10-02 09:15:13.305 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 02 09:15:14 compute-0 nova_compute[260603]: 2025-10-02 09:15:14.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:14 compute-0 sudo[426045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:14 compute-0 sudo[426045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:14 compute-0 sudo[426045]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:14 compute-0 sudo[426070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:15:14 compute-0 sudo[426070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:14 compute-0 sudo[426070]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:14 compute-0 sudo[426095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:14 compute-0 sudo[426095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:14 compute-0 sudo[426095]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:14 compute-0 sudo[426120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:15:14 compute-0 sudo[426120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:15 compute-0 ceph-mon[74477]: pgmap v2913: 305 pgs: 305 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 02 09:15:15 compute-0 sudo[426120]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:15 compute-0 nova_compute[260603]: 2025-10-02 09:15:15.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:15:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:15:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:15:15 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:15:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:15:15 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:15:15 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a4f2322c-317e-48ff-a0e3-498caba5993d does not exist
Oct 02 09:15:15 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c62e123c-fd4d-46a7-b11f-d376c29d5ec7 does not exist
Oct 02 09:15:15 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ed5d3ddb-9040-4a47-8dd4-df1c5b53d509 does not exist
Oct 02 09:15:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:15:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:15:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:15:15 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:15:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:15:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:15:15 compute-0 sudo[426178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:15 compute-0 sudo[426178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:15 compute-0 sudo[426178]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:15 compute-0 sudo[426203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:15:15 compute-0 sudo[426203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:15 compute-0 sudo[426203]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:15 compute-0 sudo[426228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:15 compute-0 sudo[426228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:15 compute-0 sudo[426228]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:15 compute-0 sudo[426253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:15:15 compute-0 sudo[426253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 09:15:16 compute-0 podman[426319]: 2025-10-02 09:15:16.222194895 +0000 UTC m=+0.090014908 container create 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:15:16 compute-0 podman[426319]: 2025-10-02 09:15:16.159907359 +0000 UTC m=+0.027727392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:15:16 compute-0 systemd[1]: Started libpod-conmon-26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8.scope.
Oct 02 09:15:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:15:16 compute-0 podman[426319]: 2025-10-02 09:15:16.353563138 +0000 UTC m=+0.221383191 container init 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:15:16 compute-0 podman[426319]: 2025-10-02 09:15:16.360309747 +0000 UTC m=+0.228129750 container start 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:15:16 compute-0 romantic_lederberg[426335]: 167 167
Oct 02 09:15:16 compute-0 systemd[1]: libpod-26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8.scope: Deactivated successfully.
Oct 02 09:15:16 compute-0 podman[426319]: 2025-10-02 09:15:16.376832181 +0000 UTC m=+0.244652204 container attach 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 09:15:16 compute-0 podman[426319]: 2025-10-02 09:15:16.377642377 +0000 UTC m=+0.245462390 container died 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:15:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:15:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:15:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:15:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:15:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:15:16 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6ac432dd7e71105e4f56ccd3b9123e817b9a6b2d6a2005b5b49154f8befe4fe-merged.mount: Deactivated successfully.
Oct 02 09:15:16 compute-0 podman[426319]: 2025-10-02 09:15:16.45917712 +0000 UTC m=+0.326997123 container remove 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 09:15:16 compute-0 systemd[1]: libpod-conmon-26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8.scope: Deactivated successfully.
Oct 02 09:15:16 compute-0 podman[426359]: 2025-10-02 09:15:16.68219324 +0000 UTC m=+0.059986645 container create e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:15:16 compute-0 systemd[1]: Started libpod-conmon-e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e.scope.
Oct 02 09:15:16 compute-0 podman[426359]: 2025-10-02 09:15:16.646594924 +0000 UTC m=+0.024388319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:15:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:16 compute-0 podman[426359]: 2025-10-02 09:15:16.812502379 +0000 UTC m=+0.190295774 container init e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:15:16 compute-0 podman[426359]: 2025-10-02 09:15:16.820796427 +0000 UTC m=+0.198589812 container start e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:15:16 compute-0 podman[426359]: 2025-10-02 09:15:16.837520107 +0000 UTC m=+0.215313512 container attach e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 09:15:16 compute-0 nova_compute[260603]: 2025-10-02 09:15:16.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:17 compute-0 ceph-mon[74477]: pgmap v2914: 305 pgs: 305 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 09:15:17 compute-0 dreamy_swanson[426376]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:15:17 compute-0 dreamy_swanson[426376]: --> relative data size: 1.0
Oct 02 09:15:17 compute-0 dreamy_swanson[426376]: --> All data devices are unavailable
Oct 02 09:15:17 compute-0 systemd[1]: libpod-e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e.scope: Deactivated successfully.
Oct 02 09:15:17 compute-0 podman[426359]: 2025-10-02 09:15:17.83035135 +0000 UTC m=+1.208144715 container died e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff-merged.mount: Deactivated successfully.
Oct 02 09:15:17 compute-0 podman[426359]: 2025-10-02 09:15:17.977246415 +0000 UTC m=+1.355039780 container remove e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:15:17 compute-0 systemd[1]: libpod-conmon-e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e.scope: Deactivated successfully.
Oct 02 09:15:18 compute-0 sudo[426253]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:18 compute-0 sudo[426419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:18 compute-0 sudo[426419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:18 compute-0 sudo[426419]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:18 compute-0 sudo[426444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:15:18 compute-0 sudo[426444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:15:18 compute-0 sudo[426444]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:18 compute-0 sudo[426469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:18 compute-0 sudo[426469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:18 compute-0 sudo[426469]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:18 compute-0 sudo[426494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:15:18 compute-0 sudo[426494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:18 compute-0 ceph-mon[74477]: pgmap v2915: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:15:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:18 compute-0 podman[426562]: 2025-10-02 09:15:18.827997683 +0000 UTC m=+0.118262877 container create cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:15:18 compute-0 podman[426562]: 2025-10-02 09:15:18.751162355 +0000 UTC m=+0.041427529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:15:18 compute-0 systemd[1]: Started libpod-conmon-cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55.scope.
Oct 02 09:15:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:15:18 compute-0 podman[426562]: 2025-10-02 09:15:18.991668209 +0000 UTC m=+0.281933463 container init cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:15:19 compute-0 podman[426562]: 2025-10-02 09:15:19.00681391 +0000 UTC m=+0.297079104 container start cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 09:15:19 compute-0 unruffled_bardeen[426578]: 167 167
Oct 02 09:15:19 compute-0 systemd[1]: libpod-cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55.scope: Deactivated successfully.
Oct 02 09:15:19 compute-0 podman[426562]: 2025-10-02 09:15:19.021070653 +0000 UTC m=+0.311335847 container attach cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:15:19 compute-0 podman[426562]: 2025-10-02 09:15:19.021533188 +0000 UTC m=+0.311798372 container died cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4921e38935e74aec9cb9e95c6f1a4fe3f09f7ef2a569528ba9a8f9ba3b1564c4-merged.mount: Deactivated successfully.
Oct 02 09:15:19 compute-0 podman[426562]: 2025-10-02 09:15:19.169826935 +0000 UTC m=+0.460092089 container remove cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 09:15:19 compute-0 systemd[1]: libpod-conmon-cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55.scope: Deactivated successfully.
Oct 02 09:15:19 compute-0 nova_compute[260603]: 2025-10-02 09:15:19.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:19 compute-0 podman[426603]: 2025-10-02 09:15:19.346503496 +0000 UTC m=+0.039628013 container create 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:15:19 compute-0 systemd[1]: Started libpod-conmon-1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a.scope.
Oct 02 09:15:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:19 compute-0 podman[426603]: 2025-10-02 09:15:19.420824326 +0000 UTC m=+0.113948853 container init 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:15:19 compute-0 podman[426603]: 2025-10-02 09:15:19.331798329 +0000 UTC m=+0.024922876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:15:19 compute-0 podman[426603]: 2025-10-02 09:15:19.429514456 +0000 UTC m=+0.122638973 container start 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:15:19 compute-0 podman[426603]: 2025-10-02 09:15:19.437243866 +0000 UTC m=+0.130368403 container attach 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:15:19 compute-0 nova_compute[260603]: 2025-10-02 09:15:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]: {
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:     "0": [
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:         {
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "devices": [
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "/dev/loop3"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             ],
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_name": "ceph_lv0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_size": "21470642176",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "name": "ceph_lv0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "tags": {
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cluster_name": "ceph",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.crush_device_class": "",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.encrypted": "0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osd_id": "0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.type": "block",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.vdo": "0"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             },
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "type": "block",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "vg_name": "ceph_vg0"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:         }
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:     ],
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:     "1": [
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:         {
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "devices": [
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "/dev/loop4"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             ],
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_name": "ceph_lv1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_size": "21470642176",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "name": "ceph_lv1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "tags": {
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cluster_name": "ceph",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.crush_device_class": "",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.encrypted": "0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osd_id": "1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.type": "block",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.vdo": "0"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             },
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "type": "block",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "vg_name": "ceph_vg1"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:         }
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:     ],
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:     "2": [
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:         {
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "devices": [
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "/dev/loop5"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             ],
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_name": "ceph_lv2",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_size": "21470642176",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "name": "ceph_lv2",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "tags": {
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.cluster_name": "ceph",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.crush_device_class": "",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.encrypted": "0",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osd_id": "2",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.type": "block",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:                 "ceph.vdo": "0"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             },
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "type": "block",
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:             "vg_name": "ceph_vg2"
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:         }
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]:     ]
Oct 02 09:15:20 compute-0 pedantic_wilbur[426620]: }
Oct 02 09:15:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:15:20 compute-0 systemd[1]: libpod-1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a.scope: Deactivated successfully.
Oct 02 09:15:20 compute-0 podman[426603]: 2025-10-02 09:15:20.218768842 +0000 UTC m=+0.911893359 container died 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc-merged.mount: Deactivated successfully.
Oct 02 09:15:20 compute-0 podman[426603]: 2025-10-02 09:15:20.280119429 +0000 UTC m=+0.973243946 container remove 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:15:20 compute-0 systemd[1]: libpod-conmon-1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a.scope: Deactivated successfully.
Oct 02 09:15:20 compute-0 sudo[426494]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:20 compute-0 sudo[426643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:20 compute-0 sudo[426643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:20 compute-0 sudo[426643]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:20 compute-0 sudo[426668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:15:20 compute-0 sudo[426668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:20 compute-0 sudo[426668]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:20 compute-0 sudo[426693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:20 compute-0 sudo[426693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:20 compute-0 sudo[426693]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:20 compute-0 sudo[426718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:15:20 compute-0 sudo[426718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:20 compute-0 podman[426782]: 2025-10-02 09:15:20.91057263 +0000 UTC m=+0.097693277 container create da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:15:20 compute-0 podman[426782]: 2025-10-02 09:15:20.838696027 +0000 UTC m=+0.025816704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:15:20 compute-0 systemd[1]: Started libpod-conmon-da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812.scope.
Oct 02 09:15:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:15:20 compute-0 podman[426782]: 2025-10-02 09:15:20.983102394 +0000 UTC m=+0.170223081 container init da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:15:20 compute-0 podman[426782]: 2025-10-02 09:15:20.989553595 +0000 UTC m=+0.176674252 container start da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 09:15:20 compute-0 podman[426782]: 2025-10-02 09:15:20.992649191 +0000 UTC m=+0.179769848 container attach da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 09:15:20 compute-0 systemd[1]: libpod-da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812.scope: Deactivated successfully.
Oct 02 09:15:20 compute-0 relaxed_franklin[426799]: 167 167
Oct 02 09:15:20 compute-0 conmon[426799]: conmon da50ebf4a69b4c119701 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812.scope/container/memory.events
Oct 02 09:15:20 compute-0 podman[426782]: 2025-10-02 09:15:20.996960795 +0000 UTC m=+0.184081452 container died da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d97716b88969fbb091f9e0ad7cf424fd9219eff69595cea57b08ad46720c4a26-merged.mount: Deactivated successfully.
Oct 02 09:15:21 compute-0 podman[426782]: 2025-10-02 09:15:21.050565831 +0000 UTC m=+0.237686508 container remove da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:15:21 compute-0 systemd[1]: libpod-conmon-da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812.scope: Deactivated successfully.
Oct 02 09:15:21 compute-0 podman[426823]: 2025-10-02 09:15:21.229479561 +0000 UTC m=+0.045452324 container create 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:15:21 compute-0 ceph-mon[74477]: pgmap v2916: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:15:21 compute-0 systemd[1]: Started libpod-conmon-4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c.scope.
Oct 02 09:15:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:15:21 compute-0 podman[426823]: 2025-10-02 09:15:21.212223785 +0000 UTC m=+0.028196568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:15:21 compute-0 podman[426823]: 2025-10-02 09:15:21.312843722 +0000 UTC m=+0.128816515 container init 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:15:21 compute-0 podman[426823]: 2025-10-02 09:15:21.319179908 +0000 UTC m=+0.135152691 container start 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 09:15:21 compute-0 podman[426823]: 2025-10-02 09:15:21.322954746 +0000 UTC m=+0.138927539 container attach 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.520 2 DEBUG nova.compute.manager [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.522 2 DEBUG nova.compute.manager [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing instance network info cache due to event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.523 2 DEBUG oslo_concurrency.lockutils [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.523 2 DEBUG oslo_concurrency.lockutils [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.523 2 DEBUG nova.network.neutron [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.638 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.639 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.639 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.640 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.640 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.642 2 INFO nova.compute.manager [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Terminating instance
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.643 2 DEBUG nova.compute.manager [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:15:21 compute-0 kernel: tap7304fbbd-4e (unregistering): left promiscuous mode
Oct 02 09:15:21 compute-0 NetworkManager[45129]: <info>  [1759396521.7139] device (tap7304fbbd-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:15:21 compute-0 ovn_controller[152344]: 2025-10-02T09:15:21Z|01636|binding|INFO|Releasing lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 from this chassis (sb_readonly=0)
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 ovn_controller[152344]: 2025-10-02T09:15:21Z|01637|binding|INFO|Setting lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 down in Southbound
Oct 02 09:15:21 compute-0 ovn_controller[152344]: 2025-10-02T09:15:21Z|01638|binding|INFO|Removing iface tap7304fbbd-4e ovn-installed in OVS
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 kernel: tap3b0dd52d-fd (unregistering): left promiscuous mode
Oct 02 09:15:21 compute-0 NetworkManager[45129]: <info>  [1759396521.7428] device (tap3b0dd52d-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.746 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:55:21 10.100.0.13'], port_security=['fa:16:3e:2e:55:21 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6acf9ec4-afe0-4ef6-b857-246ad87fe800', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6339cab-fbb4-4887-8953-252cca735cc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b3bab0-2229-4a60-832f-071c3bc1d0ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.748 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 in datapath c6339cab-fbb4-4887-8953-252cca735cc6 unbound from our chassis
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.755 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6339cab-fbb4-4887-8953-252cca735cc6
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 ovn_controller[152344]: 2025-10-02T09:15:21Z|01639|binding|INFO|Releasing lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 from this chassis (sb_readonly=0)
Oct 02 09:15:21 compute-0 ovn_controller[152344]: 2025-10-02T09:15:21Z|01640|binding|INFO|Setting lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 down in Southbound
Oct 02 09:15:21 compute-0 ovn_controller[152344]: 2025-10-02T09:15:21Z|01641|binding|INFO|Removing iface tap3b0dd52d-fd ovn-installed in OVS
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.771 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[822ffd17-006f-4a58-9568-3168d698e3de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.788 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec'], port_security=['fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:8ec/64', 'neutron:device_id': '6acf9ec4-afe0-4ef6-b857-246ad87fe800', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f302b50b-078a-40f3-87d8-1172d81fe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82aa0caa-5e65-4ef0-b1d6-b9e910e6cadb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:15:21 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 02 09:15:21 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Consumed 13.271s CPU time.
Oct 02 09:15:21 compute-0 systemd-machined[214636]: Machine qemu-180-instance-00000092 terminated.
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.804 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d42ae0d3-8b76-4029-bd31-f0302d1316d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.808 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0d35e7-8ba9-47c6-a3b3-3a69cf1497fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.835 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8e1a02-650b-4aba-8056-844269b5f829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.851 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3696a86-ad0f-4acf-909c-0b464bda591e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6339cab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:e0:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733293, 'reachable_time': 28730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426861, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.868 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10924ebc-50e3-444d-903c-67ec927d4ab4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6339cab-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733308, 'tstamp': 733308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426862, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6339cab-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733312, 'tstamp': 733312}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426862, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.870 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6339cab-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 NetworkManager[45129]: <info>  [1759396521.8788] manager: (tap3b0dd52d-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.888 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6339cab-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.888 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.889 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6339cab-f0, col_values=(('external_ids', {'iface-id': '698ce34e-6a9d-4a50-8426-c137ad35d6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.889 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.890 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 in datapath f302b50b-078a-40f3-87d8-1172d81fe604 unbound from our chassis
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.893 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f302b50b-078a-40f3-87d8-1172d81fe604
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.894 2 INFO nova.virt.libvirt.driver [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance destroyed successfully.
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.895 2 DEBUG nova.objects.instance [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 6acf9ec4-afe0-4ef6-b857-246ad87fe800 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.914 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17ff02f5-7940-4232-a0e6-7880c22c1471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.918 2 DEBUG nova.virt.libvirt.vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:14:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:14:58Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.918 2 DEBUG nova.network.os_vif_util [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.919 2 DEBUG nova.network.os_vif_util [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.919 2 DEBUG os_vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7304fbbd-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.932 2 INFO os_vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e')
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.933 2 DEBUG nova.virt.libvirt.vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:14:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:14:58Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.933 2 DEBUG nova.network.os_vif_util [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.934 2 DEBUG nova.network.os_vif_util [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.934 2 DEBUG os_vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b0dd52d-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:21 compute-0 nova_compute[260603]: 2025-10-02 09:15:21.941 2 INFO os_vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd')
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.962 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[962b33a2-8f02-45a8-ba86-abcb34e1ac85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.965 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[96ca5f0a-2dbb-4523-82ae-2d151e9aa108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.991 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab9117a7-61bf-47f2-8c3f-caa8e9e0bbf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.006 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[601cd2f3-6d18-422e-9b3f-b5dad5fa2ffc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf302b50b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:c1:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733397, 'reachable_time': 26509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426909, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.021 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[041ee50f-1613-4b64-9950-a9f6a6887f20]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf302b50b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733410, 'tstamp': 733410}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426910, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.023 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf302b50b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:22 compute-0 nova_compute[260603]: 2025-10-02 09:15:22.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.025 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf302b50b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.025 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:15:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf302b50b-00, col_values=(('external_ids', {'iface-id': '3ba778b6-61e6-4019-8a62-1bee20d3b186'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:15:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:15:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3880506349' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:15:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:15:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3880506349' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:15:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:15:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3880506349' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:15:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3880506349' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]: {
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "osd_id": 2,
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "type": "bluestore"
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:     },
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "osd_id": 1,
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "type": "bluestore"
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:     },
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "osd_id": 0,
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:         "type": "bluestore"
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]:     }
Oct 02 09:15:22 compute-0 mystifying_diffie[426840]: }
Oct 02 09:15:22 compute-0 systemd[1]: libpod-4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c.scope: Deactivated successfully.
Oct 02 09:15:22 compute-0 systemd[1]: libpod-4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c.scope: Consumed 1.030s CPU time.
Oct 02 09:15:22 compute-0 podman[426823]: 2025-10-02 09:15:22.398727416 +0000 UTC m=+1.214700199 container died 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a-merged.mount: Deactivated successfully.
Oct 02 09:15:22 compute-0 podman[426823]: 2025-10-02 09:15:22.448800162 +0000 UTC m=+1.264772925 container remove 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:15:22 compute-0 systemd[1]: libpod-conmon-4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c.scope: Deactivated successfully.
Oct 02 09:15:22 compute-0 sudo[426718]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:15:22 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:15:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:15:22 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:15:22 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5e7fca68-fdb6-4f1d-83d7-ff01018dc97a does not exist
Oct 02 09:15:22 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0aa0fdae-64db-4676-a94c-e25b3afa04ec does not exist
Oct 02 09:15:22 compute-0 sudo[426951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:15:22 compute-0 sudo[426951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:22 compute-0 sudo[426951]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:22 compute-0 nova_compute[260603]: 2025-10-02 09:15:22.573 2 INFO nova.virt.libvirt.driver [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Deleting instance files /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800_del
Oct 02 09:15:22 compute-0 nova_compute[260603]: 2025-10-02 09:15:22.574 2 INFO nova.virt.libvirt.driver [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Deletion of /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800_del complete
Oct 02 09:15:22 compute-0 sudo[426976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:15:22 compute-0 sudo[426976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:15:22 compute-0 sudo[426976]: pam_unix(sudo:session): session closed for user root
Oct 02 09:15:22 compute-0 nova_compute[260603]: 2025-10-02 09:15:22.688 2 INFO nova.compute.manager [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Took 1.04 seconds to destroy the instance on the hypervisor.
Oct 02 09:15:22 compute-0 nova_compute[260603]: 2025-10-02 09:15:22.688 2 DEBUG oslo.service.loopingcall [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:15:22 compute-0 nova_compute[260603]: 2025-10-02 09:15:22.689 2 DEBUG nova.compute.manager [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:15:22 compute-0 nova_compute[260603]: 2025-10-02 09:15:22.689 2 DEBUG nova.network.neutron [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.071 2 DEBUG nova.network.neutron [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updated VIF entry in instance network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.072 2 DEBUG nova.network.neutron [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.098 2 DEBUG oslo_concurrency.lockutils [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:15:23 compute-0 ceph-mon[74477]: pgmap v2917: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:15:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:15:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.621 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-unplugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.622 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.623 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.624 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.624 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-unplugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.625 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-unplugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.625 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.626 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.626 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.627 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.627 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.628 2 WARNING nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received unexpected event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 for instance with vm_state active and task_state deleting.
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.628 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-unplugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.628 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.629 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.629 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.630 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-unplugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.630 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-unplugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.631 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.632 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.632 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.633 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.633 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.633 2 WARNING nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received unexpected event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 for instance with vm_state active and task_state deleting.
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.634 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-deleted-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.634 2 INFO nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Neutron deleted interface 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456; detaching it from the instance and deleting it from the info cache
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.635 2 DEBUG nova.network.neutron [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.673 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Detach interface failed, port_id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456, reason: Instance 6acf9ec4-afe0-4ef6-b857-246ad87fe800 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.674 2 DEBUG nova.network.neutron [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:15:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.683280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523683319, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 410, "num_deletes": 257, "total_data_size": 266842, "memory_usage": 274648, "flush_reason": "Manual Compaction"}
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523687078, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 264583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61050, "largest_seqno": 61459, "table_properties": {"data_size": 262150, "index_size": 532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5990, "raw_average_key_size": 18, "raw_value_size": 257190, "raw_average_value_size": 786, "num_data_blocks": 23, "num_entries": 327, "num_filter_entries": 327, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396510, "oldest_key_time": 1759396510, "file_creation_time": 1759396523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3909 microseconds, and 1535 cpu microseconds.
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.687182) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 264583 bytes OK
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.687209) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.688693) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.688714) EVENT_LOG_v1 {"time_micros": 1759396523688707, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.688734) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 264219, prev total WAL file size 264219, number of live WAL files 2.
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.689270) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353035' seq:72057594037927935, type:22 .. '6C6F676D0032373538' seq:0, type:0; will stop at (end)
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(258KB)], [143(9524KB)]
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523689320, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10017377, "oldest_snapshot_seqno": -1}
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.710 2 INFO nova.compute.manager [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Took 1.02 seconds to deallocate network for instance.
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7842 keys, 9902936 bytes, temperature: kUnknown
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523742611, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9902936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9852222, "index_size": 29993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 206310, "raw_average_key_size": 26, "raw_value_size": 9713985, "raw_average_value_size": 1238, "num_data_blocks": 1163, "num_entries": 7842, "num_filter_entries": 7842, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.742873) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9902936 bytes
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.744286) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.7 rd, 185.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.3 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(75.3) write-amplify(37.4) OK, records in: 8367, records dropped: 525 output_compression: NoCompression
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.744328) EVENT_LOG_v1 {"time_micros": 1759396523744310, "job": 88, "event": "compaction_finished", "compaction_time_micros": 53361, "compaction_time_cpu_micros": 27043, "output_level": 6, "num_output_files": 1, "total_output_size": 9902936, "num_input_records": 8367, "num_output_records": 7842, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523744628, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523746425, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.689165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.757 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.757 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:23 compute-0 nova_compute[260603]: 2025-10-02 09:15:23.832 2 DEBUG oslo_concurrency.processutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:15:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct 02 09:15:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:15:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164263658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:15:24 compute-0 nova_compute[260603]: 2025-10-02 09:15:24.284 2 DEBUG oslo_concurrency.processutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:15:24 compute-0 nova_compute[260603]: 2025-10-02 09:15:24.293 2 DEBUG nova.compute.provider_tree [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:15:24 compute-0 nova_compute[260603]: 2025-10-02 09:15:24.312 2 DEBUG nova.scheduler.client.report [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:15:24 compute-0 nova_compute[260603]: 2025-10-02 09:15:24.342 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:24 compute-0 nova_compute[260603]: 2025-10-02 09:15:24.374 2 INFO nova.scheduler.client.report [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 6acf9ec4-afe0-4ef6-b857-246ad87fe800
Oct 02 09:15:24 compute-0 nova_compute[260603]: 2025-10-02 09:15:24.443 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:24 compute-0 ceph-mon[74477]: pgmap v2918: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct 02 09:15:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/164263658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:15:24 compute-0 podman[427024]: 2025-10-02 09:15:24.993703027 +0000 UTC m=+0.054410542 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:15:25 compute-0 podman[427023]: 2025-10-02 09:15:25.022846962 +0000 UTC m=+0.087807890 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:15:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.157 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.159 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.365 2 DEBUG nova.compute.manager [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.365 2 DEBUG nova.compute.manager [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing instance network info cache due to event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.366 2 DEBUG oslo_concurrency.lockutils [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.366 2 DEBUG oslo_concurrency.lockutils [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.366 2 DEBUG nova.network.neutron [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.514 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.515 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.515 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.516 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.516 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.517 2 INFO nova.compute.manager [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Terminating instance
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.518 2 DEBUG nova.compute.manager [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:15:25 compute-0 kernel: tap17c0a9ac-d6 (unregistering): left promiscuous mode
Oct 02 09:15:25 compute-0 NetworkManager[45129]: <info>  [1759396525.5731] device (tap17c0a9ac-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:15:25 compute-0 ovn_controller[152344]: 2025-10-02T09:15:25Z|01642|binding|INFO|Releasing lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a from this chassis (sb_readonly=0)
Oct 02 09:15:25 compute-0 ovn_controller[152344]: 2025-10-02T09:15:25Z|01643|binding|INFO|Setting lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a down in Southbound
Oct 02 09:15:25 compute-0 ovn_controller[152344]: 2025-10-02T09:15:25Z|01644|binding|INFO|Removing iface tap17c0a9ac-d6 ovn-installed in OVS
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.605 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:2b:06 10.100.0.10'], port_security=['fa:16:3e:65:2b:06 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7293bf39-223f-4668-bd0f-c65476fac3e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6339cab-fbb4-4887-8953-252cca735cc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b3bab0-2229-4a60-832f-071c3bc1d0ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=17c0a9ac-d61a-433a-b3f3-154a8c467f5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:15:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.607 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a in datapath c6339cab-fbb4-4887-8953-252cca735cc6 unbound from our chassis
Oct 02 09:15:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.609 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6339cab-fbb4-4887-8953-252cca735cc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:15:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.610 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2073db8b-22f1-444f-a74b-b9aaf424778f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.610 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 namespace which is not needed anymore
Oct 02 09:15:25 compute-0 kernel: tap6f4bc2ea-2d (unregistering): left promiscuous mode
Oct 02 09:15:25 compute-0 NetworkManager[45129]: <info>  [1759396525.6248] device (tap6f4bc2ea-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:15:25 compute-0 ovn_controller[152344]: 2025-10-02T09:15:25Z|01645|binding|INFO|Releasing lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 from this chassis (sb_readonly=0)
Oct 02 09:15:25 compute-0 ovn_controller[152344]: 2025-10-02T09:15:25Z|01646|binding|INFO|Setting lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 down in Southbound
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 ovn_controller[152344]: 2025-10-02T09:15:25Z|01647|binding|INFO|Removing iface tap6f4bc2ea-2d ovn-installed in OVS
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct 02 09:15:25 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Consumed 15.096s CPU time.
Oct 02 09:15:25 compute-0 systemd-machined[214636]: Machine qemu-179-instance-00000091 terminated.
Oct 02 09:15:25 compute-0 NetworkManager[45129]: <info>  [1759396525.7637] manager: (tap6f4bc2ea-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/667)
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.767 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656'], port_security=['fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed2:6656/64', 'neutron:device_id': '7293bf39-223f-4668-bd0f-c65476fac3e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f302b50b-078a-40f3-87d8-1172d81fe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82aa0caa-5e65-4ef0-b1d6-b9e910e6cadb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.775 2 DEBUG nova.compute.manager [req-ce5d0e7d-0d49-433f-8e84-4af602179ec6 req-abb6ab75-d48d-4d44-978d-c4c59ab1d849 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-deleted-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.781 2 INFO nova.virt.libvirt.driver [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance destroyed successfully.
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.781 2 DEBUG nova.objects.instance [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 7293bf39-223f-4668-bd0f-c65476fac3e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:15:25 compute-0 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [NOTICE]   (425341) : haproxy version is 2.8.14-c23fe91
Oct 02 09:15:25 compute-0 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [NOTICE]   (425341) : path to executable is /usr/sbin/haproxy
Oct 02 09:15:25 compute-0 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [WARNING]  (425341) : Exiting Master process...
Oct 02 09:15:25 compute-0 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [WARNING]  (425341) : Exiting Master process...
Oct 02 09:15:25 compute-0 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [ALERT]    (425341) : Current worker (425343) exited with code 143 (Terminated)
Oct 02 09:15:25 compute-0 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [WARNING]  (425341) : All workers exited. Exiting... (0)
Oct 02 09:15:25 compute-0 systemd[1]: libpod-f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63.scope: Deactivated successfully.
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.842 2 DEBUG nova.virt.libvirt.vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:14:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:14:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.842 2 DEBUG nova.network.os_vif_util [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.843 2 DEBUG nova.network.os_vif_util [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:15:25 compute-0 podman[427100]: 2025-10-02 09:15:25.843916387 +0000 UTC m=+0.109918007 container died f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.844 2 DEBUG os_vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17c0a9ac-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.855 2 INFO os_vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6')
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.856 2 DEBUG nova.virt.libvirt.vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:14:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:14:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.856 2 DEBUG nova.network.os_vif_util [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.857 2 DEBUG nova.network.os_vif_util [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.857 2 DEBUG os_vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f4bc2ea-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:25 compute-0 nova_compute[260603]: 2025-10-02 09:15:25.863 2 INFO os_vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d')
Oct 02 09:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa7e4bc0c2d4824c03116f4f6f6e684d6814bec0ec595371262415f74733fdb6-merged.mount: Deactivated successfully.
Oct 02 09:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63-userdata-shm.mount: Deactivated successfully.
Oct 02 09:15:26 compute-0 podman[427100]: 2025-10-02 09:15:26.015000085 +0000 UTC m=+0.281001735 container cleanup f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:15:26 compute-0 systemd[1]: libpod-conmon-f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63.scope: Deactivated successfully.
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.161 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 52 KiB/s wr, 37 op/s
Oct 02 09:15:26 compute-0 podman[427165]: 2025-10-02 09:15:26.276012945 +0000 UTC m=+0.233902039 container remove f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.286 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[633d6943-2fc2-40bf-8063-bff87d22b922]: (4, ('Thu Oct  2 09:15:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 (f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63)\nf3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63\nThu Oct  2 09:15:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 (f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63)\nf3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac4f22e-7191-4db0-96a0-696b42abc5dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.290 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6339cab-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:26 compute-0 nova_compute[260603]: 2025-10-02 09:15:26.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:26 compute-0 kernel: tapc6339cab-f0: left promiscuous mode
Oct 02 09:15:26 compute-0 nova_compute[260603]: 2025-10-02 09:15:26.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.303 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e87e9baf-dccd-4743-b96a-808fe7a83dff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:26 compute-0 nova_compute[260603]: 2025-10-02 09:15:26.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f26f16f-b4f6-4f19-97dd-66d9e0fec1ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.337 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d42496e-b4cb-42dc-8bce-6eeeb50283ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5131d128-7837-4569-ab39-55bb0a0a60a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733284, 'reachable_time': 34526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427180, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.362 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.362 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ca508ad9-fdc8-4a71-ba6e-d7a1ae522f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.363 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 in datapath f302b50b-078a-40f3-87d8-1172d81fe604 unbound from our chassis
Oct 02 09:15:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6339cab\x2dfbb4\x2d4887\x2d8953\x2d252cca735cc6.mount: Deactivated successfully.
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.366 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f302b50b-078a-40f3-87d8-1172d81fe604, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.367 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f497c22-4679-4c4d-b38e-485fa79839ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.368 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 namespace which is not needed anymore
Oct 02 09:15:26 compute-0 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [NOTICE]   (425413) : haproxy version is 2.8.14-c23fe91
Oct 02 09:15:26 compute-0 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [NOTICE]   (425413) : path to executable is /usr/sbin/haproxy
Oct 02 09:15:26 compute-0 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [WARNING]  (425413) : Exiting Master process...
Oct 02 09:15:26 compute-0 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [ALERT]    (425413) : Current worker (425415) exited with code 143 (Terminated)
Oct 02 09:15:26 compute-0 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [WARNING]  (425413) : All workers exited. Exiting... (0)
Oct 02 09:15:26 compute-0 systemd[1]: libpod-6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64.scope: Deactivated successfully.
Oct 02 09:15:26 compute-0 podman[427199]: 2025-10-02 09:15:26.586803413 +0000 UTC m=+0.105404876 container died 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64-userdata-shm.mount: Deactivated successfully.
Oct 02 09:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4deff7f3e4521b9208a36798e9ac94c4d2e8879828943b12e08aea6816e2d9bf-merged.mount: Deactivated successfully.
Oct 02 09:15:26 compute-0 podman[427199]: 2025-10-02 09:15:26.765535337 +0000 UTC m=+0.284136720 container cleanup 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:15:26 compute-0 systemd[1]: libpod-conmon-6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64.scope: Deactivated successfully.
Oct 02 09:15:26 compute-0 nova_compute[260603]: 2025-10-02 09:15:26.794 2 DEBUG nova.network.neutron [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updated VIF entry in instance network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:15:26 compute-0 nova_compute[260603]: 2025-10-02 09:15:26.795 2 DEBUG nova.network.neutron [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:15:26 compute-0 nova_compute[260603]: 2025-10-02 09:15:26.823 2 DEBUG oslo_concurrency.lockutils [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:15:26 compute-0 nova_compute[260603]: 2025-10-02 09:15:26.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:27 compute-0 podman[427230]: 2025-10-02 09:15:27.138092575 +0000 UTC m=+0.346269312 container remove 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.144 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1617ebe4-95a2-433d-b40c-da9333affeb3]: (4, ('Thu Oct  2 09:15:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 (6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64)\n6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64\nThu Oct  2 09:15:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 (6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64)\n6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.145 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c46616b-fdca-4bc5-8d89-2e8073979b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.146 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf302b50b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:15:27 compute-0 kernel: tapf302b50b-00: left promiscuous mode
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.152 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8bd6da-1a28-484a-851b-95240340be4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dce3d967-3830-4325-abd2-6f5d976e7574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0ef74d-f2a5-4411-9d37-dd02497b23ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.210 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[251ceaea-5882-4ab3-92d9-68cfb52951fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733388, 'reachable_time': 29619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427246, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.214 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:15:27 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.214 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f189f67f-aa49-4089-8712-b2106d4e9d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:15:27 compute-0 systemd[1]: run-netns-ovnmeta\x2df302b50b\x2d078a\x2d40f3\x2d87d8\x2d1172d81fe604.mount: Deactivated successfully.
Oct 02 09:15:27 compute-0 ceph-mon[74477]: pgmap v2919: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 52 KiB/s wr, 37 op/s
Oct 02 09:15:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:15:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:15:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:15:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:15:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:15:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.967 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-unplugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.967 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.968 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.968 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.968 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-unplugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.968 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-unplugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.970 2 WARNING nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received unexpected event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a for instance with vm_state active and task_state deleting.
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.970 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-unplugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.970 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.970 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.970 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-unplugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-unplugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.972 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.972 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:15:27 compute-0 nova_compute[260603]: 2025-10-02 09:15:27.972 2 WARNING nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received unexpected event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 for instance with vm_state active and task_state deleting.
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:15:28
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'backups', 'vms', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes']
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 58 KiB/s wr, 57 op/s
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:15:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:15:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:29 compute-0 ceph-mon[74477]: pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 58 KiB/s wr, 57 op/s
Oct 02 09:15:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 44 op/s
Oct 02 09:15:30 compute-0 nova_compute[260603]: 2025-10-02 09:15:30.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:31 compute-0 podman[427249]: 2025-10-02 09:15:31.051936973 +0000 UTC m=+0.098125532 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:15:31 compute-0 podman[427248]: 2025-10-02 09:15:31.078670562 +0000 UTC m=+0.128285047 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:15:31 compute-0 ceph-mon[74477]: pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 44 op/s
Oct 02 09:15:31 compute-0 nova_compute[260603]: 2025-10-02 09:15:31.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 44 op/s
Oct 02 09:15:33 compute-0 ceph-mon[74477]: pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 44 op/s
Oct 02 09:15:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 19 KiB/s wr, 54 op/s
Oct 02 09:15:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:34.852 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:34.853 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:15:34.853 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:34 compute-0 ceph-mon[74477]: pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 19 KiB/s wr, 54 op/s
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.088 2 INFO nova.virt.libvirt.driver [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Deleting instance files /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4_del
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.089 2 INFO nova.virt.libvirt.driver [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Deletion of /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4_del complete
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.201 2 INFO nova.compute.manager [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Took 9.68 seconds to destroy the instance on the hypervisor.
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.201 2 DEBUG oslo.service.loopingcall [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.202 2 DEBUG nova.compute.manager [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.202 2 DEBUG nova.network.neutron [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.845 2 DEBUG nova.compute.manager [req-6acfe868-b5c8-4e38-865c-67440d378dc6 req-65e617de-8704-4953-a9ec-249effbdcd45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-deleted-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.845 2 INFO nova.compute.manager [req-6acfe868-b5c8-4e38-865c-67440d378dc6 req-65e617de-8704-4953-a9ec-249effbdcd45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Neutron deleted interface 17c0a9ac-d61a-433a-b3f3-154a8c467f5a; detaching it from the instance and deleting it from the info cache
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.845 2 DEBUG nova.network.neutron [req-6acfe868-b5c8-4e38-865c-67440d378dc6 req-65e617de-8704-4953-a9ec-249effbdcd45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:35 compute-0 nova_compute[260603]: 2025-10-02 09:15:35.890 2 DEBUG nova.compute.manager [req-6acfe868-b5c8-4e38-865c-67440d378dc6 req-65e617de-8704-4953-a9ec-249effbdcd45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Detach interface failed, port_id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a, reason: Instance 7293bf39-223f-4668-bd0f-c65476fac3e4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:15:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.6 KiB/s wr, 29 op/s
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.293 2 DEBUG nova.network.neutron [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.327 2 INFO nova.compute.manager [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Took 1.12 seconds to deallocate network for instance.
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.387 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.388 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.462 2 DEBUG oslo_concurrency.processutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:15:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:15:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512102911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.894 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396521.8930523, 6acf9ec4-afe0-4ef6-b857-246ad87fe800 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.895 2 INFO nova.compute.manager [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] VM Stopped (Lifecycle Event)
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.914 2 DEBUG oslo_concurrency.processutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.922 2 DEBUG nova.compute.provider_tree [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.928 2 DEBUG nova.compute.manager [None req-ee945fdf-ff84-4e2c-8b26-0eff7b4764fc - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.945 2 DEBUG nova.scheduler.client.report [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:36 compute-0 nova_compute[260603]: 2025-10-02 09:15:36.974 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:37 compute-0 nova_compute[260603]: 2025-10-02 09:15:37.004 2 INFO nova.scheduler.client.report [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 7293bf39-223f-4668-bd0f-c65476fac3e4
Oct 02 09:15:37 compute-0 nova_compute[260603]: 2025-10-02 09:15:37.072 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:15:37 compute-0 ceph-mon[74477]: pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.6 KiB/s wr, 29 op/s
Oct 02 09:15:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3512102911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:15:37 compute-0 nova_compute[260603]: 2025-10-02 09:15:37.942 2 DEBUG nova.compute.manager [req-28718e5d-ff1d-4489-8100-0aea95f02d7a req-a3f2728b-75cc-422b-b71f-61e2d0b72e70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-deleted-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:15:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 6.1 KiB/s wr, 32 op/s
Oct 02 09:15:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:38 compute-0 ceph-mon[74477]: pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 6.1 KiB/s wr, 32 op/s
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:15:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:15:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct 02 09:15:40 compute-0 nova_compute[260603]: 2025-10-02 09:15:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:15:40 compute-0 nova_compute[260603]: 2025-10-02 09:15:40.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:15:40 compute-0 nova_compute[260603]: 2025-10-02 09:15:40.779 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396525.77701, 7293bf39-223f-4668-bd0f-c65476fac3e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:15:40 compute-0 nova_compute[260603]: 2025-10-02 09:15:40.779 2 INFO nova.compute.manager [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] VM Stopped (Lifecycle Event)
Oct 02 09:15:40 compute-0 nova_compute[260603]: 2025-10-02 09:15:40.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:40 compute-0 nova_compute[260603]: 2025-10-02 09:15:40.898 2 DEBUG nova.compute.manager [None req-380388eb-5634-4d06-9cf4-f48c87821a61 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:15:40 compute-0 ceph-mon[74477]: pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct 02 09:15:41 compute-0 nova_compute[260603]: 2025-10-02 09:15:41.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct 02 09:15:42 compute-0 nova_compute[260603]: 2025-10-02 09:15:42.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:42 compute-0 nova_compute[260603]: 2025-10-02 09:15:42.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:43 compute-0 ceph-mon[74477]: pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct 02 09:15:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct 02 09:15:44 compute-0 ceph-mon[74477]: pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct 02 09:15:45 compute-0 nova_compute[260603]: 2025-10-02 09:15:45.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 3 op/s
Oct 02 09:15:46 compute-0 nova_compute[260603]: 2025-10-02 09:15:46.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:47 compute-0 ceph-mon[74477]: pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 3 op/s
Oct 02 09:15:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 3 op/s
Oct 02 09:15:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:49 compute-0 ceph-mon[74477]: pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 3 op/s
Oct 02 09:15:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:50 compute-0 nova_compute[260603]: 2025-10-02 09:15:50.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:51 compute-0 ceph-mon[74477]: pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:51 compute-0 nova_compute[260603]: 2025-10-02 09:15:51.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:53 compute-0 ceph-mon[74477]: pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:53 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:55 compute-0 ceph-mon[74477]: pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:55 compute-0 nova_compute[260603]: 2025-10-02 09:15:55.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:55 compute-0 podman[427314]: 2025-10-02 09:15:55.99076411 +0000 UTC m=+0.054184565 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:15:56 compute-0 podman[427313]: 2025-10-02 09:15:56.063620493 +0000 UTC m=+0.127388349 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:15:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:56 compute-0 ceph-mon[74477]: pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:56 compute-0 nova_compute[260603]: 2025-10-02 09:15:56.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:15:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:15:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:15:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:15:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:15:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:15:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:15:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:15:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:15:58 compute-0 ceph-mon[74477]: pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:00 compute-0 nova_compute[260603]: 2025-10-02 09:16:00.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:01 compute-0 ceph-mon[74477]: pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:01 compute-0 nova_compute[260603]: 2025-10-02 09:16:01.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:02 compute-0 podman[427357]: 2025-10-02 09:16:02.000217396 +0000 UTC m=+0.064922828 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 09:16:02 compute-0 podman[427358]: 2025-10-02 09:16:02.019669881 +0000 UTC m=+0.078952975 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:16:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:02 compute-0 nova_compute[260603]: 2025-10-02 09:16:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:02 compute-0 ceph-mon[74477]: pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:03.001 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:01:06 10.100.0.2 2001:db8::f816:3eff:fe96:106'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe96:106/64', 'neutron:device_id': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=25655298-5d81-4448-950c-289fb68606f0) old=Port_Binding(mac=['fa:16:3e:96:01:06 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:16:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:03.003 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 25655298-5d81-4448-950c-289fb68606f0 in datapath 32cae488-7671-4c05-a475-2c63f103261b updated
Oct 02 09:16:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:03.005 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32cae488-7671-4c05-a475-2c63f103261b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:16:03 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:03.006 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b49dda1-d28d-41f8-9f5f-68f6213e3c10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:05 compute-0 nova_compute[260603]: 2025-10-02 09:16:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:05 compute-0 nova_compute[260603]: 2025-10-02 09:16:05.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:16:05 compute-0 nova_compute[260603]: 2025-10-02 09:16:05.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:16:05 compute-0 ceph-mon[74477]: pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:05 compute-0 nova_compute[260603]: 2025-10-02 09:16:05.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:06 compute-0 ceph-mon[74477]: pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:06 compute-0 nova_compute[260603]: 2025-10-02 09:16:06.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:07 compute-0 nova_compute[260603]: 2025-10-02 09:16:07.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:07 compute-0 nova_compute[260603]: 2025-10-02 09:16:07.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:07.610 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:01:06 10.100.0.2 2001:db8:0:1:f816:3eff:fe96:106 2001:db8::f816:3eff:fe96:106'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe96:106/64 2001:db8::f816:3eff:fe96:106/64', 'neutron:device_id': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=25655298-5d81-4448-950c-289fb68606f0) old=Port_Binding(mac=['fa:16:3e:96:01:06 10.100.0.2 2001:db8::f816:3eff:fe96:106'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe96:106/64', 'neutron:device_id': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:16:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:07.611 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 25655298-5d81-4448-950c-289fb68606f0 in datapath 32cae488-7671-4c05-a475-2c63f103261b updated
Oct 02 09:16:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:07.612 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32cae488-7671-4c05-a475-2c63f103261b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:16:07 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:07.612 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[471f515f-09e6-4f9a-855b-64a4a0577b9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:09 compute-0 ceph-mon[74477]: pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:09 compute-0 nova_compute[260603]: 2025-10-02 09:16:09.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:09 compute-0 nova_compute[260603]: 2025-10-02 09:16:09.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:09 compute-0 nova_compute[260603]: 2025-10-02 09:16:09.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:09 compute-0 nova_compute[260603]: 2025-10-02 09:16:09.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:09 compute-0 nova_compute[260603]: 2025-10-02 09:16:09.549 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:16:09 compute-0 nova_compute[260603]: 2025-10-02 09:16:09.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:16:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696011136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:16:09 compute-0 nova_compute[260603]: 2025-10-02 09:16:09.990 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.191 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.193 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3625MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.194 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.194 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.389 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.390 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.446 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:10 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/696011136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:16:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063309839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.981 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:10 compute-0 nova_compute[260603]: 2025-10-02 09:16:10.991 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:16:11 compute-0 nova_compute[260603]: 2025-10-02 09:16:11.016 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:16:11 compute-0 nova_compute[260603]: 2025-10-02 09:16:11.061 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:16:11 compute-0 nova_compute[260603]: 2025-10-02 09:16:11.062 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:11 compute-0 ceph-mon[74477]: pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4063309839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:16:11 compute-0 nova_compute[260603]: 2025-10-02 09:16:11.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:12 compute-0 nova_compute[260603]: 2025-10-02 09:16:12.505 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:12 compute-0 nova_compute[260603]: 2025-10-02 09:16:12.506 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:12 compute-0 nova_compute[260603]: 2025-10-02 09:16:12.526 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:16:12 compute-0 nova_compute[260603]: 2025-10-02 09:16:12.595 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:12 compute-0 nova_compute[260603]: 2025-10-02 09:16:12.596 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:12 compute-0 nova_compute[260603]: 2025-10-02 09:16:12.606 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:16:12 compute-0 nova_compute[260603]: 2025-10-02 09:16:12.607 2 INFO nova.compute.claims [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:16:12 compute-0 ceph-mon[74477]: pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:12 compute-0 nova_compute[260603]: 2025-10-02 09:16:12.713 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:16:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1507463177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.220 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.226 2 DEBUG nova.compute.provider_tree [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.244 2 DEBUG nova.scheduler.client.report [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.267 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.268 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.327 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.327 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.349 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.368 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.464 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.465 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.465 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Creating image(s)
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.486 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.508 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.537 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.542 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.621 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.621 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.622 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.622 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.648 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:16:13 compute-0 nova_compute[260603]: 2025-10-02 09:16:13.653 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:14 compute-0 nova_compute[260603]: 2025-10-02 09:16:14.058 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:14 compute-0 nova_compute[260603]: 2025-10-02 09:16:14.059 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1507463177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:16:14 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 02 09:16:14 compute-0 nova_compute[260603]: 2025-10-02 09:16:14.081 2 DEBUG nova.policy [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:16:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:15 compute-0 nova_compute[260603]: 2025-10-02 09:16:15.209 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Successfully created port: dad30664-f830-4b51-9cf5-8b92d95308bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:16:15 compute-0 ceph-mon[74477]: pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:15 compute-0 nova_compute[260603]: 2025-10-02 09:16:15.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:16 compute-0 nova_compute[260603]: 2025-10-02 09:16:16.366 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Successfully updated port: dad30664-f830-4b51-9cf5-8b92d95308bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:16:16 compute-0 nova_compute[260603]: 2025-10-02 09:16:16.386 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:16:16 compute-0 nova_compute[260603]: 2025-10-02 09:16:16.386 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:16:16 compute-0 nova_compute[260603]: 2025-10-02 09:16:16.387 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:16:16 compute-0 nova_compute[260603]: 2025-10-02 09:16:16.452 2 DEBUG nova.compute.manager [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:16:16 compute-0 nova_compute[260603]: 2025-10-02 09:16:16.453 2 DEBUG nova.compute.manager [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing instance network info cache due to event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:16:16 compute-0 nova_compute[260603]: 2025-10-02 09:16:16.453 2 DEBUG oslo_concurrency.lockutils [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:16:16 compute-0 nova_compute[260603]: 2025-10-02 09:16:16.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:17 compute-0 nova_compute[260603]: 2025-10-02 09:16:17.041 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:16:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 43 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 94 KiB/s wr, 10 op/s
Oct 02 09:16:18 compute-0 ceph-mon[74477]: pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:16:19 compute-0 nova_compute[260603]: 2025-10-02 09:16:19.238 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:16:19 compute-0 nova_compute[260603]: 2025-10-02 09:16:19.262 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:16:19 compute-0 nova_compute[260603]: 2025-10-02 09:16:19.262 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance network_info: |[{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:16:19 compute-0 nova_compute[260603]: 2025-10-02 09:16:19.263 2 DEBUG oslo_concurrency.lockutils [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:16:19 compute-0 nova_compute[260603]: 2025-10-02 09:16:19.263 2 DEBUG nova.network.neutron [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:16:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:19 compute-0 ceph-mon[74477]: pgmap v2945: 305 pgs: 305 active+clean; 43 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 94 KiB/s wr, 10 op/s
Oct 02 09:16:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 59 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 525 KiB/s wr, 14 op/s
Oct 02 09:16:20 compute-0 nova_compute[260603]: 2025-10-02 09:16:20.364 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:20 compute-0 nova_compute[260603]: 2025-10-02 09:16:20.442 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:16:20 compute-0 nova_compute[260603]: 2025-10-02 09:16:20.793 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:20 compute-0 nova_compute[260603]: 2025-10-02 09:16:20.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:21 compute-0 ceph-mon[74477]: pgmap v2946: 305 pgs: 305 active+clean; 59 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 525 KiB/s wr, 14 op/s
Oct 02 09:16:21 compute-0 nova_compute[260603]: 2025-10-02 09:16:21.908 2 DEBUG nova.network.neutron [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updated VIF entry in instance network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:16:21 compute-0 nova_compute[260603]: 2025-10-02 09:16:21.909 2 DEBUG nova.network.neutron [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:16:21 compute-0 nova_compute[260603]: 2025-10-02 09:16:21.933 2 DEBUG oslo_concurrency.lockutils [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:16:21 compute-0 nova_compute[260603]: 2025-10-02 09:16:21.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:16:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019264692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:16:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:16:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019264692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.173 2 DEBUG nova.objects.instance [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid bfb5f44c-0aeb-439f-9d64-934d5cb85c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.206 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.207 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Ensure instance console log exists: /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.207 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.208 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.208 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.210 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start _get_guest_xml network_info=[{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.214 2 WARNING nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.219 2 DEBUG nova.virt.libvirt.host [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.220 2 DEBUG nova.virt.libvirt.host [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.223 2 DEBUG nova.virt.libvirt.host [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.223 2 DEBUG nova.virt.libvirt.host [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.224 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.224 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.224 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.225 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.225 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.225 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.225 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.226 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.226 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.226 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.226 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.227 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.229 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 59 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 525 KiB/s wr, 14 op/s
Oct 02 09:16:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4019264692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:16:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4019264692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:16:22 compute-0 sudo[427652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:22 compute-0 sudo[427652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:22 compute-0 sudo[427652]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:16:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308999108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:16:22 compute-0 sudo[427677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:16:22 compute-0 sudo[427677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:22 compute-0 sudo[427677]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.760 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.781 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:16:22 compute-0 nova_compute[260603]: 2025-10-02 09:16:22.785 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:22 compute-0 sudo[427703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:22 compute-0 sudo[427703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:22 compute-0 sudo[427703]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:22 compute-0 sudo[427747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:16:22 compute-0 sudo[427747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:16:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557702783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.208 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.212 2 DEBUG nova.virt.libvirt.vif [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1773019515',display_name='tempest-TestGettingAddress-server-1773019515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1773019515',id=147,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-9gbn0ocy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:16:13Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=bfb5f44c-0aeb-439f-9d64-934d5cb85c02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.213 2 DEBUG nova.network.os_vif_util [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.215 2 DEBUG nova.network.os_vif_util [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.218 2 DEBUG nova.objects.instance [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid bfb5f44c-0aeb-439f-9d64-934d5cb85c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.256 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <uuid>bfb5f44c-0aeb-439f-9d64-934d5cb85c02</uuid>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <name>instance-00000093</name>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-1773019515</nova:name>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:16:22</nova:creationTime>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <nova:port uuid="dad30664-f830-4b51-9cf5-8b92d95308bb">
Oct 02 09:16:23 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe28:75e6" ipVersion="6"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:75e6" ipVersion="6"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <system>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <entry name="serial">bfb5f44c-0aeb-439f-9d64-934d5cb85c02</entry>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <entry name="uuid">bfb5f44c-0aeb-439f-9d64-934d5cb85c02</entry>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </system>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <os>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   </os>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <features>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   </features>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk">
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       </source>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config">
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       </source>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:16:23 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:28:75:e6"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <target dev="tapdad30664-f8"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/console.log" append="off"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <video>
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </video>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:16:23 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:16:23 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:16:23 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:16:23 compute-0 nova_compute[260603]: </domain>
Oct 02 09:16:23 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.256 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Preparing to wait for external event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.256 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.257 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.257 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.259 2 DEBUG nova.virt.libvirt.vif [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1773019515',display_name='tempest-TestGettingAddress-server-1773019515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1773019515',id=147,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-9gbn0ocy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:16:13Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=bfb5f44c-0aeb-439f-9d64-934d5cb85c02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.261 2 DEBUG nova.network.os_vif_util [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.262 2 DEBUG nova.network.os_vif_util [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.263 2 DEBUG os_vif [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdad30664-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdad30664-f8, col_values=(('external_ids', {'iface-id': 'dad30664-f830-4b51-9cf5-8b92d95308bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:75:e6', 'vm-uuid': 'bfb5f44c-0aeb-439f-9d64-934d5cb85c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:23 compute-0 NetworkManager[45129]: <info>  [1759396583.2768] manager: (tapdad30664-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.285 2 INFO os_vif [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8')
Oct 02 09:16:23 compute-0 sudo[427747]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:16:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:16:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:16:23 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:16:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:16:23 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:16:23 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 62763e4a-205a-4e49-9267-bad8ea4d990a does not exist
Oct 02 09:16:23 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 692aa8ad-7985-46ab-9854-b797b2efcfab does not exist
Oct 02 09:16:23 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 17c42f98-1d2a-4ee8-9b42-81b1cb91b8af does not exist
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.659 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.660 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.660 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:28:75:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.660 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Using config drive
Oct 02 09:16:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:16:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:16:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:16:23 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:16:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:16:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:16:23 compute-0 nova_compute[260603]: 2025-10-02 09:16:23.745 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:16:23 compute-0 sudo[427843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:23 compute-0 sudo[427843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:23 compute-0 sudo[427843]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:23 compute-0 ceph-mon[74477]: pgmap v2947: 305 pgs: 305 active+clean; 59 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 525 KiB/s wr, 14 op/s
Oct 02 09:16:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3308999108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:16:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1557702783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:16:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:16:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:16:23 compute-0 sudo[427870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:16:23 compute-0 sudo[427870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:23 compute-0 sudo[427870]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:23 compute-0 sudo[427895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:23 compute-0 sudo[427895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:23 compute-0 sudo[427895]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:24 compute-0 sudo[427920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:16:24 compute-0 sudo[427920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:24 compute-0 nova_compute[260603]: 2025-10-02 09:16:24.205 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Creating config drive at /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config
Oct 02 09:16:24 compute-0 nova_compute[260603]: 2025-10-02 09:16:24.215 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29y3cr41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 80 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 MiB/s wr, 16 op/s
Oct 02 09:16:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:24 compute-0 nova_compute[260603]: 2025-10-02 09:16:24.361 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29y3cr41" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:24 compute-0 nova_compute[260603]: 2025-10-02 09:16:24.385 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:16:24 compute-0 nova_compute[260603]: 2025-10-02 09:16:24.389 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:16:24 compute-0 podman[427988]: 2025-10-02 09:16:24.328484714 +0000 UTC m=+0.026048050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:16:24 compute-0 podman[427988]: 2025-10-02 09:16:24.785023841 +0000 UTC m=+0.482587187 container create 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:16:25 compute-0 systemd[1]: Started libpod-conmon-6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122.scope.
Oct 02 09:16:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:16:25 compute-0 podman[427988]: 2025-10-02 09:16:25.581444451 +0000 UTC m=+1.279007837 container init 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:16:25 compute-0 podman[427988]: 2025-10-02 09:16:25.595810607 +0000 UTC m=+1.293373953 container start 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:16:25 compute-0 youthful_joliot[428039]: 167 167
Oct 02 09:16:25 compute-0 systemd[1]: libpod-6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122.scope: Deactivated successfully.
Oct 02 09:16:25 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:16:25 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:16:25 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:16:25 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:16:25 compute-0 ceph-mon[74477]: pgmap v2948: 305 pgs: 305 active+clean; 80 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 MiB/s wr, 16 op/s
Oct 02 09:16:25 compute-0 podman[427988]: 2025-10-02 09:16:25.791825998 +0000 UTC m=+1.489389334 container attach 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:16:25 compute-0 podman[427988]: 2025-10-02 09:16:25.792369575 +0000 UTC m=+1.489932881 container died 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:16:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 88 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:16:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b9d78d8c9a603e7d4ebe968df7a964affb8d6661e734b64405a29cf20e4b7c-merged.mount: Deactivated successfully.
Oct 02 09:16:26 compute-0 ceph-mon[74477]: pgmap v2949: 305 pgs: 305 active+clean; 88 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:16:26 compute-0 nova_compute[260603]: 2025-10-02 09:16:26.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:27 compute-0 podman[428044]: 2025-10-02 09:16:27.476111538 +0000 UTC m=+1.848271537 container remove 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:16:27 compute-0 systemd[1]: libpod-conmon-6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122.scope: Deactivated successfully.
Oct 02 09:16:27 compute-0 podman[428055]: 2025-10-02 09:16:27.518554037 +0000 UTC m=+1.189076543 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:16:27 compute-0 podman[428060]: 2025-10-02 09:16:27.569543072 +0000 UTC m=+1.236319270 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 02 09:16:27 compute-0 podman[428113]: 2025-10-02 09:16:27.632519169 +0000 UTC m=+0.026902147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:16:27 compute-0 podman[428113]: 2025-10-02 09:16:27.823095812 +0000 UTC m=+0.217478790 container create d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 09:16:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:16:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:16:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:16:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:16:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:16:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:16:27 compute-0 systemd[1]: Started libpod-conmon-d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66.scope.
Oct 02 09:16:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.003 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.003 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Deleting local config drive /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config because it was imported into RBD.
Oct 02 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:28 compute-0 kernel: tapdad30664-f8: entered promiscuous mode
Oct 02 09:16:28 compute-0 NetworkManager[45129]: <info>  [1759396588.0850] manager: (tapdad30664-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/669)
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:16:28
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'vms']
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:16:28 compute-0 podman[428113]: 2025-10-02 09:16:28.1268255 +0000 UTC m=+0.521208498 container init d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:16:28 compute-0 ovn_controller[152344]: 2025-10-02T09:16:28Z|01648|binding|INFO|Claiming lport dad30664-f830-4b51-9cf5-8b92d95308bb for this chassis.
Oct 02 09:16:28 compute-0 ovn_controller[152344]: 2025-10-02T09:16:28Z|01649|binding|INFO|dad30664-f830-4b51-9cf5-8b92d95308bb: Claiming fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:28 compute-0 podman[428113]: 2025-10-02 09:16:28.137909005 +0000 UTC m=+0.532291973 container start d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.148 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6'], port_security=['fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8:0:1:f816:3eff:fe28:75e6/64 2001:db8::f816:3eff:fe28:75e6/64', 'neutron:device_id': 'bfb5f44c-0aeb-439f-9d64-934d5cb85c02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6142c2da-c0c8-4842-a55d-76581298b5e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dad30664-f830-4b51-9cf5-8b92d95308bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.149 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dad30664-f830-4b51-9cf5-8b92d95308bb in datapath 32cae488-7671-4c05-a475-2c63f103261b bound to our chassis
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.150 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32cae488-7671-4c05-a475-2c63f103261b
Oct 02 09:16:28 compute-0 systemd-udevd[428147]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:16:28 compute-0 podman[428113]: 2025-10-02 09:16:28.165468301 +0000 UTC m=+0.559851329 container attach d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 09:16:28 compute-0 systemd-machined[214636]: New machine qemu-181-instance-00000093.
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad322ef-e843-45b0-b493-80ea329321ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.166 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32cae488-71 in ovnmeta-32cae488-7671-4c05-a475-2c63f103261b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.168 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32cae488-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:16:28 compute-0 NetworkManager[45129]: <info>  [1759396588.1686] device (tapdad30664-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.168 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6838a85-606e-4203-900c-e222f64223c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 NetworkManager[45129]: <info>  [1759396588.1696] device (tapdad30664-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.170 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c46533af-daa2-4ca9-b506-ebc0c97e0b28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000093.
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.185 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[138b0261-b840-43c4-a7d8-58bd51fed3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aee5be3c-4aec-42df-aa31-7717c4c92f2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_controller[152344]: 2025-10-02T09:16:28Z|01650|binding|INFO|Setting lport dad30664-f830-4b51-9cf5-8b92d95308bb ovn-installed in OVS
Oct 02 09:16:28 compute-0 ovn_controller[152344]: 2025-10-02T09:16:28Z|01651|binding|INFO|Setting lport dad30664-f830-4b51-9cf5-8b92d95308bb up in Southbound
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.232 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[36e12c96-a45a-485a-b90e-d0dc59f4c4ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 systemd-udevd[428150]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:16:28 compute-0 NetworkManager[45129]: <info>  [1759396588.2417] manager: (tap32cae488-70): new Veth device (/org/freedesktop/NetworkManager/Devices/670)
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.244 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6cc491-23a5-4767-a61a-5b7b0ae47278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.286 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9f48eecf-4f1d-4625-9826-bda13b40c3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.289 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[eae3079c-518e-4709-a74a-7c0aac318cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 NetworkManager[45129]: <info>  [1759396588.3159] device (tap32cae488-70): carrier: link connected
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.322 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[22dbb3e7-5aef-4600-8999-ddf2da07fdb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.338 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab69994-1d6e-4a1c-8140-f22f30990ae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32cae488-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746691, 'reachable_time': 42035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428180, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.351 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1955f8c9-33ec-4a71-b4dc-b5015ba75590]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:106'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746691, 'tstamp': 746691}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428181, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.366 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46c662a4-a66f-4c30-be03-e0a9c5c5ae84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32cae488-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746691, 'reachable_time': 42035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 428182, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d3547526-ba1b-400f-b6d1-dfe17d5e321d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.446 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6b312f-2beb-4079-b208-8ff698593035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.447 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32cae488-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.448 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.448 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32cae488-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:28 compute-0 NetworkManager[45129]: <info>  [1759396588.4506] manager: (tap32cae488-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Oct 02 09:16:28 compute-0 kernel: tap32cae488-70: entered promiscuous mode
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.453 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32cae488-70, col_values=(('external_ids', {'iface-id': '25655298-5d81-4448-950c-289fb68606f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:28 compute-0 ovn_controller[152344]: 2025-10-02T09:16:28Z|01652|binding|INFO|Releasing lport 25655298-5d81-4448-950c-289fb68606f0 from this chassis (sb_readonly=0)
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.455 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32cae488-7671-4c05-a475-2c63f103261b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32cae488-7671-4c05-a475-2c63f103261b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.456 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47c9d439-38a6-4482-9e05-72e4862e0bae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.457 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-32cae488-7671-4c05-a475-2c63f103261b
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/32cae488-7671-4c05-a475-2c63f103261b.pid.haproxy
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 32cae488-7671-4c05-a475-2c63f103261b
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:16:28 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.459 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'env', 'PROCESS_TAG=haproxy-32cae488-7671-4c05-a475-2c63f103261b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32cae488-7671-4c05-a475-2c63f103261b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:16:28 compute-0 nova_compute[260603]: 2025-10-02 09:16:28.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:16:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:16:28 compute-0 podman[428214]: 2025-10-02 09:16:28.775228639 +0000 UTC m=+0.018781504 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:16:29 compute-0 nova_compute[260603]: 2025-10-02 09:16:29.208 2 DEBUG nova.compute.manager [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:16:29 compute-0 nova_compute[260603]: 2025-10-02 09:16:29.210 2 DEBUG oslo_concurrency.lockutils [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:29 compute-0 nova_compute[260603]: 2025-10-02 09:16:29.210 2 DEBUG oslo_concurrency.lockutils [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:29 compute-0 nova_compute[260603]: 2025-10-02 09:16:29.210 2 DEBUG oslo_concurrency.lockutils [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:29 compute-0 nova_compute[260603]: 2025-10-02 09:16:29.211 2 DEBUG nova.compute.manager [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Processing event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:16:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:29 compute-0 festive_yonath[428129]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:16:29 compute-0 festive_yonath[428129]: --> relative data size: 1.0
Oct 02 09:16:29 compute-0 festive_yonath[428129]: --> All data devices are unavailable
Oct 02 09:16:29 compute-0 ceph-mon[74477]: pgmap v2950: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 09:16:29 compute-0 systemd[1]: libpod-d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66.scope: Deactivated successfully.
Oct 02 09:16:29 compute-0 systemd[1]: libpod-d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66.scope: Consumed 1.016s CPU time.
Oct 02 09:16:29 compute-0 podman[428214]: 2025-10-02 09:16:29.793904426 +0000 UTC m=+1.037457321 container create bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:16:30 compute-0 systemd[1]: Started libpod-conmon-bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104.scope.
Oct 02 09:16:30 compute-0 podman[428285]: 2025-10-02 09:16:30.054521575 +0000 UTC m=+0.366216622 container died d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:16:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140339e959af83ff118cb8023b67732905454915fb70723adb2eabd214c6428e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.281 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.283 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396590.2810063, bfb5f44c-0aeb-439f-9d64-934d5cb85c02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.283 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] VM Started (Lifecycle Event)
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.288 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.292 2 INFO nova.virt.libvirt.driver [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance spawned successfully.
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.293 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.303 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.305 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.316 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.316 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.317 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.317 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.318 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.318 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.345 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.345 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396590.2820663, bfb5f44c-0aeb-439f-9d64-934d5cb85c02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.345 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] VM Paused (Lifecycle Event)
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.376 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.380 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396590.2855577, bfb5f44c-0aeb-439f-9d64-934d5cb85c02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.381 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] VM Resumed (Lifecycle Event)
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.388 2 INFO nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Took 16.92 seconds to spawn the instance on the hypervisor.
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.389 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.399 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.403 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.426 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.454 2 INFO nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Took 17.88 seconds to build instance.
Oct 02 09:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409-merged.mount: Deactivated successfully.
Oct 02 09:16:30 compute-0 nova_compute[260603]: 2025-10-02 09:16:30.529 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:30 compute-0 ceph-mon[74477]: pgmap v2951: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 02 09:16:31 compute-0 podman[428214]: 2025-10-02 09:16:31.110432897 +0000 UTC m=+2.353985762 container init bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:16:31 compute-0 podman[428214]: 2025-10-02 09:16:31.117820558 +0000 UTC m=+2.361373403 container start bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 09:16:31 compute-0 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [NOTICE]   (428314) : New worker (428316) forked
Oct 02 09:16:31 compute-0 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [NOTICE]   (428314) : Loading success.
Oct 02 09:16:31 compute-0 nova_compute[260603]: 2025-10-02 09:16:31.309 2 DEBUG nova.compute.manager [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:16:31 compute-0 nova_compute[260603]: 2025-10-02 09:16:31.310 2 DEBUG oslo_concurrency.lockutils [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:31 compute-0 nova_compute[260603]: 2025-10-02 09:16:31.310 2 DEBUG oslo_concurrency.lockutils [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:31 compute-0 nova_compute[260603]: 2025-10-02 09:16:31.310 2 DEBUG oslo_concurrency.lockutils [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:31 compute-0 nova_compute[260603]: 2025-10-02 09:16:31.310 2 DEBUG nova.compute.manager [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] No waiting events found dispatching network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:16:31 compute-0 nova_compute[260603]: 2025-10-02 09:16:31.310 2 WARNING nova.compute.manager [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received unexpected event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb for instance with vm_state active and task_state None.
Oct 02 09:16:31 compute-0 podman[428285]: 2025-10-02 09:16:31.861015513 +0000 UTC m=+2.172710580 container remove d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:16:31 compute-0 systemd[1]: libpod-conmon-d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66.scope: Deactivated successfully.
Oct 02 09:16:31 compute-0 sudo[427920]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:31 compute-0 sudo[428325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:31 compute-0 sudo[428325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:31 compute-0 sudo[428325]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:31 compute-0 nova_compute[260603]: 2025-10-02 09:16:31.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:32 compute-0 sudo[428350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:16:32 compute-0 sudo[428350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:32 compute-0 sudo[428350]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:32 compute-0 sudo[428387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:32 compute-0 sudo[428387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:32 compute-0 sudo[428387]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:32 compute-0 podman[428375]: 2025-10-02 09:16:32.138594089 +0000 UTC m=+0.065693383 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 09:16:32 compute-0 podman[428374]: 2025-10-02 09:16:32.14699867 +0000 UTC m=+0.073402122 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible)
Oct 02 09:16:32 compute-0 sudo[428441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:16:32 compute-0 sudo[428441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Oct 02 09:16:32 compute-0 podman[428505]: 2025-10-02 09:16:32.524985156 +0000 UTC m=+0.026487414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:16:32 compute-0 podman[428505]: 2025-10-02 09:16:32.873540728 +0000 UTC m=+0.375042946 container create 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 09:16:33 compute-0 nova_compute[260603]: 2025-10-02 09:16:33.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:33 compute-0 systemd[1]: Started libpod-conmon-417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734.scope.
Oct 02 09:16:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:16:33 compute-0 ceph-mon[74477]: pgmap v2952: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Oct 02 09:16:33 compute-0 podman[428505]: 2025-10-02 09:16:33.839021371 +0000 UTC m=+1.340523629 container init 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 09:16:33 compute-0 podman[428505]: 2025-10-02 09:16:33.845903715 +0000 UTC m=+1.347405933 container start 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:16:33 compute-0 festive_hertz[428521]: 167 167
Oct 02 09:16:33 compute-0 systemd[1]: libpod-417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734.scope: Deactivated successfully.
Oct 02 09:16:33 compute-0 conmon[428521]: conmon 417d22c72f17ab42df0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734.scope/container/memory.events
Oct 02 09:16:34 compute-0 podman[428505]: 2025-10-02 09:16:34.13244979 +0000 UTC m=+1.633951998 container attach 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:16:34 compute-0 podman[428505]: 2025-10-02 09:16:34.133491252 +0000 UTC m=+1.634993460 container died 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:16:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 86 op/s
Oct 02 09:16:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e579e2e01c973b07b96fae702fe8be4e79d6dcfde181f5e56b6c305582d6cc78-merged.mount: Deactivated successfully.
Oct 02 09:16:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:34.853 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:16:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:34.854 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:16:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:16:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:16:35 compute-0 podman[428505]: 2025-10-02 09:16:35.203995128 +0000 UTC m=+2.705497366 container remove 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:16:35 compute-0 systemd[1]: libpod-conmon-417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734.scope: Deactivated successfully.
Oct 02 09:16:35 compute-0 ceph-mon[74477]: pgmap v2953: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 86 op/s
Oct 02 09:16:35 compute-0 podman[428546]: 2025-10-02 09:16:35.388260405 +0000 UTC m=+0.022246333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:16:35 compute-0 podman[428546]: 2025-10-02 09:16:35.588498897 +0000 UTC m=+0.222484835 container create dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 09:16:35 compute-0 systemd[1]: Started libpod-conmon-dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824.scope.
Oct 02 09:16:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:35 compute-0 podman[428546]: 2025-10-02 09:16:35.999587322 +0000 UTC m=+0.633573280 container init dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:16:36 compute-0 podman[428546]: 2025-10-02 09:16:36.006938581 +0000 UTC m=+0.640924479 container start dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:16:36 compute-0 podman[428546]: 2025-10-02 09:16:36.111928723 +0000 UTC m=+0.745914621 container attach dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:16:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 677 KiB/s wr, 83 op/s
Oct 02 09:16:36 compute-0 youthful_joliot[428564]: {
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:     "0": [
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:         {
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "devices": [
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "/dev/loop3"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             ],
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_name": "ceph_lv0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_size": "21470642176",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "name": "ceph_lv0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "tags": {
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cluster_name": "ceph",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.crush_device_class": "",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.encrypted": "0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osd_id": "0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.type": "block",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.vdo": "0"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             },
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "type": "block",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "vg_name": "ceph_vg0"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:         }
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:     ],
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:     "1": [
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:         {
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "devices": [
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "/dev/loop4"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             ],
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_name": "ceph_lv1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_size": "21470642176",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "name": "ceph_lv1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "tags": {
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cluster_name": "ceph",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.crush_device_class": "",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.encrypted": "0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osd_id": "1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.type": "block",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.vdo": "0"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             },
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "type": "block",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "vg_name": "ceph_vg1"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:         }
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:     ],
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:     "2": [
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:         {
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "devices": [
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "/dev/loop5"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             ],
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_name": "ceph_lv2",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_size": "21470642176",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "name": "ceph_lv2",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "tags": {
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.cluster_name": "ceph",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.crush_device_class": "",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.encrypted": "0",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osd_id": "2",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.type": "block",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:                 "ceph.vdo": "0"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             },
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "type": "block",
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:             "vg_name": "ceph_vg2"
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:         }
Oct 02 09:16:36 compute-0 youthful_joliot[428564]:     ]
Oct 02 09:16:36 compute-0 youthful_joliot[428564]: }
Oct 02 09:16:36 compute-0 systemd[1]: libpod-dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824.scope: Deactivated successfully.
Oct 02 09:16:36 compute-0 podman[428546]: 2025-10-02 09:16:36.738209805 +0000 UTC m=+1.372195703 container died dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:16:36 compute-0 nova_compute[260603]: 2025-10-02 09:16:36.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:37 compute-0 ovn_controller[152344]: 2025-10-02T09:16:37Z|01653|binding|INFO|Releasing lport 25655298-5d81-4448-950c-289fb68606f0 from this chassis (sb_readonly=0)
Oct 02 09:16:37 compute-0 nova_compute[260603]: 2025-10-02 09:16:37.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:37 compute-0 NetworkManager[45129]: <info>  [1759396597.0865] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/672)
Oct 02 09:16:37 compute-0 NetworkManager[45129]: <info>  [1759396597.0881] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/673)
Oct 02 09:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957-merged.mount: Deactivated successfully.
Oct 02 09:16:37 compute-0 ovn_controller[152344]: 2025-10-02T09:16:37Z|01654|binding|INFO|Releasing lport 25655298-5d81-4448-950c-289fb68606f0 from this chassis (sb_readonly=0)
Oct 02 09:16:37 compute-0 nova_compute[260603]: 2025-10-02 09:16:37.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:37 compute-0 nova_compute[260603]: 2025-10-02 09:16:37.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:37 compute-0 podman[428546]: 2025-10-02 09:16:37.392944082 +0000 UTC m=+2.026930010 container remove dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:16:37 compute-0 systemd[1]: libpod-conmon-dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824.scope: Deactivated successfully.
Oct 02 09:16:37 compute-0 sudo[428441]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:37 compute-0 sudo[428588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:37 compute-0 sudo[428588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:37 compute-0 sudo[428588]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:37 compute-0 sudo[428613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:16:37 compute-0 sudo[428613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:37 compute-0 sudo[428613]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:37 compute-0 nova_compute[260603]: 2025-10-02 09:16:37.651 2 DEBUG nova.compute.manager [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:16:37 compute-0 nova_compute[260603]: 2025-10-02 09:16:37.652 2 DEBUG nova.compute.manager [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing instance network info cache due to event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:16:37 compute-0 nova_compute[260603]: 2025-10-02 09:16:37.652 2 DEBUG oslo_concurrency.lockutils [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:16:37 compute-0 nova_compute[260603]: 2025-10-02 09:16:37.653 2 DEBUG oslo_concurrency.lockutils [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:16:37 compute-0 nova_compute[260603]: 2025-10-02 09:16:37.653 2 DEBUG nova.network.neutron [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:16:37 compute-0 sudo[428638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:37 compute-0 sudo[428638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:37 compute-0 sudo[428638]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:37 compute-0 ceph-mon[74477]: pgmap v2954: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 677 KiB/s wr, 83 op/s
Oct 02 09:16:37 compute-0 sudo[428663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:16:37 compute-0 sudo[428663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:38 compute-0 podman[428728]: 2025-10-02 09:16:38.091794289 +0000 UTC m=+0.027314899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:16:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:16:38 compute-0 nova_compute[260603]: 2025-10-02 09:16:38.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:38 compute-0 podman[428728]: 2025-10-02 09:16:38.325329767 +0000 UTC m=+0.260850377 container create b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 02 09:16:38 compute-0 systemd[1]: Started libpod-conmon-b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a.scope.
Oct 02 09:16:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:16:38 compute-0 podman[428728]: 2025-10-02 09:16:38.883412729 +0000 UTC m=+0.818933389 container init b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:16:38 compute-0 podman[428728]: 2025-10-02 09:16:38.895253697 +0000 UTC m=+0.830774297 container start b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:16:38 compute-0 systemd[1]: libpod-b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a.scope: Deactivated successfully.
Oct 02 09:16:38 compute-0 frosty_elion[428744]: 167 167
Oct 02 09:16:38 compute-0 conmon[428744]: conmon b04faf59cd6483e5b866 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a.scope/container/memory.events
Oct 02 09:16:39 compute-0 podman[428728]: 2025-10-02 09:16:39.0462672 +0000 UTC m=+0.981787860 container attach b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:16:39 compute-0 podman[428728]: 2025-10-02 09:16:39.046825087 +0000 UTC m=+0.982345687 container died b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:16:39 compute-0 ceph-mon[74477]: pgmap v2955: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:16:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:16:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-90f26b115dafc1ce7542d7c456a5ab580335078293874ca5dc06110a187d958a-merged.mount: Deactivated successfully.
Oct 02 09:16:40 compute-0 podman[428728]: 2025-10-02 09:16:40.239736898 +0000 UTC m=+2.175257508 container remove b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 09:16:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Oct 02 09:16:40 compute-0 systemd[1]: libpod-conmon-b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a.scope: Deactivated successfully.
Oct 02 09:16:40 compute-0 nova_compute[260603]: 2025-10-02 09:16:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:16:40 compute-0 nova_compute[260603]: 2025-10-02 09:16:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:16:40 compute-0 podman[428767]: 2025-10-02 09:16:40.442262211 +0000 UTC m=+0.029152156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:16:40 compute-0 podman[428767]: 2025-10-02 09:16:40.663378203 +0000 UTC m=+0.250268058 container create 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 09:16:40 compute-0 ceph-mon[74477]: pgmap v2956: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Oct 02 09:16:40 compute-0 systemd[1]: Started libpod-conmon-37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396.scope.
Oct 02 09:16:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:16:40 compute-0 podman[428767]: 2025-10-02 09:16:40.85409487 +0000 UTC m=+0.440984745 container init 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:16:40 compute-0 podman[428767]: 2025-10-02 09:16:40.869135257 +0000 UTC m=+0.456025142 container start 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:16:40 compute-0 podman[428767]: 2025-10-02 09:16:40.965958046 +0000 UTC m=+0.552847901 container attach 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:16:41 compute-0 nova_compute[260603]: 2025-10-02 09:16:41.251 2 DEBUG nova.network.neutron [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updated VIF entry in instance network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:16:41 compute-0 nova_compute[260603]: 2025-10-02 09:16:41.253 2 DEBUG nova.network.neutron [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:16:41 compute-0 nova_compute[260603]: 2025-10-02 09:16:41.300 2 DEBUG oslo_concurrency.lockutils [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:16:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:16:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1348 writes, 6344 keys, 1348 commit groups, 1.0 writes per commit group, ingest: 8.63 MB, 0.01 MB/s
                                           Interval WAL: 1348 writes, 1348 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     94.3      0.80              0.29        44    0.018       0      0       0.0       0.0
                                             L6      1/0    9.44 MB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   4.8    150.1    126.9      2.85              1.29        43    0.066    274K    23K       0.0       0.0
                                            Sum      1/0    9.44 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.8    117.1    119.7      3.66              1.58        87    0.042    274K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8     71.8     73.5      0.90              0.25        12    0.075     49K   3088       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   0.0    150.1    126.9      2.85              1.29        43    0.066    274K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     94.9      0.80              0.29        43    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.074, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.43 GB write, 0.08 MB/s write, 0.42 GB read, 0.08 MB/s read, 3.7 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 48.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000311 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3105,46.14 MB,15.1765%) FilterBlock(88,765.73 KB,0.245983%) IndexBlock(88,1.25 MB,0.409975%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]: {
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "osd_id": 2,
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "type": "bluestore"
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:     },
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "osd_id": 1,
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "type": "bluestore"
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:     },
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "osd_id": 0,
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:         "type": "bluestore"
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]:     }
Oct 02 09:16:41 compute-0 heuristic_matsumoto[428785]: }
Oct 02 09:16:41 compute-0 systemd[1]: libpod-37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396.scope: Deactivated successfully.
Oct 02 09:16:41 compute-0 podman[428767]: 2025-10-02 09:16:41.937268611 +0000 UTC m=+1.524158556 container died 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 09:16:41 compute-0 systemd[1]: libpod-37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396.scope: Consumed 1.063s CPU time.
Oct 02 09:16:42 compute-0 nova_compute[260603]: 2025-10-02 09:16:42.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 66 op/s
Oct 02 09:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e-merged.mount: Deactivated successfully.
Oct 02 09:16:43 compute-0 ceph-mon[74477]: pgmap v2957: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 66 op/s
Oct 02 09:16:43 compute-0 nova_compute[260603]: 2025-10-02 09:16:43.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:43 compute-0 podman[428767]: 2025-10-02 09:16:43.512640506 +0000 UTC m=+3.099530411 container remove 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:16:43 compute-0 sudo[428663]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:16:43 compute-0 systemd[1]: libpod-conmon-37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396.scope: Deactivated successfully.
Oct 02 09:16:43 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:16:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:16:43 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:16:43 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 04bc5472-b05a-49ae-a322-5426900638a3 does not exist
Oct 02 09:16:43 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2488ac31-8a07-4863-a139-e47e5e7263c2 does not exist
Oct 02 09:16:44 compute-0 sudo[428834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:16:44 compute-0 sudo[428834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:44 compute-0 sudo[428834]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:44 compute-0 sudo[428859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:16:44 compute-0 sudo[428859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:16:44 compute-0 sudo[428859]: pam_unix(sudo:session): session closed for user root
Oct 02 09:16:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 97 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct 02 09:16:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:16:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:16:44 compute-0 ceph-mon[74477]: pgmap v2958: 305 pgs: 305 active+clean; 97 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct 02 09:16:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 97 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 18 op/s
Oct 02 09:16:47 compute-0 nova_compute[260603]: 2025-10-02 09:16:47.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:47 compute-0 ceph-mon[74477]: pgmap v2959: 305 pgs: 305 active+clean; 97 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 18 op/s
Oct 02 09:16:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 108 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Oct 02 09:16:48 compute-0 nova_compute[260603]: 2025-10-02 09:16:48.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:48 compute-0 ceph-mon[74477]: pgmap v2960: 305 pgs: 305 active+clean; 108 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Oct 02 09:16:48 compute-0 ovn_controller[152344]: 2025-10-02T09:16:48Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:75:e6 10.100.0.11
Oct 02 09:16:48 compute-0 ovn_controller[152344]: 2025-10-02T09:16:48Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:75:e6 10.100.0.11
Oct 02 09:16:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 109 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct 02 09:16:51 compute-0 ceph-mon[74477]: pgmap v2961: 305 pgs: 305 active+clean; 109 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct 02 09:16:52 compute-0 nova_compute[260603]: 2025-10-02 09:16:52.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 109 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct 02 09:16:53 compute-0 nova_compute[260603]: 2025-10-02 09:16:53.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:53 compute-0 ceph-mon[74477]: pgmap v2962: 305 pgs: 305 active+clean; 109 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct 02 09:16:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:16:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:16:55 compute-0 ceph-mon[74477]: pgmap v2963: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:16:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 959 KiB/s wr, 44 op/s
Oct 02 09:16:57 compute-0 nova_compute[260603]: 2025-10-02 09:16:57.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:57 compute-0 ceph-mon[74477]: pgmap v2964: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 959 KiB/s wr, 44 op/s
Oct 02 09:16:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:16:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:16:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:16:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:16:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:16:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:16:58 compute-0 podman[428885]: 2025-10-02 09:16:58.050164863 +0000 UTC m=+0.090933197 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:16:58 compute-0 podman[428884]: 2025-10-02 09:16:58.112222111 +0000 UTC m=+0.153253833 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 09:16:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 971 KiB/s wr, 44 op/s
Oct 02 09:16:58 compute-0 nova_compute[260603]: 2025-10-02 09:16:58.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:16:59 compute-0 ceph-mon[74477]: pgmap v2965: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 971 KiB/s wr, 44 op/s
Oct 02 09:16:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 182 KiB/s wr, 36 op/s
Oct 02 09:17:01 compute-0 ceph-mon[74477]: pgmap v2966: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 182 KiB/s wr, 36 op/s
Oct 02 09:17:02 compute-0 nova_compute[260603]: 2025-10-02 09:17:02.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 108 KiB/s wr, 29 op/s
Oct 02 09:17:02 compute-0 nova_compute[260603]: 2025-10-02 09:17:02.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:03 compute-0 podman[428927]: 2025-10-02 09:17:03.026770805 +0000 UTC m=+0.084762565 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:17:03 compute-0 podman[428928]: 2025-10-02 09:17:03.037640262 +0000 UTC m=+0.076607301 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 09:17:03 compute-0 nova_compute[260603]: 2025-10-02 09:17:03.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:03 compute-0 ceph-mon[74477]: pgmap v2967: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 108 KiB/s wr, 29 op/s
Oct 02 09:17:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 108 KiB/s wr, 29 op/s
Oct 02 09:17:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:05 compute-0 ceph-mon[74477]: pgmap v2968: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 108 KiB/s wr, 29 op/s
Oct 02 09:17:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 09:17:07 compute-0 ovn_controller[152344]: 2025-10-02T09:17:07Z|01655|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 02 09:17:07 compute-0 nova_compute[260603]: 2025-10-02 09:17:07.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:07 compute-0 nova_compute[260603]: 2025-10-02 09:17:07.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:07 compute-0 nova_compute[260603]: 2025-10-02 09:17:07.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:17:07 compute-0 nova_compute[260603]: 2025-10-02 09:17:07.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:17:07 compute-0 ceph-mon[74477]: pgmap v2969: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 09:17:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 09:17:08 compute-0 nova_compute[260603]: 2025-10-02 09:17:08.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:09 compute-0 nova_compute[260603]: 2025-10-02 09:17:09.060 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:17:09 compute-0 nova_compute[260603]: 2025-10-02 09:17:09.061 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:17:09 compute-0 nova_compute[260603]: 2025-10-02 09:17:09.061 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:17:09 compute-0 nova_compute[260603]: 2025-10-02 09:17:09.062 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bfb5f44c-0aeb-439f-9d64-934d5cb85c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:17:09 compute-0 ceph-mon[74477]: pgmap v2970: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 02 09:17:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 09:17:11 compute-0 ceph-mon[74477]: pgmap v2971: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 09:17:12 compute-0 nova_compute[260603]: 2025-10-02 09:17:12.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:17:12 compute-0 ceph-mon[74477]: pgmap v2972: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.175 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.202 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.203 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.204 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.204 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.204 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.255 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.256 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.256 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.256 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.257 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:17:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073728237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.711 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2073728237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.870 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:17:13 compute-0 nova_compute[260603]: 2025-10-02 09:17:13.871 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.069 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.071 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3388MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.071 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.071 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.164 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance bfb5f44c-0aeb-439f-9d64-934d5cb85c02 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.164 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.164 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.208 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.641 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.642 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:17:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165600605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.681 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.689 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.716 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.742 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:17:14 compute-0 ceph-mon[74477]: pgmap v2973: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:17:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1165600605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.780 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.781 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.863 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.864 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.877 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:17:14 compute-0 nova_compute[260603]: 2025-10-02 09:17:14.878 2 INFO nova.compute.claims [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.025 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:17:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382146672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:17:15 compute-0 sshd-session[429032]: Invalid user pos from 167.71.248.239 port 42294
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.515 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:15 compute-0 sshd-session[429032]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 09:17:15 compute-0 sshd-session[429032]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.71.248.239
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.528 2 DEBUG nova.compute.provider_tree [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.549 2 DEBUG nova.scheduler.client.report [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.628 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.630 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.776 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.777 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.830 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:17:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3382146672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.897 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:17:15 compute-0 nova_compute[260603]: 2025-10-02 09:17:15.988 2 DEBUG nova.policy [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.052 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.055 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.055 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Creating image(s)
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.094 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.127 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.157 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.162 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.209 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.209 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.239 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.239 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.240 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.240 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.269 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:17:16 compute-0 nova_compute[260603]: 2025-10-02 09:17:16.273 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:16 compute-0 ceph-mon[74477]: pgmap v2974: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.031 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.127 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.404 2 DEBUG nova.objects.instance [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.423 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.424 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Ensure instance console log exists: /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.425 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.426 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:17 compute-0 nova_compute[260603]: 2025-10-02 09:17:17.426 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:17 compute-0 sshd-session[429032]: Failed password for invalid user pos from 167.71.248.239 port 42294 ssh2
Oct 02 09:17:17 compute-0 sshd-session[429032]: Connection closed by invalid user pos 167.71.248.239 port 42294 [preauth]
Oct 02 09:17:18 compute-0 nova_compute[260603]: 2025-10-02 09:17:18.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:18.092 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:17:18 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:18.095 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:17:18 compute-0 nova_compute[260603]: 2025-10-02 09:17:18.204 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Successfully created port: 9d78430c-570c-4b66-97e9-27790d7f2c0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:17:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 149 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 929 KiB/s wr, 13 op/s
Oct 02 09:17:18 compute-0 nova_compute[260603]: 2025-10-02 09:17:18.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:19 compute-0 ceph-mon[74477]: pgmap v2975: 305 pgs: 305 active+clean; 149 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 929 KiB/s wr, 13 op/s
Oct 02 09:17:19 compute-0 nova_compute[260603]: 2025-10-02 09:17:19.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:20 compute-0 nova_compute[260603]: 2025-10-02 09:17:20.243 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Successfully updated port: 9d78430c-570c-4b66-97e9-27790d7f2c0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:17:20 compute-0 nova_compute[260603]: 2025-10-02 09:17:20.265 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:17:20 compute-0 nova_compute[260603]: 2025-10-02 09:17:20.265 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:17:20 compute-0 nova_compute[260603]: 2025-10-02 09:17:20.266 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:17:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:17:20 compute-0 nova_compute[260603]: 2025-10-02 09:17:20.381 2 DEBUG nova.compute.manager [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:17:20 compute-0 nova_compute[260603]: 2025-10-02 09:17:20.382 2 DEBUG nova.compute.manager [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing instance network info cache due to event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:17:20 compute-0 nova_compute[260603]: 2025-10-02 09:17:20.382 2 DEBUG oslo_concurrency.lockutils [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:17:21 compute-0 nova_compute[260603]: 2025-10-02 09:17:21.063 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:17:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:21.097 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:21 compute-0 ceph-mon[74477]: pgmap v2976: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:17:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:17:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3548428173' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:17:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:17:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3548428173' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:17:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:17:22 compute-0 nova_compute[260603]: 2025-10-02 09:17:22.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3548428173' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:17:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3548428173' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:17:22 compute-0 nova_compute[260603]: 2025-10-02 09:17:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.036 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.084 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.085 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance network_info: |[{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.086 2 DEBUG oslo_concurrency.lockutils [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.087 2 DEBUG nova.network.neutron [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.093 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start _get_guest_xml network_info=[{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.099 2 WARNING nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.104 2 DEBUG nova.virt.libvirt.host [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.105 2 DEBUG nova.virt.libvirt.host [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.108 2 DEBUG nova.virt.libvirt.host [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.109 2 DEBUG nova.virt.libvirt.host [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.110 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.110 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.111 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.112 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.113 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.113 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.114 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.115 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.115 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.116 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.116 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.117 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.124 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:17:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139598554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.622 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.642 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:17:23 compute-0 nova_compute[260603]: 2025-10-02 09:17:23.646 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:23 compute-0 ceph-mon[74477]: pgmap v2977: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:17:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:17:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2848137534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.104 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.106 2 DEBUG nova.virt.libvirt.vif [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-650898802',display_name='tempest-TestGettingAddress-server-650898802',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-650898802',id=148,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-96j0e3zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:17:15Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.106 2 DEBUG nova.network.os_vif_util [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.108 2 DEBUG nova.network.os_vif_util [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.109 2 DEBUG nova.objects.instance [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.131 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <uuid>7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555</uuid>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <name>instance-00000094</name>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-650898802</nova:name>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:17:23</nova:creationTime>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <nova:port uuid="9d78430c-570c-4b66-97e9-27790d7f2c0b">
Oct 02 09:17:24 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fef7:b632" ipVersion="6"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef7:b632" ipVersion="6"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <system>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <entry name="serial">7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555</entry>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <entry name="uuid">7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555</entry>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </system>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <os>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   </os>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <features>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   </features>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk">
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       </source>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config">
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       </source>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:17:24 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:f7:b6:32"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <target dev="tap9d78430c-57"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/console.log" append="off"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <video>
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </video>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:17:24 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:17:24 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:17:24 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:17:24 compute-0 nova_compute[260603]: </domain>
Oct 02 09:17:24 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
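The `_get_guest_xml` dump above is a complete libvirt domain definition. A minimal sketch of pulling the pieces referenced later in this log (the RBD volume name and the VIF MAC) out of such XML with the standard library; the embedded document is an excerpt of the logged XML, trimmed to just the elements this sketch reads:

```python
import xml.etree.ElementTree as ET

# Excerpt of the guest XML logged above (assumption: trimmed to the rbd
# disk and the tap interface, the only elements this sketch inspects).
DOMAIN_XML = """
<domain type="kvm">
  <uuid>7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555</uuid>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:f7:b6:32"/>
      <target dev="tap9d78430c-57"/>
    </interface>
  </devices>
</domain>
"""

root = ET.fromstring(DOMAIN_XML)
# RBD-backed disks: <source protocol="rbd" name="pool/image">
rbd_volumes = [
    src.get("name")
    for src in root.findall("./devices/disk/source")
    if src.get("protocol") == "rbd"
]
# Guest NIC MAC addresses
macs = [m.get("address") for m in root.findall("./devices/interface/mac")]
print(rbd_volumes, macs)
```

The extracted MAC is the same `fa:16:3e:f7:b6:32` that the VIF-plugging transaction below attaches to `tap9d78430c-57`.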
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.132 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Preparing to wait for external event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.132 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.133 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.133 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
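The three lockutils lines above show the compute manager registering a waiter for `network-vif-plugged` *before* it plugs the VIF, under a per-instance `-events` lock, so Neutron's callback cannot arrive before anyone is listening. A toy model of that prepare-then-fire pattern (not nova's actual code; class and method names here are illustrative):

```python
import threading

class InstanceEvents:
    """Toy sketch of the prepare-first event pattern seen in the log."""

    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}

    def prepare(self, name: str) -> threading.Event:
        # Register the waiter before the action that triggers the event,
        # mirroring "Preparing to wait for external event ..." above.
        with self._lock:  # the "<uuid>-events" lock in the log
            return self._events.setdefault(name, threading.Event())

    def pop_and_fire(self, name: str) -> None:
        # Called when the external notification (e.g. Neutron's
        # network-vif-plugged) arrives; no-op if nobody registered.
        with self._lock:
            ev = self._events.pop(name, None)
        if ev is not None:
            ev.set()

events = InstanceEvents()
waiter = events.prepare("network-vif-plugged-9d78430c")
events.pop_and_fire("network-vif-plugged-9d78430c")
print(waiter.wait(timeout=1.0))  # True: event fired, no race
```

Registering first is the whole point: if the plug completed and the callback fired before `prepare`, the notification would be lost and the spawn would hang until the vif-plug timeout.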
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.133 2 DEBUG nova.virt.libvirt.vif [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-650898802',display_name='tempest-TestGettingAddress-server-650898802',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-650898802',id=148,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-96j0e3zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:17:15Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.134 2 DEBUG nova.network.os_vif_util [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.134 2 DEBUG nova.network.os_vif_util [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.135 2 DEBUG os_vif [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d78430c-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d78430c-57, col_values=(('external_ids', {'iface-id': '9d78430c-570c-4b66-97e9-27790d7f2c0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:b6:32', 'vm-uuid': '7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
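The two ovsdbapp commands above (AddPortCommand then DbSetCommand on the Interface row) are the OVSDB-IDL form of a single `ovs-vsctl` invocation. A small sketch that renders that equivalent CLI from the logged `external_ids` (illustrative only; the helper name is made up, and the `external_ids` dict is copied from the DbSetCommand line):

```python
import shlex

# external_ids exactly as DbSetCommand logs them above
external_ids = {
    "iface-id": "9d78430c-570c-4b66-97e9-27790d7f2c0b",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:f7:b6:32",
    "vm-uuid": "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555",
}

def ovs_vsctl_equivalent(bridge: str, port: str, ids: dict) -> str:
    """Render the single ovs-vsctl call that mirrors the two-command
    OVSDB transaction (add-port + set Interface) shown in the log."""
    argv = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
            "--", "set", "Interface", port]
    argv += [f"external_ids:{k}={v}" for k, v in ids.items()]
    return shlex.join(argv)  # shell-quoted, copy-pasteable form

print(ovs_vsctl_equivalent("br-int", "tap9d78430c-57", external_ids))
```

`--may-exist` corresponds to the `may_exist=True` in AddPortCommand; the `iface-id` external_id is what lets ovn-controller bind the port to the Neutron port UUID.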
Oct 02 09:17:24 compute-0 NetworkManager[45129]: <info>  [1759396644.1413] manager: (tap9d78430c-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/674)
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.148 2 INFO os_vif [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57')
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.231 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.232 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.232 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:f7:b6:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.233 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Using config drive
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.261 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:17:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.559 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Creating config drive at /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.571 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf63wnhg6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1139598554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:17:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2848137534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:17:24 compute-0 ceph-mon[74477]: pgmap v2978: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.738 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf63wnhg6" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.780 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:17:24 compute-0 nova_compute[260603]: 2025-10-02 09:17:24.785 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:17:25 compute-0 nova_compute[260603]: 2025-10-02 09:17:25.106 2 DEBUG nova.network.neutron [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updated VIF entry in instance network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:17:25 compute-0 nova_compute[260603]: 2025-10-02 09:17:25.107 2 DEBUG nova.network.neutron [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:17:25 compute-0 nova_compute[260603]: 2025-10-02 09:17:25.129 2 DEBUG oslo_concurrency.lockutils [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:17:25 compute-0 nova_compute[260603]: 2025-10-02 09:17:25.896 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:17:25 compute-0 nova_compute[260603]: 2025-10-02 09:17:25.897 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Deleting local config drive /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config because it was imported into RBD.
Oct 02 09:17:25 compute-0 kernel: tap9d78430c-57: entered promiscuous mode
Oct 02 09:17:25 compute-0 NetworkManager[45129]: <info>  [1759396645.9743] manager: (tap9d78430c-57): new Tun device (/org/freedesktop/NetworkManager/Devices/675)
Oct 02 09:17:25 compute-0 ovn_controller[152344]: 2025-10-02T09:17:25Z|01656|binding|INFO|Claiming lport 9d78430c-570c-4b66-97e9-27790d7f2c0b for this chassis.
Oct 02 09:17:25 compute-0 ovn_controller[152344]: 2025-10-02T09:17:25Z|01657|binding|INFO|9d78430c-570c-4b66-97e9-27790d7f2c0b: Claiming fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632
Oct 02 09:17:25 compute-0 nova_compute[260603]: 2025-10-02 09:17:25.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:25.983 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632'], port_security=['fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fef7:b632/64 2001:db8::f816:3eff:fef7:b632/64', 'neutron:device_id': '7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6142c2da-c0c8-4842-a55d-76581298b5e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9d78430c-570c-4b66-97e9-27790d7f2c0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:17:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:25.985 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78430c-570c-4b66-97e9-27790d7f2c0b in datapath 32cae488-7671-4c05-a475-2c63f103261b bound to our chassis
Oct 02 09:17:25 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:25.987 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32cae488-7671-4c05-a475-2c63f103261b
Oct 02 09:17:25 compute-0 ovn_controller[152344]: 2025-10-02T09:17:25Z|01658|binding|INFO|Setting lport 9d78430c-570c-4b66-97e9-27790d7f2c0b ovn-installed in OVS
Oct 02 09:17:25 compute-0 ovn_controller[152344]: 2025-10-02T09:17:25Z|01659|binding|INFO|Setting lport 9d78430c-570c-4b66-97e9-27790d7f2c0b up in Southbound
Oct 02 09:17:25 compute-0 nova_compute[260603]: 2025-10-02 09:17:25.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:26 compute-0 nova_compute[260603]: 2025-10-02 09:17:25.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.016 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe666db-fbfd-44eb-99b8-b574db9c58d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:26 compute-0 systemd-machined[214636]: New machine qemu-182-instance-00000094.
Oct 02 09:17:26 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000094.
Oct 02 09:17:26 compute-0 systemd-udevd[429340]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.064 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d96cfc2e-f215-4870-8a16-4f4b06470ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.068 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a64c58a6-6ed1-4b48-8e29-4c79f36709e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:26 compute-0 NetworkManager[45129]: <info>  [1759396646.0804] device (tap9d78430c-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:17:26 compute-0 NetworkManager[45129]: <info>  [1759396646.0820] device (tap9d78430c-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.103 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[307ab445-b9fd-45e8-8b7f-d3f0b8383ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.132 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50316b04-b308-4f39-9108-7efa80fca1a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32cae488-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 6, 'rx_bytes': 2230, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 6, 'rx_bytes': 2230, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746691, 'reachable_time': 42035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 429349, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.159 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4e84fa-7f9f-42f5-afc2-d24cc6b189da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32cae488-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746702, 'tstamp': 746702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 429351, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32cae488-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746704, 'tstamp': 746704}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 429351, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.163 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32cae488-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:26 compute-0 nova_compute[260603]: 2025-10-02 09:17:26.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.169 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32cae488-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.169 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.170 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32cae488-70, col_values=(('external_ids', {'iface-id': '25655298-5d81-4448-950c-289fb68606f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:26 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.171 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:17:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:17:26 compute-0 nova_compute[260603]: 2025-10-02 09:17:26.529 2 DEBUG nova.compute.manager [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:17:26 compute-0 nova_compute[260603]: 2025-10-02 09:17:26.530 2 DEBUG oslo_concurrency.lockutils [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:26 compute-0 nova_compute[260603]: 2025-10-02 09:17:26.530 2 DEBUG oslo_concurrency.lockutils [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:26 compute-0 nova_compute[260603]: 2025-10-02 09:17:26.531 2 DEBUG oslo_concurrency.lockutils [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:26 compute-0 nova_compute[260603]: 2025-10-02 09:17:26.531 2 DEBUG nova.compute.manager [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Processing event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.348 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396647.3481598, 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.349 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] VM Started (Lifecycle Event)
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.350 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.354 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.357 2 INFO nova.virt.libvirt.driver [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance spawned successfully.
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.357 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.375 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.378 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.385 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.385 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.386 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.386 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.386 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.387 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.408 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.408 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396647.3482668, 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.409 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] VM Paused (Lifecycle Event)
Oct 02 09:17:27 compute-0 ceph-mon[74477]: pgmap v2979: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.433 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.436 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396647.3538828, 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.436 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] VM Resumed (Lifecycle Event)
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.443 2 INFO nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Took 11.39 seconds to spawn the instance on the hypervisor.
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.443 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.453 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.455 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.507 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.517 2 INFO nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Took 12.72 seconds to build instance.
Oct 02 09:17:27 compute-0 nova_compute[260603]: 2025-10-02 09:17:27.552 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:17:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:17:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:17:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:17:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:17:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:17:28
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control']
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:17:28 compute-0 nova_compute[260603]: 2025-10-02 09:17:28.648 2 DEBUG nova.compute.manager [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:17:28 compute-0 nova_compute[260603]: 2025-10-02 09:17:28.648 2 DEBUG oslo_concurrency.lockutils [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:28 compute-0 nova_compute[260603]: 2025-10-02 09:17:28.648 2 DEBUG oslo_concurrency.lockutils [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:28 compute-0 nova_compute[260603]: 2025-10-02 09:17:28.649 2 DEBUG oslo_concurrency.lockutils [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:28 compute-0 nova_compute[260603]: 2025-10-02 09:17:28.649 2 DEBUG nova.compute.manager [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] No waiting events found dispatching network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:17:28 compute-0 nova_compute[260603]: 2025-10-02 09:17:28.649 2 WARNING nova.compute.manager [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received unexpected event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b for instance with vm_state active and task_state None.
Oct 02 09:17:28 compute-0 ceph-mgr[74774]: client.0 ms_handle_reset on v2:192.168.122.100:6800/860957497
Oct 02 09:17:29 compute-0 podman[429396]: 2025-10-02 09:17:29.037925117 +0000 UTC m=+0.093607855 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 09:17:29 compute-0 podman[429395]: 2025-10-02 09:17:29.057694375 +0000 UTC m=+0.123448457 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:17:29 compute-0 nova_compute[260603]: 2025-10-02 09:17:29.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:29 compute-0 ceph-mon[74477]: pgmap v2980: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 09:17:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 948 KiB/s rd, 903 KiB/s wr, 54 op/s
Oct 02 09:17:31 compute-0 ceph-mon[74477]: pgmap v2981: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 948 KiB/s rd, 903 KiB/s wr, 54 op/s
Oct 02 09:17:31 compute-0 nova_compute[260603]: 2025-10-02 09:17:31.632 2 DEBUG nova.compute.manager [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:17:31 compute-0 nova_compute[260603]: 2025-10-02 09:17:31.633 2 DEBUG nova.compute.manager [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing instance network info cache due to event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:17:31 compute-0 nova_compute[260603]: 2025-10-02 09:17:31.633 2 DEBUG oslo_concurrency.lockutils [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:17:31 compute-0 nova_compute[260603]: 2025-10-02 09:17:31.633 2 DEBUG oslo_concurrency.lockutils [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:17:31 compute-0 nova_compute[260603]: 2025-10-02 09:17:31.633 2 DEBUG nova.network.neutron [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:17:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 938 KiB/s rd, 15 KiB/s wr, 40 op/s
Oct 02 09:17:32 compute-0 nova_compute[260603]: 2025-10-02 09:17:32.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:33 compute-0 ceph-mon[74477]: pgmap v2982: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 938 KiB/s rd, 15 KiB/s wr, 40 op/s
Oct 02 09:17:33 compute-0 nova_compute[260603]: 2025-10-02 09:17:33.856 2 DEBUG nova.network.neutron [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updated VIF entry in instance network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:17:33 compute-0 nova_compute[260603]: 2025-10-02 09:17:33.857 2 DEBUG nova.network.neutron [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:17:33 compute-0 nova_compute[260603]: 2025-10-02 09:17:33.881 2 DEBUG oslo_concurrency.lockutils [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:17:34 compute-0 podman[429442]: 2025-10-02 09:17:34.045769526 +0000 UTC m=+0.098667623 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 02 09:17:34 compute-0 podman[429441]: 2025-10-02 09:17:34.056655025 +0000 UTC m=+0.114737254 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 09:17:34 compute-0 nova_compute[260603]: 2025-10-02 09:17:34.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:17:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:35 compute-0 ceph-mon[74477]: pgmap v2983: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 02 09:17:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 09:17:37 compute-0 nova_compute[260603]: 2025-10-02 09:17:37.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:37 compute-0 ceph-mon[74477]: pgmap v2984: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 09:17:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 02 09:17:38 compute-0 ceph-mon[74477]: pgmap v2985: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 02 09:17:38 compute-0 ovn_controller[152344]: 2025-10-02T09:17:38Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:b6:32 10.100.0.3
Oct 02 09:17:38 compute-0 ovn_controller[152344]: 2025-10-02T09:17:38Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:b6:32 10.100.0.3
Oct 02 09:17:39 compute-0 nova_compute[260603]: 2025-10-02 09:17:39.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011077501302337695 of space, bias 1.0, pg target 0.33232503907013083 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:17:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:17:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 173 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 859 KiB/s wr, 89 op/s
Oct 02 09:17:41 compute-0 nova_compute[260603]: 2025-10-02 09:17:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:17:41 compute-0 nova_compute[260603]: 2025-10-02 09:17:41.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:17:41 compute-0 ceph-mon[74477]: pgmap v2986: 305 pgs: 305 active+clean; 173 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 859 KiB/s wr, 89 op/s
Oct 02 09:17:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 173 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 846 KiB/s wr, 52 op/s
Oct 02 09:17:42 compute-0 nova_compute[260603]: 2025-10-02 09:17:42.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:43 compute-0 ceph-mon[74477]: pgmap v2987: 305 pgs: 305 active+clean; 173 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 846 KiB/s wr, 52 op/s
Oct 02 09:17:44 compute-0 nova_compute[260603]: 2025-10-02 09:17:44.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:44 compute-0 sudo[429482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:44 compute-0 sudo[429482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:44 compute-0 sudo[429482]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 02 09:17:44 compute-0 sudo[429507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:17:44 compute-0 sudo[429507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:44 compute-0 sudo[429507]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:44 compute-0 sudo[429532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:44 compute-0 sudo[429532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:44 compute-0 sudo[429532]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:44 compute-0 sudo[429557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:17:44 compute-0 sudo[429557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:44 compute-0 sudo[429557]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:17:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:17:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:17:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:17:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:17:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:17:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0720e283-5492-4e18-a2f6-55bc26589134 does not exist
Oct 02 09:17:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f59b07fa-d8f6-4523-926f-9879c2f669b1 does not exist
Oct 02 09:17:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev bb66cf11-13d9-461b-9734-c7a85c919512 does not exist
Oct 02 09:17:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:17:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:17:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:17:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:17:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:17:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:17:45 compute-0 sudo[429613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:45 compute-0 sudo[429613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:45 compute-0 sudo[429613]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:45 compute-0 sudo[429638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:17:45 compute-0 sudo[429638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:45 compute-0 sudo[429638]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:45 compute-0 sudo[429663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:45 compute-0 sudo[429663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:45 compute-0 sudo[429663]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:45 compute-0 sudo[429688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:17:45 compute-0 sudo[429688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:45 compute-0 ceph-mon[74477]: pgmap v2988: 305 pgs: 305 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 02 09:17:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:17:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:17:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:17:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:17:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:17:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:17:45 compute-0 podman[429752]: 2025-10-02 09:17:45.664270168 +0000 UTC m=+0.023692351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:17:45 compute-0 podman[429752]: 2025-10-02 09:17:45.843045562 +0000 UTC m=+0.202467735 container create e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 09:17:46 compute-0 systemd[1]: Started libpod-conmon-e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd.scope.
Oct 02 09:17:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:17:46 compute-0 podman[429752]: 2025-10-02 09:17:46.223369711 +0000 UTC m=+0.582791944 container init e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:17:46 compute-0 podman[429752]: 2025-10-02 09:17:46.234413377 +0000 UTC m=+0.593835530 container start e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:17:46 compute-0 beautiful_hypatia[429769]: 167 167
Oct 02 09:17:46 compute-0 systemd[1]: libpod-e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd.scope: Deactivated successfully.
Oct 02 09:17:46 compute-0 conmon[429769]: conmon e6be962c7e4324563a70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd.scope/container/memory.events
Oct 02 09:17:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct 02 09:17:46 compute-0 podman[429752]: 2025-10-02 09:17:46.410189006 +0000 UTC m=+0.769611249 container attach e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 09:17:46 compute-0 podman[429752]: 2025-10-02 09:17:46.410685843 +0000 UTC m=+0.770108056 container died e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:17:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-60a45bebccab754c7c057c0272231327ff0102bf71ccb84183a950caa4356e19-merged.mount: Deactivated successfully.
Oct 02 09:17:46 compute-0 ceph-mon[74477]: pgmap v2989: 305 pgs: 305 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct 02 09:17:47 compute-0 nova_compute[260603]: 2025-10-02 09:17:47.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:47 compute-0 podman[429752]: 2025-10-02 09:17:47.600691492 +0000 UTC m=+1.960113685 container remove e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:17:47 compute-0 systemd[1]: libpod-conmon-e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd.scope: Deactivated successfully.
Oct 02 09:17:47 compute-0 podman[429794]: 2025-10-02 09:17:47.819311071 +0000 UTC m=+0.028654096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:17:47 compute-0 podman[429794]: 2025-10-02 09:17:47.979147223 +0000 UTC m=+0.188490228 container create cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:17:48 compute-0 systemd[1]: Started libpod-conmon-cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261.scope.
Oct 02 09:17:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:48 compute-0 podman[429794]: 2025-10-02 09:17:48.219739198 +0000 UTC m=+0.429082223 container init cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:17:48 compute-0 podman[429794]: 2025-10-02 09:17:48.232677162 +0000 UTC m=+0.442020167 container start cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:17:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 09:17:48 compute-0 podman[429794]: 2025-10-02 09:17:48.349107699 +0000 UTC m=+0.558450814 container attach cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:17:49 compute-0 nova_compute[260603]: 2025-10-02 09:17:49.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:49 compute-0 relaxed_chatelet[429811]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:17:49 compute-0 relaxed_chatelet[429811]: --> relative data size: 1.0
Oct 02 09:17:49 compute-0 relaxed_chatelet[429811]: --> All data devices are unavailable
Oct 02 09:17:49 compute-0 systemd[1]: libpod-cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261.scope: Deactivated successfully.
Oct 02 09:17:49 compute-0 systemd[1]: libpod-cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261.scope: Consumed 1.049s CPU time.
Oct 02 09:17:49 compute-0 podman[429794]: 2025-10-02 09:17:49.349712142 +0000 UTC m=+1.559055187 container died cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:17:49 compute-0 ceph-mon[74477]: pgmap v2990: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 09:17:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f-merged.mount: Deactivated successfully.
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.173 2 DEBUG nova.compute.manager [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.174 2 DEBUG nova.compute.manager [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing instance network info cache due to event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.174 2 DEBUG oslo_concurrency.lockutils [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.175 2 DEBUG oslo_concurrency.lockutils [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.175 2 DEBUG nova.network.neutron [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:17:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.397 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.398 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.398 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.399 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.399 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.401 2 INFO nova.compute.manager [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Terminating instance
Oct 02 09:17:50 compute-0 nova_compute[260603]: 2025-10-02 09:17:50.403 2 DEBUG nova.compute.manager [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:17:50 compute-0 podman[429794]: 2025-10-02 09:17:50.452346302 +0000 UTC m=+2.661689317 container remove cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:17:50 compute-0 sudo[429688]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:50 compute-0 systemd[1]: libpod-conmon-cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261.scope: Deactivated successfully.
Oct 02 09:17:50 compute-0 sudo[429852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:50 compute-0 sudo[429852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:50 compute-0 sudo[429852]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:50 compute-0 sudo[429877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:17:50 compute-0 sudo[429877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:50 compute-0 sudo[429877]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:50 compute-0 sudo[429902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:50 compute-0 sudo[429902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:50 compute-0 sudo[429902]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:50 compute-0 sudo[429927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:17:50 compute-0 sudo[429927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:50 compute-0 ceph-mon[74477]: pgmap v2991: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 09:17:51 compute-0 kernel: tap9d78430c-57 (unregistering): left promiscuous mode
Oct 02 09:17:51 compute-0 podman[429992]: 2025-10-02 09:17:51.151996885 +0000 UTC m=+0.032231258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:17:51 compute-0 NetworkManager[45129]: <info>  [1759396671.2515] device (tap9d78430c-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:17:51 compute-0 ovn_controller[152344]: 2025-10-02T09:17:51Z|01660|binding|INFO|Releasing lport 9d78430c-570c-4b66-97e9-27790d7f2c0b from this chassis (sb_readonly=0)
Oct 02 09:17:51 compute-0 ovn_controller[152344]: 2025-10-02T09:17:51Z|01661|binding|INFO|Setting lport 9d78430c-570c-4b66-97e9-27790d7f2c0b down in Southbound
Oct 02 09:17:51 compute-0 ovn_controller[152344]: 2025-10-02T09:17:51Z|01662|binding|INFO|Removing iface tap9d78430c-57 ovn-installed in OVS
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.294 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632'], port_security=['fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fef7:b632/64 2001:db8::f816:3eff:fef7:b632/64', 'neutron:device_id': '7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6142c2da-c0c8-4842-a55d-76581298b5e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9d78430c-570c-4b66-97e9-27790d7f2c0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.297 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78430c-570c-4b66-97e9-27790d7f2c0b in datapath 32cae488-7671-4c05-a475-2c63f103261b unbound from our chassis
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.298 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32cae488-7671-4c05-a475-2c63f103261b
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.317 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[973ec3d1-0627-43ec-83dc-ec55799d82a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:51 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 02 09:17:51 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Consumed 13.379s CPU time.
Oct 02 09:17:51 compute-0 systemd-machined[214636]: Machine qemu-182-instance-00000094 terminated.
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.351 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a05c88-ffa4-45b1-bc12-d90449e4c276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.355 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a68fb3c-1fad-40fc-b78f-23f6df47c0e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.384 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d860f1b-2f72-4e16-a46c-635c06c0454c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.405 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7dcf948b-4d24-4614-8571-e82a2dea97fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32cae488-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3628, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3628, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746691, 'reachable_time': 42035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430015, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.423 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed20292-b3b7-486c-9944-04302d4a85a6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32cae488-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746702, 'tstamp': 746702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430016, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32cae488-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746704, 'tstamp': 746704}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430016, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:17:51 compute-0 podman[429992]: 2025-10-02 09:17:51.425652053 +0000 UTC m=+0.305886446 container create 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32cae488-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.436 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32cae488-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.437 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.437 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32cae488-70, col_values=(('external_ids', {'iface-id': '25655298-5d81-4448-950c-289fb68606f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:51 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.437 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.445 2 INFO nova.virt.libvirt.driver [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance destroyed successfully.
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.446 2 DEBUG nova.objects.instance [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.469 2 DEBUG nova.virt.libvirt.vif [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-650898802',display_name='tempest-TestGettingAddress-server-650898802',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-650898802',id=148,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:17:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-96j0e3zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:17:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.469 2 DEBUG nova.network.os_vif_util [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.470 2 DEBUG nova.network.os_vif_util [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.471 2 DEBUG os_vif [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d78430c-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:51 compute-0 nova_compute[260603]: 2025-10-02 09:17:51.480 2 INFO os_vif [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57')
Oct 02 09:17:51 compute-0 systemd[1]: Started libpod-conmon-327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947.scope.
Oct 02 09:17:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:17:51 compute-0 podman[429992]: 2025-10-02 09:17:51.916403371 +0000 UTC m=+0.796637734 container init 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:17:51 compute-0 podman[429992]: 2025-10-02 09:17:51.925187086 +0000 UTC m=+0.805421479 container start 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:17:51 compute-0 festive_carver[430050]: 167 167
Oct 02 09:17:51 compute-0 systemd[1]: libpod-327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947.scope: Deactivated successfully.
Oct 02 09:17:51 compute-0 conmon[430050]: conmon 327746a8703da799a265 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947.scope/container/memory.events
Oct 02 09:17:51 compute-0 podman[429992]: 2025-10-02 09:17:51.996804522 +0000 UTC m=+0.877038885 container attach 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:17:51 compute-0 podman[429992]: 2025-10-02 09:17:51.997616278 +0000 UTC m=+0.877850621 container died 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2003b70988640370bb0ec7f0d43f991805293303e6c672a52f2f3ef30d32bd8d-merged.mount: Deactivated successfully.
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.279 2 DEBUG nova.compute.manager [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-unplugged-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:17:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 1.3 MiB/s wr, 45 op/s
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.280 2 DEBUG oslo_concurrency.lockutils [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.281 2 DEBUG oslo_concurrency.lockutils [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.281 2 DEBUG oslo_concurrency.lockutils [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.282 2 DEBUG nova.compute.manager [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] No waiting events found dispatching network-vif-unplugged-9d78430c-570c-4b66-97e9-27790d7f2c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.282 2 DEBUG nova.compute.manager [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-unplugged-9d78430c-570c-4b66-97e9-27790d7f2c0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:52 compute-0 podman[429992]: 2025-10-02 09:17:52.471264882 +0000 UTC m=+1.351499275 container remove 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:17:52 compute-0 systemd[1]: libpod-conmon-327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947.scope: Deactivated successfully.
Oct 02 09:17:52 compute-0 podman[430074]: 2025-10-02 09:17:52.661737281 +0000 UTC m=+0.047141164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:17:52 compute-0 podman[430074]: 2025-10-02 09:17:52.817078744 +0000 UTC m=+0.202482647 container create bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.929 2 DEBUG nova.network.neutron [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updated VIF entry in instance network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:17:52 compute-0 nova_compute[260603]: 2025-10-02 09:17:52.930 2 DEBUG nova.network.neutron [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:17:52 compute-0 systemd[1]: Started libpod-conmon-bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9.scope.
Oct 02 09:17:53 compute-0 nova_compute[260603]: 2025-10-02 09:17:53.010 2 DEBUG oslo_concurrency.lockutils [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:17:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:53 compute-0 podman[430074]: 2025-10-02 09:17:53.120441739 +0000 UTC m=+0.505845622 container init bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 09:17:53 compute-0 podman[430074]: 2025-10-02 09:17:53.132736863 +0000 UTC m=+0.518140746 container start bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:17:53 compute-0 podman[430074]: 2025-10-02 09:17:53.27639663 +0000 UTC m=+0.661800583 container attach bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:17:53 compute-0 ceph-mon[74477]: pgmap v2992: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 1.3 MiB/s wr, 45 op/s
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]: {
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:     "0": [
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:         {
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "devices": [
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "/dev/loop3"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             ],
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_name": "ceph_lv0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_size": "21470642176",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "name": "ceph_lv0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "tags": {
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cluster_name": "ceph",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.crush_device_class": "",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.encrypted": "0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osd_id": "0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.type": "block",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.vdo": "0"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             },
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "type": "block",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "vg_name": "ceph_vg0"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:         }
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:     ],
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:     "1": [
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:         {
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "devices": [
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "/dev/loop4"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             ],
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_name": "ceph_lv1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_size": "21470642176",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "name": "ceph_lv1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "tags": {
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cluster_name": "ceph",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.crush_device_class": "",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.encrypted": "0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osd_id": "1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.type": "block",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.vdo": "0"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             },
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "type": "block",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "vg_name": "ceph_vg1"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:         }
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:     ],
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:     "2": [
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:         {
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "devices": [
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "/dev/loop5"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             ],
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_name": "ceph_lv2",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_size": "21470642176",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "name": "ceph_lv2",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "tags": {
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.cluster_name": "ceph",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.crush_device_class": "",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.encrypted": "0",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osd_id": "2",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.type": "block",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:                 "ceph.vdo": "0"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             },
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "type": "block",
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:             "vg_name": "ceph_vg2"
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:         }
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]:     ]
Oct 02 09:17:53 compute-0 exciting_mccarthy[430091]: }
Oct 02 09:17:54 compute-0 systemd[1]: libpod-bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9.scope: Deactivated successfully.
Oct 02 09:17:54 compute-0 podman[430074]: 2025-10-02 09:17:54.017082325 +0000 UTC m=+1.402486218 container died bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:17:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Oct 02 09:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f-merged.mount: Deactivated successfully.
Oct 02 09:17:54 compute-0 nova_compute[260603]: 2025-10-02 09:17:54.388 2 DEBUG nova.compute.manager [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:17:54 compute-0 nova_compute[260603]: 2025-10-02 09:17:54.389 2 DEBUG oslo_concurrency.lockutils [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:17:54 compute-0 nova_compute[260603]: 2025-10-02 09:17:54.389 2 DEBUG oslo_concurrency.lockutils [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:17:54 compute-0 nova_compute[260603]: 2025-10-02 09:17:54.389 2 DEBUG oslo_concurrency.lockutils [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:17:54 compute-0 nova_compute[260603]: 2025-10-02 09:17:54.389 2 DEBUG nova.compute.manager [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] No waiting events found dispatching network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:17:54 compute-0 nova_compute[260603]: 2025-10-02 09:17:54.389 2 WARNING nova.compute.manager [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received unexpected event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b for instance with vm_state active and task_state deleting.
Oct 02 09:17:54 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 02 09:17:54 compute-0 podman[430074]: 2025-10-02 09:17:54.537539641 +0000 UTC m=+1.922943514 container remove bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:17:54 compute-0 systemd[1]: libpod-conmon-bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9.scope: Deactivated successfully.
Oct 02 09:17:54 compute-0 sudo[429927]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:54 compute-0 sudo[430112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:54 compute-0 sudo[430112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:54 compute-0 sudo[430112]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:17:54 compute-0 ceph-mon[74477]: pgmap v2993: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Oct 02 09:17:54 compute-0 sudo[430137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:17:54 compute-0 sudo[430137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:54 compute-0 sudo[430137]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:54 compute-0 sudo[430163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:54 compute-0 sudo[430163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:54 compute-0 sudo[430163]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:54 compute-0 sudo[430188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:17:54 compute-0 sudo[430188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:55 compute-0 podman[430255]: 2025-10-02 09:17:55.345166826 +0000 UTC m=+0.030383030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:17:55 compute-0 podman[430255]: 2025-10-02 09:17:55.478735598 +0000 UTC m=+0.163951792 container create 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:17:55 compute-0 systemd[1]: Started libpod-conmon-8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a.scope.
Oct 02 09:17:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:17:55 compute-0 podman[430255]: 2025-10-02 09:17:55.733491945 +0000 UTC m=+0.418708189 container init 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:17:55 compute-0 podman[430255]: 2025-10-02 09:17:55.741195686 +0000 UTC m=+0.426411840 container start 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:17:55 compute-0 sleepy_matsumoto[430273]: 167 167
Oct 02 09:17:55 compute-0 systemd[1]: libpod-8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a.scope: Deactivated successfully.
Oct 02 09:17:55 compute-0 podman[430255]: 2025-10-02 09:17:55.851167381 +0000 UTC m=+0.536383625 container attach 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 02 09:17:55 compute-0 podman[430255]: 2025-10-02 09:17:55.85207228 +0000 UTC m=+0.537288474 container died 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c2832fb150155accf00b412ca4469803960ff2d9ac119745e05353876c87dd2-merged.mount: Deactivated successfully.
Oct 02 09:17:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 44 KiB/s wr, 23 op/s
Oct 02 09:17:56 compute-0 podman[430255]: 2025-10-02 09:17:56.31022617 +0000 UTC m=+0.995442364 container remove 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:17:56 compute-0 systemd[1]: libpod-conmon-8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a.scope: Deactivated successfully.
Oct 02 09:17:56 compute-0 nova_compute[260603]: 2025-10-02 09:17:56.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:56 compute-0 podman[430297]: 2025-10-02 09:17:56.52797614 +0000 UTC m=+0.062102020 container create e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 09:17:56 compute-0 podman[430297]: 2025-10-02 09:17:56.498571933 +0000 UTC m=+0.032697873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:17:56 compute-0 systemd[1]: Started libpod-conmon-e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175.scope.
Oct 02 09:17:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:17:56 compute-0 podman[430297]: 2025-10-02 09:17:56.849654928 +0000 UTC m=+0.383780818 container init e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:17:56 compute-0 podman[430297]: 2025-10-02 09:17:56.85898498 +0000 UTC m=+0.393110880 container start e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:17:56 compute-0 podman[430297]: 2025-10-02 09:17:56.96177362 +0000 UTC m=+0.495899480 container attach e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:17:57 compute-0 nova_compute[260603]: 2025-10-02 09:17:57.099 2 INFO nova.virt.libvirt.driver [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Deleting instance files /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_del
Oct 02 09:17:57 compute-0 nova_compute[260603]: 2025-10-02 09:17:57.101 2 INFO nova.virt.libvirt.driver [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Deletion of /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_del complete
Oct 02 09:17:57 compute-0 nova_compute[260603]: 2025-10-02 09:17:57.201 2 INFO nova.compute.manager [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Took 6.80 seconds to destroy the instance on the hypervisor.
Oct 02 09:17:57 compute-0 nova_compute[260603]: 2025-10-02 09:17:57.203 2 DEBUG oslo.service.loopingcall [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:17:57 compute-0 nova_compute[260603]: 2025-10-02 09:17:57.204 2 DEBUG nova.compute.manager [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:17:57 compute-0 nova_compute[260603]: 2025-10-02 09:17:57.205 2 DEBUG nova.network.neutron [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:17:57 compute-0 ceph-mon[74477]: pgmap v2994: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 44 KiB/s wr, 23 op/s
Oct 02 09:17:57 compute-0 nova_compute[260603]: 2025-10-02 09:17:57.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:17:57 compute-0 elated_sammet[430313]: {
Oct 02 09:17:57 compute-0 elated_sammet[430313]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "osd_id": 2,
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "type": "bluestore"
Oct 02 09:17:57 compute-0 elated_sammet[430313]:     },
Oct 02 09:17:57 compute-0 elated_sammet[430313]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "osd_id": 1,
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "type": "bluestore"
Oct 02 09:17:57 compute-0 elated_sammet[430313]:     },
Oct 02 09:17:57 compute-0 elated_sammet[430313]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "osd_id": 0,
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:17:57 compute-0 elated_sammet[430313]:         "type": "bluestore"
Oct 02 09:17:57 compute-0 elated_sammet[430313]:     }
Oct 02 09:17:57 compute-0 elated_sammet[430313]: }
Oct 02 09:17:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:17:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:17:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:17:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:17:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:17:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:17:57 compute-0 systemd[1]: libpod-e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175.scope: Deactivated successfully.
Oct 02 09:17:57 compute-0 podman[430297]: 2025-10-02 09:17:57.998182132 +0000 UTC m=+1.532308002 container died e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 09:17:57 compute-0 systemd[1]: libpod-e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175.scope: Consumed 1.139s CPU time.
Oct 02 09:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689-merged.mount: Deactivated successfully.
Oct 02 09:17:58 compute-0 podman[430297]: 2025-10-02 09:17:58.264365726 +0000 UTC m=+1.798491596 container remove e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:17:58 compute-0 systemd[1]: libpod-conmon-e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175.scope: Deactivated successfully.
Oct 02 09:17:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 125 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 48 KiB/s wr, 32 op/s
Oct 02 09:17:58 compute-0 sudo[430188]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:17:58 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:17:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:17:58 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:17:58 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 967754cc-5330-48d9-9939-84d079f83429 does not exist
Oct 02 09:17:58 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ff6f4044-38ca-4998-a906-4593e70b3049 does not exist
Oct 02 09:17:58 compute-0 sudo[430360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:17:58 compute-0 sudo[430360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:58 compute-0 sudo[430360]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:58 compute-0 sudo[430385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:17:58 compute-0 sudo[430385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:17:58 compute-0 sudo[430385]: pam_unix(sudo:session): session closed for user root
Oct 02 09:17:59 compute-0 ceph-mon[74477]: pgmap v2995: 305 pgs: 305 active+clean; 125 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 48 KiB/s wr, 32 op/s
Oct 02 09:17:59 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:17:59 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:17:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:00 compute-0 podman[430411]: 2025-10-02 09:18:00.05480104 +0000 UTC m=+0.102719719 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:18:00 compute-0 podman[430410]: 2025-10-02 09:18:00.090103313 +0000 UTC m=+0.138115975 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:18:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.299 2 DEBUG nova.network.neutron [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.488 2 DEBUG nova.compute.manager [req-8ccef13e-8fa3-4aa8-8436-e86a661f441a req-89a7020b-de6e-4f3e-a1db-77dffc0570d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-deleted-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.489 2 INFO nova.compute.manager [req-8ccef13e-8fa3-4aa8-8436-e86a661f441a req-89a7020b-de6e-4f3e-a1db-77dffc0570d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Neutron deleted interface 9d78430c-570c-4b66-97e9-27790d7f2c0b; detaching it from the instance and deleting it from the info cache
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.489 2 DEBUG nova.network.neutron [req-8ccef13e-8fa3-4aa8-8436-e86a661f441a req-89a7020b-de6e-4f3e-a1db-77dffc0570d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.654 2 INFO nova.compute.manager [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Took 3.45 seconds to deallocate network for instance.
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.669 2 DEBUG nova.compute.manager [req-8ccef13e-8fa3-4aa8-8436-e86a661f441a req-89a7020b-de6e-4f3e-a1db-77dffc0570d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Detach interface failed, port_id=9d78430c-570c-4b66-97e9-27790d7f2c0b, reason: Instance 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:18:00 compute-0 ceph-mon[74477]: pgmap v2996: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.858 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.859 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:00 compute-0 nova_compute[260603]: 2025-10-02 09:18:00.946 2 DEBUG oslo_concurrency.processutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:18:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300512841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:01 compute-0 nova_compute[260603]: 2025-10-02 09:18:01.390 2 DEBUG oslo_concurrency.processutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:01 compute-0 nova_compute[260603]: 2025-10-02 09:18:01.399 2 DEBUG nova.compute.provider_tree [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:18:01 compute-0 nova_compute[260603]: 2025-10-02 09:18:01.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:01 compute-0 nova_compute[260603]: 2025-10-02 09:18:01.542 2 DEBUG nova.scheduler.client.report [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:18:01 compute-0 nova_compute[260603]: 2025-10-02 09:18:01.701 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:01 compute-0 nova_compute[260603]: 2025-10-02 09:18:01.772 2 INFO nova.scheduler.client.report [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555
Oct 02 09:18:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3300512841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:01 compute-0 nova_compute[260603]: 2025-10-02 09:18:01.884 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct 02 09:18:02 compute-0 nova_compute[260603]: 2025-10-02 09:18:02.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:03 compute-0 ceph-mon[74477]: pgmap v2997: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.531 2 DEBUG nova.compute.manager [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.532 2 DEBUG nova.compute.manager [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing instance network info cache due to event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.532 2 DEBUG oslo_concurrency.lockutils [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.533 2 DEBUG oslo_concurrency.lockutils [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.533 2 DEBUG nova.network.neutron [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.777 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.778 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.778 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.779 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.779 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.781 2 INFO nova.compute.manager [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Terminating instance
Oct 02 09:18:03 compute-0 nova_compute[260603]: 2025-10-02 09:18:03.783 2 DEBUG nova.compute.manager [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:18:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct 02 09:18:04 compute-0 kernel: tapdad30664-f8 (unregistering): left promiscuous mode
Oct 02 09:18:04 compute-0 NetworkManager[45129]: <info>  [1759396684.4812] device (tapdad30664-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:04 compute-0 ovn_controller[152344]: 2025-10-02T09:18:04Z|01663|binding|INFO|Releasing lport dad30664-f830-4b51-9cf5-8b92d95308bb from this chassis (sb_readonly=0)
Oct 02 09:18:04 compute-0 ovn_controller[152344]: 2025-10-02T09:18:04Z|01664|binding|INFO|Setting lport dad30664-f830-4b51-9cf5-8b92d95308bb down in Southbound
Oct 02 09:18:04 compute-0 ovn_controller[152344]: 2025-10-02T09:18:04Z|01665|binding|INFO|Removing iface tapdad30664-f8 ovn-installed in OVS
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.542 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6'], port_security=['fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8:0:1:f816:3eff:fe28:75e6/64 2001:db8::f816:3eff:fe28:75e6/64', 'neutron:device_id': 'bfb5f44c-0aeb-439f-9d64-934d5cb85c02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6142c2da-c0c8-4842-a55d-76581298b5e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dad30664-f830-4b51-9cf5-8b92d95308bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:18:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.545 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dad30664-f830-4b51-9cf5-8b92d95308bb in datapath 32cae488-7671-4c05-a475-2c63f103261b unbound from our chassis
Oct 02 09:18:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.546 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32cae488-7671-4c05-a475-2c63f103261b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:18:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.548 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00f10b06-caa3-4767-b046-e30a03095d57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.549 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32cae488-7671-4c05-a475-2c63f103261b namespace which is not needed anymore
Oct 02 09:18:04 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct 02 09:18:04 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Consumed 17.226s CPU time.
Oct 02 09:18:04 compute-0 systemd-machined[214636]: Machine qemu-181-instance-00000093 terminated.
Oct 02 09:18:04 compute-0 podman[430480]: 2025-10-02 09:18:04.650998251 +0000 UTC m=+0.110981698 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.655 2 INFO nova.virt.libvirt.driver [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance destroyed successfully.
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.656 2 DEBUG nova.objects.instance [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid bfb5f44c-0aeb-439f-9d64-934d5cb85c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:18:04 compute-0 podman[430482]: 2025-10-02 09:18:04.66539726 +0000 UTC m=+0.118509482 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 02 09:18:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.757 2 DEBUG nova.virt.libvirt.vif [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1773019515',display_name='tempest-TestGettingAddress-server-1773019515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1773019515',id=147,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:16:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-9gbn0ocy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:16:30Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=bfb5f44c-0aeb-439f-9d64-934d5cb85c02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.758 2 DEBUG nova.network.os_vif_util [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.759 2 DEBUG nova.network.os_vif_util [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.760 2 DEBUG os_vif [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdad30664-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:04 compute-0 nova_compute[260603]: 2025-10-02 09:18:04.773 2 INFO os_vif [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8')
Oct 02 09:18:04 compute-0 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [NOTICE]   (428314) : haproxy version is 2.8.14-c23fe91
Oct 02 09:18:04 compute-0 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [NOTICE]   (428314) : path to executable is /usr/sbin/haproxy
Oct 02 09:18:04 compute-0 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [WARNING]  (428314) : Exiting Master process...
Oct 02 09:18:04 compute-0 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [WARNING]  (428314) : Exiting Master process...
Oct 02 09:18:04 compute-0 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [ALERT]    (428314) : Current worker (428316) exited with code 143 (Terminated)
Oct 02 09:18:04 compute-0 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [WARNING]  (428314) : All workers exited. Exiting... (0)
Oct 02 09:18:04 compute-0 systemd[1]: libpod-bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104.scope: Deactivated successfully.
Oct 02 09:18:04 compute-0 podman[430551]: 2025-10-02 09:18:04.879909261 +0000 UTC m=+0.187088295 container died bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:18:05 compute-0 nova_compute[260603]: 2025-10-02 09:18:05.083 2 DEBUG nova.compute.manager [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-unplugged-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:18:05 compute-0 nova_compute[260603]: 2025-10-02 09:18:05.083 2 DEBUG oslo_concurrency.lockutils [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:05 compute-0 nova_compute[260603]: 2025-10-02 09:18:05.084 2 DEBUG oslo_concurrency.lockutils [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:05 compute-0 nova_compute[260603]: 2025-10-02 09:18:05.084 2 DEBUG oslo_concurrency.lockutils [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:05 compute-0 nova_compute[260603]: 2025-10-02 09:18:05.084 2 DEBUG nova.compute.manager [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] No waiting events found dispatching network-vif-unplugged-dad30664-f830-4b51-9cf5-8b92d95308bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:18:05 compute-0 nova_compute[260603]: 2025-10-02 09:18:05.085 2 DEBUG nova.compute.manager [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-unplugged-dad30664-f830-4b51-9cf5-8b92d95308bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104-userdata-shm.mount: Deactivated successfully.
Oct 02 09:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-140339e959af83ff118cb8023b67732905454915fb70723adb2eabd214c6428e-merged.mount: Deactivated successfully.
Oct 02 09:18:05 compute-0 ceph-mon[74477]: pgmap v2998: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct 02 09:18:06 compute-0 podman[430551]: 2025-10-02 09:18:06.057376559 +0000 UTC m=+1.364555573 container cleanup bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 09:18:06 compute-0 systemd[1]: libpod-conmon-bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104.scope: Deactivated successfully.
Oct 02 09:18:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 3.8 KiB/s wr, 18 op/s
Oct 02 09:18:06 compute-0 nova_compute[260603]: 2025-10-02 09:18:06.443 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396671.4428585, 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:18:06 compute-0 nova_compute[260603]: 2025-10-02 09:18:06.444 2 INFO nova.compute.manager [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] VM Stopped (Lifecycle Event)
Oct 02 09:18:06 compute-0 nova_compute[260603]: 2025-10-02 09:18:06.494 2 DEBUG nova.compute.manager [None req-a0371a05-4525-4344-87ac-0f21bcd67f57 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:18:06 compute-0 podman[430598]: 2025-10-02 09:18:06.842688968 +0000 UTC m=+0.759868756 container remove bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.851 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75a352c3-1ee6-43fd-8901-f8040b8d4c15]: (4, ('Thu Oct  2 09:18:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b (bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104)\nbc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104\nThu Oct  2 09:18:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b (bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104)\nbc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.854 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11b3435e-5986-42ec-80c0-5ac68f44379d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.855 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32cae488-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:18:06 compute-0 kernel: tap32cae488-70: left promiscuous mode
Oct 02 09:18:06 compute-0 nova_compute[260603]: 2025-10-02 09:18:06.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:06 compute-0 nova_compute[260603]: 2025-10-02 09:18:06.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.889 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18bce2a8-cbd7-4742-bf7b-aa875eed039f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:06 compute-0 ceph-mon[74477]: pgmap v2999: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 3.8 KiB/s wr, 18 op/s
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.920 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b084a4-05bb-4157-a440-c857e8bd20e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.922 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a825ded2-9cbd-41eb-a361-ca23fef225a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10b11296-51e7-4a22-9936-19a8609352cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746682, 'reachable_time': 30147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430614, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d32cae488\x2d7671\x2d4c05\x2da475\x2d2c63f103261b.mount: Deactivated successfully.
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.952 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32cae488-7671-4c05-a475-2c63f103261b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:18:06 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.952 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cbf257-caae-4368-976f-3df165250823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.101 2 DEBUG nova.network.neutron [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updated VIF entry in instance network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.101 2 DEBUG nova.network.neutron [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.123 2 DEBUG oslo_concurrency.lockutils [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.169 2 DEBUG nova.compute.manager [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.170 2 DEBUG oslo_concurrency.lockutils [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.170 2 DEBUG oslo_concurrency.lockutils [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.171 2 DEBUG oslo_concurrency.lockutils [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.171 2 DEBUG nova.compute.manager [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] No waiting events found dispatching network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.172 2 WARNING nova.compute.manager [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received unexpected event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb for instance with vm_state active and task_state deleting.
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:07 compute-0 nova_compute[260603]: 2025-10-02 09:18:07.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 3.9 KiB/s wr, 19 op/s
Oct 02 09:18:08 compute-0 nova_compute[260603]: 2025-10-02 09:18:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:08 compute-0 nova_compute[260603]: 2025-10-02 09:18:08.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:18:08 compute-0 nova_compute[260603]: 2025-10-02 09:18:08.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:18:08 compute-0 nova_compute[260603]: 2025-10-02 09:18:08.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 09:18:08 compute-0 nova_compute[260603]: 2025-10-02 09:18:08.540 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:18:09 compute-0 ceph-mon[74477]: pgmap v3000: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 3.9 KiB/s wr, 19 op/s
Oct 02 09:18:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:09 compute-0 nova_compute[260603]: 2025-10-02 09:18:09.771 2 INFO nova.virt.libvirt.driver [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Deleting instance files /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02_del
Oct 02 09:18:09 compute-0 nova_compute[260603]: 2025-10-02 09:18:09.773 2 INFO nova.virt.libvirt.driver [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Deletion of /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02_del complete
Oct 02 09:18:09 compute-0 nova_compute[260603]: 2025-10-02 09:18:09.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:09 compute-0 nova_compute[260603]: 2025-10-02 09:18:09.831 2 INFO nova.compute.manager [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Took 6.05 seconds to destroy the instance on the hypervisor.
Oct 02 09:18:09 compute-0 nova_compute[260603]: 2025-10-02 09:18:09.832 2 DEBUG oslo.service.loopingcall [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:18:09 compute-0 nova_compute[260603]: 2025-10-02 09:18:09.832 2 DEBUG nova.compute.manager [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:18:09 compute-0 nova_compute[260603]: 2025-10-02 09:18:09.833 2 DEBUG nova.network.neutron [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:18:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 85 B/s wr, 20 op/s
Oct 02 09:18:10 compute-0 nova_compute[260603]: 2025-10-02 09:18:10.607 2 DEBUG nova.network.neutron [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:18:10 compute-0 nova_compute[260603]: 2025-10-02 09:18:10.634 2 INFO nova.compute.manager [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Took 0.80 seconds to deallocate network for instance.
Oct 02 09:18:10 compute-0 nova_compute[260603]: 2025-10-02 09:18:10.665 2 DEBUG nova.compute.manager [req-564917b4-a336-4586-83d2-d2b114c16afb req-d9c2100f-8232-41f6-9763-3c169f4defe7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-deleted-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:18:10 compute-0 nova_compute[260603]: 2025-10-02 09:18:10.684 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:10 compute-0 nova_compute[260603]: 2025-10-02 09:18:10.685 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:10 compute-0 nova_compute[260603]: 2025-10-02 09:18:10.746 2 DEBUG oslo_concurrency.processutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:18:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203521532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.212 2 DEBUG oslo_concurrency.processutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.223 2 DEBUG nova.compute.provider_tree [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.245 2 DEBUG nova.scheduler.client.report [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.267 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.297 2 INFO nova.scheduler.client.report [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance bfb5f44c-0aeb-439f-9d64-934d5cb85c02
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.363 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:18:11 compute-0 nova_compute[260603]: 2025-10-02 09:18:11.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:11 compute-0 ceph-mon[74477]: pgmap v3001: 305 pgs: 305 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 85 B/s wr, 20 op/s
Oct 02 09:18:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2203521532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:18:12 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/728948911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.112 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 85 B/s wr, 12 op/s
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.381 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.383 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3554MB free_disk=59.96678924560547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.383 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.383 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.500 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.501 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:12 compute-0 nova_compute[260603]: 2025-10-02 09:18:12.523 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:12 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/728948911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:12 compute-0 ceph-mon[74477]: pgmap v3002: 305 pgs: 305 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 85 B/s wr, 12 op/s
Oct 02 09:18:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:18:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324713024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:13 compute-0 nova_compute[260603]: 2025-10-02 09:18:13.072 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:13 compute-0 nova_compute[260603]: 2025-10-02 09:18:13.081 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:18:13 compute-0 nova_compute[260603]: 2025-10-02 09:18:13.098 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:18:13 compute-0 nova_compute[260603]: 2025-10-02 09:18:13.141 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:18:13 compute-0 nova_compute[260603]: 2025-10-02 09:18:13.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2324713024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:18:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:14 compute-0 ceph-mon[74477]: pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:18:14 compute-0 nova_compute[260603]: 2025-10-02 09:18:14.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:15 compute-0 nova_compute[260603]: 2025-10-02 09:18:15.138 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:18:16 compute-0 nova_compute[260603]: 2025-10-02 09:18:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:16 compute-0 nova_compute[260603]: 2025-10-02 09:18:16.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:17 compute-0 nova_compute[260603]: 2025-10-02 09:18:17.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:17 compute-0 ceph-mon[74477]: pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:18:17 compute-0 nova_compute[260603]: 2025-10-02 09:18:17.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:18:19 compute-0 ceph-mon[74477]: pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:18:19 compute-0 nova_compute[260603]: 2025-10-02 09:18:19.651 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396684.6484373, bfb5f44c-0aeb-439f-9d64-934d5cb85c02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:18:19 compute-0 nova_compute[260603]: 2025-10-02 09:18:19.651 2 INFO nova.compute.manager [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] VM Stopped (Lifecycle Event)
Oct 02 09:18:19 compute-0 nova_compute[260603]: 2025-10-02 09:18:19.678 2 DEBUG nova.compute.manager [None req-d496d68a-eaae-4e81-a50a-8a06da71ca85 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:18:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:19 compute-0 nova_compute[260603]: 2025-10-02 09:18:19.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 25 op/s
Oct 02 09:18:21 compute-0 nova_compute[260603]: 2025-10-02 09:18:21.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:21.194 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:18:21 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:21.196 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:18:21 compute-0 ceph-mon[74477]: pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 25 op/s
Oct 02 09:18:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:18:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 174K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 43K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2386 writes, 10K keys, 2386 commit groups, 1.0 writes per commit group, ingest: 12.81 MB, 0.02 MB/s
                                           Interval WAL: 2386 writes, 857 syncs, 2.78 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:18:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:18:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/205807364' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:18:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:18:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/205807364' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:18:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Oct 02 09:18:22 compute-0 nova_compute[260603]: 2025-10-02 09:18:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/205807364' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:18:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/205807364' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:18:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:23.198 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:18:23 compute-0 ceph-mon[74477]: pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Oct 02 09:18:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Oct 02 09:18:24 compute-0 nova_compute[260603]: 2025-10-02 09:18:24.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:24 compute-0 nova_compute[260603]: 2025-10-02 09:18:24.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:24 compute-0 ceph-mon[74477]: pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Oct 02 09:18:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:18:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2280 writes, 9757 keys, 2280 commit groups, 1.0 writes per commit group, ingest: 10.53 MB, 0.02 MB/s
                                           Interval WAL: 2280 writes, 833 syncs, 2.74 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:18:27 compute-0 ceph-mon[74477]: pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:27 compute-0 nova_compute[260603]: 2025-10-02 09:18:27.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:18:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:18:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:18:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:18:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:18:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:18:28
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.rgw.root']
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:18:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:18:29 compute-0 ceph-mon[74477]: pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:29 compute-0 nova_compute[260603]: 2025-10-02 09:18:29.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:30 compute-0 podman[430685]: 2025-10-02 09:18:30.991943939 +0000 UTC m=+0.059260043 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 09:18:31 compute-0 podman[430684]: 2025-10-02 09:18:31.057802726 +0000 UTC m=+0.124773639 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 09:18:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:18:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 140K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.70 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1788 writes, 7086 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 8.42 MB, 0.01 MB/s
                                           Interval WAL: 1788 writes, 697 syncs, 2.57 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:18:31 compute-0 ceph-mon[74477]: pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 09:18:32 compute-0 nova_compute[260603]: 2025-10-02 09:18:32.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:33 compute-0 ceph-mon[74477]: pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:34 compute-0 nova_compute[260603]: 2025-10-02 09:18:34.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:34.856 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:34.856 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:34 compute-0 ceph-mon[74477]: pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:35 compute-0 podman[430731]: 2025-10-02 09:18:35.006888394 +0000 UTC m=+0.064685582 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:18:35 compute-0 podman[430730]: 2025-10-02 09:18:35.050805865 +0000 UTC m=+0.106421685 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:18:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:35.400 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:77:f2 10.100.0.2 2001:db8::f816:3eff:fe92:77f2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe92:77f2/64', 'neutron:device_id': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b940cd75-8ee4-4c6d-be34-c1fb5851ca50) old=Port_Binding(mac=['fa:16:3e:92:77:f2 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:18:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:35.401 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b940cd75-8ee4-4c6d-be34-c1fb5851ca50 in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 updated
Oct 02 09:18:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:35.402 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:18:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:35.403 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[777ccf24-3168-4821-b203-5d710adfb890]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:37 compute-0 nova_compute[260603]: 2025-10-02 09:18:37.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:37 compute-0 ceph-mon[74477]: pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:38 compute-0 ceph-mon[74477]: pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:18:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:18:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:39 compute-0 nova_compute[260603]: 2025-10-02 09:18:39.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:41 compute-0 nova_compute[260603]: 2025-10-02 09:18:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:18:41 compute-0 nova_compute[260603]: 2025-10-02 09:18:41.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:18:41 compute-0 ceph-mon[74477]: pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:42.454 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:77:f2 10.100.0.2 2001:db8:0:1:f816:3eff:fe92:77f2 2001:db8::f816:3eff:fe92:77f2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe92:77f2/64 2001:db8::f816:3eff:fe92:77f2/64', 'neutron:device_id': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b940cd75-8ee4-4c6d-be34-c1fb5851ca50) old=Port_Binding(mac=['fa:16:3e:92:77:f2 10.100.0.2 2001:db8::f816:3eff:fe92:77f2'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe92:77f2/64', 'neutron:device_id': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:18:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:42.456 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b940cd75-8ee4-4c6d-be34-c1fb5851ca50 in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 updated
Oct 02 09:18:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:42.457 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:18:42 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:18:42.458 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c55aa2ea-c70f-466e-ae9e-0f5036c535a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:18:42 compute-0 nova_compute[260603]: 2025-10-02 09:18:42.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:42 compute-0 ceph-mon[74477]: pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:44 compute-0 nova_compute[260603]: 2025-10-02 09:18:44.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:45 compute-0 ceph-mon[74477]: pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:47 compute-0 ceph-mon[74477]: pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:47 compute-0 nova_compute[260603]: 2025-10-02 09:18:47.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:49 compute-0 ceph-mon[74477]: pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:49 compute-0 nova_compute[260603]: 2025-10-02 09:18:49.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:51 compute-0 nova_compute[260603]: 2025-10-02 09:18:51.504 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:51 compute-0 nova_compute[260603]: 2025-10-02 09:18:51.505 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:51 compute-0 nova_compute[260603]: 2025-10-02 09:18:51.521 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:18:51 compute-0 nova_compute[260603]: 2025-10-02 09:18:51.606 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:51 compute-0 nova_compute[260603]: 2025-10-02 09:18:51.606 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:51 compute-0 nova_compute[260603]: 2025-10-02 09:18:51.615 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:18:51 compute-0 nova_compute[260603]: 2025-10-02 09:18:51.615 2 INFO nova.compute.claims [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:18:51 compute-0 ceph-mon[74477]: pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:51 compute-0 nova_compute[260603]: 2025-10-02 09:18:51.722 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:18:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750493922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.210 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.219 2 DEBUG nova.compute.provider_tree [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.252 2 DEBUG nova.scheduler.client.report [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.275 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.276 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:18:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.326 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.326 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.362 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.388 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.499 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.500 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.500 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Creating image(s)
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.523 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.551 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.577 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.581 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.643 2 DEBUG nova.policy [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.694 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.694 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.695 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.695 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/750493922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:18:52 compute-0 ceph-mon[74477]: pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.726 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:18:52 compute-0 nova_compute[260603]: 2025-10-02 09:18:52.730 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d2fca23c-2574-4642-b931-f363d59bd5a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:53 compute-0 nova_compute[260603]: 2025-10-02 09:18:53.407 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d2fca23c-2574-4642-b931-f363d59bd5a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:53 compute-0 nova_compute[260603]: 2025-10-02 09:18:53.480 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:18:53 compute-0 nova_compute[260603]: 2025-10-02 09:18:53.659 2 DEBUG nova.objects.instance [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:18:53 compute-0 nova_compute[260603]: 2025-10-02 09:18:53.701 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:18:53 compute-0 nova_compute[260603]: 2025-10-02 09:18:53.702 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Ensure instance console log exists: /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:18:53 compute-0 nova_compute[260603]: 2025-10-02 09:18:53.703 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:53 compute-0 nova_compute[260603]: 2025-10-02 09:18:53.703 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:53 compute-0 nova_compute[260603]: 2025-10-02 09:18:53.704 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:54 compute-0 nova_compute[260603]: 2025-10-02 09:18:54.306 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Successfully created port: dac6b720-36af-4048-863c-cd259065f82e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:18:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Oct 02 09:18:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:54 compute-0 nova_compute[260603]: 2025-10-02 09:18:54.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:55 compute-0 nova_compute[260603]: 2025-10-02 09:18:55.287 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Successfully updated port: dac6b720-36af-4048-863c-cd259065f82e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:18:55 compute-0 nova_compute[260603]: 2025-10-02 09:18:55.308 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:18:55 compute-0 nova_compute[260603]: 2025-10-02 09:18:55.309 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:18:55 compute-0 nova_compute[260603]: 2025-10-02 09:18:55.309 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:18:55 compute-0 nova_compute[260603]: 2025-10-02 09:18:55.392 2 DEBUG nova.compute.manager [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-changed-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:18:55 compute-0 nova_compute[260603]: 2025-10-02 09:18:55.393 2 DEBUG nova.compute.manager [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing instance network info cache due to event network-changed-dac6b720-36af-4048-863c-cd259065f82e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:18:55 compute-0 nova_compute[260603]: 2025-10-02 09:18:55.393 2 DEBUG oslo_concurrency.lockutils [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:18:55 compute-0 nova_compute[260603]: 2025-10-02 09:18:55.611 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:18:55 compute-0 ceph-mon[74477]: pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Oct 02 09:18:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Oct 02 09:18:56 compute-0 ceph-mon[74477]: pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Oct 02 09:18:56 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 02 09:18:56 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:56.906291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:18:56 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 02 09:18:56 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396736906354, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1895, "num_deletes": 251, "total_data_size": 3139122, "memory_usage": 3193776, "flush_reason": "Manual Compaction"}
Oct 02 09:18:56 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 02 09:18:56 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396736989118, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3097786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61460, "largest_seqno": 63354, "table_properties": {"data_size": 3089013, "index_size": 5458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17467, "raw_average_key_size": 20, "raw_value_size": 3071783, "raw_average_value_size": 3526, "num_data_blocks": 242, "num_entries": 871, "num_filter_entries": 871, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396524, "oldest_key_time": 1759396524, "file_creation_time": 1759396736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:18:56 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 82913 microseconds, and 9138 cpu microseconds.
Oct 02 09:18:56 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:56.989195) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3097786 bytes OK
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:56.989233) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.010009) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.010093) EVENT_LOG_v1 {"time_micros": 1759396737010076, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.010137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3131126, prev total WAL file size 3131126, number of live WAL files 2.
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.011718) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3025KB)], [146(9670KB)]
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396737011787, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13000722, "oldest_snapshot_seqno": -1}
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8199 keys, 11295475 bytes, temperature: kUnknown
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396737173549, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11295475, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11240934, "index_size": 32924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 214350, "raw_average_key_size": 26, "raw_value_size": 11095063, "raw_average_value_size": 1353, "num_data_blocks": 1283, "num_entries": 8199, "num_filter_entries": 8199, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396737, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.174100) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11295475 bytes
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.218617) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 69.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 8713, records dropped: 514 output_compression: NoCompression
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.218682) EVENT_LOG_v1 {"time_micros": 1759396737218656, "job": 90, "event": "compaction_finished", "compaction_time_micros": 161908, "compaction_time_cpu_micros": 34482, "output_level": 6, "num_output_files": 1, "total_output_size": 11295475, "num_input_records": 8713, "num_output_records": 8199, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396737220382, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396737224809, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.011533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:18:57 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:18:57 compute-0 nova_compute[260603]: 2025-10-02 09:18:57.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:18:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:18:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:18:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:18:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:18:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.148 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.232 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.233 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance network_info: |[{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.233 2 DEBUG oslo_concurrency.lockutils [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.234 2 DEBUG nova.network.neutron [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing network info cache for port dac6b720-36af-4048-863c-cd259065f82e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.238 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start _get_guest_xml network_info=[{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.245 2 WARNING nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.252 2 DEBUG nova.virt.libvirt.host [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.253 2 DEBUG nova.virt.libvirt.host [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.257 2 DEBUG nova.virt.libvirt.host [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.258 2 DEBUG nova.virt.libvirt.host [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.259 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.259 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.260 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.260 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.260 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.260 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.261 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.261 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.261 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.261 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.262 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.262 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.266 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:18:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:18:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404768080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.724 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.755 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:18:58 compute-0 nova_compute[260603]: 2025-10-02 09:18:58.761 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:18:59 compute-0 sudo[430997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:18:59 compute-0 sudo[430997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:18:59 compute-0 sudo[430997]: pam_unix(sudo:session): session closed for user root
Oct 02 09:18:59 compute-0 sudo[431041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:18:59 compute-0 sudo[431041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:18:59 compute-0 sudo[431041]: pam_unix(sudo:session): session closed for user root
Oct 02 09:18:59 compute-0 sudo[431066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:18:59 compute-0 sudo[431066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:18:59 compute-0 sudo[431066]: pam_unix(sudo:session): session closed for user root
Oct 02 09:18:59 compute-0 sudo[431091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:18:59 compute-0 sudo[431091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:18:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:18:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026037847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.275 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.279 2 DEBUG nova.virt.libvirt.vif [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-39754368',display_name='tempest-TestGettingAddress-server-39754368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-39754368',id=149,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-eaeyiug0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:18:52Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=d2fca23c-2574-4642-b931-f363d59bd5a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.280 2 DEBUG nova.network.os_vif_util [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.282 2 DEBUG nova.network.os_vif_util [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.284 2 DEBUG nova.objects.instance [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:18:59 compute-0 ceph-mon[74477]: pgmap v3025: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:18:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1404768080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:18:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3026037847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.426 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <uuid>d2fca23c-2574-4642-b931-f363d59bd5a7</uuid>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <name>instance-00000095</name>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-39754368</nova:name>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:18:58</nova:creationTime>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <nova:port uuid="dac6b720-36af-4048-863c-cd259065f82e">
Oct 02 09:18:59 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6a:aa0" ipVersion="6"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe6a:aa0" ipVersion="6"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <system>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <entry name="serial">d2fca23c-2574-4642-b931-f363d59bd5a7</entry>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <entry name="uuid">d2fca23c-2574-4642-b931-f363d59bd5a7</entry>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </system>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <os>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   </os>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <features>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   </features>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d2fca23c-2574-4642-b931-f363d59bd5a7_disk">
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       </source>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config">
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       </source>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:18:59 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:6a:0a:a0"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <target dev="tapdac6b720-36"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/console.log" append="off"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <video>
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </video>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:18:59 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:18:59 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:18:59 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:18:59 compute-0 nova_compute[260603]: </domain>
Oct 02 09:18:59 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.430 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Preparing to wait for external event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.431 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.431 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.431 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.432 2 DEBUG nova.virt.libvirt.vif [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-39754368',display_name='tempest-TestGettingAddress-server-39754368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-39754368',id=149,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-eaeyiug0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:18:52Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=d2fca23c-2574-4642-b931-f363d59bd5a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.432 2 DEBUG nova.network.os_vif_util [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.434 2 DEBUG nova.network.os_vif_util [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.434 2 DEBUG os_vif [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdac6b720-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdac6b720-36, col_values=(('external_ids', {'iface-id': 'dac6b720-36af-4048-863c-cd259065f82e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:0a:a0', 'vm-uuid': 'd2fca23c-2574-4642-b931-f363d59bd5a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:59 compute-0 NetworkManager[45129]: <info>  [1759396739.4439] manager: (tapdac6b720-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/676)
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.455 2 INFO os_vif [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36')
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.656 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.656 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.657 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:6a:0a:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.658 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Using config drive
Oct 02 09:18:59 compute-0 nova_compute[260603]: 2025-10-02 09:18:59.683 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:18:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:18:59 compute-0 sudo[431091]: pam_unix(sudo:session): session closed for user root
Oct 02 09:18:59 compute-0 sudo[431169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:18:59 compute-0 sudo[431169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:18:59 compute-0 sudo[431169]: pam_unix(sudo:session): session closed for user root
Oct 02 09:18:59 compute-0 sudo[431194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:18:59 compute-0 sudo[431194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:18:59 compute-0 sudo[431194]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:00 compute-0 sudo[431219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:19:00 compute-0 sudo[431219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:00 compute-0 sudo[431219]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:00 compute-0 sudo[431244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 09:19:00 compute-0 sudo[431244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.289 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Creating config drive at /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.297 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpykv7wpf1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:19:00 compute-0 sudo[431244]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:19:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:19:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:19:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:19:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:19:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:19:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:19:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ebff2bcf-b2f4-440e-a1f3-3864438700d8 does not exist
Oct 02 09:19:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 26fe571c-dae3-47ce-874d-8f457d10645c does not exist
Oct 02 09:19:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8bf94f34-f25f-4de1-b237-78a9072cab16 does not exist
Oct 02 09:19:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:19:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:19:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:19:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:19:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:19:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.472 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpykv7wpf1" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.507 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.512 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:00 compute-0 sudo[431292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:19:00 compute-0 sudo[431292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:00 compute-0 sudo[431292]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:00 compute-0 sudo[431336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:19:00 compute-0 sudo[431336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:00 compute-0 sudo[431336]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:00 compute-0 sudo[431376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:19:00 compute-0 sudo[431376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:00 compute-0 sudo[431376]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.741 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.742 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Deleting local config drive /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config because it was imported into RBD.
Oct 02 09:19:00 compute-0 sudo[431404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:19:00 compute-0 sudo[431404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:00 compute-0 kernel: tapdac6b720-36: entered promiscuous mode
Oct 02 09:19:00 compute-0 NetworkManager[45129]: <info>  [1759396740.8349] manager: (tapdac6b720-36): new Tun device (/org/freedesktop/NetworkManager/Devices/677)
Oct 02 09:19:00 compute-0 ovn_controller[152344]: 2025-10-02T09:19:00Z|01666|binding|INFO|Claiming lport dac6b720-36af-4048-863c-cd259065f82e for this chassis.
Oct 02 09:19:00 compute-0 ovn_controller[152344]: 2025-10-02T09:19:00Z|01667|binding|INFO|dac6b720-36af-4048-863c-cd259065f82e: Claiming fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:00 compute-0 systemd-udevd[431441]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:19:00 compute-0 NetworkManager[45129]: <info>  [1759396740.8906] device (tapdac6b720-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:19:00 compute-0 NetworkManager[45129]: <info>  [1759396740.8918] device (tapdac6b720-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:19:00 compute-0 systemd-machined[214636]: New machine qemu-183-instance-00000095.
Oct 02 09:19:00 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000095.
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.919 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0'], port_security=['fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe6a:aa0/64 2001:db8::f816:3eff:fe6a:aa0/64', 'neutron:device_id': 'd2fca23c-2574-4642-b931-f363d59bd5a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f167374-6ddb-4a61-a8ba-3dc4425d2006', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dac6b720-36af-4048-863c-cd259065f82e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.920 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dac6b720-36af-4048-863c-cd259065f82e in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 bound to our chassis
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.921 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:00 compute-0 ovn_controller[152344]: 2025-10-02T09:19:00Z|01668|binding|INFO|Setting lport dac6b720-36af-4048-863c-cd259065f82e ovn-installed in OVS
Oct 02 09:19:00 compute-0 ovn_controller[152344]: 2025-10-02T09:19:00Z|01669|binding|INFO|Setting lport dac6b720-36af-4048-863c-cd259065f82e up in Southbound
Oct 02 09:19:00 compute-0 nova_compute[260603]: 2025-10-02 09:19:00.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.945 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1273cd-b192-4413-9faf-e60e830a9406]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.946 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01f3e9aa-21 in ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.949 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01f3e9aa-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.949 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9176275-c6c1-4d41-b71b-c96e38fa1e80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.950 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1902cbc5-e8e2-44d8-81f4-aa7ee50470a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:00 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.967 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e45dee05-f64b-4457-85e1-b19143b97642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.998 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6be2a130-9e4a-45ad-8525-03e3dc663365]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.049 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4961d512-9e07-4ca0-a531-55603b4595a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 NetworkManager[45129]: <info>  [1759396741.0583] manager: (tap01f3e9aa-20): new Veth device (/org/freedesktop/NetworkManager/Devices/678)
Oct 02 09:19:01 compute-0 systemd-udevd[431444]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.057 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1e20aea0-dbb7-4ea3-99fc-23c875f51955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.113 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[93c21c0b-e3d0-488b-b8ad-0843e60710b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.119 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[83fc131a-5c9a-4f43-9756-58ec26e5cf64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 NetworkManager[45129]: <info>  [1759396741.1510] device (tap01f3e9aa-20): carrier: link connected
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.156 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b5e1e7-0e9a-448f-b6ed-6362b1ce4af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b62c4383-3f41-4527-bda9-b648b9a6c8e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01f3e9aa-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:77:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761975, 'reachable_time': 39646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431535, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 podman[431496]: 2025-10-02 09:19:01.194839981 +0000 UTC m=+0.073203257 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.210 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e943a8d-34ee-4d59-a474-6d2f4e698b1d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:77f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761975, 'tstamp': 761975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431560, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.245 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09152b3c-d0c5-4f13-aa63-53da0b3871eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01f3e9aa-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:77:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761975, 'reachable_time': 39646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 431564, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 podman[431494]: 2025-10-02 09:19:01.250791698 +0000 UTC m=+0.131277591 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.291 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3dbce8-0b41-4367-b72a-497feddcf578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 podman[431569]: 2025-10-02 09:19:01.303100852 +0000 UTC m=+0.060951265 container create 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:19:01 compute-0 systemd[1]: Started libpod-conmon-90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53.scope.
Oct 02 09:19:01 compute-0 podman[431569]: 2025-10-02 09:19:01.273301121 +0000 UTC m=+0.031151554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:19:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2aff0472-9137-4d7f-8176-d00ca3f63bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.389 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01f3e9aa-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.389 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.390 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01f3e9aa-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:01 compute-0 kernel: tap01f3e9aa-20: entered promiscuous mode
Oct 02 09:19:01 compute-0 NetworkManager[45129]: <info>  [1759396741.3935] manager: (tap01f3e9aa-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/679)
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.395 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01f3e9aa-20, col_values=(('external_ids', {'iface-id': 'b940cd75-8ee4-4c6d-be34-c1fb5851ca50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:01 compute-0 ovn_controller[152344]: 2025-10-02T09:19:01Z|01670|binding|INFO|Releasing lport b940cd75-8ee4-4c6d-be34-c1fb5851ca50 from this chassis (sb_readonly=0)
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.397 2 DEBUG nova.compute.manager [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.398 2 DEBUG nova.compute.manager [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Processing event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.402 2 DEBUG nova.network.neutron [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated VIF entry in instance network info cache for port dac6b720-36af-4048-863c-cd259065f82e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.403 2 DEBUG nova.network.neutron [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:19:01 compute-0 podman[431569]: 2025-10-02 09:19:01.413434609 +0000 UTC m=+0.171285062 container init 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.417 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01f3e9aa-20f8-48ae-9b80-cbd0ba476303.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01f3e9aa-20f8-48ae-9b80-cbd0ba476303.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.419 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f53721f0-2cc8-4777-900f-3e5561e2d9a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.420 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-01f3e9aa-20f8-48ae-9b80-cbd0ba476303
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/01f3e9aa-20f8-48ae-9b80-cbd0ba476303.pid.haproxy
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID 01f3e9aa-20f8-48ae-9b80-cbd0ba476303
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:19:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.422 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'env', 'PROCESS_TAG=haproxy-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01f3e9aa-20f8-48ae-9b80-cbd0ba476303.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:19:01 compute-0 podman[431569]: 2025-10-02 09:19:01.428631323 +0000 UTC m=+0.186481736 container start 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 09:19:01 compute-0 ceph-mon[74477]: pgmap v3026: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:19:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:19:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:19:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:19:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:19:01 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:19:01 compute-0 podman[431569]: 2025-10-02 09:19:01.436356395 +0000 UTC m=+0.194206818 container attach 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:19:01 compute-0 systemd[1]: libpod-90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53.scope: Deactivated successfully.
Oct 02 09:19:01 compute-0 conmon[431616]: conmon 90ef7d5608a8bb6d9c59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53.scope/container/memory.events
Oct 02 09:19:01 compute-0 elated_kare[431616]: 167 167
Oct 02 09:19:01 compute-0 podman[431569]: 2025-10-02 09:19:01.442256479 +0000 UTC m=+0.200106902 container died 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 09:19:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dd334571404b15a8dc9a870b85f22b9d4ad876c9a696c29a12edc0d9d313106-merged.mount: Deactivated successfully.
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.475 2 DEBUG oslo_concurrency.lockutils [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:19:01 compute-0 podman[431569]: 2025-10-02 09:19:01.493791169 +0000 UTC m=+0.251641592 container remove 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:19:01 compute-0 systemd[1]: libpod-conmon-90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53.scope: Deactivated successfully.
Oct 02 09:19:01 compute-0 podman[431646]: 2025-10-02 09:19:01.709734984 +0000 UTC m=+0.071533616 container create dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:19:01 compute-0 systemd[1]: Started libpod-conmon-dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701.scope.
Oct 02 09:19:01 compute-0 podman[431646]: 2025-10-02 09:19:01.67281791 +0000 UTC m=+0.034616552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:19:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:01 compute-0 podman[431646]: 2025-10-02 09:19:01.815369303 +0000 UTC m=+0.177167975 container init dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:19:01 compute-0 podman[431646]: 2025-10-02 09:19:01.828514043 +0000 UTC m=+0.190312665 container start dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:19:01 compute-0 podman[431646]: 2025-10-02 09:19:01.832184908 +0000 UTC m=+0.193983560 container attach dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.845 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.847 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396741.8446815, d2fca23c-2574-4642-b931-f363d59bd5a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.848 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] VM Started (Lifecycle Event)
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.852 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.856 2 INFO nova.virt.libvirt.driver [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance spawned successfully.
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.856 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.943 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.947 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.965 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.966 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.967 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.967 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.968 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.968 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.976 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.976 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396741.8459713, d2fca23c-2574-4642-b931-f363d59bd5a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:19:01 compute-0 nova_compute[260603]: 2025-10-02 09:19:01.976 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] VM Paused (Lifecycle Event)
Oct 02 09:19:01 compute-0 podman[431687]: 2025-10-02 09:19:01.886165414 +0000 UTC m=+0.036432578 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.035 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.040 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396741.8501787, d2fca23c-2574-4642-b931-f363d59bd5a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.041 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] VM Resumed (Lifecycle Event)
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.088 2 INFO nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Took 9.59 seconds to spawn the instance on the hypervisor.
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.089 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.091 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.102 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.151 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.181 2 INFO nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Took 10.62 seconds to build instance.
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.241 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:02 compute-0 podman[431687]: 2025-10-02 09:19:02.245161717 +0000 UTC m=+0.395428831 container create 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 09:19:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 09:19:02 compute-0 systemd[1]: Started libpod-conmon-77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8.scope.
Oct 02 09:19:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:19:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fda16057209f96f06d9ded64e1e5d1b0e039e5d2ee796001d3308370659566c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:02 compute-0 podman[431687]: 2025-10-02 09:19:02.615463843 +0000 UTC m=+0.765730997 container init 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:19:02 compute-0 podman[431687]: 2025-10-02 09:19:02.626304482 +0000 UTC m=+0.776571636 container start 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 09:19:02 compute-0 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [NOTICE]   (431715) : New worker (431717) forked
Oct 02 09:19:02 compute-0 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [NOTICE]   (431715) : Loading success.
Oct 02 09:19:02 compute-0 nova_compute[260603]: 2025-10-02 09:19:02.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:02 compute-0 recursing_neumann[431677]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:19:02 compute-0 recursing_neumann[431677]: --> relative data size: 1.0
Oct 02 09:19:02 compute-0 recursing_neumann[431677]: --> All data devices are unavailable
Oct 02 09:19:02 compute-0 systemd[1]: libpod-dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701.scope: Deactivated successfully.
Oct 02 09:19:02 compute-0 systemd[1]: libpod-dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701.scope: Consumed 1.046s CPU time.
Oct 02 09:19:02 compute-0 podman[431646]: 2025-10-02 09:19:02.972479194 +0000 UTC m=+1.334277866 container died dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d-merged.mount: Deactivated successfully.
Oct 02 09:19:03 compute-0 podman[431646]: 2025-10-02 09:19:03.29312679 +0000 UTC m=+1.654925452 container remove dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 09:19:03 compute-0 sudo[431404]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:03 compute-0 systemd[1]: libpod-conmon-dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701.scope: Deactivated successfully.
Oct 02 09:19:03 compute-0 sudo[431755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:19:03 compute-0 sudo[431755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:03 compute-0 sudo[431755]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:03 compute-0 sudo[431780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:19:03 compute-0 sudo[431780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:03 compute-0 sudo[431780]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:03 compute-0 sudo[431805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:19:03 compute-0 sudo[431805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:03 compute-0 sudo[431805]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:03 compute-0 sudo[431830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:19:03 compute-0 sudo[431830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:03 compute-0 ceph-mon[74477]: pgmap v3027: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 09:19:03 compute-0 nova_compute[260603]: 2025-10-02 09:19:03.821 2 DEBUG nova.compute.manager [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:19:03 compute-0 nova_compute[260603]: 2025-10-02 09:19:03.823 2 DEBUG oslo_concurrency.lockutils [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:03 compute-0 nova_compute[260603]: 2025-10-02 09:19:03.823 2 DEBUG oslo_concurrency.lockutils [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:03 compute-0 nova_compute[260603]: 2025-10-02 09:19:03.823 2 DEBUG oslo_concurrency.lockutils [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:03 compute-0 nova_compute[260603]: 2025-10-02 09:19:03.823 2 DEBUG nova.compute.manager [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] No waiting events found dispatching network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:19:03 compute-0 nova_compute[260603]: 2025-10-02 09:19:03.823 2 WARNING nova.compute.manager [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received unexpected event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e for instance with vm_state active and task_state None.
Oct 02 09:19:04 compute-0 podman[431895]: 2025-10-02 09:19:04.114894377 +0000 UTC m=+0.027741807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:19:04 compute-0 podman[431895]: 2025-10-02 09:19:04.237957181 +0000 UTC m=+0.150804521 container create 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:19:04 compute-0 systemd[1]: Started libpod-conmon-4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f.scope.
Oct 02 09:19:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 09:19:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:19:04 compute-0 podman[431895]: 2025-10-02 09:19:04.382074443 +0000 UTC m=+0.294921873 container init 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:19:04 compute-0 podman[431895]: 2025-10-02 09:19:04.397838195 +0000 UTC m=+0.310685535 container start 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 02 09:19:04 compute-0 strange_babbage[431911]: 167 167
Oct 02 09:19:04 compute-0 systemd[1]: libpod-4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f.scope: Deactivated successfully.
Oct 02 09:19:04 compute-0 podman[431895]: 2025-10-02 09:19:04.41782862 +0000 UTC m=+0.330676080 container attach 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:19:04 compute-0 podman[431895]: 2025-10-02 09:19:04.418737967 +0000 UTC m=+0.331585367 container died 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:19:04 compute-0 nova_compute[260603]: 2025-10-02 09:19:04.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-888346d2588ce1997608a947893362006185c875e684595fd38746c0dc914f0f-merged.mount: Deactivated successfully.
Oct 02 09:19:04 compute-0 podman[431895]: 2025-10-02 09:19:04.633741553 +0000 UTC m=+0.546588893 container remove 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:19:04 compute-0 systemd[1]: libpod-conmon-4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f.scope: Deactivated successfully.
Oct 02 09:19:04 compute-0 ceph-mon[74477]: pgmap v3028: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 09:19:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:04 compute-0 podman[431936]: 2025-10-02 09:19:04.897421929 +0000 UTC m=+0.097216038 container create 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:19:04 compute-0 podman[431936]: 2025-10-02 09:19:04.825898195 +0000 UTC m=+0.025692324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:19:05 compute-0 systemd[1]: Started libpod-conmon-4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a.scope.
Oct 02 09:19:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:05 compute-0 podman[431936]: 2025-10-02 09:19:05.138868161 +0000 UTC m=+0.338662320 container init 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 09:19:05 compute-0 podman[431936]: 2025-10-02 09:19:05.152193397 +0000 UTC m=+0.351987546 container start 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:19:05 compute-0 podman[431936]: 2025-10-02 09:19:05.213337166 +0000 UTC m=+0.413131305 container attach 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:19:05 compute-0 podman[431953]: 2025-10-02 09:19:05.220150329 +0000 UTC m=+0.155683783 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 02 09:19:05 compute-0 podman[431963]: 2025-10-02 09:19:05.288140503 +0000 UTC m=+0.164833789 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]: {
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:     "0": [
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:         {
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "devices": [
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "/dev/loop3"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             ],
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_name": "ceph_lv0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_size": "21470642176",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "name": "ceph_lv0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "tags": {
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cluster_name": "ceph",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.crush_device_class": "",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.encrypted": "0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osd_id": "0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.type": "block",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.vdo": "0"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             },
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "type": "block",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "vg_name": "ceph_vg0"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:         }
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:     ],
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:     "1": [
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:         {
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "devices": [
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "/dev/loop4"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             ],
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_name": "ceph_lv1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_size": "21470642176",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "name": "ceph_lv1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "tags": {
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cluster_name": "ceph",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.crush_device_class": "",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.encrypted": "0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osd_id": "1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.type": "block",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.vdo": "0"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             },
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "type": "block",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "vg_name": "ceph_vg1"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:         }
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:     ],
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:     "2": [
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:         {
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "devices": [
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "/dev/loop5"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             ],
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_name": "ceph_lv2",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_size": "21470642176",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "name": "ceph_lv2",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "tags": {
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.cluster_name": "ceph",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.crush_device_class": "",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.encrypted": "0",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osd_id": "2",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.type": "block",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:                 "ceph.vdo": "0"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             },
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "type": "block",
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:             "vg_name": "ceph_vg2"
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:         }
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]:     ]
Oct 02 09:19:05 compute-0 crazy_jepsen[431952]: }
Oct 02 09:19:06 compute-0 systemd[1]: libpod-4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a.scope: Deactivated successfully.
Oct 02 09:19:06 compute-0 podman[431936]: 2025-10-02 09:19:06.017489934 +0000 UTC m=+1.217284083 container died 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444-merged.mount: Deactivated successfully.
Oct 02 09:19:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 09:19:06 compute-0 podman[431936]: 2025-10-02 09:19:06.397072571 +0000 UTC m=+1.596866680 container remove 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 09:19:06 compute-0 sudo[431830]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:06 compute-0 systemd[1]: libpod-conmon-4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a.scope: Deactivated successfully.
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:06 compute-0 sudo[432015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:19:06 compute-0 sudo[432015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:06 compute-0 sudo[432015]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:06 compute-0 ovn_controller[152344]: 2025-10-02T09:19:06Z|01671|binding|INFO|Releasing lport b940cd75-8ee4-4c6d-be34-c1fb5851ca50 from this chassis (sb_readonly=0)
Oct 02 09:19:06 compute-0 NetworkManager[45129]: <info>  [1759396746.6004] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/680)
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:06 compute-0 NetworkManager[45129]: <info>  [1759396746.6029] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/681)
Oct 02 09:19:06 compute-0 sudo[432040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:19:06 compute-0 sudo[432040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:06 compute-0 sudo[432040]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:06 compute-0 ovn_controller[152344]: 2025-10-02T09:19:06Z|01672|binding|INFO|Releasing lport b940cd75-8ee4-4c6d-be34-c1fb5851ca50 from this chassis (sb_readonly=0)
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:06 compute-0 sudo[432065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:19:06 compute-0 sudo[432065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:06 compute-0 sudo[432065]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:06 compute-0 sudo[432091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:19:06 compute-0 sudo[432091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.977 2 DEBUG nova.compute.manager [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-changed-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.980 2 DEBUG nova.compute.manager [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing instance network info cache due to event network-changed-dac6b720-36af-4048-863c-cd259065f82e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.980 2 DEBUG oslo_concurrency.lockutils [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.980 2 DEBUG oslo_concurrency.lockutils [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:19:06 compute-0 nova_compute[260603]: 2025-10-02 09:19:06.981 2 DEBUG nova.network.neutron [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing network info cache for port dac6b720-36af-4048-863c-cd259065f82e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:19:07 compute-0 podman[432155]: 2025-10-02 09:19:07.255493933 +0000 UTC m=+0.071894977 container create a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:19:07 compute-0 podman[432155]: 2025-10-02 09:19:07.218311061 +0000 UTC m=+0.034712105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:19:07 compute-0 systemd[1]: Started libpod-conmon-a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb.scope.
Oct 02 09:19:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:19:07 compute-0 podman[432155]: 2025-10-02 09:19:07.379447495 +0000 UTC m=+0.195848539 container init a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:19:07 compute-0 podman[432155]: 2025-10-02 09:19:07.390939543 +0000 UTC m=+0.207340567 container start a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:19:07 compute-0 podman[432155]: 2025-10-02 09:19:07.395025231 +0000 UTC m=+0.211426265 container attach a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:19:07 compute-0 frosty_carson[432172]: 167 167
Oct 02 09:19:07 compute-0 systemd[1]: libpod-a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb.scope: Deactivated successfully.
Oct 02 09:19:07 compute-0 ceph-mon[74477]: pgmap v3029: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 09:19:07 compute-0 podman[432155]: 2025-10-02 09:19:07.401505743 +0000 UTC m=+0.217906767 container died a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-416ecfbbc4c4207f885916f68b63450e2f2f26285f41ecafb01d161a285dcfef-merged.mount: Deactivated successfully.
Oct 02 09:19:07 compute-0 podman[432155]: 2025-10-02 09:19:07.457876474 +0000 UTC m=+0.274277498 container remove a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:19:07 compute-0 systemd[1]: libpod-conmon-a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb.scope: Deactivated successfully.
Oct 02 09:19:07 compute-0 podman[432195]: 2025-10-02 09:19:07.713678944 +0000 UTC m=+0.084129869 container create 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 09:19:07 compute-0 systemd[1]: Started libpod-conmon-8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce.scope.
Oct 02 09:19:07 compute-0 nova_compute[260603]: 2025-10-02 09:19:07.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:07 compute-0 podman[432195]: 2025-10-02 09:19:07.683585024 +0000 UTC m=+0.054036049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:19:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:19:07 compute-0 podman[432195]: 2025-10-02 09:19:07.829508642 +0000 UTC m=+0.199959577 container init 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:19:07 compute-0 podman[432195]: 2025-10-02 09:19:07.839298028 +0000 UTC m=+0.209748953 container start 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:19:07 compute-0 podman[432195]: 2025-10-02 09:19:07.865185266 +0000 UTC m=+0.235636231 container attach 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 09:19:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:19:08 compute-0 nova_compute[260603]: 2025-10-02 09:19:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:08 compute-0 nova_compute[260603]: 2025-10-02 09:19:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]: {
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "osd_id": 2,
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "type": "bluestore"
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:     },
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "osd_id": 1,
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "type": "bluestore"
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:     },
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "osd_id": 0,
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:         "type": "bluestore"
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]:     }
Oct 02 09:19:08 compute-0 quizzical_chandrasekhar[432212]: }
Oct 02 09:19:08 compute-0 systemd[1]: libpod-8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce.scope: Deactivated successfully.
Oct 02 09:19:08 compute-0 systemd[1]: libpod-8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce.scope: Consumed 1.124s CPU time.
Oct 02 09:19:08 compute-0 podman[432245]: 2025-10-02 09:19:08.996337547 +0000 UTC m=+0.025792746 container died 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 09:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b-merged.mount: Deactivated successfully.
Oct 02 09:19:09 compute-0 podman[432245]: 2025-10-02 09:19:09.064158626 +0000 UTC m=+0.093613775 container remove 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:19:09 compute-0 systemd[1]: libpod-conmon-8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce.scope: Deactivated successfully.
Oct 02 09:19:09 compute-0 sudo[432091]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:19:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:19:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 67c3fca4-4062-42e2-a8e3-296d5163d17e does not exist
Oct 02 09:19:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d3f16b45-edff-453c-b103-082d268f29b1 does not exist
Oct 02 09:19:09 compute-0 sudo[432260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:19:09 compute-0 sudo[432260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:09 compute-0 sudo[432260]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:09 compute-0 sudo[432285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:19:09 compute-0 sudo[432285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:19:09 compute-0 sudo[432285]: pam_unix(sudo:session): session closed for user root
Oct 02 09:19:09 compute-0 nova_compute[260603]: 2025-10-02 09:19:09.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:09 compute-0 ceph-mon[74477]: pgmap v3030: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:19:09 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:09 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:19:09 compute-0 nova_compute[260603]: 2025-10-02 09:19:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:09 compute-0 nova_compute[260603]: 2025-10-02 09:19:09.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:19:09 compute-0 nova_compute[260603]: 2025-10-02 09:19:09.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:19:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:10 compute-0 nova_compute[260603]: 2025-10-02 09:19:10.138 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:19:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:19:11 compute-0 ceph-mon[74477]: pgmap v3031: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:19:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:19:12 compute-0 nova_compute[260603]: 2025-10-02 09:19:12.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:12 compute-0 ceph-mon[74477]: pgmap v3032: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:19:14 compute-0 nova_compute[260603]: 2025-10-02 09:19:14.157 2 DEBUG nova.network.neutron [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated VIF entry in instance network info cache for port dac6b720-36af-4048-863c-cd259065f82e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:19:14 compute-0 nova_compute[260603]: 2025-10-02 09:19:14.158 2 DEBUG nova.network.neutron [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:19:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 68 op/s
Oct 02 09:19:14 compute-0 nova_compute[260603]: 2025-10-02 09:19:14.366 2 DEBUG oslo_concurrency.lockutils [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:19:14 compute-0 nova_compute[260603]: 2025-10-02 09:19:14.367 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:19:14 compute-0 nova_compute[260603]: 2025-10-02 09:19:14.367 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:19:14 compute-0 nova_compute[260603]: 2025-10-02 09:19:14.367 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:19:14 compute-0 nova_compute[260603]: 2025-10-02 09:19:14.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:15 compute-0 ceph-mon[74477]: pgmap v3033: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 68 op/s
Oct 02 09:19:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 56 op/s
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.698 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.754 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.754 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.754 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.800 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.800 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.800 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.801 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:19:16 compute-0 nova_compute[260603]: 2025-10-02 09:19:16.801 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:17 compute-0 ovn_controller[152344]: 2025-10-02T09:19:17Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:0a:a0 10.100.0.12
Oct 02 09:19:17 compute-0 ovn_controller[152344]: 2025-10-02T09:19:17Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:0a:a0 10.100.0.12
Oct 02 09:19:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:19:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637019215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.223 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.397 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.397 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:19:17 compute-0 ceph-mon[74477]: pgmap v3034: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 56 op/s
Oct 02 09:19:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2637019215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.552 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3386MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.676 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d2fca23c-2574-4642-b931-f363d59bd5a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.676 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.676 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.716 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:17 compute-0 nova_compute[260603]: 2025-10-02 09:19:17.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:19:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89840446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:19:18 compute-0 nova_compute[260603]: 2025-10-02 09:19:18.142 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:18 compute-0 nova_compute[260603]: 2025-10-02 09:19:18.148 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:19:18 compute-0 nova_compute[260603]: 2025-10-02 09:19:18.196 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:19:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 02 09:19:18 compute-0 nova_compute[260603]: 2025-10-02 09:19:18.340 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:19:18 compute-0 nova_compute[260603]: 2025-10-02 09:19:18.341 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/89840446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:19:19 compute-0 nova_compute[260603]: 2025-10-02 09:19:19.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:19 compute-0 ceph-mon[74477]: pgmap v3035: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 02 09:19:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:20 compute-0 nova_compute[260603]: 2025-10-02 09:19:20.106 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:20 compute-0 nova_compute[260603]: 2025-10-02 09:19:20.107 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:20 compute-0 nova_compute[260603]: 2025-10-02 09:19:20.168 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:19:20 compute-0 ceph-mon[74477]: pgmap v3036: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:19:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:19:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3772413877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:19:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:19:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3772413877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:19:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3772413877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:19:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3772413877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:19:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:19:22 compute-0 nova_compute[260603]: 2025-10-02 09:19:22.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:23 compute-0 ceph-mon[74477]: pgmap v3037: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:19:23 compute-0 nova_compute[260603]: 2025-10-02 09:19:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:23 compute-0 nova_compute[260603]: 2025-10-02 09:19:23.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:19:23 compute-0 nova_compute[260603]: 2025-10-02 09:19:23.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:19:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:19:24 compute-0 nova_compute[260603]: 2025-10-02 09:19:24.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:25 compute-0 nova_compute[260603]: 2025-10-02 09:19:25.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:25 compute-0 ceph-mon[74477]: pgmap v3038: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.112 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.113 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.141 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.249 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.250 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.262 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.262 2 INFO nova.compute.claims [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:19:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.520 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:19:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1386203685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.972 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:26 compute-0 nova_compute[260603]: 2025-10-02 09:19:26.982 2 DEBUG nova.compute.provider_tree [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.014 2 DEBUG nova.scheduler.client.report [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.119 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.121 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.227 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.228 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.255 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.274 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.358 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.360 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.360 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Creating image(s)
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.383 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.405 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.428 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.432 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.475 2 DEBUG nova.policy [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.521 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.522 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.523 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.523 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.547 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.551 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:27 compute-0 ceph-mon[74477]: pgmap v3039: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 09:19:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1386203685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:19:27 compute-0 nova_compute[260603]: 2025-10-02 09:19:27.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:19:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:19:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:19:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:19:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:19:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:19:28
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log']
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 09:19:28 compute-0 nova_compute[260603]: 2025-10-02 09:19:28.332 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Successfully created port: 24b323af-568c-4843-a858-dab3f5df627f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:19:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:19:29 compute-0 ceph-mon[74477]: pgmap v3040: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.283 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Successfully updated port: 24b323af-568c-4843-a858-dab3f5df627f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.311 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.312 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.312 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.400 2 DEBUG nova.compute.manager [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-changed-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.400 2 DEBUG nova.compute.manager [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing instance network info cache due to event network-changed-24b323af-568c-4843-a858-dab3f5df627f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.400 2 DEBUG oslo_concurrency.lockutils [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.481 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:19:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:29 compute-0 nova_compute[260603]: 2025-10-02 09:19:29.994 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:30 compute-0 nova_compute[260603]: 2025-10-02 09:19:30.063 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:19:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 09:19:31 compute-0 ceph-mon[74477]: pgmap v3041: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 09:19:32 compute-0 podman[432530]: 2025-10-02 09:19:32.052876259 +0000 UTC m=+0.102260705 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:19:32 compute-0 podman[432529]: 2025-10-02 09:19:32.105024477 +0000 UTC m=+0.155187358 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 09:19:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 148 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 MiB/s wr, 33 op/s
Oct 02 09:19:32 compute-0 nova_compute[260603]: 2025-10-02 09:19:32.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:33 compute-0 ceph-mon[74477]: pgmap v3042: 305 pgs: 305 active+clean; 148 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 MiB/s wr, 33 op/s
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.136 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.424 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.425 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance network_info: |[{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.425 2 DEBUG oslo_concurrency.lockutils [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.426 2 DEBUG nova.network.neutron [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing network info cache for port 24b323af-568c-4843-a858-dab3f5df627f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.434 2 DEBUG nova.objects.instance [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e2e089b-fd7f-460e-ba3f-180204d961f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.760 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.761 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Ensure instance console log exists: /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.762 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.763 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.764 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.769 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start _get_guest_xml network_info=[{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.777 2 WARNING nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.784 2 DEBUG nova.virt.libvirt.host [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.786 2 DEBUG nova.virt.libvirt.host [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.791 2 DEBUG nova.virt.libvirt.host [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.791 2 DEBUG nova.virt.libvirt.host [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.792 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.792 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.793 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.793 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.793 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.794 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.794 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.794 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.795 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.795 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.795 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.796 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:19:33 compute-0 nova_compute[260603]: 2025-10-02 09:19:33.800 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:19:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2210677655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.319 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.344 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.348 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2210677655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:34.857 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:34.857 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:34.858 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:19:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503369711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.911 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.913 2 DEBUG nova.virt.libvirt.vif [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:19:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1224861595',display_name='tempest-TestGettingAddress-server-1224861595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1224861595',id=150,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-s7sei7n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:19:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=8e2e089b-fd7f-460e-ba3f-180204d961f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.914 2 DEBUG nova.network.os_vif_util [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.916 2 DEBUG nova.network.os_vif_util [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:19:34 compute-0 nova_compute[260603]: 2025-10-02 09:19:34.918 2 DEBUG nova.objects.instance [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e2e089b-fd7f-460e-ba3f-180204d961f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.034 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <uuid>8e2e089b-fd7f-460e-ba3f-180204d961f8</uuid>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <name>instance-00000096</name>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-1224861595</nova:name>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:19:33</nova:creationTime>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <nova:port uuid="24b323af-568c-4843-a858-dab3f5df627f">
Oct 02 09:19:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fed6:7f89" ipVersion="6"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fed6:7f89" ipVersion="6"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <system>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <entry name="serial">8e2e089b-fd7f-460e-ba3f-180204d961f8</entry>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <entry name="uuid">8e2e089b-fd7f-460e-ba3f-180204d961f8</entry>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </system>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <os>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   </os>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <features>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   </features>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/8e2e089b-fd7f-460e-ba3f-180204d961f8_disk">
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       </source>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config">
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       </source>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:19:35 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:d6:7f:89"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <target dev="tap24b323af-56"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/console.log" append="off"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <video>
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </video>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:19:35 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:19:35 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:19:35 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:19:35 compute-0 nova_compute[260603]: </domain>
Oct 02 09:19:35 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.037 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Preparing to wait for external event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.038 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.038 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.039 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.040 2 DEBUG nova.virt.libvirt.vif [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:19:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1224861595',display_name='tempest-TestGettingAddress-server-1224861595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1224861595',id=150,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-s7sei7n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:19:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=8e2e089b-fd7f-460e-ba3f-180204d961f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.041 2 DEBUG nova.network.os_vif_util [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.043 2 DEBUG nova.network.os_vif_util [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.044 2 DEBUG os_vif [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24b323af-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24b323af-56, col_values=(('external_ids', {'iface-id': '24b323af-568c-4843-a858-dab3f5df627f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:7f:89', 'vm-uuid': '8e2e089b-fd7f-460e-ba3f-180204d961f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:35 compute-0 NetworkManager[45129]: <info>  [1759396775.0568] manager: (tap24b323af-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/682)
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.065 2 INFO os_vif [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56')
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.250 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.251 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.251 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:d6:7f:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.252 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Using config drive
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.287 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.690 2 DEBUG nova.network.neutron [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updated VIF entry in instance network info cache for port 24b323af-568c-4843-a858-dab3f5df627f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.691 2 DEBUG nova.network.neutron [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:19:35 compute-0 ceph-mon[74477]: pgmap v3043: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 02 09:19:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/503369711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:19:35 compute-0 nova_compute[260603]: 2025-10-02 09:19:35.818 2 DEBUG oslo_concurrency.lockutils [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:19:36 compute-0 podman[432675]: 2025-10-02 09:19:36.012946491 +0000 UTC m=+0.081028212 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid)
Oct 02 09:19:36 compute-0 podman[432674]: 2025-10-02 09:19:36.020839058 +0000 UTC m=+0.083389716 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:19:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 02 09:19:36 compute-0 ceph-mon[74477]: pgmap v3044: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 02 09:19:37 compute-0 nova_compute[260603]: 2025-10-02 09:19:37.146 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Creating config drive at /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config
Oct 02 09:19:37 compute-0 nova_compute[260603]: 2025-10-02 09:19:37.155 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ervjxir execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:37 compute-0 nova_compute[260603]: 2025-10-02 09:19:37.328 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ervjxir" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:37 compute-0 nova_compute[260603]: 2025-10-02 09:19:37.368 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:19:37 compute-0 nova_compute[260603]: 2025-10-02 09:19:37.372 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:19:37 compute-0 nova_compute[260603]: 2025-10-02 09:19:37.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 09:19:38 compute-0 nova_compute[260603]: 2025-10-02 09:19:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011053336833365891 of space, bias 1.0, pg target 0.33160010500097675 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:19:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:19:39 compute-0 nova_compute[260603]: 2025-10-02 09:19:39.389 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:19:39 compute-0 nova_compute[260603]: 2025-10-02 09:19:39.390 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Deleting local config drive /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config because it was imported into RBD.
Oct 02 09:19:39 compute-0 kernel: tap24b323af-56: entered promiscuous mode
Oct 02 09:19:39 compute-0 NetworkManager[45129]: <info>  [1759396779.4648] manager: (tap24b323af-56): new Tun device (/org/freedesktop/NetworkManager/Devices/683)
Oct 02 09:19:39 compute-0 ovn_controller[152344]: 2025-10-02T09:19:39Z|01673|binding|INFO|Claiming lport 24b323af-568c-4843-a858-dab3f5df627f for this chassis.
Oct 02 09:19:39 compute-0 ovn_controller[152344]: 2025-10-02T09:19:39Z|01674|binding|INFO|24b323af-568c-4843-a858-dab3f5df627f: Claiming fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89
Oct 02 09:19:39 compute-0 nova_compute[260603]: 2025-10-02 09:19:39.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:39 compute-0 nova_compute[260603]: 2025-10-02 09:19:39.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:39 compute-0 ovn_controller[152344]: 2025-10-02T09:19:39Z|01675|binding|INFO|Setting lport 24b323af-568c-4843-a858-dab3f5df627f ovn-installed in OVS
Oct 02 09:19:39 compute-0 nova_compute[260603]: 2025-10-02 09:19:39.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:39 compute-0 ovn_controller[152344]: 2025-10-02T09:19:39Z|01676|binding|INFO|Setting lport 24b323af-568c-4843-a858-dab3f5df627f up in Southbound
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.501 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89'], port_security=['fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:fed6:7f89/64 2001:db8::f816:3eff:fed6:7f89/64', 'neutron:device_id': '8e2e089b-fd7f-460e-ba3f-180204d961f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f167374-6ddb-4a61-a8ba-3dc4425d2006', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=24b323af-568c-4843-a858-dab3f5df627f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.502 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 24b323af-568c-4843-a858-dab3f5df627f in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 bound to our chassis
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.504 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303
Oct 02 09:19:39 compute-0 systemd-machined[214636]: New machine qemu-184-instance-00000096.
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.525 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2dae5a-f994-4232-9588-45fcab548795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:39 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000096.
Oct 02 09:19:39 compute-0 systemd-udevd[432770]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:19:39 compute-0 NetworkManager[45129]: <info>  [1759396779.5619] device (tap24b323af-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:19:39 compute-0 NetworkManager[45129]: <info>  [1759396779.5628] device (tap24b323af-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.574 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5eb6c22-81a4-4595-9811-b8a731a17139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.580 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[94b6f98e-d021-4547-a9ce-537a8c0e8cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.616 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bfefe207-8d7c-4073-b43a-6434f534d1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.633 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eab3983a-6f2f-4e58-ab38-977a05888f3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01f3e9aa-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:77:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 6, 'rx_bytes': 2230, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 6, 'rx_bytes': 2230, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761975, 'reachable_time': 39646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 432782, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.654 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6af7f9b5-0538-4d9f-aaee-ad03a6da87b0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01f3e9aa-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761993, 'tstamp': 761993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 432783, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01f3e9aa-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761997, 'tstamp': 761997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 432783, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.657 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01f3e9aa-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:39 compute-0 nova_compute[260603]: 2025-10-02 09:19:39.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.661 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01f3e9aa-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.661 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.662 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01f3e9aa-20, col_values=(('external_ids', {'iface-id': 'b940cd75-8ee4-4c6d-be34-c1fb5851ca50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:39 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.663 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:19:39 compute-0 ceph-mon[74477]: pgmap v3045: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 02 09:19:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.532 2 DEBUG nova.compute.manager [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.534 2 DEBUG oslo_concurrency.lockutils [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.534 2 DEBUG oslo_concurrency.lockutils [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.534 2 DEBUG oslo_concurrency.lockutils [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.535 2 DEBUG nova.compute.manager [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Processing event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.721 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.722 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:19:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:40.735 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:40.737 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:19:40 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:19:40.738 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:19:40 compute-0 ceph-mon[74477]: pgmap v3046: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.982 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.983 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396780.9814334, 8e2e089b-fd7f-460e-ba3f-180204d961f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.984 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] VM Started (Lifecycle Event)
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.988 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.994 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance spawned successfully.
Oct 02 09:19:40 compute-0 nova_compute[260603]: 2025-10-02 09:19:40.997 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.206 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.213 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.369 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.370 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.371 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.372 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.372 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.373 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.454 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.455 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396780.9830508, 8e2e089b-fd7f-460e-ba3f-180204d961f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.455 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] VM Paused (Lifecycle Event)
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.622 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.628 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396780.9878767, 8e2e089b-fd7f-460e-ba3f-180204d961f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.629 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] VM Resumed (Lifecycle Event)
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.716 2 INFO nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Took 14.36 seconds to spawn the instance on the hypervisor.
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.717 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.803 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:19:41 compute-0 nova_compute[260603]: 2025-10-02 09:19:41.809 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.196 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.283 2 INFO nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Took 16.08 seconds to build instance.
Oct 02 09:19:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.559 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.559 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.726 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.997 2 DEBUG nova.compute.manager [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.998 2 DEBUG oslo_concurrency.lockutils [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:19:42 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.999 2 DEBUG oslo_concurrency.lockutils [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:19:43 compute-0 nova_compute[260603]: 2025-10-02 09:19:42.999 2 DEBUG oslo_concurrency.lockutils [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:19:43 compute-0 nova_compute[260603]: 2025-10-02 09:19:43.000 2 DEBUG nova.compute.manager [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] No waiting events found dispatching network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:19:43 compute-0 nova_compute[260603]: 2025-10-02 09:19:43.000 2 WARNING nova.compute.manager [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received unexpected event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f for instance with vm_state active and task_state None.
Oct 02 09:19:43 compute-0 ceph-mon[74477]: pgmap v3047: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 09:19:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 531 KiB/s wr, 91 op/s
Oct 02 09:19:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.877298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396784877371, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 657, "num_deletes": 251, "total_data_size": 800270, "memory_usage": 812992, "flush_reason": "Manual Compaction"}
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396784923878, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 542639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63355, "largest_seqno": 64011, "table_properties": {"data_size": 539560, "index_size": 986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8209, "raw_average_key_size": 20, "raw_value_size": 533159, "raw_average_value_size": 1342, "num_data_blocks": 44, "num_entries": 397, "num_filter_entries": 397, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396738, "oldest_key_time": 1759396738, "file_creation_time": 1759396784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 46657 microseconds, and 3381 cpu microseconds.
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.923954) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 542639 bytes OK
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.923993) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.927540) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.927562) EVENT_LOG_v1 {"time_micros": 1759396784927554, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.927592) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 796764, prev total WAL file size 796764, number of live WAL files 2.
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.928449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353032' seq:72057594037927935, type:22 .. '6D6772737461740032373534' seq:0, type:0; will stop at (end)
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(529KB)], [149(10MB)]
Oct 02 09:19:44 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396784928524, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11838114, "oldest_snapshot_seqno": -1}
Oct 02 09:19:45 compute-0 nova_compute[260603]: 2025-10-02 09:19:45.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8101 keys, 8758236 bytes, temperature: kUnknown
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396785172946, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8758236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8708582, "index_size": 28293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 212510, "raw_average_key_size": 26, "raw_value_size": 8568562, "raw_average_value_size": 1057, "num_data_blocks": 1091, "num_entries": 8101, "num_filter_entries": 8101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.173280) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8758236 bytes
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.215516) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.4 rd, 35.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.8 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(38.0) write-amplify(16.1) OK, records in: 8596, records dropped: 495 output_compression: NoCompression
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.215549) EVENT_LOG_v1 {"time_micros": 1759396785215534, "job": 92, "event": "compaction_finished", "compaction_time_micros": 244523, "compaction_time_cpu_micros": 42156, "output_level": 6, "num_output_files": 1, "total_output_size": 8758236, "num_input_records": 8596, "num_output_records": 8101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396785215917, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396785219547, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.928313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:19:45 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:19:45 compute-0 ceph-mon[74477]: pgmap v3048: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 531 KiB/s wr, 91 op/s
Oct 02 09:19:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 89 op/s
Oct 02 09:19:46 compute-0 nova_compute[260603]: 2025-10-02 09:19:46.526 2 DEBUG nova.compute.manager [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-changed-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:19:46 compute-0 nova_compute[260603]: 2025-10-02 09:19:46.526 2 DEBUG nova.compute.manager [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing instance network info cache due to event network-changed-24b323af-568c-4843-a858-dab3f5df627f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:19:46 compute-0 nova_compute[260603]: 2025-10-02 09:19:46.527 2 DEBUG oslo_concurrency.lockutils [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:19:46 compute-0 nova_compute[260603]: 2025-10-02 09:19:46.527 2 DEBUG oslo_concurrency.lockutils [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:19:46 compute-0 nova_compute[260603]: 2025-10-02 09:19:46.527 2 DEBUG nova.network.neutron [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing network info cache for port 24b323af-568c-4843-a858-dab3f5df627f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:19:47 compute-0 ceph-mon[74477]: pgmap v3049: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 89 op/s
Oct 02 09:19:47 compute-0 nova_compute[260603]: 2025-10-02 09:19:47.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 125 op/s
Oct 02 09:19:49 compute-0 nova_compute[260603]: 2025-10-02 09:19:49.157 2 DEBUG nova.network.neutron [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updated VIF entry in instance network info cache for port 24b323af-568c-4843-a858-dab3f5df627f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:19:49 compute-0 nova_compute[260603]: 2025-10-02 09:19:49.158 2 DEBUG nova.network.neutron [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:19:49 compute-0 ceph-mon[74477]: pgmap v3050: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 125 op/s
Oct 02 09:19:49 compute-0 nova_compute[260603]: 2025-10-02 09:19:49.734 2 DEBUG oslo_concurrency.lockutils [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:19:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:50 compute-0 nova_compute[260603]: 2025-10-02 09:19:50.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct 02 09:19:51 compute-0 ceph-mon[74477]: pgmap v3051: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct 02 09:19:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 109 op/s
Oct 02 09:19:52 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 02 09:19:52 compute-0 nova_compute[260603]: 2025-10-02 09:19:52.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:52 compute-0 ceph-mon[74477]: pgmap v3052: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 109 op/s
Oct 02 09:19:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 220 KiB/s wr, 73 op/s
Oct 02 09:19:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:19:55 compute-0 nova_compute[260603]: 2025-10-02 09:19:55.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:55 compute-0 ceph-mon[74477]: pgmap v3053: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 220 KiB/s wr, 73 op/s
Oct 02 09:19:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 208 KiB/s wr, 46 op/s
Oct 02 09:19:57 compute-0 ceph-mon[74477]: pgmap v3054: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 208 KiB/s wr, 46 op/s
Oct 02 09:19:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:19:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:19:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:19:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:19:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:19:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:19:57 compute-0 nova_compute[260603]: 2025-10-02 09:19:57.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:19:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Oct 02 09:19:59 compute-0 ceph-mon[74477]: pgmap v3055: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Oct 02 09:19:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:00 compute-0 nova_compute[260603]: 2025-10-02 09:20:00.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Oct 02 09:20:00 compute-0 ovn_controller[152344]: 2025-10-02T09:20:00Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:7f:89 10.100.0.8
Oct 02 09:20:00 compute-0 ovn_controller[152344]: 2025-10-02T09:20:00Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:7f:89 10.100.0.8
Oct 02 09:20:00 compute-0 ceph-mon[74477]: pgmap v3056: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Oct 02 09:20:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 2.0 MiB/s wr, 35 op/s
Oct 02 09:20:02 compute-0 ceph-mon[74477]: pgmap v3057: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 2.0 MiB/s wr, 35 op/s
Oct 02 09:20:03 compute-0 nova_compute[260603]: 2025-10-02 09:20:03.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:03 compute-0 podman[432828]: 2025-10-02 09:20:03.076956432 +0000 UTC m=+0.124699046 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:20:03 compute-0 podman[432827]: 2025-10-02 09:20:03.099028111 +0000 UTC m=+0.148165838 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:20:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Oct 02 09:20:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:05 compute-0 nova_compute[260603]: 2025-10-02 09:20:05.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:05 compute-0 ceph-mon[74477]: pgmap v3058: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Oct 02 09:20:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 1.9 MiB/s wr, 39 op/s
Oct 02 09:20:07 compute-0 podman[432870]: 2025-10-02 09:20:07.039783629 +0000 UTC m=+0.102076369 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:20:07 compute-0 podman[432871]: 2025-10-02 09:20:07.054911992 +0000 UTC m=+0.105589888 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:20:07 compute-0 nova_compute[260603]: 2025-10-02 09:20:07.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:07 compute-0 ceph-mon[74477]: pgmap v3059: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 1.9 MiB/s wr, 39 op/s
Oct 02 09:20:08 compute-0 nova_compute[260603]: 2025-10-02 09:20:08.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Oct 02 09:20:09 compute-0 sudo[432913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:09 compute-0 sudo[432913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:09 compute-0 sudo[432913]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:09 compute-0 sudo[432938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:20:09 compute-0 sudo[432938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:09 compute-0 sudo[432938]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:09 compute-0 nova_compute[260603]: 2025-10-02 09:20:09.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:09 compute-0 sudo[432963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:09 compute-0 sudo[432963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:09 compute-0 sudo[432963]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:09 compute-0 sudo[432988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:20:09 compute-0 sudo[432988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:09 compute-0 ceph-mon[74477]: pgmap v3060: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Oct 02 09:20:10 compute-0 nova_compute[260603]: 2025-10-02 09:20:10.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:10 compute-0 sudo[432988]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:10 compute-0 sudo[433045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:10 compute-0 sudo[433045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:10 compute-0 sudo[433045]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 88 KiB/s wr, 25 op/s
Oct 02 09:20:10 compute-0 sudo[433070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:20:10 compute-0 sudo[433070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:10 compute-0 sudo[433070]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:10 compute-0 sudo[433095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:10 compute-0 sudo[433095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:10 compute-0 sudo[433095]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:10 compute-0 nova_compute[260603]: 2025-10-02 09:20:10.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:10 compute-0 nova_compute[260603]: 2025-10-02 09:20:10.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:20:10 compute-0 nova_compute[260603]: 2025-10-02 09:20:10.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:20:10 compute-0 sudo[433120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- inventory --format=json-pretty --filter-for-batch
Oct 02 09:20:10 compute-0 sudo[433120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:10 compute-0 ceph-mon[74477]: pgmap v3061: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 88 KiB/s wr, 25 op/s
Oct 02 09:20:11 compute-0 nova_compute[260603]: 2025-10-02 09:20:11.148 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:20:11 compute-0 nova_compute[260603]: 2025-10-02 09:20:11.149 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:20:11 compute-0 nova_compute[260603]: 2025-10-02 09:20:11.149 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:20:11 compute-0 nova_compute[260603]: 2025-10-02 09:20:11.150 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:20:11 compute-0 podman[433185]: 2025-10-02 09:20:11.106051609 +0000 UTC m=+0.047453093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:20:11 compute-0 podman[433185]: 2025-10-02 09:20:11.689288497 +0000 UTC m=+0.630689951 container create 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:20:12 compute-0 systemd[1]: Started libpod-conmon-9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235.scope.
Oct 02 09:20:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:20:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 115 KiB/s wr, 33 op/s
Oct 02 09:20:12 compute-0 podman[433185]: 2025-10-02 09:20:12.591111915 +0000 UTC m=+1.532513439 container init 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:20:12 compute-0 podman[433185]: 2025-10-02 09:20:12.607037802 +0000 UTC m=+1.548439266 container start 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:20:12 compute-0 systemd[1]: libpod-9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235.scope: Deactivated successfully.
Oct 02 09:20:12 compute-0 loving_proskuriakova[433201]: 167 167
Oct 02 09:20:12 compute-0 conmon[433201]: conmon 9aa8e7da49965b741110 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235.scope/container/memory.events
Oct 02 09:20:12 compute-0 podman[433185]: 2025-10-02 09:20:12.990008754 +0000 UTC m=+1.931410278 container attach 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:20:12 compute-0 podman[433185]: 2025-10-02 09:20:12.991193951 +0000 UTC m=+1.932595435 container died 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.310 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:20:13 compute-0 ceph-mon[74477]: pgmap v3062: 305 pgs: 305 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 115 KiB/s wr, 33 op/s
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.397 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.398 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.399 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.597 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.599 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:20:13 compute-0 nova_compute[260603]: 2025-10-02 09:20:13.601 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3f32bb23ce4910bbf87002928a59ecc107de40b099840cb4cf0b737146453c-merged.mount: Deactivated successfully.
Oct 02 09:20:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:20:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344404463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.054 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.194 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.194 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.201 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.201 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:20:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 93 KiB/s wr, 27 op/s
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.490 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.492 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3103MB free_disk=59.89738464355469GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.493 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.493 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:14 compute-0 podman[433185]: 2025-10-02 09:20:14.609874749 +0000 UTC m=+3.551276233 container remove 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.633 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d2fca23c-2574-4642-b931-f363d59bd5a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.633 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 8e2e089b-fd7f-460e-ba3f-180204d961f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.633 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.634 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.652 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.672 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.672 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.688 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.716 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:20:14 compute-0 systemd[1]: libpod-conmon-9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235.scope: Deactivated successfully.
Oct 02 09:20:14 compute-0 nova_compute[260603]: 2025-10-02 09:20:14.780 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:20:14 compute-0 podman[433249]: 2025-10-02 09:20:14.845810839 +0000 UTC m=+0.030823834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:20:15 compute-0 nova_compute[260603]: 2025-10-02 09:20:15.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:15 compute-0 podman[433249]: 2025-10-02 09:20:15.332536791 +0000 UTC m=+0.517549726 container create 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:20:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:20:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77357292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:20:15 compute-0 nova_compute[260603]: 2025-10-02 09:20:15.400 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:20:15 compute-0 nova_compute[260603]: 2025-10-02 09:20:15.409 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:20:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:15 compute-0 nova_compute[260603]: 2025-10-02 09:20:15.602 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:20:15 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1344404463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:20:15 compute-0 ceph-mon[74477]: pgmap v3063: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 93 KiB/s wr, 27 op/s
Oct 02 09:20:15 compute-0 systemd[1]: Started libpod-conmon-5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f.scope.
Oct 02 09:20:15 compute-0 nova_compute[260603]: 2025-10-02 09:20:15.733 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:20:15 compute-0 nova_compute[260603]: 2025-10-02 09:20:15.734 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:15 compute-0 podman[433249]: 2025-10-02 09:20:15.89149867 +0000 UTC m=+1.076511665 container init 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 09:20:15 compute-0 podman[433249]: 2025-10-02 09:20:15.905641422 +0000 UTC m=+1.090654367 container start 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 09:20:15 compute-0 ovn_controller[152344]: 2025-10-02T09:20:15Z|01677|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct 02 09:20:16 compute-0 podman[433249]: 2025-10-02 09:20:16.164299381 +0000 UTC m=+1.349312326 container attach 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:20:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 82 KiB/s wr, 14 op/s
Oct 02 09:20:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/77357292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:20:16 compute-0 ceph-mon[74477]: pgmap v3064: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 82 KiB/s wr, 14 op/s
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]: [
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:     {
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         "available": false,
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         "ceph_device": false,
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         "lsm_data": {},
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         "lvs": [],
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         "path": "/dev/sr0",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         "rejected_reasons": [
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "Has a FileSystem",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "Insufficient space (<5GB)"
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         ],
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         "sys_api": {
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "actuators": null,
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "device_nodes": "sr0",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "devname": "sr0",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "human_readable_size": "482.00 KB",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "id_bus": "ata",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "model": "QEMU DVD-ROM",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "nr_requests": "2",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "parent": "/dev/sr0",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "partitions": {},
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "path": "/dev/sr0",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "removable": "1",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "rev": "2.5+",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "ro": "0",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "rotational": "0",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "sas_address": "",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "sas_device_handle": "",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "scheduler_mode": "mq-deadline",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "sectors": 0,
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "sectorsize": "2048",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "size": 493568.0,
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "support_discard": "2048",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "type": "disk",
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:             "vendor": "QEMU"
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:         }
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]:     }
Oct 02 09:20:17 compute-0 frosty_proskuriakova[433289]: ]
Oct 02 09:20:17 compute-0 systemd[1]: libpod-5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f.scope: Deactivated successfully.
Oct 02 09:20:17 compute-0 systemd[1]: libpod-5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f.scope: Consumed 1.890s CPU time.
Oct 02 09:20:17 compute-0 podman[435437]: 2025-10-02 09:20:17.789058909 +0000 UTC m=+0.032420453 container died 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 09:20:18 compute-0 nova_compute[260603]: 2025-10-02 09:20:18.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 82 KiB/s wr, 14 op/s
Oct 02 09:20:19 compute-0 ceph-mon[74477]: pgmap v3065: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 82 KiB/s wr, 14 op/s
Oct 02 09:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda-merged.mount: Deactivated successfully.
Oct 02 09:20:19 compute-0 nova_compute[260603]: 2025-10-02 09:20:19.730 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:19 compute-0 nova_compute[260603]: 2025-10-02 09:20:19.731 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:19 compute-0 podman[435437]: 2025-10-02 09:20:19.83237047 +0000 UTC m=+2.075732024 container remove 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:20:19 compute-0 systemd[1]: libpod-conmon-5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f.scope: Deactivated successfully.
Oct 02 09:20:19 compute-0 sudo[433120]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:20:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:20:20 compute-0 nova_compute[260603]: 2025-10-02 09:20:20.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:20:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:20:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:20:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:20:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:20:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev adb8a190-9b7d-482e-a31c-8be5098ea5e7 does not exist
Oct 02 09:20:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ccae9128-3274-43bc-bdd1-fea366fa6f32 does not exist
Oct 02 09:20:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9e0ce440-92f8-4884-9867-323d6f695cf7 does not exist
Oct 02 09:20:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:20:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:20:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 38 KiB/s wr, 12 op/s
Oct 02 09:20:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:20:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:20:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:20:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:20:20 compute-0 sudo[435453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:20 compute-0 sudo[435453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:20 compute-0 sudo[435453]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:20 compute-0 sudo[435478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:20:20 compute-0 sudo[435478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:20 compute-0 sudo[435478]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:20 compute-0 sudo[435503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:20 compute-0 sudo[435503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:20 compute-0 sudo[435503]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:20 compute-0 sudo[435528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:20:20 compute-0 sudo[435528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:20:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:20:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:20:21 compute-0 ceph-mon[74477]: pgmap v3066: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 38 KiB/s wr, 12 op/s
Oct 02 09:20:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:20:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:20:21 compute-0 podman[435593]: 2025-10-02 09:20:21.26859361 +0000 UTC m=+0.089867888 container create 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 09:20:21 compute-0 podman[435593]: 2025-10-02 09:20:21.222729678 +0000 UTC m=+0.044004006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:20:21 compute-0 systemd[1]: Started libpod-conmon-1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026.scope.
Oct 02 09:20:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:20:21 compute-0 podman[435593]: 2025-10-02 09:20:21.493253057 +0000 UTC m=+0.314527315 container init 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:20:21 compute-0 podman[435593]: 2025-10-02 09:20:21.505976585 +0000 UTC m=+0.327250853 container start 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:20:21 compute-0 keen_panini[435609]: 167 167
Oct 02 09:20:21 compute-0 systemd[1]: libpod-1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026.scope: Deactivated successfully.
Oct 02 09:20:21 compute-0 podman[435593]: 2025-10-02 09:20:21.573634787 +0000 UTC m=+0.394909035 container attach 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:20:21 compute-0 podman[435593]: 2025-10-02 09:20:21.574904478 +0000 UTC m=+0.396178716 container died 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 09:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-efc295c31643961b652803807db539db76a05625450a5ce2c957520be685bb46-merged.mount: Deactivated successfully.
Oct 02 09:20:22 compute-0 podman[435593]: 2025-10-02 09:20:22.130397658 +0000 UTC m=+0.951671906 container remove 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:20:22 compute-0 systemd[1]: libpod-conmon-1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026.scope: Deactivated successfully.
Oct 02 09:20:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:20:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497119359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:20:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:20:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497119359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:20:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 56 KiB/s wr, 13 op/s
Oct 02 09:20:22 compute-0 podman[435635]: 2025-10-02 09:20:22.357495881 +0000 UTC m=+0.034464607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:20:22 compute-0 podman[435635]: 2025-10-02 09:20:22.488629337 +0000 UTC m=+0.165598013 container create 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:20:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1497119359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:20:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1497119359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:20:22 compute-0 systemd[1]: Started libpod-conmon-3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d.scope.
Oct 02 09:20:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:22 compute-0 podman[435635]: 2025-10-02 09:20:22.890865961 +0000 UTC m=+0.567834657 container init 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:20:22 compute-0 podman[435635]: 2025-10-02 09:20:22.897494938 +0000 UTC m=+0.574463624 container start 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:20:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:22.984 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:20:22 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:22.986 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:20:23 compute-0 podman[435635]: 2025-10-02 09:20:23.00834129 +0000 UTC m=+0.685310006 container attach 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.237 2 DEBUG nova.compute.manager [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-changed-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.238 2 DEBUG nova.compute.manager [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing instance network info cache due to event network-changed-24b323af-568c-4843-a858-dab3f5df627f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.239 2 DEBUG oslo_concurrency.lockutils [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.239 2 DEBUG oslo_concurrency.lockutils [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.240 2 DEBUG nova.network.neutron [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing network info cache for port 24b323af-568c-4843-a858-dab3f5df627f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.403 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.404 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.405 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.406 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.406 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.408 2 INFO nova.compute.manager [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Terminating instance
Oct 02 09:20:23 compute-0 nova_compute[260603]: 2025-10-02 09:20:23.410 2 DEBUG nova.compute.manager [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:20:23 compute-0 ceph-mon[74477]: pgmap v3067: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 56 KiB/s wr, 13 op/s
Oct 02 09:20:23 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:23.989 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:20:24 compute-0 nice_lamport[435651]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:20:24 compute-0 nice_lamport[435651]: --> relative data size: 1.0
Oct 02 09:20:24 compute-0 nice_lamport[435651]: --> All data devices are unavailable
Oct 02 09:20:24 compute-0 systemd[1]: libpod-3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d.scope: Deactivated successfully.
Oct 02 09:20:24 compute-0 systemd[1]: libpod-3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d.scope: Consumed 1.058s CPU time.
Oct 02 09:20:24 compute-0 podman[435635]: 2025-10-02 09:20:24.06646838 +0000 UTC m=+1.743437116 container died 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:20:24 compute-0 kernel: tap24b323af-56 (unregistering): left promiscuous mode
Oct 02 09:20:24 compute-0 NetworkManager[45129]: <info>  [1759396824.3228] device (tap24b323af-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:24 compute-0 ovn_controller[152344]: 2025-10-02T09:20:24Z|01678|binding|INFO|Releasing lport 24b323af-568c-4843-a858-dab3f5df627f from this chassis (sb_readonly=0)
Oct 02 09:20:24 compute-0 ovn_controller[152344]: 2025-10-02T09:20:24Z|01679|binding|INFO|Setting lport 24b323af-568c-4843-a858-dab3f5df627f down in Southbound
Oct 02 09:20:24 compute-0 ovn_controller[152344]: 2025-10-02T09:20:24Z|01680|binding|INFO|Removing iface tap24b323af-56 ovn-installed in OVS
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 31 KiB/s wr, 5 op/s
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.366 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89'], port_security=['fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:fed6:7f89/64 2001:db8::f816:3eff:fed6:7f89/64', 'neutron:device_id': '8e2e089b-fd7f-460e-ba3f-180204d961f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f167374-6ddb-4a61-a8ba-3dc4425d2006', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=24b323af-568c-4843-a858-dab3f5df627f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.368 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 24b323af-568c-4843-a858-dab3f5df627f in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 unbound from our chassis
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.369 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[155c128d-70b5-4045-b77d-408bec0b3751]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:24 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct 02 09:20:24 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Consumed 15.694s CPU time.
Oct 02 09:20:24 compute-0 systemd-machined[214636]: Machine qemu-184-instance-00000096 terminated.
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.449 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ecc9d3-1d4a-4d0e-9a45-4f495413c3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.453 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e276990f-e107-4e47-9364-e479e6357444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.465 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance destroyed successfully.
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.466 2 DEBUG nova.objects.instance [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 8e2e089b-fd7f-460e-ba3f-180204d961f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.510 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8208386b-b466-44ed-8ba5-48499c619f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.512 2 DEBUG nova.virt.libvirt.vif [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:19:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1224861595',display_name='tempest-TestGettingAddress-server-1224861595',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1224861595',id=150,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:19:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-s7sei7n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:19:41Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=8e2e089b-fd7f-460e-ba3f-180204d961f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.512 2 DEBUG nova.network.os_vif_util [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.513 2 DEBUG nova.network.os_vif_util [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.514 2 DEBUG os_vif [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24b323af-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.522 2 INFO os_vif [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56')
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.537 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df26c253-3730-4804-9608-2c5c80409808]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01f3e9aa-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:77:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3628, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3628, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761975, 'reachable_time': 39646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 435712, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.560 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab3f836-8ae0-4b5b-906e-3a2aa51a460b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01f3e9aa-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761993, 'tstamp': 761993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 435728, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01f3e9aa-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761997, 'tstamp': 761997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 435728, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.562 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01f3e9aa-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:24 compute-0 nova_compute[260603]: 2025-10-02 09:20:24.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01f3e9aa-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01f3e9aa-20, col_values=(('external_ids', {'iface-id': 'b940cd75-8ee4-4c6d-be34-c1fb5851ca50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:20:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.567 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1-merged.mount: Deactivated successfully.
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.009 2 DEBUG nova.network.neutron [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updated VIF entry in instance network info cache for port 24b323af-568c-4843-a858-dab3f5df627f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.010 2 DEBUG nova.network.neutron [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.089 2 DEBUG nova.compute.manager [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-unplugged-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.089 2 DEBUG oslo_concurrency.lockutils [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.090 2 DEBUG oslo_concurrency.lockutils [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.090 2 DEBUG oslo_concurrency.lockutils [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.090 2 DEBUG nova.compute.manager [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] No waiting events found dispatching network-vif-unplugged-24b323af-568c-4843-a858-dab3f5df627f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.090 2 DEBUG nova.compute.manager [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-unplugged-24b323af-568c-4843-a858-dab3f5df627f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:20:25 compute-0 nova_compute[260603]: 2025-10-02 09:20:25.095 2 DEBUG oslo_concurrency.lockutils [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:20:25 compute-0 ceph-mon[74477]: pgmap v3068: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 31 KiB/s wr, 5 op/s
Oct 02 09:20:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:25 compute-0 podman[435635]: 2025-10-02 09:20:25.779225977 +0000 UTC m=+3.456194653 container remove 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:20:25 compute-0 sudo[435528]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:25 compute-0 systemd[1]: libpod-conmon-3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d.scope: Deactivated successfully.
Oct 02 09:20:25 compute-0 sudo[435733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:25 compute-0 sudo[435733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:25 compute-0 sudo[435733]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:25 compute-0 sudo[435758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:20:25 compute-0 sudo[435758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:25 compute-0 sudo[435758]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:26 compute-0 sudo[435783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:26 compute-0 sudo[435783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:26 compute-0 sudo[435783]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:26 compute-0 sudo[435808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:20:26 compute-0 sudo[435808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 19 KiB/s wr, 1 op/s
Oct 02 09:20:26 compute-0 nova_compute[260603]: 2025-10-02 09:20:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:26 compute-0 podman[435873]: 2025-10-02 09:20:26.482406011 +0000 UTC m=+0.025098005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:20:26 compute-0 podman[435873]: 2025-10-02 09:20:26.702218657 +0000 UTC m=+0.244910601 container create 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:20:26 compute-0 ceph-mon[74477]: pgmap v3069: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 19 KiB/s wr, 1 op/s
Oct 02 09:20:26 compute-0 systemd[1]: Started libpod-conmon-73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362.scope.
Oct 02 09:20:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:20:27 compute-0 nova_compute[260603]: 2025-10-02 09:20:27.183 2 DEBUG nova.compute.manager [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:20:27 compute-0 nova_compute[260603]: 2025-10-02 09:20:27.184 2 DEBUG oslo_concurrency.lockutils [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:27 compute-0 nova_compute[260603]: 2025-10-02 09:20:27.184 2 DEBUG oslo_concurrency.lockutils [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:27 compute-0 nova_compute[260603]: 2025-10-02 09:20:27.185 2 DEBUG oslo_concurrency.lockutils [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:27 compute-0 nova_compute[260603]: 2025-10-02 09:20:27.185 2 DEBUG nova.compute.manager [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] No waiting events found dispatching network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:20:27 compute-0 nova_compute[260603]: 2025-10-02 09:20:27.185 2 WARNING nova.compute.manager [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received unexpected event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f for instance with vm_state active and task_state deleting.
Oct 02 09:20:27 compute-0 podman[435873]: 2025-10-02 09:20:27.193044237 +0000 UTC m=+0.735736271 container init 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:20:27 compute-0 podman[435873]: 2025-10-02 09:20:27.207581491 +0000 UTC m=+0.750273435 container start 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 09:20:27 compute-0 brave_torvalds[435889]: 167 167
Oct 02 09:20:27 compute-0 systemd[1]: libpod-73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362.scope: Deactivated successfully.
Oct 02 09:20:27 compute-0 podman[435873]: 2025-10-02 09:20:27.360461167 +0000 UTC m=+0.903153151 container attach 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:20:27 compute-0 podman[435873]: 2025-10-02 09:20:27.363026876 +0000 UTC m=+0.905718880 container died 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f8c318215ab2256c81362e4aa47c528e318ebc1da26c38e7aeb2cbfba238e4e-merged.mount: Deactivated successfully.
Oct 02 09:20:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:20:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:20:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:20:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:20:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:20:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:20:28
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data']
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:20:28 compute-0 nova_compute[260603]: 2025-10-02 09:20:28.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:20:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:20:28 compute-0 podman[435873]: 2025-10-02 09:20:28.813708309 +0000 UTC m=+2.356400283 container remove 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:20:28 compute-0 systemd[1]: libpod-conmon-73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362.scope: Deactivated successfully.
Oct 02 09:20:29 compute-0 ceph-mon[74477]: pgmap v3070: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct 02 09:20:29 compute-0 podman[435914]: 2025-10-02 09:20:29.032339787 +0000 UTC m=+0.027177550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:20:29 compute-0 podman[435914]: 2025-10-02 09:20:29.259013727 +0000 UTC m=+0.253851410 container create 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:20:29 compute-0 systemd[1]: Started libpod-conmon-261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3.scope.
Oct 02 09:20:29 compute-0 nova_compute[260603]: 2025-10-02 09:20:29.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:29 compute-0 podman[435914]: 2025-10-02 09:20:29.658077101 +0000 UTC m=+0.652914844 container init 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 09:20:29 compute-0 podman[435914]: 2025-10-02 09:20:29.668486096 +0000 UTC m=+0.663323759 container start 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 09:20:29 compute-0 podman[435914]: 2025-10-02 09:20:29.814879259 +0000 UTC m=+0.809716952 container attach 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:20:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct 02 09:20:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:30 compute-0 zealous_panini[435930]: {
Oct 02 09:20:30 compute-0 zealous_panini[435930]:     "0": [
Oct 02 09:20:30 compute-0 zealous_panini[435930]:         {
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "devices": [
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "/dev/loop3"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             ],
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_name": "ceph_lv0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_size": "21470642176",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "name": "ceph_lv0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "tags": {
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cluster_name": "ceph",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.crush_device_class": "",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.encrypted": "0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osd_id": "0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.type": "block",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.vdo": "0"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             },
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "type": "block",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "vg_name": "ceph_vg0"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:         }
Oct 02 09:20:30 compute-0 zealous_panini[435930]:     ],
Oct 02 09:20:30 compute-0 zealous_panini[435930]:     "1": [
Oct 02 09:20:30 compute-0 zealous_panini[435930]:         {
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "devices": [
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "/dev/loop4"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             ],
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_name": "ceph_lv1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_size": "21470642176",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "name": "ceph_lv1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "tags": {
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cluster_name": "ceph",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.crush_device_class": "",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.encrypted": "0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osd_id": "1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.type": "block",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.vdo": "0"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             },
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "type": "block",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "vg_name": "ceph_vg1"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:         }
Oct 02 09:20:30 compute-0 zealous_panini[435930]:     ],
Oct 02 09:20:30 compute-0 zealous_panini[435930]:     "2": [
Oct 02 09:20:30 compute-0 zealous_panini[435930]:         {
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "devices": [
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "/dev/loop5"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             ],
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_name": "ceph_lv2",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_size": "21470642176",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "name": "ceph_lv2",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "tags": {
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.cluster_name": "ceph",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.crush_device_class": "",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.encrypted": "0",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osd_id": "2",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.type": "block",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:                 "ceph.vdo": "0"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             },
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "type": "block",
Oct 02 09:20:30 compute-0 zealous_panini[435930]:             "vg_name": "ceph_vg2"
Oct 02 09:20:30 compute-0 zealous_panini[435930]:         }
Oct 02 09:20:30 compute-0 zealous_panini[435930]:     ]
Oct 02 09:20:30 compute-0 zealous_panini[435930]: }
Oct 02 09:20:30 compute-0 systemd[1]: libpod-261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3.scope: Deactivated successfully.
Oct 02 09:20:30 compute-0 podman[435940]: 2025-10-02 09:20:30.737339652 +0000 UTC m=+0.034134977 container died 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 09:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d-merged.mount: Deactivated successfully.
Oct 02 09:20:31 compute-0 podman[435940]: 2025-10-02 09:20:31.13426588 +0000 UTC m=+0.431061155 container remove 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:20:31 compute-0 systemd[1]: libpod-conmon-261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3.scope: Deactivated successfully.
Oct 02 09:20:31 compute-0 sudo[435808]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:31 compute-0 sudo[435955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:31 compute-0 sudo[435955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:31 compute-0 sudo[435955]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:31 compute-0 sudo[435980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:20:31 compute-0 sudo[435980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:31 compute-0 sudo[435980]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:31 compute-0 sudo[436005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:31 compute-0 sudo[436005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:31 compute-0 sudo[436005]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:31 compute-0 sudo[436030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:20:31 compute-0 sudo[436030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:31 compute-0 ceph-mon[74477]: pgmap v3071: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct 02 09:20:32 compute-0 podman[436099]: 2025-10-02 09:20:31.909196584 +0000 UTC m=+0.029186703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:20:32 compute-0 podman[436099]: 2025-10-02 09:20:32.036392957 +0000 UTC m=+0.156383066 container create 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 09:20:32 compute-0 systemd[1]: Started libpod-conmon-69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133.scope.
Oct 02 09:20:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:20:32 compute-0 podman[436099]: 2025-10-02 09:20:32.277937701 +0000 UTC m=+0.397927870 container init 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:20:32 compute-0 podman[436099]: 2025-10-02 09:20:32.291310569 +0000 UTC m=+0.411300638 container start 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 02 09:20:32 compute-0 busy_ramanujan[436116]: 167 167
Oct 02 09:20:32 compute-0 systemd[1]: libpod-69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133.scope: Deactivated successfully.
Oct 02 09:20:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 154 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 26 KiB/s wr, 27 op/s
Oct 02 09:20:32 compute-0 podman[436099]: 2025-10-02 09:20:32.438567629 +0000 UTC m=+0.558557748 container attach 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:20:32 compute-0 podman[436099]: 2025-10-02 09:20:32.439919662 +0000 UTC m=+0.559909741 container died 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:20:32 compute-0 nova_compute[260603]: 2025-10-02 09:20:32.485 2 INFO nova.virt.libvirt.driver [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Deleting instance files /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8_del
Oct 02 09:20:32 compute-0 nova_compute[260603]: 2025-10-02 09:20:32.489 2 INFO nova.virt.libvirt.driver [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Deletion of /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8_del complete
Oct 02 09:20:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ea02ab45280cbfe5a151b797d108c946668ec573ebe312aa7eaad1fdb3cf22-merged.mount: Deactivated successfully.
Oct 02 09:20:32 compute-0 nova_compute[260603]: 2025-10-02 09:20:32.588 2 INFO nova.compute.manager [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Took 9.18 seconds to destroy the instance on the hypervisor.
Oct 02 09:20:32 compute-0 nova_compute[260603]: 2025-10-02 09:20:32.588 2 DEBUG oslo.service.loopingcall [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:20:32 compute-0 nova_compute[260603]: 2025-10-02 09:20:32.589 2 DEBUG nova.compute.manager [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:20:32 compute-0 nova_compute[260603]: 2025-10-02 09:20:32.589 2 DEBUG nova.network.neutron [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:20:32 compute-0 podman[436099]: 2025-10-02 09:20:32.66656347 +0000 UTC m=+0.786553549 container remove 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 09:20:32 compute-0 systemd[1]: libpod-conmon-69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133.scope: Deactivated successfully.
Oct 02 09:20:32 compute-0 podman[436141]: 2025-10-02 09:20:32.901826209 +0000 UTC m=+0.029216813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:20:33 compute-0 podman[436141]: 2025-10-02 09:20:33.050388519 +0000 UTC m=+0.177779023 container create 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:33 compute-0 systemd[1]: Started libpod-conmon-2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028.scope.
Oct 02 09:20:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:20:33 compute-0 podman[436141]: 2025-10-02 09:20:33.273017423 +0000 UTC m=+0.400408017 container init 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 09:20:33 compute-0 podman[436141]: 2025-10-02 09:20:33.281165408 +0000 UTC m=+0.408555912 container start 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:20:33 compute-0 podman[436141]: 2025-10-02 09:20:33.337216208 +0000 UTC m=+0.464606812 container attach 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:20:33 compute-0 podman[436160]: 2025-10-02 09:20:33.404599192 +0000 UTC m=+0.216475012 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 09:20:33 compute-0 podman[436158]: 2025-10-02 09:20:33.428087436 +0000 UTC m=+0.242103342 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.451 2 DEBUG nova.network.neutron [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.504 2 DEBUG nova.compute.manager [req-e2030448-6bfd-4c6e-a13d-a91040060d90 req-6fbc048f-8e25-4f0a-9a54-5bcb72de0158 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-deleted-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.504 2 INFO nova.compute.manager [req-e2030448-6bfd-4c6e-a13d-a91040060d90 req-6fbc048f-8e25-4f0a-9a54-5bcb72de0158 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Neutron deleted interface 24b323af-568c-4843-a858-dab3f5df627f; detaching it from the instance and deleting it from the info cache
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.504 2 DEBUG nova.network.neutron [req-e2030448-6bfd-4c6e-a13d-a91040060d90 req-6fbc048f-8e25-4f0a-9a54-5bcb72de0158 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.547 2 INFO nova.compute.manager [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Took 0.96 seconds to deallocate network for instance.
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.558 2 DEBUG nova.compute.manager [req-e2030448-6bfd-4c6e-a13d-a91040060d90 req-6fbc048f-8e25-4f0a-9a54-5bcb72de0158 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Detach interface failed, port_id=24b323af-568c-4843-a858-dab3f5df627f, reason: Instance 8e2e089b-fd7f-460e-ba3f-180204d961f8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.646 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.647 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:33 compute-0 nova_compute[260603]: 2025-10-02 09:20:33.730 2 DEBUG oslo_concurrency.processutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:20:33 compute-0 ceph-mon[74477]: pgmap v3072: 305 pgs: 305 active+clean; 154 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 26 KiB/s wr, 27 op/s
Oct 02 09:20:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:20:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893269694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:20:34 compute-0 nova_compute[260603]: 2025-10-02 09:20:34.160 2 DEBUG oslo_concurrency.processutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:20:34 compute-0 nova_compute[260603]: 2025-10-02 09:20:34.168 2 DEBUG nova.compute.provider_tree [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:20:34 compute-0 nova_compute[260603]: 2025-10-02 09:20:34.189 2 DEBUG nova.scheduler.client.report [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:20:34 compute-0 nova_compute[260603]: 2025-10-02 09:20:34.238 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:34 compute-0 tender_leakey[436157]: {
Oct 02 09:20:34 compute-0 tender_leakey[436157]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "osd_id": 2,
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "type": "bluestore"
Oct 02 09:20:34 compute-0 tender_leakey[436157]:     },
Oct 02 09:20:34 compute-0 tender_leakey[436157]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "osd_id": 1,
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "type": "bluestore"
Oct 02 09:20:34 compute-0 tender_leakey[436157]:     },
Oct 02 09:20:34 compute-0 tender_leakey[436157]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "osd_id": 0,
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:20:34 compute-0 tender_leakey[436157]:         "type": "bluestore"
Oct 02 09:20:34 compute-0 tender_leakey[436157]:     }
Oct 02 09:20:34 compute-0 tender_leakey[436157]: }
Oct 02 09:20:34 compute-0 nova_compute[260603]: 2025-10-02 09:20:34.288 2 INFO nova.scheduler.client.report [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 8e2e089b-fd7f-460e-ba3f-180204d961f8
Oct 02 09:20:34 compute-0 systemd[1]: libpod-2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028.scope: Deactivated successfully.
Oct 02 09:20:34 compute-0 podman[436141]: 2025-10-02 09:20:34.323933768 +0000 UTC m=+1.451324282 container died 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:20:34 compute-0 systemd[1]: libpod-2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028.scope: Consumed 1.042s CPU time.
Oct 02 09:20:34 compute-0 nova_compute[260603]: 2025-10-02 09:20:34.368 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Oct 02 09:20:34 compute-0 nova_compute[260603]: 2025-10-02 09:20:34.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e-merged.mount: Deactivated successfully.
Oct 02 09:20:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:34.858 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:34.859 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:34.860 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/893269694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:20:34 compute-0 ceph-mon[74477]: pgmap v3073: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Oct 02 09:20:35 compute-0 podman[436141]: 2025-10-02 09:20:35.063070165 +0000 UTC m=+2.190460699 container remove 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:20:35 compute-0 systemd[1]: libpod-conmon-2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028.scope: Deactivated successfully.
Oct 02 09:20:35 compute-0 sudo[436030]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:20:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:20:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a8fd93ee-7c27-40e9-9582-15b9b037e9a7 does not exist
Oct 02 09:20:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ce33eac2-1642-4d77-a90f-da9573586a5f does not exist
Oct 02 09:20:35 compute-0 sudo[436268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:20:35 compute-0 sudo[436268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:35 compute-0 sudo[436268]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:35 compute-0 sudo[436293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:20:35 compute-0 sudo[436293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:20:35 compute-0 sudo[436293]: pam_unix(sudo:session): session closed for user root
Oct 02 09:20:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:35 compute-0 nova_compute[260603]: 2025-10-02 09:20:35.947 2 DEBUG nova.compute.manager [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-changed-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:20:35 compute-0 nova_compute[260603]: 2025-10-02 09:20:35.949 2 DEBUG nova.compute.manager [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing instance network info cache due to event network-changed-dac6b720-36af-4048-863c-cd259065f82e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:20:35 compute-0 nova_compute[260603]: 2025-10-02 09:20:35.949 2 DEBUG oslo_concurrency.lockutils [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:20:35 compute-0 nova_compute[260603]: 2025-10-02 09:20:35.950 2 DEBUG oslo_concurrency.lockutils [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:20:35 compute-0 nova_compute[260603]: 2025-10-02 09:20:35.950 2 DEBUG nova.network.neutron [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing network info cache for port dac6b720-36af-4048-863c-cd259065f82e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.089 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.089 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.090 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.090 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.090 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.092 2 INFO nova.compute.manager [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Terminating instance
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.093 2 DEBUG nova.compute.manager [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:20:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:20:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 27 op/s
Oct 02 09:20:36 compute-0 kernel: tapdac6b720-36 (unregistering): left promiscuous mode
Oct 02 09:20:36 compute-0 NetworkManager[45129]: <info>  [1759396836.3970] device (tapdac6b720-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:36 compute-0 ovn_controller[152344]: 2025-10-02T09:20:36Z|01681|binding|INFO|Releasing lport dac6b720-36af-4048-863c-cd259065f82e from this chassis (sb_readonly=0)
Oct 02 09:20:36 compute-0 ovn_controller[152344]: 2025-10-02T09:20:36Z|01682|binding|INFO|Setting lport dac6b720-36af-4048-863c-cd259065f82e down in Southbound
Oct 02 09:20:36 compute-0 ovn_controller[152344]: 2025-10-02T09:20:36Z|01683|binding|INFO|Removing iface tapdac6b720-36 ovn-installed in OVS
Oct 02 09:20:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.421 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0'], port_security=['fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe6a:aa0/64 2001:db8::f816:3eff:fe6a:aa0/64', 'neutron:device_id': 'd2fca23c-2574-4642-b931-f363d59bd5a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f167374-6ddb-4a61-a8ba-3dc4425d2006', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dac6b720-36af-4048-863c-cd259065f82e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:20:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.422 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dac6b720-36af-4048-863c-cd259065f82e in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 unbound from our chassis
Oct 02 09:20:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.423 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:20:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.425 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7dea8400-3660-41c1-a631-9eadde9a8db7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:36 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.427 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 namespace which is not needed anymore
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:36 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct 02 09:20:36 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Consumed 17.420s CPU time.
Oct 02 09:20:36 compute-0 systemd-machined[214636]: Machine qemu-183-instance-00000095 terminated.
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.550 2 INFO nova.virt.libvirt.driver [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance destroyed successfully.
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.551 2 DEBUG nova.objects.instance [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.583 2 DEBUG nova.virt.libvirt.vif [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-39754368',display_name='tempest-TestGettingAddress-server-39754368',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-39754368',id=149,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:19:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-eaeyiug0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:19:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=d2fca23c-2574-4642-b931-f363d59bd5a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.584 2 DEBUG nova.network.os_vif_util [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.585 2 DEBUG nova.network.os_vif_util [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.586 2 DEBUG os_vif [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.588 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdac6b720-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:36 compute-0 nova_compute[260603]: 2025-10-02 09:20:36.644 2 INFO os_vif [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36')
Oct 02 09:20:36 compute-0 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [NOTICE]   (431715) : haproxy version is 2.8.14-c23fe91
Oct 02 09:20:36 compute-0 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [NOTICE]   (431715) : path to executable is /usr/sbin/haproxy
Oct 02 09:20:36 compute-0 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [WARNING]  (431715) : Exiting Master process...
Oct 02 09:20:36 compute-0 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [ALERT]    (431715) : Current worker (431717) exited with code 143 (Terminated)
Oct 02 09:20:36 compute-0 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [WARNING]  (431715) : All workers exited. Exiting... (0)
Oct 02 09:20:36 compute-0 systemd[1]: libpod-77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8.scope: Deactivated successfully.
Oct 02 09:20:36 compute-0 podman[436351]: 2025-10-02 09:20:36.714850587 +0000 UTC m=+0.156993845 container died 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 09:20:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8-userdata-shm.mount: Deactivated successfully.
Oct 02 09:20:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fda16057209f96f06d9ded64e1e5d1b0e039e5d2ee796001d3308370659566c-merged.mount: Deactivated successfully.
Oct 02 09:20:36 compute-0 podman[436351]: 2025-10-02 09:20:36.932212596 +0000 UTC m=+0.374355834 container cleanup 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:20:36 compute-0 systemd[1]: libpod-conmon-77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8.scope: Deactivated successfully.
Oct 02 09:20:37 compute-0 podman[436402]: 2025-10-02 09:20:37.18654958 +0000 UTC m=+0.214698237 container remove 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c06c9e2b-36bb-46aa-87fa-6bbdd76eb7d6]: (4, ('Thu Oct  2 09:20:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 (77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8)\n77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8\nThu Oct  2 09:20:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 (77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8)\n77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.204 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8f9452-77e0-4505-8948-849144f66974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.205 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01f3e9aa-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:37 compute-0 kernel: tap01f3e9aa-20: left promiscuous mode
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.232 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62cfa559-6ead-4d33-a9f4-a7d8f6dd7e12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.265 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c37ca780-e433-444c-8c86-4bf109df3f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.267 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9401b8-2beb-41ed-a0fc-46432e7e4a6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:37 compute-0 ceph-mon[74477]: pgmap v3074: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 27 op/s
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.300 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[223cebff-06e5-48f0-812f-5c124d4746cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761964, 'reachable_time': 35141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 436431, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.304 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:20:37 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.305 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[33fbf0c0-d4d5-4acf-89f5-6d99f95f4d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:20:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d01f3e9aa\x2d20f8\x2d48ae\x2d9b80\x2dcbd0ba476303.mount: Deactivated successfully.
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.331 2 DEBUG nova.compute.manager [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-unplugged-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.332 2 DEBUG oslo_concurrency.lockutils [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.332 2 DEBUG oslo_concurrency.lockutils [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.333 2 DEBUG oslo_concurrency.lockutils [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.333 2 DEBUG nova.compute.manager [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] No waiting events found dispatching network-vif-unplugged-dac6b720-36af-4048-863c-cd259065f82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.333 2 DEBUG nova.compute.manager [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-unplugged-dac6b720-36af-4048-863c-cd259065f82e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:20:37 compute-0 podman[436415]: 2025-10-02 09:20:37.376537215 +0000 UTC m=+0.114154847 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Oct 02 09:20:37 compute-0 podman[436416]: 2025-10-02 09:20:37.399095619 +0000 UTC m=+0.130449625 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:20:37 compute-0 nova_compute[260603]: 2025-10-02 09:20:37.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:38 compute-0 nova_compute[260603]: 2025-10-02 09:20:38.000 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid d2fca23c-2574-4642-b931-f363d59bd5a7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 09:20:38 compute-0 nova_compute[260603]: 2025-10-02 09:20:38.001 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:38 compute-0 nova_compute[260603]: 2025-10-02 09:20:38.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.8 KiB/s wr, 36 op/s
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004291100963729895 of space, bias 1.0, pg target 0.12873302891189686 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:20:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.450 2 DEBUG nova.compute.manager [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.452 2 DEBUG oslo_concurrency.lockutils [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.452 2 DEBUG oslo_concurrency.lockutils [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.452 2 DEBUG oslo_concurrency.lockutils [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.453 2 DEBUG nova.compute.manager [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] No waiting events found dispatching network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.453 2 WARNING nova.compute.manager [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received unexpected event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e for instance with vm_state active and task_state deleting.
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.463 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396824.4625225, 8e2e089b-fd7f-460e-ba3f-180204d961f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.464 2 INFO nova.compute.manager [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] VM Stopped (Lifecycle Event)
Oct 02 09:20:39 compute-0 nova_compute[260603]: 2025-10-02 09:20:39.499 2 DEBUG nova.compute.manager [None req-1a1c1383-49eb-407e-adf7-e296f0daace9 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:20:39 compute-0 ceph-mon[74477]: pgmap v3075: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.8 KiB/s wr, 36 op/s
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.005 2 INFO nova.virt.libvirt.driver [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Deleting instance files /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7_del
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.006 2 INFO nova.virt.libvirt.driver [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Deletion of /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7_del complete
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.126 2 INFO nova.compute.manager [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Took 4.03 seconds to destroy the instance on the hypervisor.
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.127 2 DEBUG oslo.service.loopingcall [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.127 2 DEBUG nova.compute.manager [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.127 2 DEBUG nova.network.neutron [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:20:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 02 09:20:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.749 2 DEBUG nova.network.neutron [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated VIF entry in instance network info cache for port dac6b720-36af-4048-863c-cd259065f82e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.749 2 DEBUG nova.network.neutron [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:20:40 compute-0 nova_compute[260603]: 2025-10-02 09:20:40.781 2 DEBUG oslo_concurrency.lockutils [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.248 2 DEBUG nova.network.neutron [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.305 2 INFO nova.compute.manager [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Took 1.18 seconds to deallocate network for instance.
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.324 2 DEBUG nova.compute.manager [req-2a161286-2816-477a-a6f7-618f3a30206c req-cac66c92-15e1-4904-82bf-9c809a82c83b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-deleted-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.324 2 INFO nova.compute.manager [req-2a161286-2816-477a-a6f7-618f3a30206c req-cac66c92-15e1-4904-82bf-9c809a82c83b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Neutron deleted interface dac6b720-36af-4048-863c-cd259065f82e; detaching it from the instance and deleting it from the info cache
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.324 2 DEBUG nova.network.neutron [req-2a161286-2816-477a-a6f7-618f3a30206c req-cac66c92-15e1-4904-82bf-9c809a82c83b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.374 2 DEBUG nova.compute.manager [req-2a161286-2816-477a-a6f7-618f3a30206c req-cac66c92-15e1-4904-82bf-9c809a82c83b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Detach interface failed, port_id=dac6b720-36af-4048-863c-cd259065f82e, reason: Instance d2fca23c-2574-4642-b931-f363d59bd5a7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.414 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.415 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.466 2 DEBUG oslo_concurrency.processutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:41 compute-0 ceph-mon[74477]: pgmap v3076: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 02 09:20:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:20:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869045681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.939 2 DEBUG oslo_concurrency.processutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.945 2 DEBUG nova.compute.provider_tree [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:20:41 compute-0 nova_compute[260603]: 2025-10-02 09:20:41.978 2 DEBUG nova.scheduler.client.report [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:20:42 compute-0 nova_compute[260603]: 2025-10-02 09:20:42.022 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:42 compute-0 nova_compute[260603]: 2025-10-02 09:20:42.054 2 INFO nova.scheduler.client.report [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance d2fca23c-2574-4642-b931-f363d59bd5a7
Oct 02 09:20:42 compute-0 nova_compute[260603]: 2025-10-02 09:20:42.194 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:42 compute-0 nova_compute[260603]: 2025-10-02 09:20:42.195 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:20:42 compute-0 nova_compute[260603]: 2025-10-02 09:20:42.195 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 02 09:20:42 compute-0 nova_compute[260603]: 2025-10-02 09:20:42.195 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:20:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.0 KiB/s wr, 44 op/s
Oct 02 09:20:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/869045681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:20:43 compute-0 nova_compute[260603]: 2025-10-02 09:20:43.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:43 compute-0 nova_compute[260603]: 2025-10-02 09:20:43.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:20:43 compute-0 nova_compute[260603]: 2025-10-02 09:20:43.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:20:43 compute-0 ceph-mon[74477]: pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.0 KiB/s wr, 44 op/s
Oct 02 09:20:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct 02 09:20:44 compute-0 ceph-mon[74477]: pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct 02 09:20:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:20:46 compute-0 nova_compute[260603]: 2025-10-02 09:20:46.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:47 compute-0 ceph-mon[74477]: pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:20:48 compute-0 nova_compute[260603]: 2025-10-02 09:20:48.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:20:49 compute-0 ceph-mon[74477]: pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 02 09:20:50 compute-0 nova_compute[260603]: 2025-10-02 09:20:50.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:50 compute-0 nova_compute[260603]: 2025-10-02 09:20:50.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Oct 02 09:20:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:51 compute-0 ceph-mon[74477]: pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Oct 02 09:20:51 compute-0 nova_compute[260603]: 2025-10-02 09:20:51.548 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396836.5464284, d2fca23c-2574-4642-b931-f363d59bd5a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:20:51 compute-0 nova_compute[260603]: 2025-10-02 09:20:51.548 2 INFO nova.compute.manager [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] VM Stopped (Lifecycle Event)
Oct 02 09:20:51 compute-0 nova_compute[260603]: 2025-10-02 09:20:51.575 2 DEBUG nova.compute.manager [None req-b0439db5-365e-4ce3-89da-5a85944fd82c - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:20:51 compute-0 nova_compute[260603]: 2025-10-02 09:20:51.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Oct 02 09:20:53 compute-0 nova_compute[260603]: 2025-10-02 09:20:53.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:53 compute-0 ceph-mon[74477]: pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Oct 02 09:20:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 341 B/s wr, 3 op/s
Oct 02 09:20:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:20:55 compute-0 ceph-mon[74477]: pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 341 B/s wr, 3 op/s
Oct 02 09:20:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:20:56 compute-0 nova_compute[260603]: 2025-10-02 09:20:56.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:57 compute-0 ceph-mon[74477]: pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:20:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:20:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:20:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:20:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:20:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:20:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:20:58 compute-0 nova_compute[260603]: 2025-10-02 09:20:58.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:20:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:20:59 compute-0 ceph-mon[74477]: pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:01 compute-0 ceph-mon[74477]: pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:01 compute-0 nova_compute[260603]: 2025-10-02 09:21:01.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:02 compute-0 ceph-mon[74477]: pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:03 compute-0 nova_compute[260603]: 2025-10-02 09:21:03.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:04 compute-0 podman[436483]: 2025-10-02 09:21:04.034951608 +0000 UTC m=+0.095098981 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:21:04 compute-0 podman[436482]: 2025-10-02 09:21:04.094932041 +0000 UTC m=+0.164698826 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:21:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:05 compute-0 ceph-mon[74477]: pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:06 compute-0 nova_compute[260603]: 2025-10-02 09:21:06.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:07 compute-0 ceph-mon[74477]: pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:07 compute-0 nova_compute[260603]: 2025-10-02 09:21:07.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:08 compute-0 podman[436531]: 2025-10-02 09:21:08.000013585 +0000 UTC m=+0.059637734 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:21:08 compute-0 podman[436530]: 2025-10-02 09:21:08.021691881 +0000 UTC m=+0.091161909 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:21:08 compute-0 nova_compute[260603]: 2025-10-02 09:21:08.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:09 compute-0 ceph-mon[74477]: pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:10 compute-0 nova_compute[260603]: 2025-10-02 09:21:10.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:10 compute-0 nova_compute[260603]: 2025-10-02 09:21:10.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:21:10 compute-0 nova_compute[260603]: 2025-10-02 09:21:10.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:21:10 compute-0 nova_compute[260603]: 2025-10-02 09:21:10.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:11.190 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:0e:c9 10.100.0.2 2001:db8::f816:3eff:fe52:ec9'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe52:ec9/64', 'neutron:device_id': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f52ffbc5-b75c-4a8b-a490-21571dd7145a) old=Port_Binding(mac=['fa:16:3e:52:0e:c9 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:21:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:11.192 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f52ffbc5-b75c-4a8b-a490-21571dd7145a in datapath bb79a700-778c-4189-bea4-a6e50510de5b updated
Oct 02 09:21:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:11.193 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb79a700-778c-4189-bea4-a6e50510de5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:21:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:11.194 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1252e45-0c08-48b9-9619-282081dffe25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:11 compute-0 ceph-mon[74477]: pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:11 compute-0 nova_compute[260603]: 2025-10-02 09:21:11.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:12 compute-0 nova_compute[260603]: 2025-10-02 09:21:12.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:13 compute-0 nova_compute[260603]: 2025-10-02 09:21:13.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:13 compute-0 ceph-mon[74477]: pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:15 compute-0 nova_compute[260603]: 2025-10-02 09:21:15.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:15 compute-0 ceph-mon[74477]: pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:15 compute-0 nova_compute[260603]: 2025-10-02 09:21:15.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:15 compute-0 nova_compute[260603]: 2025-10-02 09:21:15.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:15 compute-0 nova_compute[260603]: 2025-10-02 09:21:15.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:15 compute-0 nova_compute[260603]: 2025-10-02 09:21:15.588 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:21:15 compute-0 nova_compute[260603]: 2025-10-02 09:21:15.589 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:21:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377665194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.027 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.252 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.255 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3586MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.256 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.257 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.498 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.498 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.589 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3377665194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:21:16 compute-0 nova_compute[260603]: 2025-10-02 09:21:16.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:21:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525364514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:21:17 compute-0 nova_compute[260603]: 2025-10-02 09:21:17.025 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:17 compute-0 nova_compute[260603]: 2025-10-02 09:21:17.032 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:21:17 compute-0 nova_compute[260603]: 2025-10-02 09:21:17.139 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:21:17 compute-0 nova_compute[260603]: 2025-10-02 09:21:17.211 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:21:17 compute-0 nova_compute[260603]: 2025-10-02 09:21:17.212 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:17 compute-0 ceph-mon[74477]: pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/525364514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:21:17 compute-0 nova_compute[260603]: 2025-10-02 09:21:17.852 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:17 compute-0 nova_compute[260603]: 2025-10-02 09:21:17.852 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:18 compute-0 nova_compute[260603]: 2025-10-02 09:21:18.100 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:21:18 compute-0 nova_compute[260603]: 2025-10-02 09:21:18.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:18 compute-0 nova_compute[260603]: 2025-10-02 09:21:18.335 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:18 compute-0 nova_compute[260603]: 2025-10-02 09:21:18.336 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:18 compute-0 nova_compute[260603]: 2025-10-02 09:21:18.349 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:21:18 compute-0 nova_compute[260603]: 2025-10-02 09:21:18.350 2 INFO nova.compute.claims [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:21:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:18 compute-0 nova_compute[260603]: 2025-10-02 09:21:18.878 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:19 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:21:19 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741520235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:21:19 compute-0 nova_compute[260603]: 2025-10-02 09:21:19.339 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:19 compute-0 nova_compute[260603]: 2025-10-02 09:21:19.350 2 DEBUG nova.compute.provider_tree [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:21:19 compute-0 nova_compute[260603]: 2025-10-02 09:21:19.611 2 DEBUG nova.scheduler.client.report [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:21:19 compute-0 ceph-mon[74477]: pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2741520235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:21:19 compute-0 nova_compute[260603]: 2025-10-02 09:21:19.846 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:19 compute-0 nova_compute[260603]: 2025-10-02 09:21:19.847 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.037 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.038 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.208 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.209 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.315 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.357 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:21:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.512 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.514 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.514 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Creating image(s)
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.545 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.571 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.607 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.612 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.667 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.711 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.712 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.713 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.713 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:20 compute-0 ceph-mon[74477]: pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.741 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:21:20 compute-0 nova_compute[260603]: 2025-10-02 09:21:20.747 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.084 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.146 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.205 2 DEBUG nova.policy [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.241 2 DEBUG nova.objects.instance [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.296 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.297 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Ensure instance console log exists: /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.298 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.298 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.298 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:21 compute-0 nova_compute[260603]: 2025-10-02 09:21:21.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:21:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078565076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:21:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:21:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078565076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:21:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3078565076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:21:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3078565076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:21:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 62 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 706 KiB/s wr, 23 op/s
Oct 02 09:21:23 compute-0 nova_compute[260603]: 2025-10-02 09:21:23.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:23 compute-0 ceph-mon[74477]: pgmap v3097: 305 pgs: 305 active+clean; 62 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 706 KiB/s wr, 23 op/s
Oct 02 09:21:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:25 compute-0 ceph-mon[74477]: pgmap v3098: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:26 compute-0 nova_compute[260603]: 2025-10-02 09:21:26.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:26 compute-0 nova_compute[260603]: 2025-10-02 09:21:26.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:27 compute-0 nova_compute[260603]: 2025-10-02 09:21:27.312 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Successfully created port: ce3d61f6-90a9-4869-ac95-256f110eea41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:21:27 compute-0 ceph-mon[74477]: pgmap v3099: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:21:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:21:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:21:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:21:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:21:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:21:28
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes', '.mgr', 'default.rgw.meta', 'vms', '.rgw.root', 'images']
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:21:28 compute-0 nova_compute[260603]: 2025-10-02 09:21:28.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:21:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:21:28 compute-0 ceph-mon[74477]: pgmap v3100: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:29 compute-0 nova_compute[260603]: 2025-10-02 09:21:29.193 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Successfully updated port: ce3d61f6-90a9-4869-ac95-256f110eea41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:21:29 compute-0 nova_compute[260603]: 2025-10-02 09:21:29.234 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:21:29 compute-0 nova_compute[260603]: 2025-10-02 09:21:29.235 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:21:29 compute-0 nova_compute[260603]: 2025-10-02 09:21:29.235 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:21:29 compute-0 nova_compute[260603]: 2025-10-02 09:21:29.364 2 DEBUG nova.compute.manager [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:21:29 compute-0 nova_compute[260603]: 2025-10-02 09:21:29.365 2 DEBUG nova.compute.manager [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing instance network info cache due to event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:21:29 compute-0 nova_compute[260603]: 2025-10-02 09:21:29.366 2 DEBUG oslo_concurrency.lockutils [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:21:29 compute-0 nova_compute[260603]: 2025-10-02 09:21:29.477 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:21:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:30 compute-0 nova_compute[260603]: 2025-10-02 09:21:30.998 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.456 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.457 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance network_info: |[{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.457 2 DEBUG oslo_concurrency.lockutils [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.458 2 DEBUG nova.network.neutron [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.461 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start _get_guest_xml network_info=[{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.468 2 WARNING nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.485 2 DEBUG nova.virt.libvirt.host [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.486 2 DEBUG nova.virt.libvirt.host [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.498 2 DEBUG nova.virt.libvirt.host [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.499 2 DEBUG nova.virt.libvirt.host [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.500 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.500 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.501 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.501 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.502 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.502 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.503 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.503 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.504 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.504 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.505 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.505 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.510 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:31 compute-0 ceph-mon[74477]: pgmap v3101: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:31 compute-0 nova_compute[260603]: 2025-10-02 09:21:31.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:21:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454536736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.043 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.069 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.074 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:21:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871546043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.544 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.546 2 DEBUG nova.virt.libvirt.vif [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-885089396',display_name='tempest-TestGettingAddress-server-885089396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-885089396',id=151,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-mdzn0750',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:21:20Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=ccf01ee2-f5c6-4802-a9dd-f8c0423d4914,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.546 2 DEBUG nova.network.os_vif_util [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.547 2 DEBUG nova.network.os_vif_util [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.548 2 DEBUG nova.objects.instance [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.597 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <uuid>ccf01ee2-f5c6-4802-a9dd-f8c0423d4914</uuid>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <name>instance-00000097</name>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-885089396</nova:name>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:21:31</nova:creationTime>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <nova:port uuid="ce3d61f6-90a9-4869-ac95-256f110eea41">
Oct 02 09:21:32 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fec4:e3da" ipVersion="6"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <system>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <entry name="serial">ccf01ee2-f5c6-4802-a9dd-f8c0423d4914</entry>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <entry name="uuid">ccf01ee2-f5c6-4802-a9dd-f8c0423d4914</entry>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </system>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <os>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   </os>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <features>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   </features>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk">
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       </source>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config">
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       </source>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:21:32 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:c4:e3:da"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <target dev="tapce3d61f6-90"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/console.log" append="off"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <video>
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </video>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:21:32 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:21:32 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:21:32 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:21:32 compute-0 nova_compute[260603]: </domain>
Oct 02 09:21:32 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.599 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Preparing to wait for external event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.599 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.600 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.600 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.601 2 DEBUG nova.virt.libvirt.vif [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-885089396',display_name='tempest-TestGettingAddress-server-885089396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-885089396',id=151,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-mdzn0750',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:21:20Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=ccf01ee2-f5c6-4802-a9dd-f8c0423d4914,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.602 2 DEBUG nova.network.os_vif_util [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.603 2 DEBUG nova.network.os_vif_util [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.604 2 DEBUG os_vif [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.610 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce3d61f6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce3d61f6-90, col_values=(('external_ids', {'iface-id': 'ce3d61f6-90a9-4869-ac95-256f110eea41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:e3:da', 'vm-uuid': 'ccf01ee2-f5c6-4802-a9dd-f8c0423d4914'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:21:32 compute-0 NetworkManager[45129]: <info>  [1759396892.6141] manager: (tapce3d61f6-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/684)
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:32 compute-0 nova_compute[260603]: 2025-10-02 09:21:32.622 2 INFO os_vif [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90')
Oct 02 09:21:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1454536736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:21:32 compute-0 ceph-mon[74477]: pgmap v3102: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:21:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/871546043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.089 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.090 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.091 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:c4:e3:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.092 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Using config drive
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.115 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.636 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Creating config drive at /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.642 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6r3go9x_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.794 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6r3go9x_" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.817 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:21:33 compute-0 nova_compute[260603]: 2025-10-02 09:21:33.821 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.231 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.232 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Deleting local config drive /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config because it was imported into RBD.
Oct 02 09:21:34 compute-0 kernel: tapce3d61f6-90: entered promiscuous mode
Oct 02 09:21:34 compute-0 NetworkManager[45129]: <info>  [1759396894.3085] manager: (tapce3d61f6-90): new Tun device (/org/freedesktop/NetworkManager/Devices/685)
Oct 02 09:21:34 compute-0 ovn_controller[152344]: 2025-10-02T09:21:34Z|01684|binding|INFO|Claiming lport ce3d61f6-90a9-4869-ac95-256f110eea41 for this chassis.
Oct 02 09:21:34 compute-0 ovn_controller[152344]: 2025-10-02T09:21:34Z|01685|binding|INFO|ce3d61f6-90a9-4869-ac95-256f110eea41: Claiming fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.346 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da'], port_security=['fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fec4:e3da/64', 'neutron:device_id': 'ccf01ee2-f5c6-4802-a9dd-f8c0423d4914', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b689ca1-3c9b-4813-8474-00abea3332c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ce3d61f6-90a9-4869-ac95-256f110eea41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.348 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ce3d61f6-90a9-4869-ac95-256f110eea41 in datapath bb79a700-778c-4189-bea4-a6e50510de5b bound to our chassis
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.350 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79a700-778c-4189-bea4-a6e50510de5b
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.365 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb8974f-fc55-43a8-80b9-68538ec88811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.366 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbb79a700-71 in ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.369 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbb79a700-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.369 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e67db031-b425-4c5b-a769-e6ff6a225f0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.370 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[958de970-4cb7-40c4-a6a7-054470a6da7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 systemd-machined[214636]: New machine qemu-185-instance-00000097.
Oct 02 09:21:34 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.379 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4670d03c-c135-418e-afc2-12d5ba1112bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_controller[152344]: 2025-10-02T09:21:34Z|01686|binding|INFO|Setting lport ce3d61f6-90a9-4869-ac95-256f110eea41 ovn-installed in OVS
Oct 02 09:21:34 compute-0 ovn_controller[152344]: 2025-10-02T09:21:34Z|01687|binding|INFO|Setting lport ce3d61f6-90a9-4869-ac95-256f110eea41 up in Southbound
Oct 02 09:21:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 1.1 MiB/s wr, 3 op/s
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff4ef4c-6861-47ca-9701-a7ae6722db6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:34 compute-0 systemd-udevd[436963]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:21:34 compute-0 NetworkManager[45129]: <info>  [1759396894.4161] device (tapce3d61f6-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:21:34 compute-0 NetworkManager[45129]: <info>  [1759396894.4169] device (tapce3d61f6-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.423 2 DEBUG nova.network.neutron [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updated VIF entry in instance network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.424 2 DEBUG nova.network.neutron [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.433 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[89a28890-f56d-476e-94dc-24b2780b9e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 NetworkManager[45129]: <info>  [1759396894.4386] manager: (tapbb79a700-70): new Veth device (/org/freedesktop/NetworkManager/Devices/686)
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.438 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a68540c-c27f-4455-98f9-6b7a118b9403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 podman[436936]: 2025-10-02 09:21:34.442484401 +0000 UTC m=+0.081241858 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.474 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1d52d7-bc72-40c9-a83e-ea6aa8cf89bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.476 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e26b4e39-a239-4a96-bdcf-859182896684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 podman[436935]: 2025-10-02 09:21:34.479525928 +0000 UTC m=+0.130818386 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:21:34 compute-0 NetworkManager[45129]: <info>  [1759396894.5016] device (tapbb79a700-70): carrier: link connected
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.504 2 DEBUG oslo_concurrency.lockutils [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.508 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f38a2f0c-051a-4307-b7eb-4042a3631024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.525 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eeea37d8-ef0b-4aeb-86cf-811482c3bdc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79a700-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0e:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777310, 'reachable_time': 42986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 437012, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.540 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2c12b2-0b17-427d-ac46-1b96fb193913]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:ec9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777310, 'tstamp': 777310}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437013, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.555 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a0df1ea6-51bd-4035-bd9d-27d3597534dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79a700-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0e:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777310, 'reachable_time': 42986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 437014, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.587 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79f07108-9ebe-4f4e-bbf7-a5392fc18e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.643 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d1d4f1-519c-4075-adf1-eeae39709273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.645 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79a700-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.645 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.645 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79a700-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:21:34 compute-0 NetworkManager[45129]: <info>  [1759396894.6479] manager: (tapbb79a700-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/687)
Oct 02 09:21:34 compute-0 kernel: tapbb79a700-70: entered promiscuous mode
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.650 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79a700-70, col_values=(('external_ids', {'iface-id': 'f52ffbc5-b75c-4a8b-a490-21571dd7145a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:21:34 compute-0 ovn_controller[152344]: 2025-10-02T09:21:34Z|01688|binding|INFO|Releasing lport f52ffbc5-b75c-4a8b-a490-21571dd7145a from this chassis (sb_readonly=0)
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.667 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bb79a700-778c-4189-bea4-a6e50510de5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bb79a700-778c-4189-bea4-a6e50510de5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.668 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[36f598b6-57c0-413b-85aa-fdb953cfdb64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.669 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: global
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     log         /dev/log local0 debug
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     log-tag     haproxy-metadata-proxy-bb79a700-778c-4189-bea4-a6e50510de5b
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     user        root
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     group       root
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     maxconn     1024
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     pidfile     /var/lib/neutron/external/pids/bb79a700-778c-4189-bea4-a6e50510de5b.pid.haproxy
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     daemon
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: defaults
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     log global
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     mode http
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     option httplog
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     option dontlognull
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     option http-server-close
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     option forwardfor
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     retries                 3
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     timeout http-request    30s
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     timeout connect         30s
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     timeout client          32s
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     timeout server          32s
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     timeout http-keep-alive 30s
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: listen listener
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     bind 169.254.169.254:80
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:     http-request add-header X-OVN-Network-ID bb79a700-778c-4189-bea4-a6e50510de5b
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.670 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'env', 'PROCESS_TAG=haproxy-bb79a700-778c-4189-bea4-a6e50510de5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bb79a700-778c-4189-bea4-a6e50510de5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.811 2 DEBUG nova.compute.manager [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.812 2 DEBUG oslo_concurrency.lockutils [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.812 2 DEBUG oslo_concurrency.lockutils [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.812 2 DEBUG oslo_concurrency.lockutils [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:34 compute-0 nova_compute[260603]: 2025-10-02 09:21:34.812 2 DEBUG nova.compute.manager [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Processing event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.859 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.860 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.860 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:35 compute-0 podman[437088]: 2025-10-02 09:21:35.074015597 +0000 UTC m=+0.058989623 container create 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 09:21:35 compute-0 systemd[1]: Started libpod-conmon-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d.scope.
Oct 02 09:21:35 compute-0 podman[437088]: 2025-10-02 09:21:35.045406404 +0000 UTC m=+0.030380450 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct 02 09:21:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ec26be00ec6c1706956eeee6591a41a079324f4042ac45aeef1cce56edc2ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:35 compute-0 podman[437088]: 2025-10-02 09:21:35.167867189 +0000 UTC m=+0.152841215 container init 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:21:35 compute-0 podman[437088]: 2025-10-02 09:21:35.174223017 +0000 UTC m=+0.159197033 container start 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:21:35 compute-0 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [NOTICE]   (437108) : New worker (437110) forked
Oct 02 09:21:35 compute-0 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [NOTICE]   (437108) : Loading success.
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:35.340 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:21:35 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:35.343 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.403 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396895.4023702, ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.403 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] VM Started (Lifecycle Event)
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.406 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.410 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.413 2 INFO nova.virt.libvirt.driver [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance spawned successfully.
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.414 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:21:35 compute-0 sudo[437119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:35 compute-0 sudo[437119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:35 compute-0 sudo[437119]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.437 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.440 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.454 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.454 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.455 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.455 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.455 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.456 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:21:35 compute-0 ceph-mon[74477]: pgmap v3103: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 1.1 MiB/s wr, 3 op/s
Oct 02 09:21:35 compute-0 sudo[437144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:21:35 compute-0 sudo[437144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:35 compute-0 sudo[437144]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:35 compute-0 sudo[437169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:35 compute-0 sudo[437169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:35 compute-0 sudo[437169]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.609 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.610 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396895.4044836, ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] VM Paused (Lifecycle Event)
Oct 02 09:21:35 compute-0 sudo[437194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 09:21:35 compute-0 sudo[437194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.685 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.689 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396895.4099636, ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.689 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] VM Resumed (Lifecycle Event)
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.757 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.761 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.831 2 INFO nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Took 15.32 seconds to spawn the instance on the hypervisor.
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.832 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:21:35 compute-0 nova_compute[260603]: 2025-10-02 09:21:35.851 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:21:35 compute-0 sudo[437194]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:21:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:21:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:35 compute-0 sudo[437239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:35 compute-0 sudo[437239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:36 compute-0 sudo[437239]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:36 compute-0 sudo[437264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:21:36 compute-0 sudo[437264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:36 compute-0 sudo[437264]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:36 compute-0 nova_compute[260603]: 2025-10-02 09:21:36.109 2 INFO nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Took 17.81 seconds to build instance.
Oct 02 09:21:36 compute-0 sudo[437289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:36 compute-0 sudo[437289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:36 compute-0 sudo[437289]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:36 compute-0 nova_compute[260603]: 2025-10-02 09:21:36.159 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:36 compute-0 sudo[437314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:21:36 compute-0 sudo[437314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:36 compute-0 sudo[437314]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:21:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:21:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:21:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:21:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:21:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 944c1748-536d-4798-a9c8-1d24bc2b5f7b does not exist
Oct 02 09:21:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c7b409fc-375a-419a-a14d-82f25cda2c89 does not exist
Oct 02 09:21:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 925cd190-fd48-4474-bfd3-cf31d5e5b01d does not exist
Oct 02 09:21:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:21:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:21:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:21:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:21:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:21:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:21:36 compute-0 sudo[437371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:36 compute-0 sudo[437371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:36 compute-0 sudo[437371]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:36 compute-0 sudo[437396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:21:36 compute-0 sudo[437396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:36 compute-0 sudo[437396]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:36 compute-0 nova_compute[260603]: 2025-10-02 09:21:36.926 2 DEBUG nova.compute.manager [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:21:36 compute-0 nova_compute[260603]: 2025-10-02 09:21:36.929 2 DEBUG oslo_concurrency.lockutils [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:21:36 compute-0 nova_compute[260603]: 2025-10-02 09:21:36.929 2 DEBUG oslo_concurrency.lockutils [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:21:36 compute-0 nova_compute[260603]: 2025-10-02 09:21:36.930 2 DEBUG oslo_concurrency.lockutils [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:21:36 compute-0 nova_compute[260603]: 2025-10-02 09:21:36.930 2 DEBUG nova.compute.manager [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] No waiting events found dispatching network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:21:36 compute-0 nova_compute[260603]: 2025-10-02 09:21:36.930 2 WARNING nova.compute.manager [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received unexpected event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 for instance with vm_state active and task_state None.
Oct 02 09:21:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:36 compute-0 ceph-mon[74477]: pgmap v3104: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:21:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:21:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:21:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:21:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:21:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:21:36 compute-0 sudo[437421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:36 compute-0 sudo[437421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:36 compute-0 sudo[437421]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:37 compute-0 sudo[437446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:21:37 compute-0 sudo[437446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:37 compute-0 podman[437511]: 2025-10-02 09:21:37.489296437 +0000 UTC m=+0.064287499 container create 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 09:21:37 compute-0 systemd[1]: Started libpod-conmon-611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051.scope.
Oct 02 09:21:37 compute-0 podman[437511]: 2025-10-02 09:21:37.46089213 +0000 UTC m=+0.035883212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:21:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:21:37 compute-0 podman[437511]: 2025-10-02 09:21:37.586005338 +0000 UTC m=+0.160996410 container init 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:21:37 compute-0 podman[437511]: 2025-10-02 09:21:37.595543296 +0000 UTC m=+0.170534338 container start 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:21:37 compute-0 podman[437511]: 2025-10-02 09:21:37.598630012 +0000 UTC m=+0.173621094 container attach 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 09:21:37 compute-0 trusting_cray[437527]: 167 167
Oct 02 09:21:37 compute-0 systemd[1]: libpod-611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051.scope: Deactivated successfully.
Oct 02 09:21:37 compute-0 podman[437511]: 2025-10-02 09:21:37.605058493 +0000 UTC m=+0.180049535 container died 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:21:37 compute-0 nova_compute[260603]: 2025-10-02 09:21:37.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab385e067fd66a928af843c231ad6880ebd4bf6209d8c437cfed91428b81af12-merged.mount: Deactivated successfully.
Oct 02 09:21:37 compute-0 podman[437511]: 2025-10-02 09:21:37.649764009 +0000 UTC m=+0.224755051 container remove 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 09:21:37 compute-0 systemd[1]: libpod-conmon-611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051.scope: Deactivated successfully.
Oct 02 09:21:37 compute-0 podman[437552]: 2025-10-02 09:21:37.905368693 +0000 UTC m=+0.069335366 container create e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:21:37 compute-0 podman[437552]: 2025-10-02 09:21:37.872870788 +0000 UTC m=+0.036837541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:21:37 compute-0 systemd[1]: Started libpod-conmon-e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5.scope.
Oct 02 09:21:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:38 compute-0 podman[437552]: 2025-10-02 09:21:38.051376633 +0000 UTC m=+0.215343316 container init e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 09:21:38 compute-0 podman[437552]: 2025-10-02 09:21:38.059861639 +0000 UTC m=+0.223828302 container start e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:21:38 compute-0 podman[437552]: 2025-10-02 09:21:38.063799512 +0000 UTC m=+0.227766175 container attach e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:21:38 compute-0 podman[437572]: 2025-10-02 09:21:38.129981959 +0000 UTC m=+0.095862745 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 09:21:38 compute-0 nova_compute[260603]: 2025-10-02 09:21:38.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:38 compute-0 podman[437593]: 2025-10-02 09:21:38.270955612 +0000 UTC m=+0.098719144 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:21:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:39 compute-0 confident_moore[437569]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:21:39 compute-0 confident_moore[437569]: --> relative data size: 1.0
Oct 02 09:21:39 compute-0 confident_moore[437569]: --> All data devices are unavailable
Oct 02 09:21:39 compute-0 systemd[1]: libpod-e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5.scope: Deactivated successfully.
Oct 02 09:21:39 compute-0 systemd[1]: libpod-e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5.scope: Consumed 1.054s CPU time.
Oct 02 09:21:39 compute-0 podman[437552]: 2025-10-02 09:21:39.193515588 +0000 UTC m=+1.357482251 container died e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b-merged.mount: Deactivated successfully.
Oct 02 09:21:39 compute-0 podman[437552]: 2025-10-02 09:21:39.271478553 +0000 UTC m=+1.435445266 container remove e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:21:39 compute-0 sudo[437446]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:39 compute-0 systemd[1]: libpod-conmon-e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5.scope: Deactivated successfully.
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:21:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:21:39 compute-0 sudo[437650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:39 compute-0 sudo[437650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:39 compute-0 sudo[437650]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:39 compute-0 ceph-mon[74477]: pgmap v3105: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:39 compute-0 sudo[437675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:21:39 compute-0 sudo[437675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:39 compute-0 sudo[437675]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:39 compute-0 sudo[437700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:39 compute-0 sudo[437700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:39 compute-0 sudo[437700]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:39 compute-0 sudo[437725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:21:39 compute-0 sudo[437725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:40 compute-0 podman[437788]: 2025-10-02 09:21:40.1332498 +0000 UTC m=+0.055591988 container create af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:21:40 compute-0 podman[437788]: 2025-10-02 09:21:40.114930978 +0000 UTC m=+0.037273176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:21:40 compute-0 systemd[1]: Started libpod-conmon-af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a.scope.
Oct 02 09:21:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:21:40 compute-0 podman[437788]: 2025-10-02 09:21:40.29462217 +0000 UTC m=+0.216964378 container init af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:21:40 compute-0 podman[437788]: 2025-10-02 09:21:40.302893008 +0000 UTC m=+0.225235236 container start af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:21:40 compute-0 elastic_gauss[437806]: 167 167
Oct 02 09:21:40 compute-0 systemd[1]: libpod-af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a.scope: Deactivated successfully.
Oct 02 09:21:40 compute-0 podman[437788]: 2025-10-02 09:21:40.33495107 +0000 UTC m=+0.257293258 container attach af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:21:40 compute-0 podman[437788]: 2025-10-02 09:21:40.335737635 +0000 UTC m=+0.258079863 container died af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6691db60d9680e2d9dd427d0358fddcd0baba6554e718a242805d555bb3ecff6-merged.mount: Deactivated successfully.
Oct 02 09:21:40 compute-0 podman[437788]: 2025-10-02 09:21:40.383558448 +0000 UTC m=+0.305900636 container remove af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:21:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:40 compute-0 systemd[1]: libpod-conmon-af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a.scope: Deactivated successfully.
Oct 02 09:21:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:40 compute-0 podman[437829]: 2025-10-02 09:21:40.624224805 +0000 UTC m=+0.081398913 container create 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:21:40 compute-0 systemd[1]: Started libpod-conmon-1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b.scope.
Oct 02 09:21:40 compute-0 podman[437829]: 2025-10-02 09:21:40.591679769 +0000 UTC m=+0.048853967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:21:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:40 compute-0 podman[437829]: 2025-10-02 09:21:40.731669822 +0000 UTC m=+0.188843960 container init 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:21:40 compute-0 podman[437829]: 2025-10-02 09:21:40.751594464 +0000 UTC m=+0.208768602 container start 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 09:21:40 compute-0 podman[437829]: 2025-10-02 09:21:40.756455466 +0000 UTC m=+0.213629604 container attach 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:21:41 compute-0 ceph-mon[74477]: pgmap v3106: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:41 compute-0 exciting_turing[437846]: {
Oct 02 09:21:41 compute-0 exciting_turing[437846]:     "0": [
Oct 02 09:21:41 compute-0 exciting_turing[437846]:         {
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "devices": [
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "/dev/loop3"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             ],
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_name": "ceph_lv0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_size": "21470642176",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "name": "ceph_lv0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "tags": {
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cluster_name": "ceph",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.crush_device_class": "",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.encrypted": "0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osd_id": "0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.type": "block",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.vdo": "0"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             },
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "type": "block",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "vg_name": "ceph_vg0"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:         }
Oct 02 09:21:41 compute-0 exciting_turing[437846]:     ],
Oct 02 09:21:41 compute-0 exciting_turing[437846]:     "1": [
Oct 02 09:21:41 compute-0 exciting_turing[437846]:         {
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "devices": [
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "/dev/loop4"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             ],
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_name": "ceph_lv1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_size": "21470642176",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "name": "ceph_lv1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "tags": {
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cluster_name": "ceph",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.crush_device_class": "",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.encrypted": "0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osd_id": "1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.type": "block",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.vdo": "0"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             },
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "type": "block",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "vg_name": "ceph_vg1"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:         }
Oct 02 09:21:41 compute-0 exciting_turing[437846]:     ],
Oct 02 09:21:41 compute-0 exciting_turing[437846]:     "2": [
Oct 02 09:21:41 compute-0 exciting_turing[437846]:         {
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "devices": [
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "/dev/loop5"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             ],
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_name": "ceph_lv2",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_size": "21470642176",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "name": "ceph_lv2",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "tags": {
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.cluster_name": "ceph",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.crush_device_class": "",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.encrypted": "0",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osd_id": "2",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.type": "block",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:                 "ceph.vdo": "0"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             },
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "type": "block",
Oct 02 09:21:41 compute-0 exciting_turing[437846]:             "vg_name": "ceph_vg2"
Oct 02 09:21:41 compute-0 exciting_turing[437846]:         }
Oct 02 09:21:41 compute-0 exciting_turing[437846]:     ]
Oct 02 09:21:41 compute-0 exciting_turing[437846]: }
Oct 02 09:21:41 compute-0 systemd[1]: libpod-1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b.scope: Deactivated successfully.
Oct 02 09:21:41 compute-0 podman[437829]: 2025-10-02 09:21:41.610089168 +0000 UTC m=+1.067263286 container died 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:21:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31-merged.mount: Deactivated successfully.
Oct 02 09:21:41 compute-0 podman[437829]: 2025-10-02 09:21:41.721687684 +0000 UTC m=+1.178861792 container remove 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:21:41 compute-0 systemd[1]: libpod-conmon-1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b.scope: Deactivated successfully.
Oct 02 09:21:41 compute-0 sudo[437725]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:41 compute-0 NetworkManager[45129]: <info>  [1759396901.8828] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/688)
Oct 02 09:21:41 compute-0 NetworkManager[45129]: <info>  [1759396901.8838] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/689)
Oct 02 09:21:41 compute-0 nova_compute[260603]: 2025-10-02 09:21:41.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:41 compute-0 ovn_controller[152344]: 2025-10-02T09:21:41Z|01689|binding|INFO|Releasing lport f52ffbc5-b75c-4a8b-a490-21571dd7145a from this chassis (sb_readonly=0)
Oct 02 09:21:41 compute-0 sudo[437867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:41 compute-0 ovn_controller[152344]: 2025-10-02T09:21:41Z|01690|binding|INFO|Releasing lport f52ffbc5-b75c-4a8b-a490-21571dd7145a from this chassis (sb_readonly=0)
Oct 02 09:21:41 compute-0 nova_compute[260603]: 2025-10-02 09:21:41.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:41 compute-0 sudo[437867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:41 compute-0 nova_compute[260603]: 2025-10-02 09:21:41.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:41 compute-0 sudo[437867]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:42 compute-0 sudo[437893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:21:42 compute-0 sudo[437893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:42 compute-0 sudo[437893]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:42 compute-0 sudo[437918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:42 compute-0 sudo[437918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:42 compute-0 sudo[437918]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:42 compute-0 sudo[437943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:21:42 compute-0 sudo[437943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:42 compute-0 nova_compute[260603]: 2025-10-02 09:21:42.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:42 compute-0 podman[438010]: 2025-10-02 09:21:42.662409777 +0000 UTC m=+0.073367463 container create b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:21:42 compute-0 systemd[1]: Started libpod-conmon-b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77.scope.
Oct 02 09:21:42 compute-0 podman[438010]: 2025-10-02 09:21:42.634435963 +0000 UTC m=+0.045393679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:21:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:21:42 compute-0 podman[438010]: 2025-10-02 09:21:42.742627622 +0000 UTC m=+0.153585328 container init b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:21:42 compute-0 podman[438010]: 2025-10-02 09:21:42.755526646 +0000 UTC m=+0.166484332 container start b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:21:42 compute-0 podman[438010]: 2025-10-02 09:21:42.75888586 +0000 UTC m=+0.169843556 container attach b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:21:42 compute-0 sleepy_heyrovsky[438027]: 167 167
Oct 02 09:21:42 compute-0 systemd[1]: libpod-b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77.scope: Deactivated successfully.
Oct 02 09:21:42 compute-0 podman[438010]: 2025-10-02 09:21:42.761511782 +0000 UTC m=+0.172469488 container died b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 09:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc65c22dcc1c7e0b675b69cc3cc7dfd240934dbf58acf7c57d274b7ab82b5f5b-merged.mount: Deactivated successfully.
Oct 02 09:21:42 compute-0 podman[438010]: 2025-10-02 09:21:42.801349527 +0000 UTC m=+0.212307213 container remove b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:21:42 compute-0 systemd[1]: libpod-conmon-b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77.scope: Deactivated successfully.
Oct 02 09:21:42 compute-0 nova_compute[260603]: 2025-10-02 09:21:42.938 2 DEBUG nova.compute.manager [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:21:42 compute-0 nova_compute[260603]: 2025-10-02 09:21:42.939 2 DEBUG nova.compute.manager [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing instance network info cache due to event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:21:42 compute-0 nova_compute[260603]: 2025-10-02 09:21:42.940 2 DEBUG oslo_concurrency.lockutils [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:21:42 compute-0 nova_compute[260603]: 2025-10-02 09:21:42.940 2 DEBUG oslo_concurrency.lockutils [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:21:42 compute-0 nova_compute[260603]: 2025-10-02 09:21:42.940 2 DEBUG nova.network.neutron [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:21:43 compute-0 podman[438051]: 2025-10-02 09:21:43.004928835 +0000 UTC m=+0.073058953 container create daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:21:43 compute-0 systemd[1]: Started libpod-conmon-daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61.scope.
Oct 02 09:21:43 compute-0 podman[438051]: 2025-10-02 09:21:42.965884286 +0000 UTC m=+0.034014434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:21:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:21:43 compute-0 podman[438051]: 2025-10-02 09:21:43.108325285 +0000 UTC m=+0.176455423 container init daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:21:43 compute-0 podman[438051]: 2025-10-02 09:21:43.116554412 +0000 UTC m=+0.184684570 container start daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:21:43 compute-0 podman[438051]: 2025-10-02 09:21:43.121699512 +0000 UTC m=+0.189829630 container attach daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:21:43 compute-0 nova_compute[260603]: 2025-10-02 09:21:43.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:21:43.346 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:21:43 compute-0 ceph-mon[74477]: pgmap v3107: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]: {
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "osd_id": 2,
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "type": "bluestore"
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:     },
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "osd_id": 1,
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "type": "bluestore"
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:     },
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "osd_id": 0,
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:         "type": "bluestore"
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]:     }
Oct 02 09:21:44 compute-0 eloquent_volhard[438068]: }
Oct 02 09:21:44 compute-0 systemd[1]: libpod-daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61.scope: Deactivated successfully.
Oct 02 09:21:44 compute-0 systemd[1]: libpod-daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61.scope: Consumed 1.183s CPU time.
Oct 02 09:21:44 compute-0 podman[438051]: 2025-10-02 09:21:44.313982023 +0000 UTC m=+1.382112161 container died daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce-merged.mount: Deactivated successfully.
Oct 02 09:21:44 compute-0 podman[438051]: 2025-10-02 09:21:44.397180281 +0000 UTC m=+1.465310399 container remove daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 02 09:21:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:44 compute-0 systemd[1]: libpod-conmon-daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61.scope: Deactivated successfully.
Oct 02 09:21:44 compute-0 sudo[437943]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:21:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:21:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2eb0dcf0-b82a-49bc-92c9-89f48f9d5835 does not exist
Oct 02 09:21:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 411e8268-5e60-4358-8bb5-c2158c42be87 does not exist
Oct 02 09:21:44 compute-0 nova_compute[260603]: 2025-10-02 09:21:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:21:44 compute-0 nova_compute[260603]: 2025-10-02 09:21:44.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:21:44 compute-0 sudo[438114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:21:44 compute-0 sudo[438114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:44 compute-0 sudo[438114]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:44 compute-0 sudo[438139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:21:44 compute-0 sudo[438139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:21:44 compute-0 sudo[438139]: pam_unix(sudo:session): session closed for user root
Oct 02 09:21:45 compute-0 ceph-mon[74477]: pgmap v3108: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:21:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:46 compute-0 nova_compute[260603]: 2025-10-02 09:21:46.279 2 DEBUG nova.network.neutron [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updated VIF entry in instance network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:21:46 compute-0 nova_compute[260603]: 2025-10-02 09:21:46.279 2 DEBUG nova.network.neutron [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:21:46 compute-0 nova_compute[260603]: 2025-10-02 09:21:46.355 2 DEBUG oslo_concurrency.lockutils [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:21:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:47 compute-0 ceph-mon[74477]: pgmap v3109: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:21:47 compute-0 nova_compute[260603]: 2025-10-02 09:21:47.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:48 compute-0 nova_compute[260603]: 2025-10-02 09:21:48.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:48 compute-0 ovn_controller[152344]: 2025-10-02T09:21:48Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:e3:da 10.100.0.4
Oct 02 09:21:48 compute-0 ovn_controller[152344]: 2025-10-02T09:21:48Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:e3:da 10.100.0.4
Oct 02 09:21:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 02 09:21:49 compute-0 ceph-mon[74477]: pgmap v3110: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 02 09:21:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Oct 02 09:21:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:50 compute-0 ceph-mon[74477]: pgmap v3111: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Oct 02 09:21:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 118 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:21:52 compute-0 nova_compute[260603]: 2025-10-02 09:21:52.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:53 compute-0 nova_compute[260603]: 2025-10-02 09:21:53.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:53 compute-0 ceph-mon[74477]: pgmap v3112: 305 pgs: 305 active+clean; 118 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 09:21:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:21:55 compute-0 ceph-mon[74477]: pgmap v3113: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:21:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:21:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:21:57 compute-0 nova_compute[260603]: 2025-10-02 09:21:57.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:57 compute-0 ceph-mon[74477]: pgmap v3114: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:21:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:21:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:21:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:21:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:21:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:21:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:21:58 compute-0 nova_compute[260603]: 2025-10-02 09:21:58.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:21:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:21:58 compute-0 ceph-mon[74477]: pgmap v3115: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 09:22:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 95 KiB/s wr, 18 op/s
Oct 02 09:22:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:01 compute-0 ceph-mon[74477]: pgmap v3116: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 95 KiB/s wr, 18 op/s
Oct 02 09:22:01 compute-0 nova_compute[260603]: 2025-10-02 09:22:01.627 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:01 compute-0 nova_compute[260603]: 2025-10-02 09:22:01.627 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:01 compute-0 nova_compute[260603]: 2025-10-02 09:22:01.653 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:22:01 compute-0 nova_compute[260603]: 2025-10-02 09:22:01.745 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:01 compute-0 nova_compute[260603]: 2025-10-02 09:22:01.745 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:01 compute-0 nova_compute[260603]: 2025-10-02 09:22:01.755 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:22:01 compute-0 nova_compute[260603]: 2025-10-02 09:22:01.756 2 INFO nova.compute.claims [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:22:01 compute-0 nova_compute[260603]: 2025-10-02 09:22:01.881 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:22:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786742805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.388 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.399 2 DEBUG nova.compute.provider_tree [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:22:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 95 KiB/s wr, 19 op/s
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.459 2 DEBUG nova.scheduler.client.report [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.511 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.512 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:22:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2786742805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.596 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.597 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.654 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.713 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.812 2 DEBUG nova.policy [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.876 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.877 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.878 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Creating image(s)
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.905 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.931 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.957 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:22:02 compute-0 nova_compute[260603]: 2025-10-02 09:22:02.961 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.038 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.039 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.039 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.040 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.062 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.066 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1e3be288-5261-4a77-a127-f7bf088caf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:03 compute-0 ceph-mon[74477]: pgmap v3117: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 95 KiB/s wr, 19 op/s
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.900 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1e3be288-5261-4a77-a127-f7bf088caf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.834s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:03 compute-0 nova_compute[260603]: 2025-10-02 09:22:03.990 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:22:04 compute-0 nova_compute[260603]: 2025-10-02 09:22:04.355 2 DEBUG nova.objects.instance [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e3be288-5261-4a77-a127-f7bf088caf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:22:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 4 op/s
Oct 02 09:22:04 compute-0 nova_compute[260603]: 2025-10-02 09:22:04.426 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:22:04 compute-0 nova_compute[260603]: 2025-10-02 09:22:04.427 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Ensure instance console log exists: /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:22:04 compute-0 nova_compute[260603]: 2025-10-02 09:22:04.428 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:04 compute-0 nova_compute[260603]: 2025-10-02 09:22:04.428 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:04 compute-0 nova_compute[260603]: 2025-10-02 09:22:04.428 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:04 compute-0 nova_compute[260603]: 2025-10-02 09:22:04.556 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Successfully created port: bfc1331e-8260-43ad-b409-38c22a730429 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 09:22:04 compute-0 ceph-mon[74477]: pgmap v3118: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 4 op/s
Oct 02 09:22:05 compute-0 podman[438354]: 2025-10-02 09:22:05.028417992 +0000 UTC m=+0.087350649 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 09:22:05 compute-0 podman[438353]: 2025-10-02 09:22:05.03700459 +0000 UTC m=+0.098184568 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:22:05 compute-0 nova_compute[260603]: 2025-10-02 09:22:05.384 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Successfully updated port: bfc1331e-8260-43ad-b409-38c22a730429 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 09:22:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:05 compute-0 nova_compute[260603]: 2025-10-02 09:22:05.604 2 DEBUG nova.compute.manager [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-changed-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:05 compute-0 nova_compute[260603]: 2025-10-02 09:22:05.604 2 DEBUG nova.compute.manager [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing instance network info cache due to event network-changed-bfc1331e-8260-43ad-b409-38c22a730429. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:22:05 compute-0 nova_compute[260603]: 2025-10-02 09:22:05.605 2 DEBUG oslo_concurrency.lockutils [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:22:05 compute-0 nova_compute[260603]: 2025-10-02 09:22:05.605 2 DEBUG oslo_concurrency.lockutils [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:22:05 compute-0 nova_compute[260603]: 2025-10-02 09:22:05.605 2 DEBUG nova.network.neutron [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing network info cache for port bfc1331e-8260-43ad-b409-38c22a730429 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:22:05 compute-0 nova_compute[260603]: 2025-10-02 09:22:05.608 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:22:06 compute-0 nova_compute[260603]: 2025-10-02 09:22:06.208 2 DEBUG nova.network.neutron [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:22:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:22:06 compute-0 nova_compute[260603]: 2025-10-02 09:22:06.602 2 DEBUG nova.network.neutron [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:22:06 compute-0 nova_compute[260603]: 2025-10-02 09:22:06.633 2 DEBUG oslo_concurrency.lockutils [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:22:06 compute-0 nova_compute[260603]: 2025-10-02 09:22:06.633 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:22:06 compute-0 nova_compute[260603]: 2025-10-02 09:22:06.634 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:22:06 compute-0 nova_compute[260603]: 2025-10-02 09:22:06.827 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:22:07 compute-0 ceph-mon[74477]: pgmap v3119: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 02 09:22:07 compute-0 nova_compute[260603]: 2025-10-02 09:22:07.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.229 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.259 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.260 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance network_info: |[{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.265 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start _get_guest_xml network_info=[{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.272 2 WARNING nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.281 2 DEBUG nova.virt.libvirt.host [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.284 2 DEBUG nova.virt.libvirt.host [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.288 2 DEBUG nova.virt.libvirt.host [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.289 2 DEBUG nova.virt.libvirt.host [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.290 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.291 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.292 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.292 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.293 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.293 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.294 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.294 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.295 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.296 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.296 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.297 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.303 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:22:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:22:08 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545529187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.797 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.835 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:22:08 compute-0 nova_compute[260603]: 2025-10-02 09:22:08.840 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:08 compute-0 podman[438442]: 2025-10-02 09:22:08.995623264 +0000 UTC m=+0.054495903 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 02 09:22:08 compute-0 podman[438441]: 2025-10-02 09:22:08.996540123 +0000 UTC m=+0.060677286 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:22:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:22:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109340288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.315 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.317 2 DEBUG nova.virt.libvirt.vif [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-421018847',display_name='tempest-TestGettingAddress-server-421018847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-421018847',id=152,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-77p3vre3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:22:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=1e3be288-5261-4a77-a127-f7bf088caf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.317 2 DEBUG nova.network.os_vif_util [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.318 2 DEBUG nova.network.os_vif_util [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.319 2 DEBUG nova.objects.instance [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e3be288-5261-4a77-a127-f7bf088caf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.339 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <uuid>1e3be288-5261-4a77-a127-f7bf088caf01</uuid>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <name>instance-00000098</name>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <nova:name>tempest-TestGettingAddress-server-421018847</nova:name>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:22:08</nova:creationTime>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <nova:ports>
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <nova:port uuid="bfc1331e-8260-43ad-b409-38c22a730429">
Oct 02 09:22:09 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5c:400b" ipVersion="6"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:         </nova:port>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       </nova:ports>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <system>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <entry name="serial">1e3be288-5261-4a77-a127-f7bf088caf01</entry>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <entry name="uuid">1e3be288-5261-4a77-a127-f7bf088caf01</entry>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </system>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <os>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   </os>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <features>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   </features>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1e3be288-5261-4a77-a127-f7bf088caf01_disk">
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       </source>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/1e3be288-5261-4a77-a127-f7bf088caf01_disk.config">
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       </source>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:22:09 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <interface type="ethernet">
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <mac address="fa:16:3e:5c:40:0b"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <mtu size="1442"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <target dev="tapbfc1331e-82"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </interface>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/console.log" append="off"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <video>
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </video>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:22:09 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:22:09 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:22:09 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:22:09 compute-0 nova_compute[260603]: </domain>
Oct 02 09:22:09 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.340 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Preparing to wait for external event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.341 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.341 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.341 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.342 2 DEBUG nova.virt.libvirt.vif [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-421018847',display_name='tempest-TestGettingAddress-server-421018847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-421018847',id=152,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-77p3vre3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:22:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=1e3be288-5261-4a77-a127-f7bf088caf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.343 2 DEBUG nova.network.os_vif_util [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.343 2 DEBUG nova.network.os_vif_util [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.344 2 DEBUG os_vif [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfc1331e-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfc1331e-82, col_values=(('external_ids', {'iface-id': 'bfc1331e-8260-43ad-b409-38c22a730429', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:40:0b', 'vm-uuid': '1e3be288-5261-4a77-a127-f7bf088caf01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:09 compute-0 NetworkManager[45129]: <info>  [1759396929.4020] manager: (tapbfc1331e-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/690)
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.409 2 INFO os_vif [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82')
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.817 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.817 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.818 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:5c:40:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.819 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Using config drive
Oct 02 09:22:09 compute-0 nova_compute[260603]: 2025-10-02 09:22:09.853 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:22:09 compute-0 ceph-mon[74477]: pgmap v3120: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:22:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2545529187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:22:09 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1109340288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:22:10 compute-0 nova_compute[260603]: 2025-10-02 09:22:10.398 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Creating config drive at /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config
Oct 02 09:22:10 compute-0 nova_compute[260603]: 2025-10-02 09:22:10.404 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmxfn0gxe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:22:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:10 compute-0 nova_compute[260603]: 2025-10-02 09:22:10.575 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmxfn0gxe" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:10 compute-0 nova_compute[260603]: 2025-10-02 09:22:10.616 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:22:10 compute-0 nova_compute[260603]: 2025-10-02 09:22:10.621 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:11 compute-0 ceph-mon[74477]: pgmap v3121: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 09:22:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:22:12 compute-0 nova_compute[260603]: 2025-10-02 09:22:12.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:12 compute-0 nova_compute[260603]: 2025-10-02 09:22:12.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:22:12 compute-0 nova_compute[260603]: 2025-10-02 09:22:12.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:22:12 compute-0 nova_compute[260603]: 2025-10-02 09:22:12.576 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 09:22:12 compute-0 nova_compute[260603]: 2025-10-02 09:22:12.909 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:22:12 compute-0 nova_compute[260603]: 2025-10-02 09:22:12.909 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:22:12 compute-0 nova_compute[260603]: 2025-10-02 09:22:12.909 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 09:22:12 compute-0 nova_compute[260603]: 2025-10-02 09:22:12.910 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:22:13 compute-0 ceph-mon[74477]: pgmap v3122: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 09:22:13 compute-0 nova_compute[260603]: 2025-10-02 09:22:13.243 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:13 compute-0 nova_compute[260603]: 2025-10-02 09:22:13.244 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Deleting local config drive /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config because it was imported into RBD.
Oct 02 09:22:13 compute-0 nova_compute[260603]: 2025-10-02 09:22:13.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:13 compute-0 NetworkManager[45129]: <info>  [1759396933.3211] manager: (tapbfc1331e-82): new Tun device (/org/freedesktop/NetworkManager/Devices/691)
Oct 02 09:22:13 compute-0 kernel: tapbfc1331e-82: entered promiscuous mode
Oct 02 09:22:13 compute-0 ovn_controller[152344]: 2025-10-02T09:22:13Z|01691|binding|INFO|Claiming lport bfc1331e-8260-43ad-b409-38c22a730429 for this chassis.
Oct 02 09:22:13 compute-0 ovn_controller[152344]: 2025-10-02T09:22:13Z|01692|binding|INFO|bfc1331e-8260-43ad-b409-38c22a730429: Claiming fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b
Oct 02 09:22:13 compute-0 nova_compute[260603]: 2025-10-02 09:22:13.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:13 compute-0 ovn_controller[152344]: 2025-10-02T09:22:13Z|01693|binding|INFO|Setting lport bfc1331e-8260-43ad-b409-38c22a730429 ovn-installed in OVS
Oct 02 09:22:13 compute-0 nova_compute[260603]: 2025-10-02 09:22:13.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:13 compute-0 nova_compute[260603]: 2025-10-02 09:22:13.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:13 compute-0 systemd-udevd[438573]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 09:22:13 compute-0 systemd-machined[214636]: New machine qemu-186-instance-00000098.
Oct 02 09:22:13 compute-0 NetworkManager[45129]: <info>  [1759396933.3910] device (tapbfc1331e-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 09:22:13 compute-0 NetworkManager[45129]: <info>  [1759396933.3924] device (tapbfc1331e-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 09:22:13 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Oct 02 09:22:13 compute-0 ovn_controller[152344]: 2025-10-02T09:22:13Z|01694|binding|INFO|Setting lport bfc1331e-8260-43ad-b409-38c22a730429 up in Southbound
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.557 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b'], port_security=['fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe5c:400b/64', 'neutron:device_id': '1e3be288-5261-4a77-a127-f7bf088caf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b689ca1-3c9b-4813-8474-00abea3332c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bfc1331e-8260-43ad-b409-38c22a730429) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.560 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bfc1331e-8260-43ad-b409-38c22a730429 in datapath bb79a700-778c-4189-bea4-a6e50510de5b bound to our chassis
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.564 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79a700-778c-4189-bea4-a6e50510de5b
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d84f6134-32fb-4f7b-8c84-aa5b5ea4e822]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.631 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb432c4-4b2f-4407-8fec-5893ffd23bd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.637 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae588d2-e3c6-4c4d-a388-0fb66a109322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.683 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c744d2ef-3f16-49e8-9cda-19047cc83278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.703 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[35a368ba-80b5-4601-bc97-a0f8d9bde520]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79a700-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0e:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777310, 'reachable_time': 42986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438588, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.723 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ec860b32-9a84-4c94-8af0-c6e59ca5a880]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb79a700-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777321, 'tstamp': 777321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438589, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb79a700-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777323, 'tstamp': 777323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438589, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.725 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79a700-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:13 compute-0 nova_compute[260603]: 2025-10-02 09:22:13.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.730 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79a700-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.730 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79a700-70, col_values=(('external_ids', {'iface-id': 'f52ffbc5-b75c-4a8b-a490-21571dd7145a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:13 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:22:14 compute-0 nova_compute[260603]: 2025-10-02 09:22:14.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 09:22:14 compute-0 nova_compute[260603]: 2025-10-02 09:22:14.544 2 DEBUG nova.compute.manager [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:14 compute-0 nova_compute[260603]: 2025-10-02 09:22:14.545 2 DEBUG oslo_concurrency.lockutils [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:14 compute-0 nova_compute[260603]: 2025-10-02 09:22:14.545 2 DEBUG oslo_concurrency.lockutils [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:14 compute-0 nova_compute[260603]: 2025-10-02 09:22:14.545 2 DEBUG oslo_concurrency.lockutils [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:14 compute-0 nova_compute[260603]: 2025-10-02 09:22:14.545 2 DEBUG nova.compute.manager [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Processing event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 09:22:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:15 compute-0 ceph-mon[74477]: pgmap v3123: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.744 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.745 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396935.742994, 1e3be288-5261-4a77-a127-f7bf088caf01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.745 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] VM Started (Lifecycle Event)
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.749 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.753 2 INFO nova.virt.libvirt.driver [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance spawned successfully.
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.753 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.900 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.905 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.926 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.960 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.960 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.961 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.961 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.961 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:22:15 compute-0 nova_compute[260603]: 2025-10-02 09:22:15.962 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.016 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.016 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396935.7433307, 1e3be288-5261-4a77-a127-f7bf088caf01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.016 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] VM Paused (Lifecycle Event)
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.021 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.022 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.022 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.022 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.187 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.193 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396935.7487059, 1e3be288-5261-4a77-a127-f7bf088caf01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.193 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] VM Resumed (Lifecycle Event)
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.284 2 INFO nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Took 13.41 seconds to spawn the instance on the hypervisor.
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.285 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:22:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.518 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.524 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.629 2 INFO nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Took 14.91 seconds to build instance.
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.679 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.679 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.679 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.680 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.770 2 DEBUG nova.compute.manager [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.770 2 DEBUG oslo_concurrency.lockutils [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.771 2 DEBUG oslo_concurrency.lockutils [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.771 2 DEBUG oslo_concurrency.lockutils [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.771 2 DEBUG nova.compute.manager [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] No waiting events found dispatching network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.771 2 WARNING nova.compute.manager [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received unexpected event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 for instance with vm_state active and task_state None.
Oct 02 09:22:16 compute-0 nova_compute[260603]: 2025-10-02 09:22:16.805 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:16 compute-0 ceph-mon[74477]: pgmap v3124: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 09:22:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:22:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/823752995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.269 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.454 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.455 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.460 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.460 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.638 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.640 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3273MB free_disk=59.92182922363281GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.640 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.641 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.872 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.872 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 1e3be288-5261-4a77-a127-f7bf088caf01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.873 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.873 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:22:17 compute-0 nova_compute[260603]: 2025-10-02 09:22:17.936 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/823752995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:22:18 compute-0 nova_compute[260603]: 2025-10-02 09:22:18.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:22:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508554890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:22:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:22:18 compute-0 nova_compute[260603]: 2025-10-02 09:22:18.436 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:18 compute-0 nova_compute[260603]: 2025-10-02 09:22:18.443 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:22:18 compute-0 nova_compute[260603]: 2025-10-02 09:22:18.646 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:22:18 compute-0 nova_compute[260603]: 2025-10-02 09:22:18.757 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:22:18 compute-0 nova_compute[260603]: 2025-10-02 09:22:18.758 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:19 compute-0 nova_compute[260603]: 2025-10-02 09:22:19.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3508554890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:22:19 compute-0 ceph-mon[74477]: pgmap v3125: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 09:22:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:22:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:21 compute-0 ceph-mon[74477]: pgmap v3126: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 09:22:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:22:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859768270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:22:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:22:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859768270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:22:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1859768270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:22:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1859768270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:22:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 02 09:22:22 compute-0 nova_compute[260603]: 2025-10-02 09:22:22.753 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:22 compute-0 nova_compute[260603]: 2025-10-02 09:22:22.754 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:23 compute-0 nova_compute[260603]: 2025-10-02 09:22:23.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:23 compute-0 ceph-mon[74477]: pgmap v3127: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 02 09:22:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 02 09:22:24 compute-0 nova_compute[260603]: 2025-10-02 09:22:24.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:24 compute-0 ceph-mon[74477]: pgmap v3128: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 02 09:22:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 02 09:22:26 compute-0 nova_compute[260603]: 2025-10-02 09:22:26.455 2 DEBUG nova.compute.manager [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-changed-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:26 compute-0 nova_compute[260603]: 2025-10-02 09:22:26.456 2 DEBUG nova.compute.manager [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing instance network info cache due to event network-changed-bfc1331e-8260-43ad-b409-38c22a730429. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:22:26 compute-0 nova_compute[260603]: 2025-10-02 09:22:26.456 2 DEBUG oslo_concurrency.lockutils [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:22:26 compute-0 nova_compute[260603]: 2025-10-02 09:22:26.457 2 DEBUG oslo_concurrency.lockutils [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:22:26 compute-0 nova_compute[260603]: 2025-10-02 09:22:26.458 2 DEBUG nova.network.neutron [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing network info cache for port bfc1331e-8260-43ad-b409-38c22a730429 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:22:27 compute-0 nova_compute[260603]: 2025-10-02 09:22:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:27 compute-0 ceph-mon[74477]: pgmap v3129: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 02 09:22:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:22:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:22:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:22:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:22:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:22:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:22:28
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'vms']
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:22:28 compute-0 nova_compute[260603]: 2025-10-02 09:22:28.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct 02 09:22:28 compute-0 nova_compute[260603]: 2025-10-02 09:22:28.519 2 DEBUG nova.network.neutron [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updated VIF entry in instance network info cache for port bfc1331e-8260-43ad-b409-38c22a730429. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:22:28 compute-0 nova_compute[260603]: 2025-10-02 09:22:28.520 2 DEBUG nova.network.neutron [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:22:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:22:28 compute-0 nova_compute[260603]: 2025-10-02 09:22:28.701 2 DEBUG oslo_concurrency.lockutils [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:22:29 compute-0 ovn_controller[152344]: 2025-10-02T09:22:29Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:40:0b 10.100.0.3
Oct 02 09:22:29 compute-0 ovn_controller[152344]: 2025-10-02T09:22:29Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:40:0b 10.100.0.3
Oct 02 09:22:29 compute-0 nova_compute[260603]: 2025-10-02 09:22:29.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:29 compute-0 ceph-mon[74477]: pgmap v3130: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct 02 09:22:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Oct 02 09:22:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:31 compute-0 ceph-mon[74477]: pgmap v3131: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Oct 02 09:22:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 09:22:32 compute-0 ceph-mon[74477]: pgmap v3132: 305 pgs: 305 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 02 09:22:33 compute-0 nova_compute[260603]: 2025-10-02 09:22:33.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:22:34 compute-0 nova_compute[260603]: 2025-10-02 09:22:34.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:34.860 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:34.861 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:34.861 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:35 compute-0 ceph-mon[74477]: pgmap v3133: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:22:36 compute-0 podman[438679]: 2025-10-02 09:22:36.014685623 +0000 UTC m=+0.070335238 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 09:22:36 compute-0 podman[438678]: 2025-10-02 09:22:36.044792914 +0000 UTC m=+0.105597630 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:22:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:22:36 compute-0 ceph-mon[74477]: pgmap v3134: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:22:38 compute-0 nova_compute[260603]: 2025-10-02 09:22:38.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015183553306309825 of space, bias 1.0, pg target 0.45550659918929476 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:22:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:22:39 compute-0 nova_compute[260603]: 2025-10-02 09:22:39.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:39 compute-0 ceph-mon[74477]: pgmap v3135: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 09:22:40 compute-0 podman[438723]: 2025-10-02 09:22:40.026718267 +0000 UTC m=+0.078243915 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 09:22:40 compute-0 podman[438724]: 2025-10-02 09:22:40.048762386 +0000 UTC m=+0.093107069 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 09:22:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 708 KiB/s wr, 36 op/s
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.526434) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960526537, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1650, "num_deletes": 251, "total_data_size": 2698688, "memory_usage": 2742632, "flush_reason": "Manual Compaction"}
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 02 09:22:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960550041, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2640225, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64012, "largest_seqno": 65661, "table_properties": {"data_size": 2632528, "index_size": 4639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15627, "raw_average_key_size": 19, "raw_value_size": 2617304, "raw_average_value_size": 3346, "num_data_blocks": 208, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396785, "oldest_key_time": 1759396785, "file_creation_time": 1759396960, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 23640 microseconds, and 11140 cpu microseconds.
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.550088) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2640225 bytes OK
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.550112) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.555458) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.555473) EVENT_LOG_v1 {"time_micros": 1759396960555468, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.555490) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2691567, prev total WAL file size 2691567, number of live WAL files 2.
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.556475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2578KB)], [152(8552KB)]
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960556546, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11398461, "oldest_snapshot_seqno": -1}
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8369 keys, 9666963 bytes, temperature: kUnknown
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960609181, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9666963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9614623, "index_size": 30339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 218710, "raw_average_key_size": 26, "raw_value_size": 9469009, "raw_average_value_size": 1131, "num_data_blocks": 1174, "num_entries": 8369, "num_filter_entries": 8369, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396960, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.609499) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9666963 bytes
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.611051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.2 rd, 183.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 8883, records dropped: 514 output_compression: NoCompression
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.611076) EVENT_LOG_v1 {"time_micros": 1759396960611064, "job": 94, "event": "compaction_finished", "compaction_time_micros": 52732, "compaction_time_cpu_micros": 24090, "output_level": 6, "num_output_files": 1, "total_output_size": 9666963, "num_input_records": 8883, "num_output_records": 8369, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960611794, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960613464, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.556316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:22:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:22:41 compute-0 ceph-mon[74477]: pgmap v3136: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 708 KiB/s wr, 36 op/s
Oct 02 09:22:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 708 KiB/s wr, 37 op/s
Oct 02 09:22:43 compute-0 nova_compute[260603]: 2025-10-02 09:22:43.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:43 compute-0 ceph-mon[74477]: pgmap v3137: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 708 KiB/s wr, 37 op/s
Oct 02 09:22:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:43.539 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:22:43 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:43.540 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:22:43 compute-0 nova_compute[260603]: 2025-10-02 09:22:43.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 57 KiB/s wr, 9 op/s
Oct 02 09:22:44 compute-0 nova_compute[260603]: 2025-10-02 09:22:44.473 2 DEBUG nova.compute.manager [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-changed-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:44 compute-0 nova_compute[260603]: 2025-10-02 09:22:44.474 2 DEBUG nova.compute.manager [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing instance network info cache due to event network-changed-bfc1331e-8260-43ad-b409-38c22a730429. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:22:44 compute-0 nova_compute[260603]: 2025-10-02 09:22:44.474 2 DEBUG oslo_concurrency.lockutils [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:22:44 compute-0 nova_compute[260603]: 2025-10-02 09:22:44.474 2 DEBUG oslo_concurrency.lockutils [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:22:44 compute-0 nova_compute[260603]: 2025-10-02 09:22:44.474 2 DEBUG nova.network.neutron [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing network info cache for port bfc1331e-8260-43ad-b409-38c22a730429 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:22:44 compute-0 nova_compute[260603]: 2025-10-02 09:22:44.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:44 compute-0 nova_compute[260603]: 2025-10-02 09:22:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:22:44 compute-0 nova_compute[260603]: 2025-10-02 09:22:44.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:22:44 compute-0 sudo[438763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:44 compute-0 sudo[438763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:44 compute-0 sudo[438763]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:44 compute-0 sudo[438788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:22:44 compute-0 sudo[438788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:44 compute-0 sudo[438788]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:44 compute-0 sudo[438813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:44 compute-0 sudo[438813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:44 compute-0 sudo[438813]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:44 compute-0 sudo[438838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 09:22:44 compute-0 sudo[438838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.051 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.052 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.053 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.053 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.054 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.055 2 INFO nova.compute.manager [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Terminating instance
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.056 2 DEBUG nova.compute.manager [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:22:45 compute-0 kernel: tapbfc1331e-82 (unregistering): left promiscuous mode
Oct 02 09:22:45 compute-0 NetworkManager[45129]: <info>  [1759396965.1129] device (tapbfc1331e-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:45 compute-0 ovn_controller[152344]: 2025-10-02T09:22:45Z|01695|binding|INFO|Releasing lport bfc1331e-8260-43ad-b409-38c22a730429 from this chassis (sb_readonly=0)
Oct 02 09:22:45 compute-0 ovn_controller[152344]: 2025-10-02T09:22:45Z|01696|binding|INFO|Setting lport bfc1331e-8260-43ad-b409-38c22a730429 down in Southbound
Oct 02 09:22:45 compute-0 ovn_controller[152344]: 2025-10-02T09:22:45Z|01697|binding|INFO|Removing iface tapbfc1331e-82 ovn-installed in OVS
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.165 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b'], port_security=['fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe5c:400b/64', 'neutron:device_id': '1e3be288-5261-4a77-a127-f7bf088caf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b689ca1-3c9b-4813-8474-00abea3332c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bfc1331e-8260-43ad-b409-38c22a730429) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.166 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bfc1331e-8260-43ad-b409-38c22a730429 in datapath bb79a700-778c-4189-bea4-a6e50510de5b unbound from our chassis
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.167 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79a700-778c-4189-bea4-a6e50510de5b
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63050e68-a87c-493a-8238-56a818c184ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:45 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Oct 02 09:22:45 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 14.666s CPU time.
Oct 02 09:22:45 compute-0 systemd-machined[214636]: Machine qemu-186-instance-00000098 terminated.
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.215 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[360ce56b-2ea7-4072-b913-6136f453bb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.219 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a97ea5-725d-4258-8dbd-9aa436e52062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.245 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[95211e31-6fb8-4d57-a367-001e15e160bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.262 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa56d419-55a9-4bc3-acd9-861ed5141754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79a700-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0e:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777310, 'reachable_time': 42986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438929, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.280 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fcebfb21-7b0a-463a-aebd-285513d65815]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb79a700-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777321, 'tstamp': 777321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438938, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb79a700-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777323, 'tstamp': 777323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438938, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.282 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79a700-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79a700-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79a700-70, col_values=(('external_ids', {'iface-id': 'f52ffbc5-b75c-4a8b-a490-21571dd7145a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:45 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.295 2 INFO nova.virt.libvirt.driver [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance destroyed successfully.
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.295 2 DEBUG nova.objects.instance [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 1e3be288-5261-4a77-a127-f7bf088caf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:22:45 compute-0 podman[438952]: 2025-10-02 09:22:45.37472411 +0000 UTC m=+0.061127080 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.411 2 DEBUG nova.virt.libvirt.vif [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-421018847',display_name='tempest-TestGettingAddress-server-421018847',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-421018847',id=152,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-77p3vre3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:22:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=1e3be288-5261-4a77-a127-f7bf088caf01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.411 2 DEBUG nova.network.os_vif_util [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.412 2 DEBUG nova.network.os_vif_util [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.412 2 DEBUG os_vif [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfc1331e-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.421 2 INFO os_vif [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82')
Oct 02 09:22:45 compute-0 podman[438952]: 2025-10-02 09:22:45.475504338 +0000 UTC m=+0.161907288 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:22:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:45 compute-0 ceph-mon[74477]: pgmap v3138: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 57 KiB/s wr, 9 op/s
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.871 2 INFO nova.virt.libvirt.driver [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Deleting instance files /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01_del
Oct 02 09:22:45 compute-0 nova_compute[260603]: 2025-10-02 09:22:45.872 2 INFO nova.virt.libvirt.driver [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Deletion of /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01_del complete
Oct 02 09:22:46 compute-0 sudo[438838]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.135 2 INFO nova.compute.manager [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Took 1.08 seconds to destroy the instance on the hypervisor.
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.135 2 DEBUG oslo.service.loopingcall [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.136 2 DEBUG nova.compute.manager [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.136 2 DEBUG nova.network.neutron [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:22:46 compute-0 sudo[439128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:46 compute-0 sudo[439128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:46 compute-0 sudo[439128]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:46 compute-0 sudo[439153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:22:46 compute-0 sudo[439153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:46 compute-0 sudo[439153]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:46 compute-0 sudo[439178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:46 compute-0 sudo[439178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:46 compute-0 sudo[439178]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:46 compute-0 sudo[439203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:22:46 compute-0 sudo[439203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 02 09:22:46 compute-0 sudo[439203]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:46 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e0250a9e-0c5f-4921-99b3-46374d18fd56 does not exist
Oct 02 09:22:46 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4cf7b506-6ace-43c2-9ca2-6347c2dc4c9a does not exist
Oct 02 09:22:46 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 31cc5b5e-69fa-42ed-ab38-7086fe52e636 does not exist
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:22:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:22:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.894 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-unplugged-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.895 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.895 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.895 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.896 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] No waiting events found dispatching network-vif-unplugged-bfc1331e-8260-43ad-b409-38c22a730429 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.896 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-unplugged-bfc1331e-8260-43ad-b409-38c22a730429 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.896 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.896 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.897 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.897 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.897 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] No waiting events found dispatching network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:22:46 compute-0 nova_compute[260603]: 2025-10-02 09:22:46.898 2 WARNING nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received unexpected event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 for instance with vm_state active and task_state deleting.
Oct 02 09:22:46 compute-0 sudo[439259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:46 compute-0 sudo[439259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:46 compute-0 sudo[439259]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:47 compute-0 sudo[439284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:22:47 compute-0 sudo[439284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:47 compute-0 sudo[439284]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:47 compute-0 sudo[439309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:47 compute-0 sudo[439309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:47 compute-0 sudo[439309]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:47 compute-0 ceph-mon[74477]: pgmap v3139: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:22:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:22:47 compute-0 sudo[439334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:22:47 compute-0 sudo[439334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:47 compute-0 podman[439398]: 2025-10-02 09:22:47.49065593 +0000 UTC m=+0.040086842 container create 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:22:47 compute-0 systemd[1]: Started libpod-conmon-8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f.scope.
Oct 02 09:22:47 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:47.542 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:22:47 compute-0 podman[439398]: 2025-10-02 09:22:47.472309648 +0000 UTC m=+0.021740580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:22:47 compute-0 podman[439398]: 2025-10-02 09:22:47.576398999 +0000 UTC m=+0.125829921 container init 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 09:22:47 compute-0 podman[439398]: 2025-10-02 09:22:47.58508481 +0000 UTC m=+0.134515722 container start 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 09:22:47 compute-0 podman[439398]: 2025-10-02 09:22:47.588165927 +0000 UTC m=+0.137596859 container attach 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:22:47 compute-0 affectionate_thompson[439415]: 167 167
Oct 02 09:22:47 compute-0 podman[439398]: 2025-10-02 09:22:47.592096579 +0000 UTC m=+0.141527491 container died 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 09:22:47 compute-0 systemd[1]: libpod-8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f.scope: Deactivated successfully.
Oct 02 09:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8245cb5c949930e21268ec1e8e73c53d0ae79ffaa5faef695a21a13b49d09f97-merged.mount: Deactivated successfully.
Oct 02 09:22:47 compute-0 podman[439398]: 2025-10-02 09:22:47.639407387 +0000 UTC m=+0.188838299 container remove 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:22:47 compute-0 systemd[1]: libpod-conmon-8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f.scope: Deactivated successfully.
Oct 02 09:22:47 compute-0 podman[439438]: 2025-10-02 09:22:47.811007337 +0000 UTC m=+0.045887504 container create bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:22:47 compute-0 systemd[1]: Started libpod-conmon-bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a.scope.
Oct 02 09:22:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:47 compute-0 podman[439438]: 2025-10-02 09:22:47.880317372 +0000 UTC m=+0.115197549 container init bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:22:47 compute-0 podman[439438]: 2025-10-02 09:22:47.789737652 +0000 UTC m=+0.024617839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:22:47 compute-0 podman[439438]: 2025-10-02 09:22:47.890939484 +0000 UTC m=+0.125819651 container start bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:22:47 compute-0 podman[439438]: 2025-10-02 09:22:47.895458575 +0000 UTC m=+0.130338772 container attach bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:22:48 compute-0 nova_compute[260603]: 2025-10-02 09:22:48.062 2 DEBUG nova.network.neutron [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updated VIF entry in instance network info cache for port bfc1331e-8260-43ad-b409-38c22a730429. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:22:48 compute-0 nova_compute[260603]: 2025-10-02 09:22:48.063 2 DEBUG nova.network.neutron [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:22:48 compute-0 nova_compute[260603]: 2025-10-02 09:22:48.281 2 DEBUG oslo_concurrency.lockutils [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:22:48 compute-0 nova_compute[260603]: 2025-10-02 09:22:48.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 02 09:22:48 compute-0 intelligent_albattani[439454]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:22:48 compute-0 intelligent_albattani[439454]: --> relative data size: 1.0
Oct 02 09:22:48 compute-0 intelligent_albattani[439454]: --> All data devices are unavailable
Oct 02 09:22:48 compute-0 systemd[1]: libpod-bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a.scope: Deactivated successfully.
Oct 02 09:22:48 compute-0 podman[439438]: 2025-10-02 09:22:48.926152518 +0000 UTC m=+1.161032695 container died bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 09:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f-merged.mount: Deactivated successfully.
Oct 02 09:22:49 compute-0 podman[439438]: 2025-10-02 09:22:49.087008632 +0000 UTC m=+1.321888799 container remove bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:22:49 compute-0 systemd[1]: libpod-conmon-bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a.scope: Deactivated successfully.
Oct 02 09:22:49 compute-0 sudo[439334]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:49 compute-0 sudo[439497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:49 compute-0 sudo[439497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:49 compute-0 sudo[439497]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:49 compute-0 sudo[439522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:22:49 compute-0 sudo[439522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:49 compute-0 sudo[439522]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:49 compute-0 sudo[439547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:49 compute-0 sudo[439547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:49 compute-0 sudo[439547]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:49 compute-0 sudo[439572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:22:49 compute-0 sudo[439572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:49 compute-0 ceph-mon[74477]: pgmap v3140: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 02 09:22:49 compute-0 podman[439637]: 2025-10-02 09:22:49.749152304 +0000 UTC m=+0.056873207 container create 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:22:49 compute-0 systemd[1]: Started libpod-conmon-5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2.scope.
Oct 02 09:22:49 compute-0 podman[439637]: 2025-10-02 09:22:49.716094361 +0000 UTC m=+0.023815284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:22:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:22:49 compute-0 podman[439637]: 2025-10-02 09:22:49.903007339 +0000 UTC m=+0.210728262 container init 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 09:22:49 compute-0 podman[439637]: 2025-10-02 09:22:49.909244774 +0000 UTC m=+0.216965677 container start 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:22:49 compute-0 recursing_sinoussi[439653]: 167 167
Oct 02 09:22:49 compute-0 systemd[1]: libpod-5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2.scope: Deactivated successfully.
Oct 02 09:22:49 compute-0 podman[439637]: 2025-10-02 09:22:49.956569073 +0000 UTC m=+0.264289976 container attach 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 09:22:49 compute-0 podman[439637]: 2025-10-02 09:22:49.956963065 +0000 UTC m=+0.264683968 container died 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 09:22:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6908a33c4e64b53782267465aaaa1a40cce2138c05deeec1ed918626d3aff8e4-merged.mount: Deactivated successfully.
Oct 02 09:22:50 compute-0 podman[439637]: 2025-10-02 09:22:50.187076382 +0000 UTC m=+0.494797285 container remove 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:22:50 compute-0 systemd[1]: libpod-conmon-5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2.scope: Deactivated successfully.
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 02 09:22:50 compute-0 podman[439679]: 2025-10-02 09:22:50.441789258 +0000 UTC m=+0.059830000 container create 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.444 2 DEBUG nova.network.neutron [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:22:50 compute-0 systemd[1]: Started libpod-conmon-3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198.scope.
Oct 02 09:22:50 compute-0 podman[439679]: 2025-10-02 09:22:50.416428535 +0000 UTC m=+0.034469297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:22:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:50 compute-0 podman[439679]: 2025-10-02 09:22:50.533437821 +0000 UTC m=+0.151478553 container init 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 09:22:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:50 compute-0 podman[439679]: 2025-10-02 09:22:50.540826721 +0000 UTC m=+0.158867433 container start 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:22:50 compute-0 podman[439679]: 2025-10-02 09:22:50.544194857 +0000 UTC m=+0.162235569 container attach 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.621 2 DEBUG nova.compute.manager [req-ac09e89e-26f9-4a0a-bce4-66c0eeb16abf req-c9ab922f-b97a-42a2-90e0-490d8f0d553a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-deleted-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.623 2 INFO nova.compute.manager [req-ac09e89e-26f9-4a0a-bce4-66c0eeb16abf req-c9ab922f-b97a-42a2-90e0-490d8f0d553a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Neutron deleted interface bfc1331e-8260-43ad-b409-38c22a730429; detaching it from the instance and deleting it from the info cache
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.623 2 DEBUG nova.network.neutron [req-ac09e89e-26f9-4a0a-bce4-66c0eeb16abf req-c9ab922f-b97a-42a2-90e0-490d8f0d553a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.731 2 INFO nova.compute.manager [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Took 4.60 seconds to deallocate network for instance.
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.813 2 DEBUG nova.compute.manager [req-ac09e89e-26f9-4a0a-bce4-66c0eeb16abf req-c9ab922f-b97a-42a2-90e0-490d8f0d553a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Detach interface failed, port_id=bfc1331e-8260-43ad-b409-38c22a730429, reason: Instance 1e3be288-5261-4a77-a127-f7bf088caf01 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.928 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:50 compute-0 nova_compute[260603]: 2025-10-02 09:22:50.929 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:51 compute-0 nova_compute[260603]: 2025-10-02 09:22:51.012 2 DEBUG oslo_concurrency.processutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:22:51 compute-0 goofy_jemison[439696]: {
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:     "0": [
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:         {
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "devices": [
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "/dev/loop3"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             ],
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_name": "ceph_lv0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_size": "21470642176",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "name": "ceph_lv0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "tags": {
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cluster_name": "ceph",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.crush_device_class": "",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.encrypted": "0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osd_id": "0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.type": "block",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.vdo": "0"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             },
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "type": "block",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "vg_name": "ceph_vg0"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:         }
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:     ],
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:     "1": [
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:         {
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "devices": [
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "/dev/loop4"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             ],
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_name": "ceph_lv1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_size": "21470642176",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "name": "ceph_lv1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "tags": {
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cluster_name": "ceph",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.crush_device_class": "",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.encrypted": "0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osd_id": "1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.type": "block",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.vdo": "0"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             },
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "type": "block",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "vg_name": "ceph_vg1"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:         }
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:     ],
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:     "2": [
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:         {
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "devices": [
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "/dev/loop5"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             ],
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_name": "ceph_lv2",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_size": "21470642176",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "name": "ceph_lv2",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "tags": {
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.cluster_name": "ceph",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.crush_device_class": "",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.encrypted": "0",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osd_id": "2",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.type": "block",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:                 "ceph.vdo": "0"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             },
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "type": "block",
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:             "vg_name": "ceph_vg2"
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:         }
Oct 02 09:22:51 compute-0 goofy_jemison[439696]:     ]
Oct 02 09:22:51 compute-0 goofy_jemison[439696]: }
Oct 02 09:22:51 compute-0 systemd[1]: libpod-3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198.scope: Deactivated successfully.
Oct 02 09:22:51 compute-0 podman[439679]: 2025-10-02 09:22:51.343819272 +0000 UTC m=+0.961860004 container died 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a-merged.mount: Deactivated successfully.
Oct 02 09:22:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:22:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135541183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:22:51 compute-0 nova_compute[260603]: 2025-10-02 09:22:51.609 2 DEBUG oslo_concurrency.processutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:22:51 compute-0 nova_compute[260603]: 2025-10-02 09:22:51.617 2 DEBUG nova.compute.provider_tree [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:22:51 compute-0 ceph-mon[74477]: pgmap v3141: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 02 09:22:51 compute-0 nova_compute[260603]: 2025-10-02 09:22:51.660 2 DEBUG nova.scheduler.client.report [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:22:51 compute-0 nova_compute[260603]: 2025-10-02 09:22:51.721 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:51 compute-0 nova_compute[260603]: 2025-10-02 09:22:51.806 2 INFO nova.scheduler.client.report [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 1e3be288-5261-4a77-a127-f7bf088caf01
Oct 02 09:22:51 compute-0 podman[439679]: 2025-10-02 09:22:51.942293585 +0000 UTC m=+1.560334337 container remove 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:22:51 compute-0 systemd[1]: libpod-conmon-3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198.scope: Deactivated successfully.
Oct 02 09:22:51 compute-0 sudo[439572]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:52 compute-0 sudo[439741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:52 compute-0 sudo[439741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:52 compute-0 sudo[439741]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:52 compute-0 nova_compute[260603]: 2025-10-02 09:22:52.125 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:52 compute-0 sudo[439766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:22:52 compute-0 sudo[439766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:52 compute-0 sudo[439766]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:52 compute-0 sudo[439791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:52 compute-0 sudo[439791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:52 compute-0 sudo[439791]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:52 compute-0 sudo[439816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:22:52 compute-0 sudo[439816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 02 09:22:52 compute-0 podman[439883]: 2025-10-02 09:22:52.659028523 +0000 UTC m=+0.089607011 container create 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:22:52 compute-0 podman[439883]: 2025-10-02 09:22:52.590399039 +0000 UTC m=+0.020977487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:22:52 compute-0 systemd[1]: Started libpod-conmon-34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8.scope.
Oct 02 09:22:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4135541183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:22:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:22:52 compute-0 podman[439883]: 2025-10-02 09:22:52.827416992 +0000 UTC m=+0.257995450 container init 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:22:52 compute-0 podman[439883]: 2025-10-02 09:22:52.838537039 +0000 UTC m=+0.269115537 container start 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:22:52 compute-0 reverent_cartwright[439899]: 167 167
Oct 02 09:22:52 compute-0 systemd[1]: libpod-34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8.scope: Deactivated successfully.
Oct 02 09:22:53 compute-0 podman[439883]: 2025-10-02 09:22:53.132810051 +0000 UTC m=+0.563388609 container attach 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 09:22:53 compute-0 podman[439883]: 2025-10-02 09:22:53.13404175 +0000 UTC m=+0.564620248 container died 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0771d8ef4a65f30c10f07dcb7b7e47b5df531d523fc9622b464034f6b59c2d9a-merged.mount: Deactivated successfully.
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.578 2 DEBUG nova.compute.manager [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.578 2 DEBUG nova.compute.manager [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing instance network info cache due to event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.579 2 DEBUG oslo_concurrency.lockutils [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.579 2 DEBUG oslo_concurrency.lockutils [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.579 2 DEBUG nova.network.neutron [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.801 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.802 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.803 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.804 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.805 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.807 2 INFO nova.compute.manager [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Terminating instance
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.809 2 DEBUG nova.compute.manager [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:22:53 compute-0 ceph-mon[74477]: pgmap v3142: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 02 09:22:53 compute-0 podman[439883]: 2025-10-02 09:22:53.811154399 +0000 UTC m=+1.241732837 container remove 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:22:53 compute-0 systemd[1]: libpod-conmon-34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8.scope: Deactivated successfully.
Oct 02 09:22:53 compute-0 kernel: tapce3d61f6-90 (unregistering): left promiscuous mode
Oct 02 09:22:53 compute-0 NetworkManager[45129]: <info>  [1759396973.8942] device (tapce3d61f6-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 09:22:53 compute-0 ovn_controller[152344]: 2025-10-02T09:22:53Z|01698|binding|INFO|Releasing lport ce3d61f6-90a9-4869-ac95-256f110eea41 from this chassis (sb_readonly=0)
Oct 02 09:22:53 compute-0 ovn_controller[152344]: 2025-10-02T09:22:53Z|01699|binding|INFO|Setting lport ce3d61f6-90a9-4869-ac95-256f110eea41 down in Southbound
Oct 02 09:22:53 compute-0 ovn_controller[152344]: 2025-10-02T09:22:53Z|01700|binding|INFO|Removing iface tapce3d61f6-90 ovn-installed in OVS
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.927 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da'], port_security=['fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fec4:e3da/64', 'neutron:device_id': 'ccf01ee2-f5c6-4802-a9dd-f8c0423d4914', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b689ca1-3c9b-4813-8474-00abea3332c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ce3d61f6-90a9-4869-ac95-256f110eea41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:22:53 compute-0 nova_compute[260603]: 2025-10-02 09:22:53.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.934 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ce3d61f6-90a9-4869-ac95-256f110eea41 in datapath bb79a700-778c-4189-bea4-a6e50510de5b unbound from our chassis
Oct 02 09:22:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.934 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb79a700-778c-4189-bea4-a6e50510de5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 09:22:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.936 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a101a3-c0d7-4f4c-8ee5-b79aaf61ae5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:53 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.937 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b namespace which is not needed anymore
Oct 02 09:22:53 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct 02 09:22:53 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 16.136s CPU time.
Oct 02 09:22:53 compute-0 systemd-machined[214636]: Machine qemu-185-instance-00000097 terminated.
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.056 2 INFO nova.virt.libvirt.driver [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance destroyed successfully.
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.056 2 DEBUG nova.objects.instance [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.084 2 DEBUG nova.virt.libvirt.vif [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-885089396',display_name='tempest-TestGettingAddress-server-885089396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-885089396',id=151,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:21:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-mdzn0750',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:21:36Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=ccf01ee2-f5c6-4802-a9dd-f8c0423d4914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.084 2 DEBUG nova.network.os_vif_util [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.085 2 DEBUG nova.network.os_vif_util [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.085 2 DEBUG os_vif [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce3d61f6-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.092 2 INFO os_vif [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90')
Oct 02 09:22:54 compute-0 podman[439935]: 2025-10-02 09:22:54.016672538 +0000 UTC m=+0.024775195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:22:54 compute-0 podman[439935]: 2025-10-02 09:22:54.20885719 +0000 UTC m=+0.216959837 container create b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:22:54 compute-0 systemd[1]: Started libpod-conmon-b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7.scope.
Oct 02 09:22:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:22:54 compute-0 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [NOTICE]   (437108) : haproxy version is 2.8.14-c23fe91
Oct 02 09:22:54 compute-0 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [NOTICE]   (437108) : path to executable is /usr/sbin/haproxy
Oct 02 09:22:54 compute-0 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [WARNING]  (437108) : Exiting Master process...
Oct 02 09:22:54 compute-0 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [WARNING]  (437108) : Exiting Master process...
Oct 02 09:22:54 compute-0 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [ALERT]    (437108) : Current worker (437110) exited with code 143 (Terminated)
Oct 02 09:22:54 compute-0 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [WARNING]  (437108) : All workers exited. Exiting... (0)
Oct 02 09:22:54 compute-0 systemd[1]: libpod-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d.scope: Deactivated successfully.
Oct 02 09:22:54 compute-0 conmon[437104]: conmon 63df10a98f81980cbadb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d.scope/container/memory.events
Oct 02 09:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:54 compute-0 podman[439987]: 2025-10-02 09:22:54.319538867 +0000 UTC m=+0.063576897 container died 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:22:54 compute-0 podman[439935]: 2025-10-02 09:22:54.33815765 +0000 UTC m=+0.346260347 container init b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:22:54 compute-0 podman[439935]: 2025-10-02 09:22:54.352742385 +0000 UTC m=+0.360845062 container start b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:22:54 compute-0 podman[439935]: 2025-10-02 09:22:54.357172413 +0000 UTC m=+0.365275080 container attach b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:22:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d-userdata-shm.mount: Deactivated successfully.
Oct 02 09:22:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8ec26be00ec6c1706956eeee6591a41a079324f4042ac45aeef1cce56edc2ca-merged.mount: Deactivated successfully.
Oct 02 09:22:54 compute-0 podman[439987]: 2025-10-02 09:22:54.395396377 +0000 UTC m=+0.139434327 container cleanup 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 09:22:54 compute-0 systemd[1]: libpod-conmon-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d.scope: Deactivated successfully.
Oct 02 09:22:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 09:22:54 compute-0 podman[440028]: 2025-10-02 09:22:54.683404203 +0000 UTC m=+0.265069341 container remove 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.689 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48ed838a-aa97-41e2-a757-151f198ecf6c]: (4, ('Thu Oct  2 09:22:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b (63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d)\n63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d\nThu Oct  2 09:22:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b (63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d)\n63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.692 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f1a893-8e77-4953-8ef5-71ffe80d277e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.693 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79a700-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:22:54 compute-0 kernel: tapbb79a700-70: left promiscuous mode
Oct 02 09:22:54 compute-0 nova_compute[260603]: 2025-10-02 09:22:54.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.713 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f16bb836-a7f1-4492-b26e-ec58eed5fff2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.742 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb2ae72-1807-4d94-8126-da1e36b07406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.743 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39155c83-3807-4151-ae94-1ff5abf6370b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.757 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[02eb93f3-284b-4027-8e4d-23d4bf9464fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777303, 'reachable_time': 22149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 440044, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dbb79a700\x2d778c\x2d4189\x2dbea4\x2da6e50510de5b.mount: Deactivated successfully.
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.760 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 09:22:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.760 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7987dca4-92bf-43e2-9489-d9c7ef96307a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 09:22:54 compute-0 ceph-mon[74477]: pgmap v3143: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]: {
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "osd_id": 2,
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "type": "bluestore"
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:     },
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "osd_id": 1,
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "type": "bluestore"
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:     },
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "osd_id": 0,
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:         "type": "bluestore"
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]:     }
Oct 02 09:22:55 compute-0 gifted_montalcini[440002]: }
Oct 02 09:22:55 compute-0 systemd[1]: libpod-b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7.scope: Deactivated successfully.
Oct 02 09:22:55 compute-0 podman[439935]: 2025-10-02 09:22:55.297876805 +0000 UTC m=+1.305979442 container died b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:22:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb-merged.mount: Deactivated successfully.
Oct 02 09:22:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.581 2 DEBUG nova.network.neutron [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updated VIF entry in instance network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.582 2 DEBUG nova.network.neutron [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.613 2 DEBUG oslo_concurrency.lockutils [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.673 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-unplugged-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.673 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.674 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.674 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.674 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] No waiting events found dispatching network-vif-unplugged-ce3d61f6-90a9-4869-ac95-256f110eea41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.675 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-unplugged-ce3d61f6-90a9-4869-ac95-256f110eea41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.675 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.675 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.676 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.676 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.676 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] No waiting events found dispatching network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 09:22:55 compute-0 nova_compute[260603]: 2025-10-02 09:22:55.676 2 WARNING nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received unexpected event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 for instance with vm_state active and task_state deleting.
Oct 02 09:22:56 compute-0 podman[439935]: 2025-10-02 09:22:56.063561771 +0000 UTC m=+2.071664418 container remove b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:22:56 compute-0 systemd[1]: libpod-conmon-b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7.scope: Deactivated successfully.
Oct 02 09:22:56 compute-0 sudo[439816]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:22:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 02 09:22:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:22:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 65b55690-b8fd-4633-8337-da5bb0d09b77 does not exist
Oct 02 09:22:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4ecfdeba-41f0-46ea-8ca4-c8767e95a744 does not exist
Oct 02 09:22:56 compute-0 sudo[440088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:22:56 compute-0 sudo[440088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:56 compute-0 sudo[440088]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:56 compute-0 sudo[440113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:22:56 compute-0 sudo[440113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:22:56 compute-0 sudo[440113]: pam_unix(sudo:session): session closed for user root
Oct 02 09:22:57 compute-0 ceph-mon[74477]: pgmap v3144: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 02 09:22:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:22:57 compute-0 nova_compute[260603]: 2025-10-02 09:22:57.741 2 INFO nova.virt.libvirt.driver [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Deleting instance files /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_del
Oct 02 09:22:57 compute-0 nova_compute[260603]: 2025-10-02 09:22:57.743 2 INFO nova.virt.libvirt.driver [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Deletion of /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_del complete
Oct 02 09:22:57 compute-0 nova_compute[260603]: 2025-10-02 09:22:57.879 2 INFO nova.compute.manager [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Took 4.07 seconds to destroy the instance on the hypervisor.
Oct 02 09:22:57 compute-0 nova_compute[260603]: 2025-10-02 09:22:57.879 2 DEBUG oslo.service.loopingcall [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:22:57 compute-0 nova_compute[260603]: 2025-10-02 09:22:57.880 2 DEBUG nova.compute.manager [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:22:57 compute-0 nova_compute[260603]: 2025-10-02 09:22:57.880 2 DEBUG nova.network.neutron [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:22:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:22:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:22:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:22:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:22:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:22:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:22:58 compute-0 nova_compute[260603]: 2025-10-02 09:22:58.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 53 op/s
Oct 02 09:22:59 compute-0 nova_compute[260603]: 2025-10-02 09:22:59.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:22:59 compute-0 ceph-mon[74477]: pgmap v3145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 53 op/s
Oct 02 09:23:00 compute-0 nova_compute[260603]: 2025-10-02 09:23:00.290 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396965.2889314, 1e3be288-5261-4a77-a127-f7bf088caf01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:23:00 compute-0 nova_compute[260603]: 2025-10-02 09:23:00.291 2 INFO nova.compute.manager [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] VM Stopped (Lifecycle Event)
Oct 02 09:23:00 compute-0 nova_compute[260603]: 2025-10-02 09:23:00.341 2 DEBUG nova.compute.manager [None req-36c68962-04f6-4b3b-8c85-2e7b4c562537 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:23:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 12 KiB/s wr, 26 op/s
Oct 02 09:23:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.543 2 DEBUG nova.network.neutron [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:23:01 compute-0 ceph-mon[74477]: pgmap v3146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 12 KiB/s wr, 26 op/s
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.652 2 DEBUG nova.compute.manager [req-c97c6734-5a50-40dc-a5bb-7fe9de5246cb req-abdc2cac-dc9d-4b80-b75a-abacb1baf664 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-deleted-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.653 2 INFO nova.compute.manager [req-c97c6734-5a50-40dc-a5bb-7fe9de5246cb req-abdc2cac-dc9d-4b80-b75a-abacb1baf664 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Neutron deleted interface ce3d61f6-90a9-4869-ac95-256f110eea41; detaching it from the instance and deleting it from the info cache
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.653 2 DEBUG nova.network.neutron [req-c97c6734-5a50-40dc-a5bb-7fe9de5246cb req-abdc2cac-dc9d-4b80-b75a-abacb1baf664 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.670 2 INFO nova.compute.manager [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Took 3.79 seconds to deallocate network for instance.
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.688 2 DEBUG nova.compute.manager [req-c97c6734-5a50-40dc-a5bb-7fe9de5246cb req-abdc2cac-dc9d-4b80-b75a-abacb1baf664 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Detach interface failed, port_id=ce3d61f6-90a9-4869-ac95-256f110eea41, reason: Instance ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.774 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.775 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:23:01 compute-0 nova_compute[260603]: 2025-10-02 09:23:01.829 2 DEBUG oslo_concurrency.processutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:23:02 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:23:02 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1776422063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:23:02 compute-0 nova_compute[260603]: 2025-10-02 09:23:02.317 2 DEBUG oslo_concurrency.processutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:23:02 compute-0 nova_compute[260603]: 2025-10-02 09:23:02.326 2 DEBUG nova.compute.provider_tree [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:23:02 compute-0 nova_compute[260603]: 2025-10-02 09:23:02.352 2 DEBUG nova.scheduler.client.report [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:23:02 compute-0 nova_compute[260603]: 2025-10-02 09:23:02.392 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:23:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 02 09:23:02 compute-0 nova_compute[260603]: 2025-10-02 09:23:02.447 2 INFO nova.scheduler.client.report [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance ccf01ee2-f5c6-4802-a9dd-f8c0423d4914
Oct 02 09:23:02 compute-0 nova_compute[260603]: 2025-10-02 09:23:02.551 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:23:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1776422063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:23:03 compute-0 nova_compute[260603]: 2025-10-02 09:23:03.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:03 compute-0 ceph-mon[74477]: pgmap v3147: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 02 09:23:04 compute-0 nova_compute[260603]: 2025-10-02 09:23:04.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:23:04 compute-0 ceph-mon[74477]: pgmap v3148: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:23:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.574643) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985574680, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 498, "num_deletes": 258, "total_data_size": 452825, "memory_usage": 462232, "flush_reason": "Manual Compaction"}
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985604594, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 438089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65662, "largest_seqno": 66159, "table_properties": {"data_size": 435241, "index_size": 819, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6664, "raw_average_key_size": 18, "raw_value_size": 429532, "raw_average_value_size": 1193, "num_data_blocks": 36, "num_entries": 360, "num_filter_entries": 360, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396961, "oldest_key_time": 1759396961, "file_creation_time": 1759396985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 30040 microseconds, and 2394 cpu microseconds.
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.604671) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 438089 bytes OK
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.604705) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.653557) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.653601) EVENT_LOG_v1 {"time_micros": 1759396985653591, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.653629) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 449873, prev total WAL file size 449873, number of live WAL files 2.
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.654722) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373537' seq:72057594037927935, type:22 .. '6C6F676D0033303131' seq:0, type:0; will stop at (end)
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(427KB)], [155(9440KB)]
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985654812, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10105052, "oldest_snapshot_seqno": -1}
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8202 keys, 9993530 bytes, temperature: kUnknown
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985935876, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 9993530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9941265, "index_size": 30648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 216186, "raw_average_key_size": 26, "raw_value_size": 9797615, "raw_average_value_size": 1194, "num_data_blocks": 1185, "num_entries": 8202, "num_filter_entries": 8202, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.936226) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9993530 bytes
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.937929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.9 rd, 35.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(45.9) write-amplify(22.8) OK, records in: 8729, records dropped: 527 output_compression: NoCompression
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.937958) EVENT_LOG_v1 {"time_micros": 1759396985937945, "job": 96, "event": "compaction_finished", "compaction_time_micros": 281175, "compaction_time_cpu_micros": 47073, "output_level": 6, "num_output_files": 1, "total_output_size": 9993530, "num_input_records": 8729, "num_output_records": 8202, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985938241, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985941522, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.654596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:23:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:23:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:23:06 compute-0 podman[440161]: 2025-10-02 09:23:06.99564618 +0000 UTC m=+0.063048341 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:23:07 compute-0 podman[440160]: 2025-10-02 09:23:07.025216313 +0000 UTC m=+0.096050321 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:23:07 compute-0 ceph-mon[74477]: pgmap v3149: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:23:08 compute-0 nova_compute[260603]: 2025-10-02 09:23:08.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:23:08 compute-0 ceph-mon[74477]: pgmap v3150: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 09:23:09 compute-0 nova_compute[260603]: 2025-10-02 09:23:09.054 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396974.0529292, ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:23:09 compute-0 nova_compute[260603]: 2025-10-02 09:23:09.055 2 INFO nova.compute.manager [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] VM Stopped (Lifecycle Event)
Oct 02 09:23:09 compute-0 nova_compute[260603]: 2025-10-02 09:23:09.083 2 DEBUG nova.compute.manager [None req-9344112f-08ee-4e47-93e7-121a1f3bc9f5 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:23:09 compute-0 nova_compute[260603]: 2025-10-02 09:23:09.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:09 compute-0 nova_compute[260603]: 2025-10-02 09:23:09.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:09 compute-0 nova_compute[260603]: 2025-10-02 09:23:09.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:09 compute-0 nova_compute[260603]: 2025-10-02 09:23:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Oct 02 09:23:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:11 compute-0 podman[440207]: 2025-10-02 09:23:11.003966017 +0000 UTC m=+0.073069632 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:23:11 compute-0 podman[440208]: 2025-10-02 09:23:11.014330691 +0000 UTC m=+0.069342587 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 09:23:11 compute-0 ceph-mon[74477]: pgmap v3151: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Oct 02 09:23:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Oct 02 09:23:13 compute-0 nova_compute[260603]: 2025-10-02 09:23:13.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:13 compute-0 ceph-mon[74477]: pgmap v3152: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Oct 02 09:23:13 compute-0 nova_compute[260603]: 2025-10-02 09:23:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:13 compute-0 nova_compute[260603]: 2025-10-02 09:23:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:23:13 compute-0 nova_compute[260603]: 2025-10-02 09:23:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:23:13 compute-0 nova_compute[260603]: 2025-10-02 09:23:13.537 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:23:13 compute-0 nova_compute[260603]: 2025-10-02 09:23:13.538 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:14 compute-0 nova_compute[260603]: 2025-10-02 09:23:14.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:15 compute-0 ceph-mon[74477]: pgmap v3153: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:16 compute-0 nova_compute[260603]: 2025-10-02 09:23:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:16 compute-0 nova_compute[260603]: 2025-10-02 09:23:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:16 compute-0 nova_compute[260603]: 2025-10-02 09:23:16.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:23:16 compute-0 nova_compute[260603]: 2025-10-02 09:23:16.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:23:16 compute-0 nova_compute[260603]: 2025-10-02 09:23:16.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:23:16 compute-0 nova_compute[260603]: 2025-10-02 09:23:16.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:23:16 compute-0 nova_compute[260603]: 2025-10-02 09:23:16.554 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:23:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:23:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1979432588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:23:17 compute-0 nova_compute[260603]: 2025-10-02 09:23:17.059 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:23:17 compute-0 nova_compute[260603]: 2025-10-02 09:23:17.229 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:23:17 compute-0 nova_compute[260603]: 2025-10-02 09:23:17.230 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3558MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:23:17 compute-0 nova_compute[260603]: 2025-10-02 09:23:17.230 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:23:17 compute-0 nova_compute[260603]: 2025-10-02 09:23:17.230 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:23:17 compute-0 ceph-mon[74477]: pgmap v3154: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1979432588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:23:17 compute-0 nova_compute[260603]: 2025-10-02 09:23:17.790 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:23:17 compute-0 nova_compute[260603]: 2025-10-02 09:23:17.791 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:23:17 compute-0 nova_compute[260603]: 2025-10-02 09:23:17.818 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:23:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:23:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721013755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:23:18 compute-0 nova_compute[260603]: 2025-10-02 09:23:18.283 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:23:18 compute-0 nova_compute[260603]: 2025-10-02 09:23:18.288 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:23:18 compute-0 nova_compute[260603]: 2025-10-02 09:23:18.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:18 compute-0 nova_compute[260603]: 2025-10-02 09:23:18.382 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:23:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:18 compute-0 nova_compute[260603]: 2025-10-02 09:23:18.480 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:23:18 compute-0 nova_compute[260603]: 2025-10-02 09:23:18.480 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:23:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1721013755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:23:19 compute-0 nova_compute[260603]: 2025-10-02 09:23:19.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:19 compute-0 ceph-mon[74477]: pgmap v3155: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:21 compute-0 ceph-mon[74477]: pgmap v3156: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:23:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2264696705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:23:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:23:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2264696705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:23:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:22 compute-0 nova_compute[260603]: 2025-10-02 09:23:22.476 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:22 compute-0 nova_compute[260603]: 2025-10-02 09:23:22.476 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2264696705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:23:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2264696705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:23:23 compute-0 nova_compute[260603]: 2025-10-02 09:23:23.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:23 compute-0 ceph-mon[74477]: pgmap v3157: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:24 compute-0 nova_compute[260603]: 2025-10-02 09:23:24.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:24 compute-0 nova_compute[260603]: 2025-10-02 09:23:24.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:25 compute-0 ceph-mon[74477]: pgmap v3158: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:27 compute-0 ceph-mon[74477]: pgmap v3159: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:23:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:23:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:23:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:23:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:23:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:23:28
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta']
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:23:28 compute-0 nova_compute[260603]: 2025-10-02 09:23:28.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:23:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:23:28 compute-0 ceph-mon[74477]: pgmap v3160: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:29 compute-0 nova_compute[260603]: 2025-10-02 09:23:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:29 compute-0 nova_compute[260603]: 2025-10-02 09:23:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:31 compute-0 ceph-mon[74477]: pgmap v3161: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:33 compute-0 nova_compute[260603]: 2025-10-02 09:23:33.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:33 compute-0 ceph-mon[74477]: pgmap v3162: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:34 compute-0 nova_compute[260603]: 2025-10-02 09:23:34.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:23:34.862 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:23:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:23:34.862 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:23:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:23:34.863 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:23:35 compute-0 ceph-mon[74477]: pgmap v3163: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:37 compute-0 ceph-mon[74477]: pgmap v3164: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:38 compute-0 podman[440293]: 2025-10-02 09:23:38.023345137 +0000 UTC m=+0.089495457 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:23:38 compute-0 podman[440292]: 2025-10-02 09:23:38.032626766 +0000 UTC m=+0.098668423 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 09:23:38 compute-0 nova_compute[260603]: 2025-10-02 09:23:38.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:39 compute-0 nova_compute[260603]: 2025-10-02 09:23:39.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:23:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:23:39 compute-0 ceph-mon[74477]: pgmap v3165: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:41 compute-0 ceph-mon[74477]: pgmap v3166: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:42 compute-0 podman[440338]: 2025-10-02 09:23:42.017716958 +0000 UTC m=+0.074996823 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:23:42 compute-0 podman[440337]: 2025-10-02 09:23:42.041732869 +0000 UTC m=+0.099652034 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:23:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:42 compute-0 ceph-mon[74477]: pgmap v3167: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:43 compute-0 nova_compute[260603]: 2025-10-02 09:23:43.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:44 compute-0 nova_compute[260603]: 2025-10-02 09:23:44.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:45 compute-0 ceph-mon[74477]: pgmap v3168: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:46 compute-0 nova_compute[260603]: 2025-10-02 09:23:46.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:23:46 compute-0 nova_compute[260603]: 2025-10-02 09:23:46.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:23:47 compute-0 ceph-mon[74477]: pgmap v3169: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:48 compute-0 nova_compute[260603]: 2025-10-02 09:23:48.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:49 compute-0 nova_compute[260603]: 2025-10-02 09:23:49.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:49 compute-0 ceph-mon[74477]: pgmap v3170: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:51 compute-0 ceph-mon[74477]: pgmap v3171: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:23:52.762 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:23:52 compute-0 nova_compute[260603]: 2025-10-02 09:23:52.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:23:52.763 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:23:52 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:23:52.763 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:23:53 compute-0 nova_compute[260603]: 2025-10-02 09:23:53.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:53 compute-0 ceph-mon[74477]: pgmap v3172: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:54 compute-0 nova_compute[260603]: 2025-10-02 09:23:54.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:23:55 compute-0 ceph-mon[74477]: pgmap v3173: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:56 compute-0 sudo[440376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:23:56 compute-0 sudo[440376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:23:56 compute-0 sudo[440376]: pam_unix(sudo:session): session closed for user root
Oct 02 09:23:56 compute-0 sudo[440401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:23:56 compute-0 sudo[440401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:23:56 compute-0 sudo[440401]: pam_unix(sudo:session): session closed for user root
Oct 02 09:23:56 compute-0 sudo[440426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:23:56 compute-0 sudo[440426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:23:56 compute-0 sudo[440426]: pam_unix(sudo:session): session closed for user root
Oct 02 09:23:57 compute-0 sudo[440451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:23:57 compute-0 sudo[440451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:23:57 compute-0 sudo[440451]: pam_unix(sudo:session): session closed for user root
Oct 02 09:23:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:23:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:23:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:23:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:23:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:23:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 7f5624d9-1c60-4165-ad0d-2bd9629e2431 does not exist
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f33e64ac-25ce-4748-ae78-cb185a8c38cd does not exist
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6baa1be7-be67-4817-ad4c-88e13aec3646 does not exist
Oct 02 09:23:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:23:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:23:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:23:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:23:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:23:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:23:57 compute-0 sudo[440508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:23:57 compute-0 sudo[440508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:23:57 compute-0 sudo[440508]: pam_unix(sudo:session): session closed for user root
Oct 02 09:23:57 compute-0 sudo[440533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:23:57 compute-0 sudo[440533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:23:57 compute-0 sudo[440533]: pam_unix(sudo:session): session closed for user root
Oct 02 09:23:57 compute-0 ceph-mon[74477]: pgmap v3174: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:23:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:23:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:23:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:23:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:23:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:23:57 compute-0 sudo[440558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:23:57 compute-0 sudo[440558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:23:57 compute-0 sudo[440558]: pam_unix(sudo:session): session closed for user root
Oct 02 09:23:57 compute-0 sudo[440583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:23:57 compute-0 sudo[440583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:23:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:23:58 compute-0 podman[440648]: 2025-10-02 09:23:58.078993178 +0000 UTC m=+0.044387637 container create 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:23:58 compute-0 systemd[1]: Started libpod-conmon-5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24.scope.
Oct 02 09:23:58 compute-0 podman[440648]: 2025-10-02 09:23:58.058557139 +0000 UTC m=+0.023951618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:23:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:23:58 compute-0 podman[440648]: 2025-10-02 09:23:58.198896663 +0000 UTC m=+0.164291162 container init 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:23:58 compute-0 podman[440648]: 2025-10-02 09:23:58.213237731 +0000 UTC m=+0.178632190 container start 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:23:58 compute-0 youthful_rubin[440664]: 167 167
Oct 02 09:23:58 compute-0 systemd[1]: libpod-5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24.scope: Deactivated successfully.
Oct 02 09:23:58 compute-0 podman[440648]: 2025-10-02 09:23:58.23051441 +0000 UTC m=+0.195908919 container attach 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:23:58 compute-0 podman[440648]: 2025-10-02 09:23:58.231986066 +0000 UTC m=+0.197380525 container died 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 02 09:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7c83a9a0320756c87a9f8fc7e30989ca0662ba8a44377f33a5af4ebd3cb7f15-merged.mount: Deactivated successfully.
Oct 02 09:23:58 compute-0 nova_compute[260603]: 2025-10-02 09:23:58.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:58 compute-0 podman[440648]: 2025-10-02 09:23:58.446007912 +0000 UTC m=+0.411402371 container remove 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:23:58 compute-0 systemd[1]: libpod-conmon-5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24.scope: Deactivated successfully.
Oct 02 09:23:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:58 compute-0 podman[440690]: 2025-10-02 09:23:58.709283044 +0000 UTC m=+0.111122701 container create 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:23:58 compute-0 podman[440690]: 2025-10-02 09:23:58.645823163 +0000 UTC m=+0.047662840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:23:58 compute-0 systemd[1]: Started libpod-conmon-975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3.scope.
Oct 02 09:23:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:23:58 compute-0 podman[440690]: 2025-10-02 09:23:58.852338953 +0000 UTC m=+0.254178600 container init 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 09:23:58 compute-0 podman[440690]: 2025-10-02 09:23:58.861568091 +0000 UTC m=+0.263407718 container start 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:23:58 compute-0 podman[440690]: 2025-10-02 09:23:58.921459192 +0000 UTC m=+0.323298829 container attach 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:23:59 compute-0 nova_compute[260603]: 2025-10-02 09:23:59.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:23:59 compute-0 ceph-mon[74477]: pgmap v3175: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:23:59 compute-0 blissful_ritchie[440706]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:23:59 compute-0 blissful_ritchie[440706]: --> relative data size: 1.0
Oct 02 09:23:59 compute-0 blissful_ritchie[440706]: --> All data devices are unavailable
Oct 02 09:23:59 compute-0 systemd[1]: libpod-975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3.scope: Deactivated successfully.
Oct 02 09:23:59 compute-0 systemd[1]: libpod-975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3.scope: Consumed 1.060s CPU time.
Oct 02 09:23:59 compute-0 podman[440690]: 2025-10-02 09:23:59.974352699 +0000 UTC m=+1.376192366 container died 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 09:24:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05-merged.mount: Deactivated successfully.
Oct 02 09:24:00 compute-0 podman[440690]: 2025-10-02 09:24:00.089139454 +0000 UTC m=+1.490979091 container remove 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:24:00 compute-0 systemd[1]: libpod-conmon-975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3.scope: Deactivated successfully.
Oct 02 09:24:00 compute-0 sudo[440583]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:00 compute-0 sudo[440750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:24:00 compute-0 sudo[440750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:00 compute-0 sudo[440750]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:00 compute-0 sudo[440775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:24:00 compute-0 sudo[440775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:00 compute-0 sudo[440775]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:00 compute-0 sudo[440800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:24:00 compute-0 sudo[440800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:00 compute-0 sudo[440800]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:00 compute-0 sudo[440825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:24:00 compute-0 sudo[440825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:00 compute-0 podman[440892]: 2025-10-02 09:24:00.835715223 +0000 UTC m=+0.048974781 container create 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 02 09:24:00 compute-0 systemd[1]: Started libpod-conmon-8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65.scope.
Oct 02 09:24:00 compute-0 podman[440892]: 2025-10-02 09:24:00.812930721 +0000 UTC m=+0.026190359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:24:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:24:00 compute-0 podman[440892]: 2025-10-02 09:24:00.930294947 +0000 UTC m=+0.143554575 container init 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:24:00 compute-0 podman[440892]: 2025-10-02 09:24:00.942176358 +0000 UTC m=+0.155435906 container start 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 09:24:00 compute-0 podman[440892]: 2025-10-02 09:24:00.945821562 +0000 UTC m=+0.159081160 container attach 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:24:00 compute-0 systemd[1]: libpod-8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65.scope: Deactivated successfully.
Oct 02 09:24:00 compute-0 dazzling_kare[440909]: 167 167
Oct 02 09:24:00 compute-0 podman[440892]: 2025-10-02 09:24:00.951557101 +0000 UTC m=+0.164816659 container died 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 02 09:24:00 compute-0 conmon[440909]: conmon 8620fb3a7333554d0d1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65.scope/container/memory.events
Oct 02 09:24:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7667cbd134fa136678357bdd763197c76959f8b064b4fdf42f78d12e97390a6-merged.mount: Deactivated successfully.
Oct 02 09:24:00 compute-0 podman[440892]: 2025-10-02 09:24:00.998419345 +0000 UTC m=+0.211678923 container remove 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:24:01 compute-0 systemd[1]: libpod-conmon-8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65.scope: Deactivated successfully.
Oct 02 09:24:01 compute-0 podman[440933]: 2025-10-02 09:24:01.200848508 +0000 UTC m=+0.060047627 container create bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:24:01 compute-0 systemd[1]: Started libpod-conmon-bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842.scope.
Oct 02 09:24:01 compute-0 podman[440933]: 2025-10-02 09:24:01.178077047 +0000 UTC m=+0.037276176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:24:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:24:01 compute-0 podman[440933]: 2025-10-02 09:24:01.29823787 +0000 UTC m=+0.157437009 container init bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:24:01 compute-0 podman[440933]: 2025-10-02 09:24:01.316031425 +0000 UTC m=+0.175230514 container start bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 09:24:01 compute-0 podman[440933]: 2025-10-02 09:24:01.320074111 +0000 UTC m=+0.179273210 container attach bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:24:01 compute-0 ceph-mon[74477]: pgmap v3176: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]: {
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:     "0": [
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:         {
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "devices": [
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "/dev/loop3"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             ],
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_name": "ceph_lv0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_size": "21470642176",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "name": "ceph_lv0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "tags": {
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cluster_name": "ceph",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.crush_device_class": "",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.encrypted": "0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osd_id": "0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.type": "block",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.vdo": "0"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             },
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "type": "block",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "vg_name": "ceph_vg0"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:         }
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:     ],
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:     "1": [
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:         {
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "devices": [
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "/dev/loop4"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             ],
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_name": "ceph_lv1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_size": "21470642176",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "name": "ceph_lv1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "tags": {
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cluster_name": "ceph",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.crush_device_class": "",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.encrypted": "0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osd_id": "1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.type": "block",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.vdo": "0"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             },
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "type": "block",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "vg_name": "ceph_vg1"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:         }
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:     ],
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:     "2": [
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:         {
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "devices": [
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "/dev/loop5"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             ],
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_name": "ceph_lv2",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_size": "21470642176",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "name": "ceph_lv2",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "tags": {
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.cluster_name": "ceph",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.crush_device_class": "",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.encrypted": "0",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osd_id": "2",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.type": "block",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:                 "ceph.vdo": "0"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             },
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "type": "block",
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:             "vg_name": "ceph_vg2"
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:         }
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]:     ]
Oct 02 09:24:02 compute-0 sleepy_meitner[440949]: }
Oct 02 09:24:02 compute-0 systemd[1]: libpod-bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842.scope: Deactivated successfully.
Oct 02 09:24:02 compute-0 podman[440933]: 2025-10-02 09:24:02.14313671 +0000 UTC m=+1.002335819 container died bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6-merged.mount: Deactivated successfully.
Oct 02 09:24:02 compute-0 podman[440933]: 2025-10-02 09:24:02.255984604 +0000 UTC m=+1.115183693 container remove bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:24:02 compute-0 systemd[1]: libpod-conmon-bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842.scope: Deactivated successfully.
Oct 02 09:24:02 compute-0 sudo[440825]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:02 compute-0 sudo[440969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:24:02 compute-0 sudo[440969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:02 compute-0 sudo[440969]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:02 compute-0 sudo[440994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:24:02 compute-0 sudo[440994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:02 compute-0 sudo[440994]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:02 compute-0 sudo[441019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:24:02 compute-0 sudo[441019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:02 compute-0 sudo[441019]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:02 compute-0 sudo[441044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:24:02 compute-0 sudo[441044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:02 compute-0 ceph-mon[74477]: pgmap v3177: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:03 compute-0 podman[441110]: 2025-10-02 09:24:03.144406324 +0000 UTC m=+0.061423600 container create 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 09:24:03 compute-0 systemd[1]: Started libpod-conmon-907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0.scope.
Oct 02 09:24:03 compute-0 podman[441110]: 2025-10-02 09:24:03.120419844 +0000 UTC m=+0.037437190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:24:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:24:03 compute-0 podman[441110]: 2025-10-02 09:24:03.258549079 +0000 UTC m=+0.175566385 container init 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 09:24:03 compute-0 podman[441110]: 2025-10-02 09:24:03.272560616 +0000 UTC m=+0.189577892 container start 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:24:03 compute-0 podman[441110]: 2025-10-02 09:24:03.276029475 +0000 UTC m=+0.193046751 container attach 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 09:24:03 compute-0 upbeat_feistel[441126]: 167 167
Oct 02 09:24:03 compute-0 systemd[1]: libpod-907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0.scope: Deactivated successfully.
Oct 02 09:24:03 compute-0 conmon[441126]: conmon 907cf23f0bf31d0a55d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0.scope/container/memory.events
Oct 02 09:24:03 compute-0 podman[441110]: 2025-10-02 09:24:03.282822447 +0000 UTC m=+0.199839703 container died 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 09:24:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3254f4c137e9cb420bd18445139dd050f4889513cf16ccbfc265c6edd0ab34c2-merged.mount: Deactivated successfully.
Oct 02 09:24:03 compute-0 podman[441110]: 2025-10-02 09:24:03.328427872 +0000 UTC m=+0.245445158 container remove 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:24:03 compute-0 systemd[1]: libpod-conmon-907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0.scope: Deactivated successfully.
Oct 02 09:24:03 compute-0 nova_compute[260603]: 2025-10-02 09:24:03.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:03 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:24:03 compute-0 podman[441151]: 2025-10-02 09:24:03.53069066 +0000 UTC m=+0.055185966 container create 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:24:03 compute-0 systemd[1]: Started libpod-conmon-9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81.scope.
Oct 02 09:24:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:24:03 compute-0 podman[441151]: 2025-10-02 09:24:03.513484532 +0000 UTC m=+0.037979858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:24:03 compute-0 podman[441151]: 2025-10-02 09:24:03.621876388 +0000 UTC m=+0.146371724 container init 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 09:24:03 compute-0 podman[441151]: 2025-10-02 09:24:03.628649459 +0000 UTC m=+0.153144785 container start 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:24:03 compute-0 podman[441151]: 2025-10-02 09:24:03.632292153 +0000 UTC m=+0.156787479 container attach 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:24:04 compute-0 nova_compute[260603]: 2025-10-02 09:24:04.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]: {
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "osd_id": 2,
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "type": "bluestore"
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:     },
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "osd_id": 1,
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "type": "bluestore"
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:     },
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "osd_id": 0,
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:         "type": "bluestore"
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]:     }
Oct 02 09:24:04 compute-0 stupefied_stonebraker[441168]: }
Oct 02 09:24:04 compute-0 systemd[1]: libpod-9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81.scope: Deactivated successfully.
Oct 02 09:24:04 compute-0 podman[441151]: 2025-10-02 09:24:04.695038667 +0000 UTC m=+1.219534063 container died 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:24:04 compute-0 systemd[1]: libpod-9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81.scope: Consumed 1.070s CPU time.
Oct 02 09:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3-merged.mount: Deactivated successfully.
Oct 02 09:24:04 compute-0 podman[441151]: 2025-10-02 09:24:04.822866169 +0000 UTC m=+1.347361465 container remove 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 09:24:04 compute-0 systemd[1]: libpod-conmon-9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81.scope: Deactivated successfully.
Oct 02 09:24:04 compute-0 sudo[441044]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:24:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:24:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:24:04 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:24:04 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6dd329f5-f934-49c5-8595-518d02b4c16e does not exist
Oct 02 09:24:04 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 35db8778-9d33-4973-953e-f8d792e4c464 does not exist
Oct 02 09:24:04 compute-0 sudo[441215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:24:04 compute-0 sudo[441215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:04 compute-0 sudo[441215]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:05 compute-0 sudo[441240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:24:05 compute-0 sudo[441240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:24:05 compute-0 sudo[441240]: pam_unix(sudo:session): session closed for user root
Oct 02 09:24:05 compute-0 ceph-mon[74477]: pgmap v3178: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:24:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:24:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:07 compute-0 ceph-mon[74477]: pgmap v3179: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:08 compute-0 nova_compute[260603]: 2025-10-02 09:24:08.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:09 compute-0 podman[441266]: 2025-10-02 09:24:09.030955456 +0000 UTC m=+0.084365856 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:24:09 compute-0 podman[441265]: 2025-10-02 09:24:09.061901883 +0000 UTC m=+0.125789720 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:24:09 compute-0 nova_compute[260603]: 2025-10-02 09:24:09.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:09 compute-0 ovn_controller[152344]: 2025-10-02T09:24:09Z|01701|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 09:24:09 compute-0 ceph-mon[74477]: pgmap v3180: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:11 compute-0 nova_compute[260603]: 2025-10-02 09:24:11.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:11 compute-0 ceph-mon[74477]: pgmap v3181: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:13 compute-0 podman[441310]: 2025-10-02 09:24:13.008960328 +0000 UTC m=+0.067007154 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 09:24:13 compute-0 podman[441311]: 2025-10-02 09:24:13.01700165 +0000 UTC m=+0.066345634 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 09:24:13 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 09:24:13 compute-0 systemd[1]: virtsecretd.service: Consumed 1.243s CPU time.
Oct 02 09:24:13 compute-0 nova_compute[260603]: 2025-10-02 09:24:13.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:13 compute-0 nova_compute[260603]: 2025-10-02 09:24:13.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:13 compute-0 ceph-mon[74477]: pgmap v3182: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:14 compute-0 nova_compute[260603]: 2025-10-02 09:24:14.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:14 compute-0 nova_compute[260603]: 2025-10-02 09:24:14.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:14 compute-0 nova_compute[260603]: 2025-10-02 09:24:14.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:24:14 compute-0 nova_compute[260603]: 2025-10-02 09:24:14.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:24:14 compute-0 nova_compute[260603]: 2025-10-02 09:24:14.635 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:24:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:15 compute-0 ceph-mon[74477]: pgmap v3183: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:16 compute-0 nova_compute[260603]: 2025-10-02 09:24:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:17 compute-0 nova_compute[260603]: 2025-10-02 09:24:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:17 compute-0 nova_compute[260603]: 2025-10-02 09:24:17.603 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:17 compute-0 nova_compute[260603]: 2025-10-02 09:24:17.603 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:17 compute-0 nova_compute[260603]: 2025-10-02 09:24:17.603 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:17 compute-0 nova_compute[260603]: 2025-10-02 09:24:17.604 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:24:17 compute-0 nova_compute[260603]: 2025-10-02 09:24:17.604 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:17 compute-0 ceph-mon[74477]: pgmap v3184: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:24:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360997613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.043 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.222 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.223 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.223 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.224 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.431 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.431 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.451 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:18 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/360997613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:24:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:24:18 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826731496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.903 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.909 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.943 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.945 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:24:18 compute-0 nova_compute[260603]: 2025-10-02 09:24:18.945 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:19 compute-0 nova_compute[260603]: 2025-10-02 09:24:19.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:19 compute-0 ceph-mon[74477]: pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:19 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2826731496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:24:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:21 compute-0 ceph-mon[74477]: pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:21 compute-0 nova_compute[260603]: 2025-10-02 09:24:21.943 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:24:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356717144' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:24:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:24:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356717144' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:24:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1356717144' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:24:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1356717144' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:24:23 compute-0 nova_compute[260603]: 2025-10-02 09:24:23.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:23 compute-0 nova_compute[260603]: 2025-10-02 09:24:23.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:23 compute-0 ceph-mon[74477]: pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:24 compute-0 nova_compute[260603]: 2025-10-02 09:24:24.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:25 compute-0 ceph-mon[74477]: pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:27 compute-0 ceph-mon[74477]: pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:24:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:24:28
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.control']
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:28 compute-0 nova_compute[260603]: 2025-10-02 09:24:28.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:24:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:24:28 compute-0 ceph-mon[74477]: pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:29 compute-0 nova_compute[260603]: 2025-10-02 09:24:29.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:29 compute-0 nova_compute[260603]: 2025-10-02 09:24:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:31 compute-0 ceph-mon[74477]: pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.236 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "29e12e23-cd78-4748-a814-e030401a2d37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.236 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.374 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:33 compute-0 ceph-mon[74477]: pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.751 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.751 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.765 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.766 2 INFO nova.compute.claims [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Claim successful on node compute-0.ctlplane.example.com
Oct 02 09:24:33 compute-0 nova_compute[260603]: 2025-10-02 09:24:33.833 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:24:34 compute-0 nova_compute[260603]: 2025-10-02 09:24:34.074 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:34 compute-0 nova_compute[260603]: 2025-10-02 09:24:34.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:24:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512420117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:24:34 compute-0 nova_compute[260603]: 2025-10-02 09:24:34.543 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:34 compute-0 nova_compute[260603]: 2025-10-02 09:24:34.551 2 DEBUG nova.compute.provider_tree [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:24:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2512420117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:24:34 compute-0 nova_compute[260603]: 2025-10-02 09:24:34.657 2 DEBUG nova.scheduler.client.report [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:24:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:24:34.863 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:24:34.863 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:24:34.864 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:34 compute-0 nova_compute[260603]: 2025-10-02 09:24:34.908 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:34 compute-0 nova_compute[260603]: 2025-10-02 09:24:34.909 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 09:24:35 compute-0 nova_compute[260603]: 2025-10-02 09:24:35.089 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 09:24:35 compute-0 nova_compute[260603]: 2025-10-02 09:24:35.090 2 DEBUG nova.network.neutron [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 09:24:35 compute-0 nova_compute[260603]: 2025-10-02 09:24:35.260 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 09:24:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:35 compute-0 ceph-mon[74477]: pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:35 compute-0 nova_compute[260603]: 2025-10-02 09:24:35.661 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 09:24:35 compute-0 nova_compute[260603]: 2025-10-02 09:24:35.775 2 DEBUG nova.network.neutron [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 09:24:35 compute-0 nova_compute[260603]: 2025-10-02 09:24:35.776 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.216 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.218 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.219 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Creating image(s)
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.254 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.294 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.333 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.340 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.450 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.451 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.452 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.453 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.477 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:24:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:36 compute-0 nova_compute[260603]: 2025-10-02 09:24:36.483 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 29e12e23-cd78-4748-a814-e030401a2d37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.269 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 29e12e23-cd78-4748-a814-e030401a2d37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.787s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.366 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] resizing rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 09:24:37 compute-0 ceph-mon[74477]: pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.851 2 DEBUG nova.objects.instance [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lazy-loading 'migration_context' on Instance uuid 29e12e23-cd78-4748-a814-e030401a2d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.968 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.969 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Ensure instance console log exists: /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.970 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.970 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.971 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.973 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.981 2 WARNING nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.988 2 DEBUG nova.virt.libvirt.host [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.989 2 DEBUG nova.virt.libvirt.host [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.993 2 DEBUG nova.virt.libvirt.host [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.993 2 DEBUG nova.virt.libvirt.host [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.994 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.995 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.995 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.996 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.996 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.997 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.997 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.998 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.998 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.999 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 09:24:37 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.999 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 09:24:38 compute-0 nova_compute[260603]: 2025-10-02 09:24:37.999 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 09:24:38 compute-0 nova_compute[260603]: 2025-10-02 09:24:38.004 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:24:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422868072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:24:38 compute-0 nova_compute[260603]: 2025-10-02 09:24:38.449 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:38 compute-0 nova_compute[260603]: 2025-10-02 09:24:38.475 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:24:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 02 09:24:38 compute-0 nova_compute[260603]: 2025-10-02 09:24:38.481 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:38 compute-0 nova_compute[260603]: 2025-10-02 09:24:38.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3422868072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:24:38 compute-0 ceph-mon[74477]: pgmap v3195: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 02 09:24:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 09:24:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066405845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:24:38 compute-0 nova_compute[260603]: 2025-10-02 09:24:38.960 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:38 compute-0 nova_compute[260603]: 2025-10-02 09:24:38.961 2 DEBUG nova.objects.instance [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29e12e23-cd78-4748-a814-e030401a2d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:24:39 compute-0 nova_compute[260603]: 2025-10-02 09:24:39.107 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] End _get_guest_xml xml=<domain type="kvm">
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <uuid>29e12e23-cd78-4748-a814-e030401a2d37</uuid>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <name>instance-00000099</name>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <memory>131072</memory>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <vcpu>1</vcpu>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <metadata>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <nova:name>tempest-AggregatesAdminTestJSON-server-1040576008</nova:name>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <nova:creationTime>2025-10-02 09:24:37</nova:creationTime>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <nova:flavor name="m1.nano">
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <nova:memory>128</nova:memory>
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <nova:disk>1</nova:disk>
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <nova:swap>0</nova:swap>
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <nova:vcpus>1</nova:vcpus>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       </nova:flavor>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <nova:owner>
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <nova:user uuid="ab1bff8a265e4dc48c8c8ab958df08cd">tempest-AggregatesAdminTestJSON-1484718099-project-member</nova:user>
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <nova:project uuid="6d90e7dd32004462a3f1becf8eeda717">tempest-AggregatesAdminTestJSON-1484718099</nova:project>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       </nova:owner>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <nova:ports/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     </nova:instance>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   </metadata>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <sysinfo type="smbios">
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <system>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <entry name="manufacturer">RDO</entry>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <entry name="product">OpenStack Compute</entry>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <entry name="serial">29e12e23-cd78-4748-a814-e030401a2d37</entry>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <entry name="uuid">29e12e23-cd78-4748-a814-e030401a2d37</entry>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <entry name="family">Virtual Machine</entry>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     </system>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   </sysinfo>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <os>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <boot dev="hd"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <smbios mode="sysinfo"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   </os>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <features>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <acpi/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <apic/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <vmcoreinfo/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   </features>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <clock offset="utc">
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <timer name="hpet" present="no"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   </clock>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <cpu mode="host-model" match="exact">
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   </cpu>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   <devices>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <disk type="network" device="disk">
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/29e12e23-cd78-4748-a814-e030401a2d37_disk">
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       </source>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <target dev="vda" bus="virtio"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <disk type="network" device="cdrom">
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <driver type="raw" cache="none"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <source protocol="rbd" name="vms/29e12e23-cd78-4748-a814-e030401a2d37_disk.config">
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <host name="192.168.122.100" port="6789"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       </source>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <auth username="openstack">
Oct 02 09:24:39 compute-0 nova_compute[260603]:         <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       </auth>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <target dev="sda" bus="sata"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     </disk>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <serial type="pty">
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <log file="/var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/console.log" append="off"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     </serial>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <video>
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <model type="virtio"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     </video>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <input type="tablet" bus="usb"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <rng model="virtio">
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <backend model="random">/dev/urandom</backend>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     </rng>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <controller type="usb" index="0"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     <memballoon model="virtio">
Oct 02 09:24:39 compute-0 nova_compute[260603]:       <stats period="10"/>
Oct 02 09:24:39 compute-0 nova_compute[260603]:     </memballoon>
Oct 02 09:24:39 compute-0 nova_compute[260603]:   </devices>
Oct 02 09:24:39 compute-0 nova_compute[260603]: </domain>
Oct 02 09:24:39 compute-0 nova_compute[260603]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 09:24:39 compute-0 nova_compute[260603]: 2025-10-02 09:24:39.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:24:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:24:39 compute-0 nova_compute[260603]: 2025-10-02 09:24:39.585 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:24:39 compute-0 nova_compute[260603]: 2025-10-02 09:24:39.585 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 09:24:39 compute-0 nova_compute[260603]: 2025-10-02 09:24:39.586 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Using config drive
Oct 02 09:24:39 compute-0 nova_compute[260603]: 2025-10-02 09:24:39.616 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:24:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4066405845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 09:24:40 compute-0 podman[441664]: 2025-10-02 09:24:40.009641932 +0000 UTC m=+0.068513201 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:24:40 compute-0 podman[441663]: 2025-10-02 09:24:40.079988789 +0000 UTC m=+0.142837193 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:24:40 compute-0 nova_compute[260603]: 2025-10-02 09:24:40.212 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Creating config drive at /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config
Oct 02 09:24:40 compute-0 nova_compute[260603]: 2025-10-02 09:24:40.222 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvctzr9u1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:40 compute-0 nova_compute[260603]: 2025-10-02 09:24:40.409 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvctzr9u1" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:40 compute-0 nova_compute[260603]: 2025-10-02 09:24:40.453 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 09:24:40 compute-0 nova_compute[260603]: 2025-10-02 09:24:40.460 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config 29e12e23-cd78-4748-a814-e030401a2d37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 02 09:24:40 compute-0 nova_compute[260603]: 2025-10-02 09:24:40.524 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:40 compute-0 nova_compute[260603]: 2025-10-02 09:24:40.660 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config 29e12e23-cd78-4748-a814-e030401a2d37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:40 compute-0 nova_compute[260603]: 2025-10-02 09:24:40.661 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Deleting local config drive /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config because it was imported into RBD.
Oct 02 09:24:40 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 09:24:40 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 09:24:40 compute-0 systemd-machined[214636]: New machine qemu-187-instance-00000099.
Oct 02 09:24:40 compute-0 ceph-mon[74477]: pgmap v3196: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 02 09:24:40 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.825 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759397081.8255632, 29e12e23-cd78-4748-a814-e030401a2d37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.828 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] VM Resumed (Lifecycle Event)
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.830 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.831 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.834 2 INFO nova.virt.libvirt.driver [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance spawned successfully.
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.835 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.909 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.914 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.994 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.994 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.995 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.995 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.995 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:24:41 compute-0 nova_compute[260603]: 2025-10-02 09:24:41.996 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.057 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.058 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759397081.8279276, 29e12e23-cd78-4748-a814-e030401a2d37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.059 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] VM Started (Lifecycle Event)
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.180 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.184 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.249 2 INFO nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Took 6.03 seconds to spawn the instance on the hypervisor.
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.250 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.291 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.414 2 INFO nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Took 8.71 seconds to build instance.
Oct 02 09:24:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 09:24:42 compute-0 nova_compute[260603]: 2025-10-02 09:24:42.625 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:43 compute-0 nova_compute[260603]: 2025-10-02 09:24:43.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:43 compute-0 ceph-mon[74477]: pgmap v3197: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 09:24:44 compute-0 podman[441821]: 2025-10-02 09:24:44.004658815 +0000 UTC m=+0.066964193 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 09:24:44 compute-0 podman[441820]: 2025-10-02 09:24:44.015934357 +0000 UTC m=+0.078907366 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 09:24:44 compute-0 nova_compute[260603]: 2025-10-02 09:24:44.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 09:24:44 compute-0 ceph-mon[74477]: pgmap v3198: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.329 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "29e12e23-cd78-4748-a814-e030401a2d37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.331 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.332 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "29e12e23-cd78-4748-a814-e030401a2d37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.332 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.333 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.335 2 INFO nova.compute.manager [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Terminating instance
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.336 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "refresh_cache-29e12e23-cd78-4748-a814-e030401a2d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.337 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquired lock "refresh_cache-29e12e23-cd78-4748-a814-e030401a2d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.338 2 DEBUG nova.network.neutron [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 09:24:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.676 2 DEBUG nova.network.neutron [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:24:45 compute-0 nova_compute[260603]: 2025-10-02 09:24:45.907 2 DEBUG nova.network.neutron [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:24:46 compute-0 nova_compute[260603]: 2025-10-02 09:24:46.131 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Releasing lock "refresh_cache-29e12e23-cd78-4748-a814-e030401a2d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 09:24:46 compute-0 nova_compute[260603]: 2025-10-02 09:24:46.132 2 DEBUG nova.compute.manager [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 09:24:46 compute-0 sshd-session[441859]: Invalid user ubuntu from 167.71.248.239 port 40862
Oct 02 09:24:46 compute-0 sshd-session[441859]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 09:24:46 compute-0 sshd-session[441859]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.71.248.239
Oct 02 09:24:46 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct 02 09:24:46 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 5.316s CPU time.
Oct 02 09:24:46 compute-0 systemd-machined[214636]: Machine qemu-187-instance-00000099 terminated.
Oct 02 09:24:46 compute-0 nova_compute[260603]: 2025-10-02 09:24:46.366 2 INFO nova.virt.libvirt.driver [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance destroyed successfully.
Oct 02 09:24:46 compute-0 nova_compute[260603]: 2025-10-02 09:24:46.367 2 DEBUG nova.objects.instance [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lazy-loading 'resources' on Instance uuid 29e12e23-cd78-4748-a814-e030401a2d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 09:24:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 09:24:47 compute-0 nova_compute[260603]: 2025-10-02 09:24:47.603 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:47 compute-0 nova_compute[260603]: 2025-10-02 09:24:47.604 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:24:47 compute-0 ceph-mon[74477]: pgmap v3199: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 09:24:47 compute-0 sshd-session[441859]: Failed password for invalid user ubuntu from 167.71.248.239 port 40862 ssh2
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.409 2 INFO nova.virt.libvirt.driver [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Deleting instance files /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37_del
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.410 2 INFO nova.virt.libvirt.driver [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Deletion of /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37_del complete
Oct 02 09:24:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Oct 02 09:24:48 compute-0 sshd-session[441859]: Connection closed by invalid user ubuntu 167.71.248.239 port 40862 [preauth]
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.620 2 INFO nova.compute.manager [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Took 2.49 seconds to destroy the instance on the hypervisor.
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.620 2 DEBUG oslo.service.loopingcall [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.621 2 DEBUG nova.compute.manager [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.621 2 DEBUG nova.network.neutron [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.714 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:24:48 compute-0 nova_compute[260603]: 2025-10-02 09:24:48.714 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:24:49 compute-0 nova_compute[260603]: 2025-10-02 09:24:49.225 2 DEBUG nova.network.neutron [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 09:24:49 compute-0 nova_compute[260603]: 2025-10-02 09:24:49.253 2 DEBUG nova.network.neutron [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 09:24:49 compute-0 nova_compute[260603]: 2025-10-02 09:24:49.330 2 INFO nova.compute.manager [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Took 0.71 seconds to deallocate network for instance.
Oct 02 09:24:49 compute-0 nova_compute[260603]: 2025-10-02 09:24:49.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:49 compute-0 nova_compute[260603]: 2025-10-02 09:24:49.451 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:24:49 compute-0 nova_compute[260603]: 2025-10-02 09:24:49.452 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:24:49 compute-0 nova_compute[260603]: 2025-10-02 09:24:49.517 2 DEBUG oslo_concurrency.processutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:24:49 compute-0 ceph-mon[74477]: pgmap v3200: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Oct 02 09:24:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:24:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445335509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:24:50 compute-0 nova_compute[260603]: 2025-10-02 09:24:50.054 2 DEBUG oslo_concurrency.processutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:24:50 compute-0 nova_compute[260603]: 2025-10-02 09:24:50.061 2 DEBUG nova.compute.provider_tree [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:24:50 compute-0 nova_compute[260603]: 2025-10-02 09:24:50.093 2 DEBUG nova.scheduler.client.report [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:24:50 compute-0 nova_compute[260603]: 2025-10-02 09:24:50.267 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:50 compute-0 nova_compute[260603]: 2025-10-02 09:24:50.423 2 INFO nova.scheduler.client.report [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Deleted allocations for instance 29e12e23-cd78-4748-a814-e030401a2d37
Oct 02 09:24:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Oct 02 09:24:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:50 compute-0 nova_compute[260603]: 2025-10-02 09:24:50.634 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:24:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2445335509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:24:50 compute-0 ceph-mon[74477]: pgmap v3201: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Oct 02 09:24:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 09:24:53 compute-0 nova_compute[260603]: 2025-10-02 09:24:53.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:53 compute-0 ceph-mon[74477]: pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 02 09:24:54 compute-0 nova_compute[260603]: 2025-10-02 09:24:54.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Oct 02 09:24:55 compute-0 ceph-mon[74477]: pgmap v3203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Oct 02 09:24:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:24:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct 02 09:24:57 compute-0 ceph-mon[74477]: pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct 02 09:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:24:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:24:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct 02 09:24:58 compute-0 nova_compute[260603]: 2025-10-02 09:24:58.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:59 compute-0 nova_compute[260603]: 2025-10-02 09:24:59.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:24:59 compute-0 ceph-mon[74477]: pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct 02 09:25:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:25:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:01 compute-0 nova_compute[260603]: 2025-10-02 09:25:01.364 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759397086.3620563, 29e12e23-cd78-4748-a814-e030401a2d37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 09:25:01 compute-0 nova_compute[260603]: 2025-10-02 09:25:01.364 2 INFO nova.compute.manager [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] VM Stopped (Lifecycle Event)
Oct 02 09:25:01 compute-0 nova_compute[260603]: 2025-10-02 09:25:01.543 2 DEBUG nova.compute.manager [None req-72f35939-edef-402d-a214-c2cfe84d200c - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 09:25:01 compute-0 ceph-mon[74477]: pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:25:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:25:03 compute-0 nova_compute[260603]: 2025-10-02 09:25:03.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:03 compute-0 ceph-mon[74477]: pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 767 B/s wr, 13 op/s
Oct 02 09:25:04 compute-0 nova_compute[260603]: 2025-10-02 09:25:04.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 02 09:25:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:25:04.745 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:25:04 compute-0 nova_compute[260603]: 2025-10-02 09:25:04.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:04 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:25:04.746 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:25:05 compute-0 sudo[441905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:05 compute-0 sudo[441905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:05 compute-0 sudo[441905]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:05 compute-0 sudo[441930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:25:05 compute-0 sudo[441930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:05 compute-0 sudo[441930]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:05 compute-0 sudo[441955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:05 compute-0 sudo[441955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:05 compute-0 sudo[441955]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:05 compute-0 sudo[441980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:25:05 compute-0 sudo[441980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:05 compute-0 ceph-mon[74477]: pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 02 09:25:05 compute-0 sudo[441980]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:25:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:25:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:25:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:25:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:25:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:25:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 347523e2-ca60-4462-a977-1328a2d99beb does not exist
Oct 02 09:25:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 73dbe6ce-4e7b-456c-9803-ab6388baff46 does not exist
Oct 02 09:25:05 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 97df788d-0c4f-490d-a7bc-04a943daa1ad does not exist
Oct 02 09:25:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:25:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:25:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:25:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:25:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:25:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:25:05 compute-0 sudo[442037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:05 compute-0 sudo[442037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:05 compute-0 sudo[442037]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:06 compute-0 sudo[442062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:25:06 compute-0 sudo[442062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:06 compute-0 sudo[442062]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:06 compute-0 sudo[442087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:06 compute-0 sudo[442087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:06 compute-0 sudo[442087]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:06 compute-0 sudo[442112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:25:06 compute-0 sudo[442112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:06 compute-0 podman[442178]: 2025-10-02 09:25:06.661024096 +0000 UTC m=+0.063184595 container create 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 09:25:06 compute-0 systemd[1]: Started libpod-conmon-50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91.scope.
Oct 02 09:25:06 compute-0 podman[442178]: 2025-10-02 09:25:06.629405228 +0000 UTC m=+0.031565777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:25:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:25:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:25:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:25:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:25:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:25:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:25:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:25:06 compute-0 podman[442178]: 2025-10-02 09:25:06.752567785 +0000 UTC m=+0.154728314 container init 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:25:06 compute-0 podman[442178]: 2025-10-02 09:25:06.766008805 +0000 UTC m=+0.168169304 container start 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:25:06 compute-0 podman[442178]: 2025-10-02 09:25:06.769253577 +0000 UTC m=+0.171414076 container attach 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 02 09:25:06 compute-0 systemd[1]: libpod-50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91.scope: Deactivated successfully.
Oct 02 09:25:06 compute-0 competent_noyce[442194]: 167 167
Oct 02 09:25:06 compute-0 conmon[442194]: conmon 50715c37edc0ab4a22c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91.scope/container/memory.events
Oct 02 09:25:06 compute-0 podman[442178]: 2025-10-02 09:25:06.776989558 +0000 UTC m=+0.179150067 container died 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5153923d708fe883bb935d56c9de8f13cdbee3308c41d5ceb0e9863b212ca02d-merged.mount: Deactivated successfully.
Oct 02 09:25:06 compute-0 podman[442178]: 2025-10-02 09:25:06.818585257 +0000 UTC m=+0.220745756 container remove 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:25:06 compute-0 systemd[1]: libpod-conmon-50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91.scope: Deactivated successfully.
Oct 02 09:25:07 compute-0 podman[442216]: 2025-10-02 09:25:07.001645325 +0000 UTC m=+0.046210934 container create 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:25:07 compute-0 systemd[1]: Started libpod-conmon-14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac.scope.
Oct 02 09:25:07 compute-0 podman[442216]: 2025-10-02 09:25:06.98259528 +0000 UTC m=+0.027160899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:25:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:07 compute-0 podman[442216]: 2025-10-02 09:25:07.110506165 +0000 UTC m=+0.155071774 container init 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 09:25:07 compute-0 podman[442216]: 2025-10-02 09:25:07.120442396 +0000 UTC m=+0.165007995 container start 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 09:25:07 compute-0 podman[442216]: 2025-10-02 09:25:07.124165922 +0000 UTC m=+0.168731521 container attach 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:25:07 compute-0 ceph-mon[74477]: pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:08 compute-0 thirsty_almeida[442233]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:25:08 compute-0 thirsty_almeida[442233]: --> relative data size: 1.0
Oct 02 09:25:08 compute-0 thirsty_almeida[442233]: --> All data devices are unavailable
Oct 02 09:25:08 compute-0 systemd[1]: libpod-14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac.scope: Deactivated successfully.
Oct 02 09:25:08 compute-0 systemd[1]: libpod-14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac.scope: Consumed 1.017s CPU time.
Oct 02 09:25:08 compute-0 podman[442216]: 2025-10-02 09:25:08.196102623 +0000 UTC m=+1.240668222 container died 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 09:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63-merged.mount: Deactivated successfully.
Oct 02 09:25:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:08 compute-0 nova_compute[260603]: 2025-10-02 09:25:08.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:08 compute-0 podman[442216]: 2025-10-02 09:25:08.593318621 +0000 UTC m=+1.637884240 container remove 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:25:08 compute-0 systemd[1]: libpod-conmon-14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac.scope: Deactivated successfully.
Oct 02 09:25:08 compute-0 sudo[442112]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:08 compute-0 sudo[442276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:08 compute-0 sudo[442276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:08 compute-0 sudo[442276]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:08 compute-0 sudo[442301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:25:08 compute-0 sudo[442301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:08 compute-0 sudo[442301]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:08 compute-0 sudo[442326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:08 compute-0 sudo[442326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:08 compute-0 sudo[442326]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:08 compute-0 sudo[442351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:25:08 compute-0 sudo[442351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:09 compute-0 podman[442417]: 2025-10-02 09:25:09.198191553 +0000 UTC m=+0.039080951 container create 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:25:09 compute-0 systemd[1]: Started libpod-conmon-3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b.scope.
Oct 02 09:25:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:25:09 compute-0 podman[442417]: 2025-10-02 09:25:09.181234394 +0000 UTC m=+0.022123822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:25:09 compute-0 podman[442417]: 2025-10-02 09:25:09.280097292 +0000 UTC m=+0.120986730 container init 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:25:09 compute-0 podman[442417]: 2025-10-02 09:25:09.287927917 +0000 UTC m=+0.128817335 container start 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:25:09 compute-0 podman[442417]: 2025-10-02 09:25:09.291671574 +0000 UTC m=+0.132561002 container attach 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 09:25:09 compute-0 romantic_turing[442434]: 167 167
Oct 02 09:25:09 compute-0 systemd[1]: libpod-3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b.scope: Deactivated successfully.
Oct 02 09:25:09 compute-0 podman[442417]: 2025-10-02 09:25:09.295300557 +0000 UTC m=+0.136189965 container died 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 09:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-92452794eef8ff848e6ec735ed098783a27cef0539277850f67f21f8ee713a36-merged.mount: Deactivated successfully.
Oct 02 09:25:09 compute-0 podman[442417]: 2025-10-02 09:25:09.328287527 +0000 UTC m=+0.169176925 container remove 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:25:09 compute-0 systemd[1]: libpod-conmon-3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b.scope: Deactivated successfully.
Oct 02 09:25:09 compute-0 nova_compute[260603]: 2025-10-02 09:25:09.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:09 compute-0 podman[442456]: 2025-10-02 09:25:09.494011393 +0000 UTC m=+0.039903307 container create 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:25:09 compute-0 systemd[1]: Started libpod-conmon-499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644.scope.
Oct 02 09:25:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:09 compute-0 podman[442456]: 2025-10-02 09:25:09.561578203 +0000 UTC m=+0.107470127 container init 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 09:25:09 compute-0 podman[442456]: 2025-10-02 09:25:09.567646313 +0000 UTC m=+0.113538227 container start 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:25:09 compute-0 podman[442456]: 2025-10-02 09:25:09.570245194 +0000 UTC m=+0.116137108 container attach 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 09:25:09 compute-0 podman[442456]: 2025-10-02 09:25:09.476719083 +0000 UTC m=+0.022610997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:25:09 compute-0 ceph-mon[74477]: pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]: {
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:     "0": [
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:         {
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "devices": [
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "/dev/loop3"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             ],
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_name": "ceph_lv0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_size": "21470642176",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "name": "ceph_lv0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "tags": {
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cluster_name": "ceph",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.crush_device_class": "",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.encrypted": "0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osd_id": "0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.type": "block",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.vdo": "0"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             },
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "type": "block",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "vg_name": "ceph_vg0"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:         }
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:     ],
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:     "1": [
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:         {
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "devices": [
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "/dev/loop4"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             ],
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_name": "ceph_lv1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_size": "21470642176",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "name": "ceph_lv1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "tags": {
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cluster_name": "ceph",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.crush_device_class": "",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.encrypted": "0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osd_id": "1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.type": "block",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.vdo": "0"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             },
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "type": "block",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "vg_name": "ceph_vg1"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:         }
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:     ],
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:     "2": [
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:         {
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "devices": [
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "/dev/loop5"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             ],
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_name": "ceph_lv2",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_size": "21470642176",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "name": "ceph_lv2",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "tags": {
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.cluster_name": "ceph",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.crush_device_class": "",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.encrypted": "0",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osd_id": "2",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.type": "block",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:                 "ceph.vdo": "0"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             },
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "type": "block",
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:             "vg_name": "ceph_vg2"
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:         }
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]:     ]
Oct 02 09:25:10 compute-0 amazing_mendeleev[442472]: }
Oct 02 09:25:10 compute-0 systemd[1]: libpod-499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644.scope: Deactivated successfully.
Oct 02 09:25:10 compute-0 podman[442456]: 2025-10-02 09:25:10.354114678 +0000 UTC m=+0.900006592 container died 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50-merged.mount: Deactivated successfully.
Oct 02 09:25:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:10 compute-0 podman[442456]: 2025-10-02 09:25:10.508364556 +0000 UTC m=+1.054256470 container remove 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:25:10 compute-0 systemd[1]: libpod-conmon-499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644.scope: Deactivated successfully.
Oct 02 09:25:10 compute-0 sudo[442351]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:10 compute-0 podman[442489]: 2025-10-02 09:25:10.704587522 +0000 UTC m=+0.310458855 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:25:10 compute-0 podman[442482]: 2025-10-02 09:25:10.721668364 +0000 UTC m=+0.332338277 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 09:25:10 compute-0 sudo[442528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:10 compute-0 sudo[442528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:10 compute-0 sudo[442528]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:10 compute-0 ceph-mon[74477]: pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:10 compute-0 sudo[442559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:25:10 compute-0 sudo[442559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:10 compute-0 sudo[442559]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:10 compute-0 sudo[442584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:10 compute-0 sudo[442584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:10 compute-0 sudo[442584]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:10 compute-0 sudo[442609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:25:10 compute-0 sudo[442609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:11 compute-0 podman[442674]: 2025-10-02 09:25:11.28358604 +0000 UTC m=+0.061538133 container create 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:25:11 compute-0 podman[442674]: 2025-10-02 09:25:11.246898354 +0000 UTC m=+0.024850537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:25:11 compute-0 systemd[1]: Started libpod-conmon-5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab.scope.
Oct 02 09:25:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:25:11 compute-0 podman[442674]: 2025-10-02 09:25:11.454344324 +0000 UTC m=+0.232296497 container init 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 09:25:11 compute-0 podman[442674]: 2025-10-02 09:25:11.463706166 +0000 UTC m=+0.241658299 container start 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:25:11 compute-0 tender_lewin[442691]: 167 167
Oct 02 09:25:11 compute-0 systemd[1]: libpod-5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab.scope: Deactivated successfully.
Oct 02 09:25:11 compute-0 podman[442674]: 2025-10-02 09:25:11.487315684 +0000 UTC m=+0.265267807 container attach 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:25:11 compute-0 podman[442674]: 2025-10-02 09:25:11.487882412 +0000 UTC m=+0.265834535 container died 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-111b93b37ae9e3d83d2f79b16ee4ac80c1f50310ad14f3732486ca05601272c1-merged.mount: Deactivated successfully.
Oct 02 09:25:11 compute-0 podman[442674]: 2025-10-02 09:25:11.554002997 +0000 UTC m=+0.331955090 container remove 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:25:11 compute-0 systemd[1]: libpod-conmon-5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab.scope: Deactivated successfully.
Oct 02 09:25:11 compute-0 podman[442716]: 2025-10-02 09:25:11.741203544 +0000 UTC m=+0.043296333 container create 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:25:11 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:25:11.749 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:25:11 compute-0 systemd[1]: Started libpod-conmon-9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa.scope.
Oct 02 09:25:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:11 compute-0 podman[442716]: 2025-10-02 09:25:11.721210459 +0000 UTC m=+0.023303258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:25:11 compute-0 podman[442716]: 2025-10-02 09:25:11.836382557 +0000 UTC m=+0.138475446 container init 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:25:11 compute-0 podman[442716]: 2025-10-02 09:25:11.844400467 +0000 UTC m=+0.146493256 container start 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:25:11 compute-0 podman[442716]: 2025-10-02 09:25:11.848198136 +0000 UTC m=+0.150291015 container attach 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:25:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]: {
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "osd_id": 2,
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "type": "bluestore"
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:     },
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "osd_id": 1,
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "type": "bluestore"
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:     },
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "osd_id": 0,
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:         "type": "bluestore"
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]:     }
Oct 02 09:25:12 compute-0 intelligent_hodgkin[442733]: }
Oct 02 09:25:12 compute-0 systemd[1]: libpod-9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa.scope: Deactivated successfully.
Oct 02 09:25:12 compute-0 podman[442716]: 2025-10-02 09:25:12.823128947 +0000 UTC m=+1.125221756 container died 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486-merged.mount: Deactivated successfully.
Oct 02 09:25:12 compute-0 podman[442716]: 2025-10-02 09:25:12.870990622 +0000 UTC m=+1.173083451 container remove 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:25:12 compute-0 systemd[1]: libpod-conmon-9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa.scope: Deactivated successfully.
Oct 02 09:25:12 compute-0 sudo[442609]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:25:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:25:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:25:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:25:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 09f8f95b-56eb-451b-821c-4bb855128451 does not exist
Oct 02 09:25:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b14efd20-43b0-4e9d-8ef5-ac2cec181cc2 does not exist
Oct 02 09:25:13 compute-0 sudo[442780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:25:13 compute-0 sudo[442780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:13 compute-0 sudo[442780]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:13 compute-0 sudo[442805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:25:13 compute-0 sudo[442805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:25:13 compute-0 sudo[442805]: pam_unix(sudo:session): session closed for user root
Oct 02 09:25:13 compute-0 nova_compute[260603]: 2025-10-02 09:25:13.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:13 compute-0 ceph-mon[74477]: pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:25:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:25:13 compute-0 nova_compute[260603]: 2025-10-02 09:25:13.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:14 compute-0 nova_compute[260603]: 2025-10-02 09:25:14.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:14 compute-0 podman[442831]: 2025-10-02 09:25:14.996308946 +0000 UTC m=+0.063092521 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 09:25:14 compute-0 podman[442830]: 2025-10-02 09:25:14.99640699 +0000 UTC m=+0.061684178 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:25:15 compute-0 nova_compute[260603]: 2025-10-02 09:25:15.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:15 compute-0 ceph-mon[74477]: pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:16 compute-0 nova_compute[260603]: 2025-10-02 09:25:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:16 compute-0 nova_compute[260603]: 2025-10-02 09:25:16.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:25:16 compute-0 nova_compute[260603]: 2025-10-02 09:25:16.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:25:16 compute-0 nova_compute[260603]: 2025-10-02 09:25:16.764 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:25:17 compute-0 nova_compute[260603]: 2025-10-02 09:25:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:17 compute-0 ceph-mon[74477]: pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:18 compute-0 nova_compute[260603]: 2025-10-02 09:25:18.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 02 09:25:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 02 09:25:18 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 02 09:25:19 compute-0 nova_compute[260603]: 2025-10-02 09:25:19.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:19 compute-0 nova_compute[260603]: 2025-10-02 09:25:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:19 compute-0 nova_compute[260603]: 2025-10-02 09:25:19.687 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:25:19 compute-0 nova_compute[260603]: 2025-10-02 09:25:19.687 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:25:19 compute-0 nova_compute[260603]: 2025-10-02 09:25:19.687 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:25:19 compute-0 nova_compute[260603]: 2025-10-02 09:25:19.688 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:25:19 compute-0 nova_compute[260603]: 2025-10-02 09:25:19.688 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:25:19 compute-0 ceph-mon[74477]: pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:19 compute-0 ceph-mon[74477]: osdmap e291: 3 total, 3 up, 3 in
Oct 02 09:25:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:25:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454194714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.156 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.313 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.315 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3517MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.315 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.315 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:25:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:20 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1454194714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.865 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.866 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.878 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.899 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.900 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.914 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.934 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:25:20 compute-0 nova_compute[260603]: 2025-10-02 09:25:20.952 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:25:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:25:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87705783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:25:21 compute-0 nova_compute[260603]: 2025-10-02 09:25:21.372 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:25:21 compute-0 nova_compute[260603]: 2025-10-02 09:25:21.377 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:25:21 compute-0 nova_compute[260603]: 2025-10-02 09:25:21.635 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:25:21 compute-0 ceph-mon[74477]: pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/87705783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:25:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:25:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230208095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:25:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:25:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230208095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:25:22 compute-0 nova_compute[260603]: 2025-10-02 09:25:22.222 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:25:22 compute-0 nova_compute[260603]: 2025-10-02 09:25:22.223 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:25:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 02 09:25:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1230208095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:25:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1230208095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:25:22 compute-0 ceph-mon[74477]: pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 02 09:25:23 compute-0 nova_compute[260603]: 2025-10-02 09:25:23.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:24 compute-0 nova_compute[260603]: 2025-10-02 09:25:24.219 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:24 compute-0 nova_compute[260603]: 2025-10-02 09:25:24.219 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:24 compute-0 nova_compute[260603]: 2025-10-02 09:25:24.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 02 09:25:24 compute-0 nova_compute[260603]: 2025-10-02 09:25:24.513 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:25 compute-0 ceph-mon[74477]: pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 02 09:25:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 02 09:25:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 02 09:25:25 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 02 09:25:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 02 09:25:26 compute-0 ceph-mon[74477]: osdmap e292: 3 total, 3 up, 3 in
Oct 02 09:25:27 compute-0 ceph-mon[74477]: pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 02 09:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:25:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:25:28
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', '.mgr', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 09:25:28 compute-0 nova_compute[260603]: 2025-10-02 09:25:28.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:25:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:25:29 compute-0 nova_compute[260603]: 2025-10-02 09:25:29.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:29 compute-0 ceph-mon[74477]: pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 09:25:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 02 09:25:30 compute-0 nova_compute[260603]: 2025-10-02 09:25:30.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:31 compute-0 ceph-mon[74477]: pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 02 09:25:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Oct 02 09:25:32 compute-0 ceph-mon[74477]: pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Oct 02 09:25:33 compute-0 nova_compute[260603]: 2025-10-02 09:25:33.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:34 compute-0 nova_compute[260603]: 2025-10-02 09:25:34.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:25:34.864 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:25:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:25:34.864 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:25:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:25:34.865 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:25:35 compute-0 ceph-mon[74477]: pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:37 compute-0 ceph-mon[74477]: pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:38 compute-0 nova_compute[260603]: 2025-10-02 09:25:38.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:25:39 compute-0 nova_compute[260603]: 2025-10-02 09:25:39.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:25:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:25:39 compute-0 ceph-mon[74477]: pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:41 compute-0 podman[442918]: 2025-10-02 09:25:41.003333767 +0000 UTC m=+0.065121065 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 09:25:41 compute-0 podman[442917]: 2025-10-02 09:25:41.027500362 +0000 UTC m=+0.094511373 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:25:41 compute-0 ceph-mon[74477]: pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:42 compute-0 ceph-mon[74477]: pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:43 compute-0 nova_compute[260603]: 2025-10-02 09:25:43.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:44 compute-0 nova_compute[260603]: 2025-10-02 09:25:44.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:45 compute-0 ceph-mon[74477]: pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:46 compute-0 podman[442964]: 2025-10-02 09:25:46.010643338 +0000 UTC m=+0.080436304 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:25:46 compute-0 podman[442965]: 2025-10-02 09:25:46.054661433 +0000 UTC m=+0.108642864 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:25:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:47 compute-0 ceph-mon[74477]: pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:48 compute-0 nova_compute[260603]: 2025-10-02 09:25:48.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:49 compute-0 nova_compute[260603]: 2025-10-02 09:25:49.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:49 compute-0 nova_compute[260603]: 2025-10-02 09:25:49.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:25:49 compute-0 nova_compute[260603]: 2025-10-02 09:25:49.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:25:49 compute-0 ceph-mon[74477]: pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:51 compute-0 ceph-mon[74477]: pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:53 compute-0 nova_compute[260603]: 2025-10-02 09:25:53.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:53 compute-0 ceph-mon[74477]: pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:54 compute-0 nova_compute[260603]: 2025-10-02 09:25:54.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:54 compute-0 ceph-mon[74477]: pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:25:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:57 compute-0 ceph-mon[74477]: pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:25:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:25:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:25:58 compute-0 nova_compute[260603]: 2025-10-02 09:25:58.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:59 compute-0 nova_compute[260603]: 2025-10-02 09:25:59.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:25:59 compute-0 ceph-mon[74477]: pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:01 compute-0 ceph-mon[74477]: pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:03 compute-0 ceph-mon[74477]: pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:03 compute-0 nova_compute[260603]: 2025-10-02 09:26:03.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:04 compute-0 nova_compute[260603]: 2025-10-02 09:26:04.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:05 compute-0 ceph-mon[74477]: pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:07 compute-0 ceph-mon[74477]: pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:08 compute-0 nova_compute[260603]: 2025-10-02 09:26:08.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:09 compute-0 nova_compute[260603]: 2025-10-02 09:26:09.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:09 compute-0 ceph-mon[74477]: pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:11 compute-0 ceph-mon[74477]: pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:12 compute-0 podman[443001]: 2025-10-02 09:26:12.01947034 +0000 UTC m=+0.078828913 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:26:12 compute-0 podman[443002]: 2025-10-02 09:26:12.022736102 +0000 UTC m=+0.066999614 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 09:26:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:13 compute-0 sudo[443047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:13 compute-0 sudo[443047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:13 compute-0 sudo[443047]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:13 compute-0 sudo[443072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:26:13 compute-0 sudo[443072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:13 compute-0 sudo[443072]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:13 compute-0 sudo[443097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:13 compute-0 sudo[443097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:13 compute-0 sudo[443097]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:13 compute-0 sudo[443122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:26:13 compute-0 sudo[443122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:13 compute-0 nova_compute[260603]: 2025-10-02 09:26:13.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:13 compute-0 ceph-mon[74477]: pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:13 compute-0 sudo[443122]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:26:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:26:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:26:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:26:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:26:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:26:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 05c4c6d0-e724-4615-96a5-67bc7eaa40b2 does not exist
Oct 02 09:26:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 571d8288-5313-41a1-82e1-71c46312fea9 does not exist
Oct 02 09:26:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 28bfeddb-a8b0-474a-b477-5c15b5e423f4 does not exist
Oct 02 09:26:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:26:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:26:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:26:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:26:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:26:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:26:13 compute-0 sudo[443177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:13 compute-0 sudo[443177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:13 compute-0 sudo[443177]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:13 compute-0 sudo[443202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:26:13 compute-0 sudo[443202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:13 compute-0 sudo[443202]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:14 compute-0 sudo[443227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:14 compute-0 sudo[443227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:14 compute-0 sudo[443227]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:14 compute-0 sudo[443252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:26:14 compute-0 sudo[443252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:14 compute-0 nova_compute[260603]: 2025-10-02 09:26:14.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:14 compute-0 podman[443318]: 2025-10-02 09:26:14.352830031 +0000 UTC m=+0.023047620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:26:14 compute-0 podman[443318]: 2025-10-02 09:26:14.498291524 +0000 UTC m=+0.168509133 container create 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 09:26:14 compute-0 nova_compute[260603]: 2025-10-02 09:26:14.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:14 compute-0 systemd[1]: Started libpod-conmon-647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2.scope.
Oct 02 09:26:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:26:14 compute-0 podman[443318]: 2025-10-02 09:26:14.672507196 +0000 UTC m=+0.342724785 container init 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 09:26:14 compute-0 podman[443318]: 2025-10-02 09:26:14.682261831 +0000 UTC m=+0.352479410 container start 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:26:14 compute-0 nervous_brown[443335]: 167 167
Oct 02 09:26:14 compute-0 systemd[1]: libpod-647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2.scope: Deactivated successfully.
Oct 02 09:26:14 compute-0 podman[443318]: 2025-10-02 09:26:14.712501585 +0000 UTC m=+0.382719174 container attach 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:26:14 compute-0 podman[443318]: 2025-10-02 09:26:14.713846117 +0000 UTC m=+0.384063716 container died 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1477374622a5ffa6af3b166e5a2dacdedec7328222899ba6662c454c2a0ce1e8-merged.mount: Deactivated successfully.
Oct 02 09:26:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:26:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:26:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:26:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:26:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:26:14 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:26:14 compute-0 ceph-mon[74477]: pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:14 compute-0 podman[443318]: 2025-10-02 09:26:14.970268597 +0000 UTC m=+0.640486196 container remove 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 09:26:14 compute-0 systemd[1]: libpod-conmon-647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2.scope: Deactivated successfully.
Oct 02 09:26:15 compute-0 podman[443361]: 2025-10-02 09:26:15.216077605 +0000 UTC m=+0.116522721 container create 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:26:15 compute-0 podman[443361]: 2025-10-02 09:26:15.133307299 +0000 UTC m=+0.033752485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:26:15 compute-0 systemd[1]: Started libpod-conmon-9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d.scope.
Oct 02 09:26:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:15 compute-0 podman[443361]: 2025-10-02 09:26:15.37795698 +0000 UTC m=+0.278402186 container init 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:26:15 compute-0 podman[443361]: 2025-10-02 09:26:15.387124237 +0000 UTC m=+0.287569343 container start 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:26:15 compute-0 podman[443361]: 2025-10-02 09:26:15.434612381 +0000 UTC m=+0.335057497 container attach 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:26:15 compute-0 nova_compute[260603]: 2025-10-02 09:26:15.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:16 compute-0 boring_spence[443378]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:26:16 compute-0 boring_spence[443378]: --> relative data size: 1.0
Oct 02 09:26:16 compute-0 boring_spence[443378]: --> All data devices are unavailable
Oct 02 09:26:16 compute-0 systemd[1]: libpod-9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d.scope: Deactivated successfully.
Oct 02 09:26:16 compute-0 podman[443361]: 2025-10-02 09:26:16.481029865 +0000 UTC m=+1.381474971 container died 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:26:16 compute-0 systemd[1]: libpod-9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d.scope: Consumed 1.041s CPU time.
Oct 02 09:26:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92-merged.mount: Deactivated successfully.
Oct 02 09:26:17 compute-0 podman[443361]: 2025-10-02 09:26:17.06583535 +0000 UTC m=+1.966280446 container remove 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:26:17 compute-0 podman[443409]: 2025-10-02 09:26:17.080940682 +0000 UTC m=+0.553426247 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 09:26:17 compute-0 sudo[443252]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:17 compute-0 podman[443416]: 2025-10-02 09:26:17.136106135 +0000 UTC m=+0.606441052 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
Oct 02 09:26:17 compute-0 systemd[1]: libpod-conmon-9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d.scope: Deactivated successfully.
Oct 02 09:26:17 compute-0 sudo[443457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:17 compute-0 sudo[443457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:17 compute-0 sudo[443457]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:17 compute-0 sudo[443483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:26:17 compute-0 sudo[443483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:17 compute-0 sudo[443483]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:17 compute-0 sudo[443508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:17 compute-0 sudo[443508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:17 compute-0 sudo[443508]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:17 compute-0 sudo[443533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:26:17 compute-0 sudo[443533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:17 compute-0 ceph-mon[74477]: pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:17 compute-0 podman[443598]: 2025-10-02 09:26:17.824720254 +0000 UTC m=+0.107502359 container create ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:26:17 compute-0 podman[443598]: 2025-10-02 09:26:17.738074178 +0000 UTC m=+0.020856303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:26:17 compute-0 systemd[1]: Started libpod-conmon-ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7.scope.
Oct 02 09:26:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:26:18 compute-0 podman[443598]: 2025-10-02 09:26:18.204530767 +0000 UTC m=+0.487312912 container init ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:26:18 compute-0 podman[443598]: 2025-10-02 09:26:18.217619396 +0000 UTC m=+0.500401501 container start ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:26:18 compute-0 jovial_nobel[443615]: 167 167
Oct 02 09:26:18 compute-0 systemd[1]: libpod-ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7.scope: Deactivated successfully.
Oct 02 09:26:18 compute-0 podman[443598]: 2025-10-02 09:26:18.267353959 +0000 UTC m=+0.550136104 container attach ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:26:18 compute-0 podman[443598]: 2025-10-02 09:26:18.268185706 +0000 UTC m=+0.550967871 container died ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 09:26:18 compute-0 nova_compute[260603]: 2025-10-02 09:26:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:18 compute-0 nova_compute[260603]: 2025-10-02 09:26:18.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:26:18 compute-0 nova_compute[260603]: 2025-10-02 09:26:18.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:26:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:18 compute-0 nova_compute[260603]: 2025-10-02 09:26:18.545 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:26:18 compute-0 nova_compute[260603]: 2025-10-02 09:26:18.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-eacbe9107d4919bb95ddc49df1080dff17cfb32b07ff31631fc5913922a51bc2-merged.mount: Deactivated successfully.
Oct 02 09:26:18 compute-0 nova_compute[260603]: 2025-10-02 09:26:18.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:18 compute-0 podman[443598]: 2025-10-02 09:26:18.788646012 +0000 UTC m=+1.071428117 container remove ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:26:18 compute-0 systemd[1]: libpod-conmon-ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7.scope: Deactivated successfully.
Oct 02 09:26:19 compute-0 podman[443641]: 2025-10-02 09:26:18.923703241 +0000 UTC m=+0.022798954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:26:19 compute-0 ceph-mon[74477]: pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:19 compute-0 podman[443641]: 2025-10-02 09:26:19.123025606 +0000 UTC m=+0.222121309 container create 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 02 09:26:19 compute-0 systemd[1]: Started libpod-conmon-1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d.scope.
Oct 02 09:26:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:19 compute-0 podman[443641]: 2025-10-02 09:26:19.372951812 +0000 UTC m=+0.472047555 container init 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 09:26:19 compute-0 podman[443641]: 2025-10-02 09:26:19.386936309 +0000 UTC m=+0.486032002 container start 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 09:26:19 compute-0 podman[443641]: 2025-10-02 09:26:19.417804443 +0000 UTC m=+0.516900196 container attach 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:26:19 compute-0 nova_compute[260603]: 2025-10-02 09:26:19.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:20 compute-0 gracious_jang[443657]: {
Oct 02 09:26:20 compute-0 gracious_jang[443657]:     "0": [
Oct 02 09:26:20 compute-0 gracious_jang[443657]:         {
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "devices": [
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "/dev/loop3"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             ],
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_name": "ceph_lv0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_size": "21470642176",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "name": "ceph_lv0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "tags": {
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cluster_name": "ceph",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.crush_device_class": "",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.encrypted": "0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osd_id": "0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.type": "block",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.vdo": "0"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             },
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "type": "block",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "vg_name": "ceph_vg0"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:         }
Oct 02 09:26:20 compute-0 gracious_jang[443657]:     ],
Oct 02 09:26:20 compute-0 gracious_jang[443657]:     "1": [
Oct 02 09:26:20 compute-0 gracious_jang[443657]:         {
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "devices": [
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "/dev/loop4"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             ],
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_name": "ceph_lv1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_size": "21470642176",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "name": "ceph_lv1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "tags": {
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cluster_name": "ceph",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.crush_device_class": "",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.encrypted": "0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osd_id": "1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.type": "block",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.vdo": "0"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             },
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "type": "block",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "vg_name": "ceph_vg1"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:         }
Oct 02 09:26:20 compute-0 gracious_jang[443657]:     ],
Oct 02 09:26:20 compute-0 gracious_jang[443657]:     "2": [
Oct 02 09:26:20 compute-0 gracious_jang[443657]:         {
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "devices": [
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "/dev/loop5"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             ],
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_name": "ceph_lv2",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_size": "21470642176",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "name": "ceph_lv2",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "tags": {
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.cluster_name": "ceph",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.crush_device_class": "",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.encrypted": "0",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osd_id": "2",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.type": "block",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:                 "ceph.vdo": "0"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             },
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "type": "block",
Oct 02 09:26:20 compute-0 gracious_jang[443657]:             "vg_name": "ceph_vg2"
Oct 02 09:26:20 compute-0 gracious_jang[443657]:         }
Oct 02 09:26:20 compute-0 gracious_jang[443657]:     ]
Oct 02 09:26:20 compute-0 gracious_jang[443657]: }
Oct 02 09:26:20 compute-0 systemd[1]: libpod-1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d.scope: Deactivated successfully.
Oct 02 09:26:20 compute-0 podman[443641]: 2025-10-02 09:26:20.133158717 +0000 UTC m=+1.232254450 container died 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 09:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f-merged.mount: Deactivated successfully.
Oct 02 09:26:20 compute-0 podman[443641]: 2025-10-02 09:26:20.184347116 +0000 UTC m=+1.283442819 container remove 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 09:26:20 compute-0 systemd[1]: libpod-conmon-1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d.scope: Deactivated successfully.
Oct 02 09:26:20 compute-0 sudo[443533]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:20 compute-0 sudo[443680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:20 compute-0 sudo[443680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:20 compute-0 sudo[443680]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:20 compute-0 sudo[443705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:26:20 compute-0 sudo[443705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:20 compute-0 sudo[443705]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:20 compute-0 sudo[443730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:20 compute-0 sudo[443730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:20 compute-0 sudo[443730]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:20 compute-0 sudo[443755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:26:20 compute-0 sudo[443755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:20 compute-0 nova_compute[260603]: 2025-10-02 09:26:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:20 compute-0 nova_compute[260603]: 2025-10-02 09:26:20.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:26:20 compute-0 nova_compute[260603]: 2025-10-02 09:26:20.562 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:26:20 compute-0 nova_compute[260603]: 2025-10-02 09:26:20.562 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:26:20 compute-0 nova_compute[260603]: 2025-10-02 09:26:20.562 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:26:20 compute-0 nova_compute[260603]: 2025-10-02 09:26:20.562 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:26:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:20 compute-0 podman[443840]: 2025-10-02 09:26:20.865604674 +0000 UTC m=+0.069324885 container create 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:26:20 compute-0 podman[443840]: 2025-10-02 09:26:20.826090041 +0000 UTC m=+0.029810282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:26:20 compute-0 systemd[1]: Started libpod-conmon-78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe.scope.
Oct 02 09:26:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:26:20 compute-0 podman[443840]: 2025-10-02 09:26:20.992348163 +0000 UTC m=+0.196068454 container init 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:26:21 compute-0 podman[443840]: 2025-10-02 09:26:21.000698704 +0000 UTC m=+0.204418945 container start 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:26:21 compute-0 kind_mayer[443856]: 167 167
Oct 02 09:26:21 compute-0 systemd[1]: libpod-78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe.scope: Deactivated successfully.
Oct 02 09:26:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:26:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147006343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:26:21 compute-0 podman[443840]: 2025-10-02 09:26:21.02969569 +0000 UTC m=+0.233416241 container attach 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:26:21 compute-0 podman[443840]: 2025-10-02 09:26:21.030539227 +0000 UTC m=+0.234259468 container died 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:26:21 compute-0 nova_compute[260603]: 2025-10-02 09:26:21.051 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ac2d1669624845626e9edf8abea10d7062dbd61178c3b967a481cce7ee82928-merged.mount: Deactivated successfully.
Oct 02 09:26:21 compute-0 nova_compute[260603]: 2025-10-02 09:26:21.215 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:26:21 compute-0 nova_compute[260603]: 2025-10-02 09:26:21.216 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3519MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:26:21 compute-0 nova_compute[260603]: 2025-10-02 09:26:21.217 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:26:21 compute-0 nova_compute[260603]: 2025-10-02 09:26:21.217 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:26:21 compute-0 podman[443840]: 2025-10-02 09:26:21.269647914 +0000 UTC m=+0.473368125 container remove 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:26:21 compute-0 systemd[1]: libpod-conmon-78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe.scope: Deactivated successfully.
Oct 02 09:26:21 compute-0 nova_compute[260603]: 2025-10-02 09:26:21.291 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:26:21 compute-0 nova_compute[260603]: 2025-10-02 09:26:21.291 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:26:21 compute-0 podman[443881]: 2025-10-02 09:26:21.453054633 +0000 UTC m=+0.060877982 container create 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:26:21 compute-0 systemd[1]: Started libpod-conmon-396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c.scope.
Oct 02 09:26:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:26:21 compute-0 podman[443881]: 2025-10-02 09:26:21.429419815 +0000 UTC m=+0.037243184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:26:21 compute-0 podman[443881]: 2025-10-02 09:26:21.556603098 +0000 UTC m=+0.164426487 container init 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:26:21 compute-0 podman[443881]: 2025-10-02 09:26:21.569471159 +0000 UTC m=+0.177294538 container start 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:26:21 compute-0 podman[443881]: 2025-10-02 09:26:21.575170977 +0000 UTC m=+0.182994366 container attach 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:26:21 compute-0 ceph-mon[74477]: pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:21 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4147006343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:26:21 compute-0 nova_compute[260603]: 2025-10-02 09:26:21.591 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:26:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:26:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264831642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:26:22 compute-0 nova_compute[260603]: 2025-10-02 09:26:22.111 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:26:22 compute-0 nova_compute[260603]: 2025-10-02 09:26:22.118 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:26:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:26:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828276912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:26:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:26:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828276912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:26:22 compute-0 nova_compute[260603]: 2025-10-02 09:26:22.257 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:26:22 compute-0 nova_compute[260603]: 2025-10-02 09:26:22.259 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:26:22 compute-0 nova_compute[260603]: 2025-10-02 09:26:22.259 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:26:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:22 compute-0 kind_perlman[443897]: {
Oct 02 09:26:22 compute-0 kind_perlman[443897]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "osd_id": 2,
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "type": "bluestore"
Oct 02 09:26:22 compute-0 kind_perlman[443897]:     },
Oct 02 09:26:22 compute-0 kind_perlman[443897]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "osd_id": 1,
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "type": "bluestore"
Oct 02 09:26:22 compute-0 kind_perlman[443897]:     },
Oct 02 09:26:22 compute-0 kind_perlman[443897]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "osd_id": 0,
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:26:22 compute-0 kind_perlman[443897]:         "type": "bluestore"
Oct 02 09:26:22 compute-0 kind_perlman[443897]:     }
Oct 02 09:26:22 compute-0 kind_perlman[443897]: }
Oct 02 09:26:22 compute-0 systemd[1]: libpod-396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c.scope: Deactivated successfully.
Oct 02 09:26:22 compute-0 systemd[1]: libpod-396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c.scope: Consumed 1.019s CPU time.
Oct 02 09:26:22 compute-0 podman[443881]: 2025-10-02 09:26:22.581393757 +0000 UTC m=+1.189217106 container died 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:26:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3264831642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:26:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3828276912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:26:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3828276912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e-merged.mount: Deactivated successfully.
Oct 02 09:26:22 compute-0 podman[443881]: 2025-10-02 09:26:22.961000374 +0000 UTC m=+1.568823763 container remove 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 09:26:22 compute-0 systemd[1]: libpod-conmon-396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c.scope: Deactivated successfully.
Oct 02 09:26:22 compute-0 sudo[443755]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:26:23 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:26:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.045203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183045237, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1824, "num_deletes": 253, "total_data_size": 2939113, "memory_usage": 2986768, "flush_reason": "Manual Compaction"}
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183086908, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2898432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66160, "largest_seqno": 67983, "table_properties": {"data_size": 2889991, "index_size": 5190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17255, "raw_average_key_size": 20, "raw_value_size": 2873078, "raw_average_value_size": 3368, "num_data_blocks": 230, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396986, "oldest_key_time": 1759396986, "file_creation_time": 1759397183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 41756 microseconds, and 7899 cpu microseconds.
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.086954) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2898432 bytes OK
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.086975) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.102399) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.102422) EVENT_LOG_v1 {"time_micros": 1759397183102417, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.102442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2931348, prev total WAL file size 2972016, number of live WAL files 2.
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.103416) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2830KB)], [158(9759KB)]
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183103445, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12891962, "oldest_snapshot_seqno": -1}
Oct 02 09:26:23 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:26:23 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 72c99e21-5bce-448b-a503-786e3c7b024d does not exist
Oct 02 09:26:23 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6e3a5479-4a9a-4910-b517-cfb710ffd019 does not exist
Oct 02 09:26:23 compute-0 sudo[443966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:26:23 compute-0 sudo[443966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:23 compute-0 sudo[443966]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8533 keys, 11168005 bytes, temperature: kUnknown
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183248514, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11168005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11112279, "index_size": 33277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 223718, "raw_average_key_size": 26, "raw_value_size": 10961487, "raw_average_value_size": 1284, "num_data_blocks": 1292, "num_entries": 8533, "num_filter_entries": 8533, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.248808) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11168005 bytes
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.255255) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.8 rd, 76.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.5 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 9055, records dropped: 522 output_compression: NoCompression
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.255273) EVENT_LOG_v1 {"time_micros": 1759397183255265, "job": 98, "event": "compaction_finished", "compaction_time_micros": 145180, "compaction_time_cpu_micros": 25011, "output_level": 6, "num_output_files": 1, "total_output_size": 11168005, "num_input_records": 9055, "num_output_records": 8533, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183255796, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct 02 09:26:23 compute-0 sudo[443991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183257279, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.103361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:26:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:26:23 compute-0 sudo[443991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:26:23 compute-0 sudo[443991]: pam_unix(sudo:session): session closed for user root
Oct 02 09:26:23 compute-0 nova_compute[260603]: 2025-10-02 09:26:23.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:23 compute-0 ceph-mon[74477]: pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:26:23 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:26:24 compute-0 nova_compute[260603]: 2025-10-02 09:26:24.256 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:24 compute-0 nova_compute[260603]: 2025-10-02 09:26:24.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:24 compute-0 ceph-mon[74477]: pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:25 compute-0 nova_compute[260603]: 2025-10-02 09:26:25.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:27 compute-0 ceph-mon[74477]: pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:26:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:26:28
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:26:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:26:28 compute-0 nova_compute[260603]: 2025-10-02 09:26:28.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:29 compute-0 nova_compute[260603]: 2025-10-02 09:26:29.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:29 compute-0 ceph-mon[74477]: pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:31 compute-0 ceph-mon[74477]: pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:32 compute-0 nova_compute[260603]: 2025-10-02 09:26:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:32 compute-0 ceph-mon[74477]: pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:33 compute-0 nova_compute[260603]: 2025-10-02 09:26:33.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:34 compute-0 nova_compute[260603]: 2025-10-02 09:26:34.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:26:34.865 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:26:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:26:34.866 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:26:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:26:34.866 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:26:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:35 compute-0 ceph-mon[74477]: pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:36 compute-0 ceph-mon[74477]: pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:38 compute-0 nova_compute[260603]: 2025-10-02 09:26:38.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:26:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:26:39 compute-0 nova_compute[260603]: 2025-10-02 09:26:39.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:39 compute-0 ceph-mon[74477]: pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:40 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 09:26:41 compute-0 ceph-mon[74477]: pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:26:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 67K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1342 writes, 6080 keys, 1342 commit groups, 1.0 writes per commit group, ingest: 8.77 MB, 0.01 MB/s
                                           Interval WAL: 1342 writes, 1342 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     82.6      1.03              0.32        49    0.021       0      0       0.0       0.0
                                             L6      1/0   10.65 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8    129.7    109.8      3.74              1.46        48    0.078    318K    25K       0.0       0.0
                                            Sum      1/0   10.65 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8    101.7    103.9      4.77              1.79        97    0.049    318K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.3     50.9     52.0      1.11              0.21        10    0.111     43K   2572       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    129.7    109.8      3.74              1.46        48    0.078    318K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     83.0      1.02              0.32        48    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.083, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.48 GB write, 0.08 MB/s write, 0.47 GB read, 0.08 MB/s read, 4.8 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.09 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 55.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000313 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3544,52.69 MB,17.3328%) FilterBlock(98,890.67 KB,0.286117%) IndexBlock(98,1.44 MB,0.472827%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 09:26:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:43 compute-0 podman[444018]: 2025-10-02 09:26:43.029051783 +0000 UTC m=+0.081679753 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:26:43 compute-0 podman[444017]: 2025-10-02 09:26:43.093274329 +0000 UTC m=+0.145675292 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 09:26:43 compute-0 ceph-mon[74477]: pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:43 compute-0 nova_compute[260603]: 2025-10-02 09:26:43.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:44 compute-0 nova_compute[260603]: 2025-10-02 09:26:44.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:45 compute-0 ceph-mon[74477]: pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:47 compute-0 ceph-mon[74477]: pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:48 compute-0 podman[444065]: 2025-10-02 09:26:48.014524232 +0000 UTC m=+0.065977372 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:26:48 compute-0 podman[444066]: 2025-10-02 09:26:48.036772766 +0000 UTC m=+0.082721474 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:26:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:48 compute-0 nova_compute[260603]: 2025-10-02 09:26:48.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:49 compute-0 nova_compute[260603]: 2025-10-02 09:26:49.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:49 compute-0 nova_compute[260603]: 2025-10-02 09:26:49.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:26:49 compute-0 nova_compute[260603]: 2025-10-02 09:26:49.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:26:49 compute-0 ceph-mon[74477]: pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:50 compute-0 ceph-mon[74477]: pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:53 compute-0 ceph-mon[74477]: pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:53 compute-0 nova_compute[260603]: 2025-10-02 09:26:53.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:54 compute-0 nova_compute[260603]: 2025-10-02 09:26:54.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:55 compute-0 ceph-mon[74477]: pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:26:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:57 compute-0 ceph-mon[74477]: pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:26:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:26:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:26:58 compute-0 nova_compute[260603]: 2025-10-02 09:26:58.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:59 compute-0 nova_compute[260603]: 2025-10-02 09:26:59.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:26:59 compute-0 ceph-mon[74477]: pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:01 compute-0 ceph-mon[74477]: pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:03 compute-0 ceph-mon[74477]: pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:03 compute-0 nova_compute[260603]: 2025-10-02 09:27:03.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:04 compute-0 nova_compute[260603]: 2025-10-02 09:27:04.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:05 compute-0 ceph-mon[74477]: pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:07 compute-0 ceph-mon[74477]: pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:08 compute-0 nova_compute[260603]: 2025-10-02 09:27:08.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:09 compute-0 nova_compute[260603]: 2025-10-02 09:27:09.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:09 compute-0 ceph-mon[74477]: pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:11 compute-0 ceph-mon[74477]: pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:13 compute-0 nova_compute[260603]: 2025-10-02 09:27:13.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:13 compute-0 ceph-mon[74477]: pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:13 compute-0 podman[444106]: 2025-10-02 09:27:13.978563636 +0000 UTC m=+0.047267037 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:27:14 compute-0 podman[444105]: 2025-10-02 09:27:14.002511694 +0000 UTC m=+0.074177638 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 09:27:14 compute-0 nova_compute[260603]: 2025-10-02 09:27:14.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:14 compute-0 nova_compute[260603]: 2025-10-02 09:27:14.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:15 compute-0 ceph-mon[74477]: pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:16 compute-0 nova_compute[260603]: 2025-10-02 09:27:16.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:17 compute-0 ceph-mon[74477]: pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:18 compute-0 nova_compute[260603]: 2025-10-02 09:27:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:18 compute-0 nova_compute[260603]: 2025-10-02 09:27:18.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:27:18 compute-0 nova_compute[260603]: 2025-10-02 09:27:18.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:27:18 compute-0 nova_compute[260603]: 2025-10-02 09:27:18.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:27:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:18 compute-0 nova_compute[260603]: 2025-10-02 09:27:18.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:18 compute-0 ceph-mon[74477]: pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:19 compute-0 podman[444149]: 2025-10-02 09:27:19.009366861 +0000 UTC m=+0.067165319 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:27:19 compute-0 podman[444150]: 2025-10-02 09:27:19.031448751 +0000 UTC m=+0.085106980 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 09:27:19 compute-0 nova_compute[260603]: 2025-10-02 09:27:19.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:19 compute-0 nova_compute[260603]: 2025-10-02 09:27:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:21 compute-0 ceph-mon[74477]: pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:27:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918914276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:27:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:27:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918914276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:27:22 compute-0 nova_compute[260603]: 2025-10-02 09:27:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:22 compute-0 nova_compute[260603]: 2025-10-02 09:27:22.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:27:22 compute-0 nova_compute[260603]: 2025-10-02 09:27:22.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:27:22 compute-0 nova_compute[260603]: 2025-10-02 09:27:22.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:27:22 compute-0 nova_compute[260603]: 2025-10-02 09:27:22.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:27:22 compute-0 nova_compute[260603]: 2025-10-02 09:27:22.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:27:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3918914276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:27:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3918914276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:27:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:27:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001045560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:27:22 compute-0 nova_compute[260603]: 2025-10-02 09:27:22.974 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.188 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.189 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3576MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.189 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.189 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.305 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.305 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.322 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:27:23 compute-0 sudo[444211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:23 compute-0 sudo[444211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:23 compute-0 sudo[444211]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:23 compute-0 sudo[444237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:27:23 compute-0 sudo[444237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:23 compute-0 sudo[444237]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:23 compute-0 sudo[444262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:23 compute-0 sudo[444262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:23 compute-0 sudo[444262]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:23 compute-0 sudo[444306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:27:23 compute-0 sudo[444306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:23 compute-0 ceph-mon[74477]: pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3001045560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:27:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178586253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.803 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.811 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.834 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.836 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:27:23 compute-0 nova_compute[260603]: 2025-10-02 09:27:23.836 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:27:24 compute-0 sudo[444306]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:27:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:27:24 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:27:24 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:27:24 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a4cc0a1e-f33e-4fc7-aa37-5490d4fd89aa does not exist
Oct 02 09:27:24 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f6c7187d-3390-4dea-86fe-7d98a3531df4 does not exist
Oct 02 09:27:24 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 85288a4f-6e92-4495-8850-a8aec0bc19ba does not exist
Oct 02 09:27:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:27:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:27:24 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:27:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:27:24 compute-0 sudo[444367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:24 compute-0 sudo[444367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:24 compute-0 sudo[444367]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:24 compute-0 sudo[444392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:27:24 compute-0 sudo[444392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:24 compute-0 sudo[444392]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:24 compute-0 sudo[444417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:24 compute-0 sudo[444417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:24 compute-0 sudo[444417]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:24 compute-0 sudo[444442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:27:24 compute-0 sudo[444442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:24 compute-0 nova_compute[260603]: 2025-10-02 09:27:24.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1178586253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:27:24 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:27:24 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:27:24 compute-0 podman[444511]: 2025-10-02 09:27:24.766379339 +0000 UTC m=+0.041434716 container create 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 09:27:24 compute-0 systemd[1]: Started libpod-conmon-9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851.scope.
Oct 02 09:27:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:27:24 compute-0 podman[444511]: 2025-10-02 09:27:24.748041056 +0000 UTC m=+0.023096453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:27:24 compute-0 podman[444511]: 2025-10-02 09:27:24.865099672 +0000 UTC m=+0.140155089 container init 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:27:24 compute-0 podman[444511]: 2025-10-02 09:27:24.880245375 +0000 UTC m=+0.155300742 container start 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:27:24 compute-0 podman[444511]: 2025-10-02 09:27:24.884728075 +0000 UTC m=+0.159783552 container attach 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:27:24 compute-0 agitated_haibt[444528]: 167 167
Oct 02 09:27:24 compute-0 systemd[1]: libpod-9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851.scope: Deactivated successfully.
Oct 02 09:27:24 compute-0 podman[444511]: 2025-10-02 09:27:24.888647807 +0000 UTC m=+0.163703184 container died 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 09:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc206c9ef383c99a0b6bb56e323e61c2138353c29b8ed8780e60c42148f45fcb-merged.mount: Deactivated successfully.
Oct 02 09:27:24 compute-0 podman[444511]: 2025-10-02 09:27:24.931069352 +0000 UTC m=+0.206124759 container remove 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 09:27:24 compute-0 systemd[1]: libpod-conmon-9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851.scope: Deactivated successfully.
Oct 02 09:27:25 compute-0 podman[444550]: 2025-10-02 09:27:25.115489703 +0000 UTC m=+0.052337796 container create 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 09:27:25 compute-0 systemd[1]: Started libpod-conmon-828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73.scope.
Oct 02 09:27:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:27:25 compute-0 podman[444550]: 2025-10-02 09:27:25.090231373 +0000 UTC m=+0.027079536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:25 compute-0 podman[444550]: 2025-10-02 09:27:25.201376925 +0000 UTC m=+0.138225068 container init 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:27:25 compute-0 podman[444550]: 2025-10-02 09:27:25.212771831 +0000 UTC m=+0.149619904 container start 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:27:25 compute-0 podman[444550]: 2025-10-02 09:27:25.21624644 +0000 UTC m=+0.153094543 container attach 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:27:25 compute-0 ceph-mon[74477]: pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:25 compute-0 nova_compute[260603]: 2025-10-02 09:27:25.832 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:25 compute-0 nova_compute[260603]: 2025-10-02 09:27:25.834 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:26 compute-0 festive_carver[444567]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:27:26 compute-0 festive_carver[444567]: --> relative data size: 1.0
Oct 02 09:27:26 compute-0 festive_carver[444567]: --> All data devices are unavailable
Oct 02 09:27:26 compute-0 systemd[1]: libpod-828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73.scope: Deactivated successfully.
Oct 02 09:27:26 compute-0 podman[444550]: 2025-10-02 09:27:26.314941237 +0000 UTC m=+1.251789310 container died 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 09:27:26 compute-0 systemd[1]: libpod-828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73.scope: Consumed 1.038s CPU time.
Oct 02 09:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae-merged.mount: Deactivated successfully.
Oct 02 09:27:26 compute-0 podman[444550]: 2025-10-02 09:27:26.393529872 +0000 UTC m=+1.330377945 container remove 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 09:27:26 compute-0 systemd[1]: libpod-conmon-828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73.scope: Deactivated successfully.
Oct 02 09:27:26 compute-0 sudo[444442]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:26 compute-0 sudo[444608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:26 compute-0 sudo[444608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:26 compute-0 sudo[444608]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:26 compute-0 sudo[444633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:27:26 compute-0 sudo[444633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:26 compute-0 sudo[444633]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:26 compute-0 sudo[444658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:26 compute-0 sudo[444658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:26 compute-0 sudo[444658]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:26 compute-0 sudo[444683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:27:26 compute-0 sudo[444683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:27 compute-0 podman[444749]: 2025-10-02 09:27:27.098302205 +0000 UTC m=+0.035946214 container create 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:27:27 compute-0 systemd[1]: Started libpod-conmon-4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d.scope.
Oct 02 09:27:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:27:27 compute-0 podman[444749]: 2025-10-02 09:27:27.175177086 +0000 UTC m=+0.112821135 container init 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 09:27:27 compute-0 podman[444749]: 2025-10-02 09:27:27.082078028 +0000 UTC m=+0.019722057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:27:27 compute-0 podman[444749]: 2025-10-02 09:27:27.182549967 +0000 UTC m=+0.120193976 container start 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 09:27:27 compute-0 podman[444749]: 2025-10-02 09:27:27.18620492 +0000 UTC m=+0.123848979 container attach 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:27:27 compute-0 systemd[1]: libpod-4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d.scope: Deactivated successfully.
Oct 02 09:27:27 compute-0 distracted_gauss[444765]: 167 167
Oct 02 09:27:27 compute-0 conmon[444765]: conmon 4976c1837aae89b48a9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d.scope/container/memory.events
Oct 02 09:27:27 compute-0 podman[444749]: 2025-10-02 09:27:27.188365838 +0000 UTC m=+0.126009847 container died 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0302047885f15203920e1921295e2bab35298391515a17821b24c1cf6a9bcc33-merged.mount: Deactivated successfully.
Oct 02 09:27:27 compute-0 podman[444749]: 2025-10-02 09:27:27.217729915 +0000 UTC m=+0.155373924 container remove 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 09:27:27 compute-0 systemd[1]: libpod-conmon-4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d.scope: Deactivated successfully.
Oct 02 09:27:27 compute-0 podman[444792]: 2025-10-02 09:27:27.388667834 +0000 UTC m=+0.045359808 container create a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:27:27 compute-0 systemd[1]: Started libpod-conmon-a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2.scope.
Oct 02 09:27:27 compute-0 podman[444792]: 2025-10-02 09:27:27.368089881 +0000 UTC m=+0.024781835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:27:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:27 compute-0 podman[444792]: 2025-10-02 09:27:27.497605887 +0000 UTC m=+0.154297901 container init a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:27:27 compute-0 podman[444792]: 2025-10-02 09:27:27.503631676 +0000 UTC m=+0.160323650 container start a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 09:27:27 compute-0 podman[444792]: 2025-10-02 09:27:27.507883348 +0000 UTC m=+0.164575392 container attach a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:27:27 compute-0 nova_compute[260603]: 2025-10-02 09:27:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:27 compute-0 ceph-mon[74477]: pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:27:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:27:28
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.mgr', '.rgw.root']
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]: {
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:     "0": [
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:         {
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "devices": [
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "/dev/loop3"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             ],
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_name": "ceph_lv0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_size": "21470642176",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "name": "ceph_lv0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "tags": {
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cluster_name": "ceph",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.crush_device_class": "",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.encrypted": "0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osd_id": "0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.type": "block",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.vdo": "0"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             },
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "type": "block",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "vg_name": "ceph_vg0"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:         }
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:     ],
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:     "1": [
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:         {
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "devices": [
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "/dev/loop4"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             ],
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_name": "ceph_lv1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_size": "21470642176",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "name": "ceph_lv1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "tags": {
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cluster_name": "ceph",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.crush_device_class": "",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.encrypted": "0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osd_id": "1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.type": "block",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.vdo": "0"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             },
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "type": "block",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "vg_name": "ceph_vg1"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:         }
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:     ],
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:     "2": [
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:         {
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "devices": [
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "/dev/loop5"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             ],
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_name": "ceph_lv2",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_size": "21470642176",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "name": "ceph_lv2",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "tags": {
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.cluster_name": "ceph",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.crush_device_class": "",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.encrypted": "0",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osd_id": "2",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.type": "block",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:                 "ceph.vdo": "0"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             },
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "type": "block",
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:             "vg_name": "ceph_vg2"
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:         }
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]:     ]
Oct 02 09:27:28 compute-0 compassionate_agnesi[444808]: }
Oct 02 09:27:28 compute-0 systemd[1]: libpod-a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2.scope: Deactivated successfully.
Oct 02 09:27:28 compute-0 podman[444792]: 2025-10-02 09:27:28.27729349 +0000 UTC m=+0.933985454 container died a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d-merged.mount: Deactivated successfully.
Oct 02 09:27:28 compute-0 podman[444792]: 2025-10-02 09:27:28.336597733 +0000 UTC m=+0.993289667 container remove a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:27:28 compute-0 systemd[1]: libpod-conmon-a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2.scope: Deactivated successfully.
Oct 02 09:27:28 compute-0 sudo[444683]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:28 compute-0 sudo[444830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:28 compute-0 sudo[444830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:28 compute-0 sudo[444830]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:28 compute-0 sudo[444855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:27:28 compute-0 sudo[444855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:28 compute-0 sudo[444855]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:28 compute-0 sudo[444880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:28 compute-0 sudo[444880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:28 compute-0 sudo[444880]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:27:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:27:28 compute-0 sudo[444905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:27:28 compute-0 sudo[444905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:28 compute-0 nova_compute[260603]: 2025-10-02 09:27:28.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:29 compute-0 podman[444974]: 2025-10-02 09:27:29.142697481 +0000 UTC m=+0.071560926 container create 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:27:29 compute-0 systemd[1]: Started libpod-conmon-08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa.scope.
Oct 02 09:27:29 compute-0 podman[444974]: 2025-10-02 09:27:29.117336148 +0000 UTC m=+0.046199593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:27:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:27:29 compute-0 podman[444974]: 2025-10-02 09:27:29.261215393 +0000 UTC m=+0.190078848 container init 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:27:29 compute-0 podman[444974]: 2025-10-02 09:27:29.275065415 +0000 UTC m=+0.203928830 container start 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:27:29 compute-0 podman[444974]: 2025-10-02 09:27:29.278609476 +0000 UTC m=+0.207472891 container attach 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 09:27:29 compute-0 ecstatic_ishizaka[444991]: 167 167
Oct 02 09:27:29 compute-0 systemd[1]: libpod-08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa.scope: Deactivated successfully.
Oct 02 09:27:29 compute-0 podman[444974]: 2025-10-02 09:27:29.286071189 +0000 UTC m=+0.214934644 container died 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4036798b60e16d5d154774b32e0676b9d7fd4d6d6303c31e28d5ad269d0cd63-merged.mount: Deactivated successfully.
Oct 02 09:27:29 compute-0 podman[444974]: 2025-10-02 09:27:29.345456964 +0000 UTC m=+0.274320409 container remove 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 09:27:29 compute-0 systemd[1]: libpod-conmon-08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa.scope: Deactivated successfully.
Oct 02 09:27:29 compute-0 nova_compute[260603]: 2025-10-02 09:27:29.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:29 compute-0 podman[445014]: 2025-10-02 09:27:29.618338957 +0000 UTC m=+0.076195271 container create c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:27:29 compute-0 systemd[1]: Started libpod-conmon-c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6.scope.
Oct 02 09:27:29 compute-0 ceph-mon[74477]: pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:29 compute-0 podman[445014]: 2025-10-02 09:27:29.591352234 +0000 UTC m=+0.049208588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:27:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:27:29 compute-0 podman[445014]: 2025-10-02 09:27:29.725640389 +0000 UTC m=+0.183496723 container init c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:27:29 compute-0 podman[445014]: 2025-10-02 09:27:29.738589323 +0000 UTC m=+0.196445667 container start c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 09:27:29 compute-0 podman[445014]: 2025-10-02 09:27:29.742923998 +0000 UTC m=+0.200780332 container attach c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:27:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]: {
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "osd_id": 2,
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "type": "bluestore"
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:     },
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "osd_id": 1,
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "type": "bluestore"
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:     },
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "osd_id": 0,
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:         "type": "bluestore"
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]:     }
Oct 02 09:27:30 compute-0 cool_bhaskara[445031]: }
Oct 02 09:27:30 compute-0 systemd[1]: libpod-c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6.scope: Deactivated successfully.
Oct 02 09:27:30 compute-0 podman[445014]: 2025-10-02 09:27:30.950629261 +0000 UTC m=+1.408485565 container died c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:27:30 compute-0 systemd[1]: libpod-c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6.scope: Consumed 1.223s CPU time.
Oct 02 09:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441-merged.mount: Deactivated successfully.
Oct 02 09:27:31 compute-0 podman[445014]: 2025-10-02 09:27:31.023639201 +0000 UTC m=+1.481495495 container remove c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:27:31 compute-0 systemd[1]: libpod-conmon-c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6.scope: Deactivated successfully.
Oct 02 09:27:31 compute-0 sudo[444905]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:27:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:27:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:27:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:27:31 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a64d73d6-2855-49a1-97df-e9e02d1dee10 does not exist
Oct 02 09:27:31 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 737a2ced-39a1-424f-8028-b68deb91c42a does not exist
Oct 02 09:27:31 compute-0 sudo[445078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:27:31 compute-0 sudo[445078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:31 compute-0 sudo[445078]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:31 compute-0 sudo[445103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:27:31 compute-0 sudo[445103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:27:31 compute-0 sudo[445103]: pam_unix(sudo:session): session closed for user root
Oct 02 09:27:31 compute-0 ceph-mon[74477]: pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 02 09:27:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:27:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:27:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 02 09:27:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 02 09:27:32 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 02 09:27:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 831 KiB/s rd, 614 B/s wr, 15 op/s
Oct 02 09:27:33 compute-0 ceph-mon[74477]: osdmap e293: 3 total, 3 up, 3 in
Oct 02 09:27:33 compute-0 ceph-mon[74477]: pgmap v3285: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 831 KiB/s rd, 614 B/s wr, 15 op/s
Oct 02 09:27:33 compute-0 nova_compute[260603]: 2025-10-02 09:27:33.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:34 compute-0 nova_compute[260603]: 2025-10-02 09:27:34.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:34 compute-0 nova_compute[260603]: 2025-10-02 09:27:34.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 20 op/s
Oct 02 09:27:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:27:34.866 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:27:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:27:34.867 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:27:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:27:34.867 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:27:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 02 09:27:35 compute-0 ceph-mon[74477]: pgmap v3286: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 20 op/s
Oct 02 09:27:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 02 09:27:35 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 02 09:27:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Oct 02 09:27:36 compute-0 ceph-mon[74477]: osdmap e294: 3 total, 3 up, 3 in
Oct 02 09:27:37 compute-0 ceph-mon[74477]: pgmap v3288: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Oct 02 09:27:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 02 09:27:38 compute-0 nova_compute[260603]: 2025-10-02 09:27:38.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:27:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:27:39 compute-0 nova_compute[260603]: 2025-10-02 09:27:39.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:39 compute-0 ceph-mon[74477]: pgmap v3289: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 02 09:27:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Oct 02 09:27:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 02 09:27:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 02 09:27:40 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 02 09:27:41 compute-0 ceph-mon[74477]: pgmap v3290: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Oct 02 09:27:41 compute-0 ceph-mon[74477]: osdmap e295: 3 total, 3 up, 3 in
Oct 02 09:27:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Oct 02 09:27:43 compute-0 ceph-mon[74477]: pgmap v3292: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Oct 02 09:27:43 compute-0 nova_compute[260603]: 2025-10-02 09:27:43.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:44 compute-0 nova_compute[260603]: 2025-10-02 09:27:44.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.6 KiB/s wr, 33 op/s
Oct 02 09:27:44 compute-0 podman[445130]: 2025-10-02 09:27:44.986514875 +0000 UTC m=+0.051847790 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:27:45 compute-0 podman[445129]: 2025-10-02 09:27:45.012454635 +0000 UTC m=+0.077719528 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Oct 02 09:27:45 compute-0 ceph-mon[74477]: pgmap v3293: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.6 KiB/s wr, 33 op/s
Oct 02 09:27:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct 02 09:27:47 compute-0 ceph-mon[74477]: pgmap v3294: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct 02 09:27:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:48 compute-0 nova_compute[260603]: 2025-10-02 09:27:48.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 02 09:27:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 02 09:27:48 compute-0 ceph-mon[74477]: pgmap v3295: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:48 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 02 09:27:49 compute-0 nova_compute[260603]: 2025-10-02 09:27:49.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:49 compute-0 ceph-mon[74477]: osdmap e296: 3 total, 3 up, 3 in
Oct 02 09:27:49 compute-0 podman[445174]: 2025-10-02 09:27:49.995708223 +0000 UTC m=+0.067822509 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 02 09:27:50 compute-0 podman[445175]: 2025-10-02 09:27:50.009501485 +0000 UTC m=+0.071725622 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 09:27:50 compute-0 nova_compute[260603]: 2025-10-02 09:27:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:27:50 compute-0 nova_compute[260603]: 2025-10-02 09:27:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:27:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:50 compute-0 ceph-mon[74477]: pgmap v3297: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:27:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct 02 09:27:53 compute-0 ceph-mon[74477]: pgmap v3298: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct 02 09:27:53 compute-0 nova_compute[260603]: 2025-10-02 09:27:53.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:54 compute-0 nova_compute[260603]: 2025-10-02 09:27:54.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 02 09:27:55 compute-0 ceph-mon[74477]: pgmap v3299: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 02 09:27:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:27:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 02 09:27:57 compute-0 ceph-mon[74477]: pgmap v3300: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 02 09:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:27:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:27:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 02 09:27:58 compute-0 nova_compute[260603]: 2025-10-02 09:27:58.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:59 compute-0 nova_compute[260603]: 2025-10-02 09:27:59.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:27:59 compute-0 ceph-mon[74477]: pgmap v3301: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 02 09:28:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct 02 09:28:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:01 compute-0 ceph-mon[74477]: pgmap v3302: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct 02 09:28:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 02 09:28:03 compute-0 ceph-mon[74477]: pgmap v3303: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 02 09:28:03 compute-0 nova_compute[260603]: 2025-10-02 09:28:03.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:04 compute-0 nova_compute[260603]: 2025-10-02 09:28:04.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct 02 09:28:05 compute-0 ceph-mon[74477]: pgmap v3304: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct 02 09:28:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.758882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285758908, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1093, "num_deletes": 253, "total_data_size": 1602403, "memory_usage": 1627800, "flush_reason": "Manual Compaction"}
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285764936, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 985433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67984, "largest_seqno": 69076, "table_properties": {"data_size": 981206, "index_size": 1814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11182, "raw_average_key_size": 20, "raw_value_size": 972006, "raw_average_value_size": 1823, "num_data_blocks": 82, "num_entries": 533, "num_filter_entries": 533, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397183, "oldest_key_time": 1759397183, "file_creation_time": 1759397285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 6107 microseconds, and 3007 cpu microseconds.
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.764991) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 985433 bytes OK
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.765022) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.766406) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.766416) EVENT_LOG_v1 {"time_micros": 1759397285766413, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.766430) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1597307, prev total WAL file size 1597307, number of live WAL files 2.
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.767034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373533' seq:72057594037927935, type:22 .. '6D6772737461740033303034' seq:0, type:0; will stop at (end)
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(962KB)], [161(10MB)]
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285767058, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12153438, "oldest_snapshot_seqno": -1}
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8589 keys, 9440695 bytes, temperature: kUnknown
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285824345, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9440695, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9387901, "index_size": 30217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 225017, "raw_average_key_size": 26, "raw_value_size": 9239412, "raw_average_value_size": 1075, "num_data_blocks": 1168, "num_entries": 8589, "num_filter_entries": 8589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.824559) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9440695 bytes
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.826358) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.9 rd, 164.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(21.9) write-amplify(9.6) OK, records in: 9066, records dropped: 477 output_compression: NoCompression
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.826373) EVENT_LOG_v1 {"time_micros": 1759397285826366, "job": 100, "event": "compaction_finished", "compaction_time_micros": 57352, "compaction_time_cpu_micros": 22374, "output_level": 6, "num_output_files": 1, "total_output_size": 9440695, "num_input_records": 9066, "num_output_records": 8589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285826652, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285828689, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.766956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:28:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:28:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:06 compute-0 ceph-mon[74477]: pgmap v3305: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:08 compute-0 nova_compute[260603]: 2025-10-02 09:28:08.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:09 compute-0 nova_compute[260603]: 2025-10-02 09:28:09.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:09 compute-0 ceph-mon[74477]: pgmap v3306: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:11 compute-0 ceph-mon[74477]: pgmap v3307: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:12 compute-0 ceph-mon[74477]: pgmap v3308: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:13 compute-0 nova_compute[260603]: 2025-10-02 09:28:13.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:14 compute-0 nova_compute[260603]: 2025-10-02 09:28:14.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:14 compute-0 nova_compute[260603]: 2025-10-02 09:28:14.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:15 compute-0 ceph-mon[74477]: pgmap v3309: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:16 compute-0 podman[445215]: 2025-10-02 09:28:16.027736883 +0000 UTC m=+0.075562981 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 09:28:16 compute-0 podman[445214]: 2025-10-02 09:28:16.06830017 +0000 UTC m=+0.130155737 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 09:28:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:17 compute-0 nova_compute[260603]: 2025-10-02 09:28:17.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:17 compute-0 ceph-mon[74477]: pgmap v3310: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:18 compute-0 nova_compute[260603]: 2025-10-02 09:28:18.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:19 compute-0 nova_compute[260603]: 2025-10-02 09:28:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:19 compute-0 nova_compute[260603]: 2025-10-02 09:28:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:28:19 compute-0 nova_compute[260603]: 2025-10-02 09:28:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:28:19 compute-0 nova_compute[260603]: 2025-10-02 09:28:19.534 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:28:19 compute-0 nova_compute[260603]: 2025-10-02 09:28:19.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:19 compute-0 ceph-mon[74477]: pgmap v3311: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:21 compute-0 podman[445260]: 2025-10-02 09:28:21.002482336 +0000 UTC m=+0.061748330 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 09:28:21 compute-0 podman[445259]: 2025-10-02 09:28:21.022487581 +0000 UTC m=+0.077678568 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 09:28:21 compute-0 nova_compute[260603]: 2025-10-02 09:28:21.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:21 compute-0 ceph-mon[74477]: pgmap v3312: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:28:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1442 writes, 6120 keys, 1442 commit groups, 1.0 writes per commit group, ingest: 6.56 MB, 0.01 MB/s
                                           Interval WAL: 1442 writes, 522 syncs, 2.76 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:28:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:28:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2302162713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:28:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:28:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2302162713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:28:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2302162713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:28:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2302162713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:28:23 compute-0 nova_compute[260603]: 2025-10-02 09:28:23.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:23 compute-0 nova_compute[260603]: 2025-10-02 09:28:23.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:28:23 compute-0 nova_compute[260603]: 2025-10-02 09:28:23.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:28:23 compute-0 nova_compute[260603]: 2025-10-02 09:28:23.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:28:23 compute-0 nova_compute[260603]: 2025-10-02 09:28:23.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:28:23 compute-0 nova_compute[260603]: 2025-10-02 09:28:23.546 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:28:23 compute-0 nova_compute[260603]: 2025-10-02 09:28:23.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:23 compute-0 ceph-mon[74477]: pgmap v3313: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:28:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270020275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.044 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.174 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.175 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3588MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.176 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.176 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.258 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:28:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:28:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283747876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.690 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.695 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.713 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.715 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:28:24 compute-0 nova_compute[260603]: 2025-10-02 09:28:24.715 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:28:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3270020275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:28:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3283747876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:28:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:25 compute-0 ceph-mon[74477]: pgmap v3314: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:28:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 187K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1591 writes, 5946 keys, 1591 commit groups, 1.0 writes per commit group, ingest: 5.99 MB, 0.01 MB/s
                                           Interval WAL: 1591 writes, 616 syncs, 2.58 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:28:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:26 compute-0 nova_compute[260603]: 2025-10-02 09:28:26.712 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:26 compute-0 ceph-mon[74477]: pgmap v3315: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:28:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:28:28
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', 'backups', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:28:28 compute-0 nova_compute[260603]: 2025-10-02 09:28:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:28:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:28:28 compute-0 nova_compute[260603]: 2025-10-02 09:28:28.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:29 compute-0 nova_compute[260603]: 2025-10-02 09:28:29.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:29 compute-0 ceph-mon[74477]: pgmap v3316: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:31 compute-0 sudo[445341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:31 compute-0 sudo[445341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:31 compute-0 sudo[445341]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:28:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 37K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1178 writes, 4161 keys, 1178 commit groups, 1.0 writes per commit group, ingest: 5.22 MB, 0.01 MB/s
                                           Interval WAL: 1178 writes, 469 syncs, 2.51 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:28:31 compute-0 sudo[445366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:28:31 compute-0 sudo[445366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:31 compute-0 sudo[445366]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:31 compute-0 sudo[445391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:31 compute-0 sudo[445391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:31 compute-0 sudo[445391]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:31 compute-0 sudo[445416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:28:31 compute-0 sudo[445416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:31 compute-0 ceph-mon[74477]: pgmap v3317: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:32 compute-0 sudo[445416]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:28:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:28:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:28:32 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:28:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:28:32 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:28:32 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2af39d83-cb0b-4423-a41d-461070a09e7e does not exist
Oct 02 09:28:32 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 35c2d301-022b-4444-977c-4d772eb11cd0 does not exist
Oct 02 09:28:32 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6f276a30-309f-4e12-9f03-13a95808266a does not exist
Oct 02 09:28:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:28:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:28:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:28:32 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:28:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:28:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:28:32 compute-0 sudo[445472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:32 compute-0 sudo[445472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:32 compute-0 sudo[445472]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:32 compute-0 sudo[445497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:28:32 compute-0 sudo[445497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:32 compute-0 sudo[445497]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:32 compute-0 sudo[445522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:32 compute-0 sudo[445522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:32 compute-0 sudo[445522]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 09:28:32 compute-0 sudo[445547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:28:32 compute-0 sudo[445547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:28:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:28:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:28:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:28:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:28:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:28:32 compute-0 podman[445612]: 2025-10-02 09:28:32.860734693 +0000 UTC m=+0.048310189 container create fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:28:32 compute-0 systemd[1]: Started libpod-conmon-fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c.scope.
Oct 02 09:28:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:28:32 compute-0 podman[445612]: 2025-10-02 09:28:32.841161132 +0000 UTC m=+0.028736638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:28:32 compute-0 podman[445612]: 2025-10-02 09:28:32.958942582 +0000 UTC m=+0.146518098 container init fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:28:32 compute-0 podman[445612]: 2025-10-02 09:28:32.966564199 +0000 UTC m=+0.154139675 container start fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:28:32 compute-0 podman[445612]: 2025-10-02 09:28:32.9697886 +0000 UTC m=+0.157364126 container attach fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 09:28:32 compute-0 brave_galileo[445628]: 167 167
Oct 02 09:28:32 compute-0 systemd[1]: libpod-fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c.scope: Deactivated successfully.
Oct 02 09:28:32 compute-0 conmon[445628]: conmon fc224efb1d4af0007920 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c.scope/container/memory.events
Oct 02 09:28:32 compute-0 podman[445612]: 2025-10-02 09:28:32.976048106 +0000 UTC m=+0.163623612 container died fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dd066d5c975c60dd36ba13c696cd2d1853f5c82cc37ba0a997970b73cda15b9-merged.mount: Deactivated successfully.
Oct 02 09:28:33 compute-0 podman[445612]: 2025-10-02 09:28:33.03926645 +0000 UTC m=+0.226841926 container remove fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:28:33 compute-0 systemd[1]: libpod-conmon-fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c.scope: Deactivated successfully.
Oct 02 09:28:33 compute-0 podman[445652]: 2025-10-02 09:28:33.200797295 +0000 UTC m=+0.038763341 container create 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:28:33 compute-0 systemd[1]: Started libpod-conmon-81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f.scope.
Oct 02 09:28:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:33 compute-0 podman[445652]: 2025-10-02 09:28:33.273363042 +0000 UTC m=+0.111329098 container init 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:28:33 compute-0 podman[445652]: 2025-10-02 09:28:33.185061634 +0000 UTC m=+0.023027700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:28:33 compute-0 podman[445652]: 2025-10-02 09:28:33.28674352 +0000 UTC m=+0.124709566 container start 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:28:33 compute-0 podman[445652]: 2025-10-02 09:28:33.290455876 +0000 UTC m=+0.128421952 container attach 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 09:28:33 compute-0 ceph-mon[74477]: pgmap v3318: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:33 compute-0 nova_compute[260603]: 2025-10-02 09:28:33.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:34 compute-0 determined_bouman[445669]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:28:34 compute-0 determined_bouman[445669]: --> relative data size: 1.0
Oct 02 09:28:34 compute-0 determined_bouman[445669]: --> All data devices are unavailable
Oct 02 09:28:34 compute-0 systemd[1]: libpod-81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f.scope: Deactivated successfully.
Oct 02 09:28:34 compute-0 systemd[1]: libpod-81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f.scope: Consumed 1.097s CPU time.
Oct 02 09:28:34 compute-0 podman[445698]: 2025-10-02 09:28:34.504249168 +0000 UTC m=+0.025397154 container died 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 09:28:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:34 compute-0 nova_compute[260603]: 2025-10-02 09:28:34.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:28:34.867 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:28:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:28:34.868 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:28:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:28:34.869 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443-merged.mount: Deactivated successfully.
Oct 02 09:28:35 compute-0 podman[445698]: 2025-10-02 09:28:35.196572773 +0000 UTC m=+0.717720729 container remove 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 09:28:35 compute-0 systemd[1]: libpod-conmon-81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f.scope: Deactivated successfully.
Oct 02 09:28:35 compute-0 sudo[445547]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:35 compute-0 sudo[445714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:35 compute-0 sudo[445714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:35 compute-0 sudo[445714]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:35 compute-0 sudo[445739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:28:35 compute-0 sudo[445739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:35 compute-0 sudo[445739]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:35 compute-0 sudo[445764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:35 compute-0 sudo[445764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:35 compute-0 sudo[445764]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:35 compute-0 nova_compute[260603]: 2025-10-02 09:28:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:35 compute-0 sudo[445789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:28:35 compute-0 sudo[445789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:35 compute-0 ceph-mon[74477]: pgmap v3319: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:35 compute-0 podman[445854]: 2025-10-02 09:28:35.908907682 +0000 UTC m=+0.056906148 container create 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:28:35 compute-0 systemd[1]: Started libpod-conmon-1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c.scope.
Oct 02 09:28:35 compute-0 podman[445854]: 2025-10-02 09:28:35.879710431 +0000 UTC m=+0.027708997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:28:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:28:36 compute-0 podman[445854]: 2025-10-02 09:28:36.013325734 +0000 UTC m=+0.161324220 container init 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:28:36 compute-0 podman[445854]: 2025-10-02 09:28:36.021063395 +0000 UTC m=+0.169061901 container start 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:28:36 compute-0 podman[445854]: 2025-10-02 09:28:36.025856005 +0000 UTC m=+0.173854501 container attach 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:28:36 compute-0 systemd[1]: libpod-1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c.scope: Deactivated successfully.
Oct 02 09:28:36 compute-0 dreamy_lumiere[445870]: 167 167
Oct 02 09:28:36 compute-0 podman[445854]: 2025-10-02 09:28:36.02887666 +0000 UTC m=+0.176875136 container died 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-54c2537928ef1b90515b1051557ccba619648b0ae047bb4b87b867e4059d5a8a-merged.mount: Deactivated successfully.
Oct 02 09:28:36 compute-0 podman[445854]: 2025-10-02 09:28:36.069146287 +0000 UTC m=+0.217144743 container remove 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:28:36 compute-0 systemd[1]: libpod-conmon-1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c.scope: Deactivated successfully.
Oct 02 09:28:36 compute-0 podman[445894]: 2025-10-02 09:28:36.256987145 +0000 UTC m=+0.050678965 container create 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:28:36 compute-0 systemd[1]: Started libpod-conmon-2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73.scope.
Oct 02 09:28:36 compute-0 podman[445894]: 2025-10-02 09:28:36.236876106 +0000 UTC m=+0.030567886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:28:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:36 compute-0 podman[445894]: 2025-10-02 09:28:36.362456729 +0000 UTC m=+0.156148539 container init 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:28:36 compute-0 podman[445894]: 2025-10-02 09:28:36.374854156 +0000 UTC m=+0.168545976 container start 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:28:36 compute-0 podman[445894]: 2025-10-02 09:28:36.378610173 +0000 UTC m=+0.172301973 container attach 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:28:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:37 compute-0 zealous_ride[445911]: {
Oct 02 09:28:37 compute-0 zealous_ride[445911]:     "0": [
Oct 02 09:28:37 compute-0 zealous_ride[445911]:         {
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "devices": [
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "/dev/loop3"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             ],
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_name": "ceph_lv0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_size": "21470642176",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "name": "ceph_lv0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "tags": {
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cluster_name": "ceph",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.crush_device_class": "",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.encrypted": "0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osd_id": "0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.type": "block",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.vdo": "0"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             },
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "type": "block",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "vg_name": "ceph_vg0"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:         }
Oct 02 09:28:37 compute-0 zealous_ride[445911]:     ],
Oct 02 09:28:37 compute-0 zealous_ride[445911]:     "1": [
Oct 02 09:28:37 compute-0 zealous_ride[445911]:         {
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "devices": [
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "/dev/loop4"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             ],
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_name": "ceph_lv1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_size": "21470642176",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "name": "ceph_lv1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "tags": {
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cluster_name": "ceph",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.crush_device_class": "",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.encrypted": "0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osd_id": "1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.type": "block",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.vdo": "0"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             },
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "type": "block",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "vg_name": "ceph_vg1"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:         }
Oct 02 09:28:37 compute-0 zealous_ride[445911]:     ],
Oct 02 09:28:37 compute-0 zealous_ride[445911]:     "2": [
Oct 02 09:28:37 compute-0 zealous_ride[445911]:         {
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "devices": [
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "/dev/loop5"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             ],
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_name": "ceph_lv2",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_size": "21470642176",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "name": "ceph_lv2",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "tags": {
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.cluster_name": "ceph",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.crush_device_class": "",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.encrypted": "0",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osd_id": "2",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.type": "block",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:                 "ceph.vdo": "0"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             },
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "type": "block",
Oct 02 09:28:37 compute-0 zealous_ride[445911]:             "vg_name": "ceph_vg2"
Oct 02 09:28:37 compute-0 zealous_ride[445911]:         }
Oct 02 09:28:37 compute-0 zealous_ride[445911]:     ]
Oct 02 09:28:37 compute-0 zealous_ride[445911]: }
Oct 02 09:28:37 compute-0 systemd[1]: libpod-2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73.scope: Deactivated successfully.
Oct 02 09:28:37 compute-0 podman[445894]: 2025-10-02 09:28:37.204861951 +0000 UTC m=+0.998553761 container died 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:28:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e-merged.mount: Deactivated successfully.
Oct 02 09:28:37 compute-0 podman[445894]: 2025-10-02 09:28:37.267356703 +0000 UTC m=+1.061048513 container remove 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:28:37 compute-0 systemd[1]: libpod-conmon-2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73.scope: Deactivated successfully.
Oct 02 09:28:37 compute-0 sudo[445789]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:37 compute-0 sudo[445931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:37 compute-0 sudo[445931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:37 compute-0 sudo[445931]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:37 compute-0 sudo[445956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:28:37 compute-0 sudo[445956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:37 compute-0 sudo[445956]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:37 compute-0 sudo[445981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:37 compute-0 sudo[445981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:37 compute-0 sudo[445981]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:37 compute-0 sudo[446006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:28:37 compute-0 sudo[446006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:37 compute-0 ceph-mon[74477]: pgmap v3320: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:38 compute-0 podman[446072]: 2025-10-02 09:28:37.999478781 +0000 UTC m=+0.038973319 container create 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:28:38 compute-0 systemd[1]: Started libpod-conmon-9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18.scope.
Oct 02 09:28:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:28:38 compute-0 podman[446072]: 2025-10-02 09:28:38.073687859 +0000 UTC m=+0.113182427 container init 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:28:38 compute-0 podman[446072]: 2025-10-02 09:28:37.983196692 +0000 UTC m=+0.022691250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:28:38 compute-0 podman[446072]: 2025-10-02 09:28:38.080599695 +0000 UTC m=+0.120094233 container start 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:28:38 compute-0 podman[446072]: 2025-10-02 09:28:38.083264188 +0000 UTC m=+0.122758726 container attach 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 09:28:38 compute-0 eager_newton[446089]: 167 167
Oct 02 09:28:38 compute-0 systemd[1]: libpod-9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18.scope: Deactivated successfully.
Oct 02 09:28:38 compute-0 podman[446094]: 2025-10-02 09:28:38.128309855 +0000 UTC m=+0.025723855 container died 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 09:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-74de3b1e6de0e899225514c1a9ef368a11400223c1614a3e77694cb1fbc07f1c-merged.mount: Deactivated successfully.
Oct 02 09:28:38 compute-0 podman[446094]: 2025-10-02 09:28:38.160327825 +0000 UTC m=+0.057741805 container remove 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:28:38 compute-0 systemd[1]: libpod-conmon-9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18.scope: Deactivated successfully.
Oct 02 09:28:38 compute-0 podman[446116]: 2025-10-02 09:28:38.337564441 +0000 UTC m=+0.056397153 container create bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:28:38 compute-0 systemd[1]: Started libpod-conmon-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope.
Oct 02 09:28:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:28:38 compute-0 podman[446116]: 2025-10-02 09:28:38.309619748 +0000 UTC m=+0.028452510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:28:38 compute-0 podman[446116]: 2025-10-02 09:28:38.42173126 +0000 UTC m=+0.140563972 container init bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:28:38 compute-0 podman[446116]: 2025-10-02 09:28:38.433394723 +0000 UTC m=+0.152227405 container start bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:28:38 compute-0 podman[446116]: 2025-10-02 09:28:38.436052547 +0000 UTC m=+0.154885259 container attach bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:28:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:38 compute-0 ceph-mon[74477]: pgmap v3321: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:38 compute-0 nova_compute[260603]: 2025-10-02 09:28:38.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:28:39 compute-0 focused_herschel[446132]: {
Oct 02 09:28:39 compute-0 focused_herschel[446132]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "osd_id": 2,
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "type": "bluestore"
Oct 02 09:28:39 compute-0 focused_herschel[446132]:     },
Oct 02 09:28:39 compute-0 focused_herschel[446132]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "osd_id": 1,
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "type": "bluestore"
Oct 02 09:28:39 compute-0 focused_herschel[446132]:     },
Oct 02 09:28:39 compute-0 focused_herschel[446132]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "osd_id": 0,
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:28:39 compute-0 focused_herschel[446132]:         "type": "bluestore"
Oct 02 09:28:39 compute-0 focused_herschel[446132]:     }
Oct 02 09:28:39 compute-0 focused_herschel[446132]: }
Oct 02 09:28:39 compute-0 systemd[1]: libpod-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope: Deactivated successfully.
Oct 02 09:28:39 compute-0 systemd[1]: libpod-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope: Consumed 1.136s CPU time.
Oct 02 09:28:39 compute-0 conmon[446132]: conmon bd99a7827988a2143aba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope/container/memory.events
Oct 02 09:28:39 compute-0 podman[446116]: 2025-10-02 09:28:39.567122805 +0000 UTC m=+1.285955527 container died bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:28:39 compute-0 nova_compute[260603]: 2025-10-02 09:28:39.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719-merged.mount: Deactivated successfully.
Oct 02 09:28:39 compute-0 podman[446116]: 2025-10-02 09:28:39.637676999 +0000 UTC m=+1.356509681 container remove bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:28:39 compute-0 systemd[1]: libpod-conmon-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope: Deactivated successfully.
Oct 02 09:28:39 compute-0 sudo[446006]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:28:39 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:28:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:28:39 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev de7a1190-ec75-450f-9b42-b02cd1bba31d does not exist
Oct 02 09:28:39 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 44101bc6-ea68-4613-97a0-684a24923ac6 does not exist
Oct 02 09:28:39 compute-0 sudo[446178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:28:39 compute-0 sudo[446178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:39 compute-0 sudo[446178]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:39 compute-0 sudo[446203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:28:39 compute-0 sudo[446203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:28:39 compute-0 sudo[446203]: pam_unix(sudo:session): session closed for user root
Oct 02 09:28:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:40 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:28:40 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:28:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:41 compute-0 ceph-mon[74477]: pgmap v3322: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:43 compute-0 ceph-mon[74477]: pgmap v3323: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:43 compute-0 nova_compute[260603]: 2025-10-02 09:28:43.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.519 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.520 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.523 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.523 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.524 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.524 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.553 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.572 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.572 2 WARNING nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.573 2 WARNING nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.573 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Removable base files: /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.574 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.574 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.575 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.575 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.575 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.576 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 02 09:28:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:44 compute-0 nova_compute[260603]: 2025-10-02 09:28:44.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:45 compute-0 ceph-mon[74477]: pgmap v3324: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:46 compute-0 ceph-mon[74477]: pgmap v3325: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:47 compute-0 podman[446229]: 2025-10-02 09:28:47.036149537 +0000 UTC m=+0.089169996 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 02 09:28:47 compute-0 podman[446228]: 2025-10-02 09:28:47.099531047 +0000 UTC m=+0.150581634 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 09:28:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:48 compute-0 nova_compute[260603]: 2025-10-02 09:28:48.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:49 compute-0 nova_compute[260603]: 2025-10-02 09:28:49.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:49 compute-0 ceph-mon[74477]: pgmap v3326: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:50 compute-0 nova_compute[260603]: 2025-10-02 09:28:50.577 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:28:50 compute-0 nova_compute[260603]: 2025-10-02 09:28:50.577 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:28:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:51 compute-0 ceph-mon[74477]: pgmap v3327: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:52 compute-0 podman[446272]: 2025-10-02 09:28:52.049198439 +0000 UTC m=+0.101958066 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:28:52 compute-0 podman[446273]: 2025-10-02 09:28:52.061813123 +0000 UTC m=+0.113534297 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:28:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:53 compute-0 ceph-mon[74477]: pgmap v3328: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:53 compute-0 nova_compute[260603]: 2025-10-02 09:28:53.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:54 compute-0 nova_compute[260603]: 2025-10-02 09:28:54.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:55 compute-0 ceph-mon[74477]: pgmap v3329: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:28:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:57 compute-0 ceph-mon[74477]: pgmap v3330: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:28:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:28:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:28:58 compute-0 nova_compute[260603]: 2025-10-02 09:28:58.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:59 compute-0 nova_compute[260603]: 2025-10-02 09:28:59.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:28:59 compute-0 ceph-mon[74477]: pgmap v3331: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:01 compute-0 ceph-mon[74477]: pgmap v3332: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:03 compute-0 ceph-mon[74477]: pgmap v3333: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:03 compute-0 nova_compute[260603]: 2025-10-02 09:29:03.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:04 compute-0 nova_compute[260603]: 2025-10-02 09:29:04.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:05 compute-0 ceph-mon[74477]: pgmap v3334: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:07 compute-0 ceph-mon[74477]: pgmap v3335: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:08 compute-0 nova_compute[260603]: 2025-10-02 09:29:08.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:09 compute-0 nova_compute[260603]: 2025-10-02 09:29:09.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:09 compute-0 ceph-mon[74477]: pgmap v3336: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:11 compute-0 sshd-session[446312]: Invalid user ubuntu from 106.36.198.78 port 58730
Oct 02 09:29:11 compute-0 sshd-session[446312]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 09:29:11 compute-0 sshd-session[446312]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=106.36.198.78
Oct 02 09:29:11 compute-0 ceph-mon[74477]: pgmap v3337: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:13 compute-0 sshd-session[446312]: Failed password for invalid user ubuntu from 106.36.198.78 port 58730 ssh2
Oct 02 09:29:13 compute-0 ceph-mon[74477]: pgmap v3338: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:13 compute-0 nova_compute[260603]: 2025-10-02 09:29:13.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:14 compute-0 nova_compute[260603]: 2025-10-02 09:29:14.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:14 compute-0 sshd-session[446312]: Received disconnect from 106.36.198.78 port 58730:11:  [preauth]
Oct 02 09:29:14 compute-0 sshd-session[446312]: Disconnected from invalid user ubuntu 106.36.198.78 port 58730 [preauth]
Oct 02 09:29:14 compute-0 ceph-mon[74477]: pgmap v3339: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:15 compute-0 nova_compute[260603]: 2025-10-02 09:29:15.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:17 compute-0 ceph-mon[74477]: pgmap v3340: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:18 compute-0 podman[446315]: 2025-10-02 09:29:18.020980987 +0000 UTC m=+0.072951860 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 09:29:18 compute-0 podman[446314]: 2025-10-02 09:29:18.046109001 +0000 UTC m=+0.104124153 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:29:18 compute-0 nova_compute[260603]: 2025-10-02 09:29:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:18 compute-0 nova_compute[260603]: 2025-10-02 09:29:18.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:19 compute-0 nova_compute[260603]: 2025-10-02 09:29:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:19 compute-0 nova_compute[260603]: 2025-10-02 09:29:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:29:19 compute-0 nova_compute[260603]: 2025-10-02 09:29:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:29:19 compute-0 nova_compute[260603]: 2025-10-02 09:29:19.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:29:19 compute-0 ceph-mon[74477]: pgmap v3341: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:19 compute-0 nova_compute[260603]: 2025-10-02 09:29:19.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:20 compute-0 ceph-mon[74477]: pgmap v3342: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:21 compute-0 nova_compute[260603]: 2025-10-02 09:29:21.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:29:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1236954431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:29:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:29:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1236954431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:29:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1236954431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:29:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1236954431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:29:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:23 compute-0 podman[446356]: 2025-10-02 09:29:23.003239934 +0000 UTC m=+0.069735308 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:29:23 compute-0 podman[446357]: 2025-10-02 09:29:23.033224891 +0000 UTC m=+0.100643625 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 09:29:23 compute-0 ceph-mon[74477]: pgmap v3343: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:23 compute-0 nova_compute[260603]: 2025-10-02 09:29:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:23 compute-0 nova_compute[260603]: 2025-10-02 09:29:23.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:29:23 compute-0 nova_compute[260603]: 2025-10-02 09:29:23.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:29:23 compute-0 nova_compute[260603]: 2025-10-02 09:29:23.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:29:23 compute-0 nova_compute[260603]: 2025-10-02 09:29:23.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:29:23 compute-0 nova_compute[260603]: 2025-10-02 09:29:23.548 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:29:23 compute-0 nova_compute[260603]: 2025-10-02 09:29:23.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:29:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467833117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:29:23 compute-0 nova_compute[260603]: 2025-10-02 09:29:23.985 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.162 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.163 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3585MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.164 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.164 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:29:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/467833117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.355 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:29:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:29:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23523161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.777 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.782 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.797 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.798 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:29:24 compute-0 nova_compute[260603]: 2025-10-02 09:29:24.799 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:29:25 compute-0 ceph-mon[74477]: pgmap v3344: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/23523161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:29:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:27 compute-0 ceph-mon[74477]: pgmap v3345: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:27 compute-0 nova_compute[260603]: 2025-10-02 09:29:27.794 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:27 compute-0 nova_compute[260603]: 2025-10-02 09:29:27.795 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:29:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:29:28
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', '.mgr', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:29:28 compute-0 nova_compute[260603]: 2025-10-02 09:29:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:29:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:29:28 compute-0 nova_compute[260603]: 2025-10-02 09:29:28.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:29 compute-0 ceph-mon[74477]: pgmap v3346: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Oct 02 09:29:29 compute-0 nova_compute[260603]: 2025-10-02 09:29:29.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Oct 02 09:29:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:31 compute-0 ceph-mon[74477]: pgmap v3347: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Oct 02 09:29:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 09:29:33 compute-0 ceph-mon[74477]: pgmap v3348: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 09:29:33 compute-0 nova_compute[260603]: 2025-10-02 09:29:33.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 09:29:34 compute-0 nova_compute[260603]: 2025-10-02 09:29:34.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:29:34.869 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:29:34.869 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:29:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:29:34.869 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:29:35 compute-0 ceph-mon[74477]: pgmap v3349: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 09:29:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 02 09:29:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 02 09:29:36 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 02 09:29:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Oct 02 09:29:37 compute-0 nova_compute[260603]: 2025-10-02 09:29:37.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:37 compute-0 nova_compute[260603]: 2025-10-02 09:29:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:37 compute-0 nova_compute[260603]: 2025-10-02 09:29:37.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:29:37 compute-0 nova_compute[260603]: 2025-10-02 09:29:37.532 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:29:37 compute-0 ceph-mon[74477]: osdmap e297: 3 total, 3 up, 3 in
Oct 02 09:29:37 compute-0 ceph-mon[74477]: pgmap v3351: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Oct 02 09:29:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 02 09:29:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 02 09:29:38 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 02 09:29:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 4.1 MiB/s wr, 97 op/s
Oct 02 09:29:38 compute-0 nova_compute[260603]: 2025-10-02 09:29:38.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005359424855114932 of space, bias 1.0, pg target 0.16078274565344797 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:29:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:29:39 compute-0 ceph-mon[74477]: osdmap e298: 3 total, 3 up, 3 in
Oct 02 09:29:39 compute-0 ceph-mon[74477]: pgmap v3353: 305 pgs: 305 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 4.1 MiB/s wr, 97 op/s
Oct 02 09:29:39 compute-0 nova_compute[260603]: 2025-10-02 09:29:39.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:39 compute-0 sudo[446439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:39 compute-0 sudo[446439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:39 compute-0 sudo[446439]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:40 compute-0 sudo[446464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:29:40 compute-0 sudo[446464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:40 compute-0 sudo[446464]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:40 compute-0 sudo[446489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:40 compute-0 sudo[446489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:40 compute-0 sudo[446489]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:40 compute-0 sudo[446514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:29:40 compute-0 sudo[446514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.1 MiB/s wr, 46 op/s
Oct 02 09:29:40 compute-0 sudo[446514]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:29:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:29:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:29:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:29:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5be00a4f-30b7-41bc-8440-2c2bcea18e24 does not exist
Oct 02 09:29:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4906b2a9-c516-4fa9-8e57-6cfd41fe01f9 does not exist
Oct 02 09:29:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 022ce5a7-38f4-49bb-9833-2de2b4c084c2 does not exist
Oct 02 09:29:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:29:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:29:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:29:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:29:40 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:29:40 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:29:40 compute-0 sudo[446571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:40 compute-0 sudo[446571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:40 compute-0 sudo[446571]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:40 compute-0 sudo[446596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:29:40 compute-0 sudo[446596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:40 compute-0 sudo[446596]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:40 compute-0 sudo[446621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:40 compute-0 sudo[446621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:40 compute-0 sudo[446621]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:40 compute-0 sudo[446646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:29:40 compute-0 sudo[446646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:41 compute-0 podman[446711]: 2025-10-02 09:29:41.303405801 +0000 UTC m=+0.076498910 container create c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:29:41 compute-0 podman[446711]: 2025-10-02 09:29:41.252273773 +0000 UTC m=+0.025366902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:29:41 compute-0 systemd[1]: Started libpod-conmon-c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f.scope.
Oct 02 09:29:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:29:41 compute-0 podman[446711]: 2025-10-02 09:29:41.428322292 +0000 UTC m=+0.201415451 container init c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:29:41 compute-0 podman[446711]: 2025-10-02 09:29:41.434539657 +0000 UTC m=+0.207632766 container start c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:29:41 compute-0 cool_shockley[446727]: 167 167
Oct 02 09:29:41 compute-0 systemd[1]: libpod-c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f.scope: Deactivated successfully.
Oct 02 09:29:41 compute-0 conmon[446727]: conmon c3d6746e8c050295daae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f.scope/container/memory.events
Oct 02 09:29:41 compute-0 podman[446711]: 2025-10-02 09:29:41.462288434 +0000 UTC m=+0.235381583 container attach c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:29:41 compute-0 podman[446711]: 2025-10-02 09:29:41.462967124 +0000 UTC m=+0.236060233 container died c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0b2eb73da0f9d2ec8e02774a6d292c1e63692470b259c3c7ad07aed5fc606b4-merged.mount: Deactivated successfully.
Oct 02 09:29:41 compute-0 podman[446711]: 2025-10-02 09:29:41.67348522 +0000 UTC m=+0.446578329 container remove c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:29:41 compute-0 systemd[1]: libpod-conmon-c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f.scope: Deactivated successfully.
Oct 02 09:29:41 compute-0 ceph-mon[74477]: pgmap v3354: 305 pgs: 305 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.1 MiB/s wr, 46 op/s
Oct 02 09:29:41 compute-0 podman[446751]: 2025-10-02 09:29:41.854796614 +0000 UTC m=+0.058529690 container create cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:29:41 compute-0 systemd[1]: Started libpod-conmon-cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd.scope.
Oct 02 09:29:41 compute-0 podman[446751]: 2025-10-02 09:29:41.818932753 +0000 UTC m=+0.022665859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:29:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:41 compute-0 podman[446751]: 2025-10-02 09:29:41.987675534 +0000 UTC m=+0.191408620 container init cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:29:41 compute-0 podman[446751]: 2025-10-02 09:29:41.996531541 +0000 UTC m=+0.200264617 container start cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 09:29:42 compute-0 podman[446751]: 2025-10-02 09:29:42.02437189 +0000 UTC m=+0.228105006 container attach cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 02 09:29:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 02 09:29:42 compute-0 suspicious_payne[446767]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:29:42 compute-0 suspicious_payne[446767]: --> relative data size: 1.0
Oct 02 09:29:42 compute-0 suspicious_payne[446767]: --> All data devices are unavailable
Oct 02 09:29:43 compute-0 systemd[1]: libpod-cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd.scope: Deactivated successfully.
Oct 02 09:29:43 compute-0 podman[446751]: 2025-10-02 09:29:43.016487928 +0000 UTC m=+1.220221004 container died cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19-merged.mount: Deactivated successfully.
Oct 02 09:29:43 compute-0 podman[446751]: 2025-10-02 09:29:43.082519041 +0000 UTC m=+1.286252117 container remove cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:29:43 compute-0 systemd[1]: libpod-conmon-cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd.scope: Deactivated successfully.
Oct 02 09:29:43 compute-0 sudo[446646]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:43 compute-0 sudo[446810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:43 compute-0 sudo[446810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:43 compute-0 sudo[446810]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:43 compute-0 sudo[446835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:29:43 compute-0 sudo[446835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:43 compute-0 sudo[446835]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:43 compute-0 sudo[446860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:43 compute-0 sudo[446860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:43 compute-0 sudo[446860]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:43 compute-0 sudo[446885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:29:43 compute-0 sudo[446885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:43 compute-0 podman[446948]: 2025-10-02 09:29:43.633228462 +0000 UTC m=+0.045608246 container create 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:29:43 compute-0 systemd[1]: Started libpod-conmon-9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9.scope.
Oct 02 09:29:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:29:43 compute-0 podman[446948]: 2025-10-02 09:29:43.610058568 +0000 UTC m=+0.022438382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:29:43 compute-0 podman[446948]: 2025-10-02 09:29:43.716492312 +0000 UTC m=+0.128872126 container init 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 09:29:43 compute-0 podman[446948]: 2025-10-02 09:29:43.72247171 +0000 UTC m=+0.134851494 container start 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 09:29:43 compute-0 happy_lehmann[446964]: 167 167
Oct 02 09:29:43 compute-0 systemd[1]: libpod-9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9.scope: Deactivated successfully.
Oct 02 09:29:43 compute-0 podman[446948]: 2025-10-02 09:29:43.726380591 +0000 UTC m=+0.138760375 container attach 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:29:43 compute-0 conmon[446964]: conmon 9740726a5e6f0c2af15f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9.scope/container/memory.events
Oct 02 09:29:43 compute-0 podman[446948]: 2025-10-02 09:29:43.727509677 +0000 UTC m=+0.139889461 container died 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-51dac1c3751c2a648049a8783b15878fb394ae34a01f90c88776214f9d525580-merged.mount: Deactivated successfully.
Oct 02 09:29:43 compute-0 podman[446948]: 2025-10-02 09:29:43.766606418 +0000 UTC m=+0.178986202 container remove 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 09:29:43 compute-0 systemd[1]: libpod-conmon-9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9.scope: Deactivated successfully.
Oct 02 09:29:43 compute-0 ceph-mon[74477]: pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 02 09:29:43 compute-0 nova_compute[260603]: 2025-10-02 09:29:43.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:43 compute-0 podman[446986]: 2025-10-02 09:29:43.909945325 +0000 UTC m=+0.039149904 container create fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:29:43 compute-0 systemd[1]: Started libpod-conmon-fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6.scope.
Oct 02 09:29:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:43 compute-0 podman[446986]: 2025-10-02 09:29:43.893344997 +0000 UTC m=+0.022549596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:29:43 compute-0 podman[446986]: 2025-10-02 09:29:43.989695836 +0000 UTC m=+0.118900425 container init fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:29:43 compute-0 podman[446986]: 2025-10-02 09:29:43.994820826 +0000 UTC m=+0.124025405 container start fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 09:29:43 compute-0 podman[446986]: 2025-10-02 09:29:43.998677697 +0000 UTC m=+0.127882296 container attach fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 09:29:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 02 09:29:44 compute-0 crazy_hoover[447002]: {
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:     "0": [
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:         {
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "devices": [
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "/dev/loop3"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             ],
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_name": "ceph_lv0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_size": "21470642176",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "name": "ceph_lv0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "tags": {
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cluster_name": "ceph",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.crush_device_class": "",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.encrypted": "0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osd_id": "0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.type": "block",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.vdo": "0"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             },
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "type": "block",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "vg_name": "ceph_vg0"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:         }
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:     ],
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:     "1": [
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:         {
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "devices": [
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "/dev/loop4"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             ],
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_name": "ceph_lv1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_size": "21470642176",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "name": "ceph_lv1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "tags": {
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cluster_name": "ceph",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.crush_device_class": "",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.encrypted": "0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osd_id": "1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.type": "block",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.vdo": "0"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             },
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "type": "block",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "vg_name": "ceph_vg1"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:         }
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:     ],
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:     "2": [
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:         {
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "devices": [
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "/dev/loop5"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             ],
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_name": "ceph_lv2",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_size": "21470642176",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "name": "ceph_lv2",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "tags": {
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.cluster_name": "ceph",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.crush_device_class": "",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.encrypted": "0",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osd_id": "2",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.type": "block",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:                 "ceph.vdo": "0"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             },
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "type": "block",
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:             "vg_name": "ceph_vg2"
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:         }
Oct 02 09:29:44 compute-0 crazy_hoover[447002]:     ]
Oct 02 09:29:44 compute-0 crazy_hoover[447002]: }
Oct 02 09:29:44 compute-0 systemd[1]: libpod-fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6.scope: Deactivated successfully.
Oct 02 09:29:44 compute-0 podman[446986]: 2025-10-02 09:29:44.735200482 +0000 UTC m=+0.864405061 container died fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:29:44 compute-0 nova_compute[260603]: 2025-10-02 09:29:44.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0-merged.mount: Deactivated successfully.
Oct 02 09:29:44 compute-0 podman[446986]: 2025-10-02 09:29:44.806060504 +0000 UTC m=+0.935265083 container remove fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 09:29:44 compute-0 systemd[1]: libpod-conmon-fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6.scope: Deactivated successfully.
Oct 02 09:29:44 compute-0 sudo[446885]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:44 compute-0 sudo[447025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:44 compute-0 sudo[447025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:44 compute-0 sudo[447025]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:44 compute-0 sudo[447050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:29:44 compute-0 sudo[447050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:44 compute-0 sudo[447050]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:44 compute-0 sudo[447075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:44 compute-0 sudo[447075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:44 compute-0 sudo[447075]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:45 compute-0 sudo[447100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:29:45 compute-0 sudo[447100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:45 compute-0 podman[447166]: 2025-10-02 09:29:45.424643486 +0000 UTC m=+0.066630002 container create 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:29:45 compute-0 systemd[1]: Started libpod-conmon-28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a.scope.
Oct 02 09:29:45 compute-0 podman[447166]: 2025-10-02 09:29:45.377381319 +0000 UTC m=+0.019367875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:29:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:29:45 compute-0 podman[447166]: 2025-10-02 09:29:45.520600423 +0000 UTC m=+0.162586969 container init 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:29:45 compute-0 podman[447166]: 2025-10-02 09:29:45.527343443 +0000 UTC m=+0.169329959 container start 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 09:29:45 compute-0 great_merkle[447182]: 167 167
Oct 02 09:29:45 compute-0 systemd[1]: libpod-28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a.scope: Deactivated successfully.
Oct 02 09:29:45 compute-0 podman[447166]: 2025-10-02 09:29:45.534139016 +0000 UTC m=+0.176125532 container attach 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:29:45 compute-0 podman[447166]: 2025-10-02 09:29:45.534426515 +0000 UTC m=+0.176413031 container died 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 09:29:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef984ec635f5393b438c7c8d2648e644be953990fedebcfafe3ec1feaa8e5f02-merged.mount: Deactivated successfully.
Oct 02 09:29:45 compute-0 podman[447166]: 2025-10-02 09:29:45.614355321 +0000 UTC m=+0.256341837 container remove 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:29:45 compute-0 systemd[1]: libpod-conmon-28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a.scope: Deactivated successfully.
Oct 02 09:29:45 compute-0 podman[447207]: 2025-10-02 09:29:45.772129789 +0000 UTC m=+0.044187531 container create 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:29:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:45 compute-0 ceph-mon[74477]: pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 02 09:29:45 compute-0 systemd[1]: Started libpod-conmon-0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0.scope.
Oct 02 09:29:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:45 compute-0 podman[447207]: 2025-10-02 09:29:45.75165057 +0000 UTC m=+0.023708322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:29:45 compute-0 podman[447207]: 2025-10-02 09:29:45.869832751 +0000 UTC m=+0.141890493 container init 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 09:29:45 compute-0 podman[447207]: 2025-10-02 09:29:45.876030384 +0000 UTC m=+0.148088106 container start 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:29:45 compute-0 podman[447207]: 2025-10-02 09:29:45.882780296 +0000 UTC m=+0.154838038 container attach 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 09:29:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 02 09:29:46 compute-0 zen_feistel[447224]: {
Oct 02 09:29:46 compute-0 zen_feistel[447224]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "osd_id": 2,
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "type": "bluestore"
Oct 02 09:29:46 compute-0 zen_feistel[447224]:     },
Oct 02 09:29:46 compute-0 zen_feistel[447224]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "osd_id": 1,
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "type": "bluestore"
Oct 02 09:29:46 compute-0 zen_feistel[447224]:     },
Oct 02 09:29:46 compute-0 zen_feistel[447224]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "osd_id": 0,
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:29:46 compute-0 zen_feistel[447224]:         "type": "bluestore"
Oct 02 09:29:46 compute-0 zen_feistel[447224]:     }
Oct 02 09:29:46 compute-0 zen_feistel[447224]: }
Oct 02 09:29:46 compute-0 systemd[1]: libpod-0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0.scope: Deactivated successfully.
Oct 02 09:29:46 compute-0 podman[447207]: 2025-10-02 09:29:46.797109654 +0000 UTC m=+1.069167396 container died 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012-merged.mount: Deactivated successfully.
Oct 02 09:29:46 compute-0 podman[447207]: 2025-10-02 09:29:46.851712759 +0000 UTC m=+1.123770481 container remove 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:29:46 compute-0 systemd[1]: libpod-conmon-0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0.scope: Deactivated successfully.
Oct 02 09:29:46 compute-0 sudo[447100]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:29:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:29:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:29:46 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:29:46 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ecf2e04a-88b1-4116-b10f-b3c22df2ebc6 does not exist
Oct 02 09:29:46 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8ba57206-0e89-48d9-9c3e-c4f78d2e6ae2 does not exist
Oct 02 09:29:46 compute-0 sudo[447268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:29:46 compute-0 sudo[447268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:46 compute-0 sudo[447268]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:47 compute-0 sudo[447293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:29:47 compute-0 sudo[447293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:29:47 compute-0 sudo[447293]: pam_unix(sudo:session): session closed for user root
Oct 02 09:29:47 compute-0 ceph-mon[74477]: pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 02 09:29:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:29:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:29:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 817 KiB/s wr, 12 op/s
Oct 02 09:29:48 compute-0 ceph-mon[74477]: pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 817 KiB/s wr, 12 op/s
Oct 02 09:29:48 compute-0 nova_compute[260603]: 2025-10-02 09:29:48.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:48 compute-0 podman[447319]: 2025-10-02 09:29:48.9955388 +0000 UTC m=+0.058694034 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:29:49 compute-0 podman[447318]: 2025-10-02 09:29:49.027612563 +0000 UTC m=+0.090494589 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:29:49 compute-0 nova_compute[260603]: 2025-10-02 09:29:49.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:50 compute-0 nova_compute[260603]: 2025-10-02 09:29:50.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:50 compute-0 nova_compute[260603]: 2025-10-02 09:29:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:29:50 compute-0 nova_compute[260603]: 2025-10-02 09:29:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 684 KiB/s wr, 10 op/s
Oct 02 09:29:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:51 compute-0 ceph-mon[74477]: pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 684 KiB/s wr, 10 op/s
Oct 02 09:29:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 684 KiB/s wr, 10 op/s
Oct 02 09:29:53 compute-0 ceph-mon[74477]: pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 684 KiB/s wr, 10 op/s
Oct 02 09:29:53 compute-0 nova_compute[260603]: 2025-10-02 09:29:53.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:53 compute-0 podman[447358]: 2025-10-02 09:29:53.991984183 +0000 UTC m=+0.052836302 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 09:29:54 compute-0 podman[447357]: 2025-10-02 09:29:54.001947154 +0000 UTC m=+0.065582510 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 09:29:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct 02 09:29:54 compute-0 nova_compute[260603]: 2025-10-02 09:29:54.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:54 compute-0 ceph-mon[74477]: pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct 02 09:29:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:29:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:57 compute-0 ceph-mon[74477]: pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:29:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:29:58 compute-0 nova_compute[260603]: 2025-10-02 09:29:58.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:29:58 compute-0 nova_compute[260603]: 2025-10-02 09:29:58.533 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:29:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:58 compute-0 nova_compute[260603]: 2025-10-02 09:29:58.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:29:58 compute-0 ceph-mon[74477]: pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:29:59 compute-0 nova_compute[260603]: 2025-10-02 09:29:59.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:30:01.478 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:30:01 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:30:01.479 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:30:01 compute-0 nova_compute[260603]: 2025-10-02 09:30:01.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:01 compute-0 ceph-mon[74477]: pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:03 compute-0 ceph-mon[74477]: pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:03 compute-0 nova_compute[260603]: 2025-10-02 09:30:03.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:04 compute-0 nova_compute[260603]: 2025-10-02 09:30:04.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:04 compute-0 ceph-mon[74477]: pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.127861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405127908, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1228, "num_deletes": 251, "total_data_size": 1830030, "memory_usage": 1863560, "flush_reason": "Manual Compaction"}
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405169236, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1801236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69077, "largest_seqno": 70304, "table_properties": {"data_size": 1795363, "index_size": 3203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12438, "raw_average_key_size": 19, "raw_value_size": 1783530, "raw_average_value_size": 2844, "num_data_blocks": 143, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397286, "oldest_key_time": 1759397286, "file_creation_time": 1759397405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 41426 microseconds, and 4746 cpu microseconds.
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.169288) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1801236 bytes OK
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.169311) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.231296) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.231346) EVENT_LOG_v1 {"time_micros": 1759397405231335, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.231372) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1824471, prev total WAL file size 1824471, number of live WAL files 2.
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.232313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1759KB)], [164(9219KB)]
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405232378, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11241931, "oldest_snapshot_seqno": -1}
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8698 keys, 9471514 bytes, temperature: kUnknown
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405397978, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9471514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9417898, "index_size": 30783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227920, "raw_average_key_size": 26, "raw_value_size": 9267273, "raw_average_value_size": 1065, "num_data_blocks": 1185, "num_entries": 8698, "num_filter_entries": 8698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.398189) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9471514 bytes
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.434277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.9 rd, 57.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.0 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(11.5) write-amplify(5.3) OK, records in: 9216, records dropped: 518 output_compression: NoCompression
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.434321) EVENT_LOG_v1 {"time_micros": 1759397405434302, "job": 102, "event": "compaction_finished", "compaction_time_micros": 165486, "compaction_time_cpu_micros": 49692, "output_level": 6, "num_output_files": 1, "total_output_size": 9471514, "num_input_records": 9216, "num_output_records": 8698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405435126, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405438286, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.232223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:05 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:07 compute-0 ceph-mon[74477]: pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:08 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:30:08.482 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:30:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:08 compute-0 ceph-mon[74477]: pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:08 compute-0 nova_compute[260603]: 2025-10-02 09:30:08.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:09 compute-0 nova_compute[260603]: 2025-10-02 09:30:09.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:11 compute-0 ceph-mon[74477]: pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 511 B/s wr, 7 op/s
Oct 02 09:30:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 02 09:30:12 compute-0 ceph-mon[74477]: pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 511 B/s wr, 7 op/s
Oct 02 09:30:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 02 09:30:12 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 02 09:30:13 compute-0 nova_compute[260603]: 2025-10-02 09:30:13.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:14 compute-0 ceph-mon[74477]: osdmap e299: 3 total, 3 up, 3 in
Oct 02 09:30:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 02 09:30:14 compute-0 nova_compute[260603]: 2025-10-02 09:30:14.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:15 compute-0 ceph-mon[74477]: pgmap v3372: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 02 09:30:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 02 09:30:17 compute-0 nova_compute[260603]: 2025-10-02 09:30:17.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:17 compute-0 ceph-mon[74477]: pgmap v3373: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct 02 09:30:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 02 09:30:18 compute-0 nova_compute[260603]: 2025-10-02 09:30:18.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:19 compute-0 nova_compute[260603]: 2025-10-02 09:30:19.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:19 compute-0 ceph-mon[74477]: pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 02 09:30:19 compute-0 podman[447398]: 2025-10-02 09:30:19.999464054 +0000 UTC m=+0.050528330 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:30:20 compute-0 podman[447397]: 2025-10-02 09:30:20.029553454 +0000 UTC m=+0.087068531 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:30:20 compute-0 nova_compute[260603]: 2025-10-02 09:30:20.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 02 09:30:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 02 09:30:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 02 09:30:20 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 02 09:30:21 compute-0 nova_compute[260603]: 2025-10-02 09:30:21.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:21 compute-0 nova_compute[260603]: 2025-10-02 09:30:21.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:30:21 compute-0 nova_compute[260603]: 2025-10-02 09:30:21.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:30:21 compute-0 nova_compute[260603]: 2025-10-02 09:30:21.531 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:30:21 compute-0 nova_compute[260603]: 2025-10-02 09:30:21.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:21 compute-0 ceph-mon[74477]: pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 02 09:30:21 compute-0 ceph-mon[74477]: osdmap e300: 3 total, 3 up, 3 in
Oct 02 09:30:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:30:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2433323668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:30:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:30:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2433323668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:30:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 846 B/s wr, 17 op/s
Oct 02 09:30:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2433323668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:30:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2433323668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:30:23 compute-0 ceph-mon[74477]: pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 846 B/s wr, 17 op/s
Oct 02 09:30:23 compute-0 nova_compute[260603]: 2025-10-02 09:30:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:23 compute-0 nova_compute[260603]: 2025-10-02 09:30:23.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:30:23 compute-0 nova_compute[260603]: 2025-10-02 09:30:23.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:30:23 compute-0 nova_compute[260603]: 2025-10-02 09:30:23.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:30:23 compute-0 nova_compute[260603]: 2025-10-02 09:30:23.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:30:23 compute-0 nova_compute[260603]: 2025-10-02 09:30:23.558 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:30:23 compute-0 nova_compute[260603]: 2025-10-02 09:30:23.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:30:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738599554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:30:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3738599554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.041 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.198 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.199 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3565MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.199 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.200 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.283 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.284 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.302 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.417 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.417 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.533 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.562 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.580 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:30:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 02 09:30:24 compute-0 nova_compute[260603]: 2025-10-02 09:30:24.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:25 compute-0 podman[447485]: 2025-10-02 09:30:25.008805799 +0000 UTC m=+0.079998891 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 09:30:25 compute-0 podman[447486]: 2025-10-02 09:30:25.016663814 +0000 UTC m=+0.081346012 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 09:30:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:30:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583034819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:30:25 compute-0 nova_compute[260603]: 2025-10-02 09:30:25.039 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:30:25 compute-0 nova_compute[260603]: 2025-10-02 09:30:25.043 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:30:25 compute-0 ceph-mon[74477]: pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 02 09:30:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2583034819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:30:25 compute-0 nova_compute[260603]: 2025-10-02 09:30:25.065 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:30:25 compute-0 nova_compute[260603]: 2025-10-02 09:30:25.066 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:30:25 compute-0 nova_compute[260603]: 2025-10-02 09:30:25.067 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:30:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 02 09:30:27 compute-0 ceph-mon[74477]: pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 02 09:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:30:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:30:28
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'vms', 'volumes', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:30:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:30:28 compute-0 nova_compute[260603]: 2025-10-02 09:30:28.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:29 compute-0 nova_compute[260603]: 2025-10-02 09:30:29.062 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:29 compute-0 ceph-mon[74477]: pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:29 compute-0 nova_compute[260603]: 2025-10-02 09:30:29.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:30 compute-0 nova_compute[260603]: 2025-10-02 09:30:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:31 compute-0 ceph-mon[74477]: pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:33 compute-0 ceph-mon[74477]: pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:33 compute-0 nova_compute[260603]: 2025-10-02 09:30:33.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:34 compute-0 nova_compute[260603]: 2025-10-02 09:30:34.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:30:34.870 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:30:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:30:34.870 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:30:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:30:34.870 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:30:35 compute-0 ceph-mon[74477]: pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:37 compute-0 ceph-mon[74477]: pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:38 compute-0 nova_compute[260603]: 2025-10-02 09:30:38.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:39 compute-0 ceph-mon[74477]: pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:30:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:30:39 compute-0 nova_compute[260603]: 2025-10-02 09:30:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:39 compute-0 nova_compute[260603]: 2025-10-02 09:30:39.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.801241) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440801291, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 255, "total_data_size": 548791, "memory_usage": 559720, "flush_reason": "Manual Compaction"}
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440809079, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 543898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70305, "largest_seqno": 70846, "table_properties": {"data_size": 540893, "index_size": 976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6933, "raw_average_key_size": 18, "raw_value_size": 534844, "raw_average_value_size": 1437, "num_data_blocks": 44, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397406, "oldest_key_time": 1759397406, "file_creation_time": 1759397440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 7888 microseconds, and 4342 cpu microseconds.
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.809128) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 543898 bytes OK
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.809151) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811032) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811053) EVENT_LOG_v1 {"time_micros": 1759397440811046, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 545702, prev total WAL file size 545702, number of live WAL files 2.
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811673) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303130' seq:72057594037927935, type:22 .. '6C6F676D0033323631' seq:0, type:0; will stop at (end)
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(531KB)], [167(9249KB)]
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440811715, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10015412, "oldest_snapshot_seqno": -1}
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8549 keys, 9916520 bytes, temperature: kUnknown
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440896051, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 9916520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9862803, "index_size": 31268, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 225793, "raw_average_key_size": 26, "raw_value_size": 9713605, "raw_average_value_size": 1136, "num_data_blocks": 1205, "num_entries": 8549, "num_filter_entries": 8549, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.896334) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 9916520 bytes
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.898026) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.6 rd, 117.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(36.6) write-amplify(18.2) OK, records in: 9070, records dropped: 521 output_compression: NoCompression
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.898055) EVENT_LOG_v1 {"time_micros": 1759397440898043, "job": 104, "event": "compaction_finished", "compaction_time_micros": 84437, "compaction_time_cpu_micros": 52088, "output_level": 6, "num_output_files": 1, "total_output_size": 9916520, "num_input_records": 9070, "num_output_records": 8549, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440898275, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440900454, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:40 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:30:41 compute-0 ceph-mon[74477]: pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:43 compute-0 ceph-mon[74477]: pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:43 compute-0 nova_compute[260603]: 2025-10-02 09:30:43.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:44 compute-0 nova_compute[260603]: 2025-10-02 09:30:44.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:45 compute-0 ceph-mon[74477]: pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:46 compute-0 ceph-mon[74477]: pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:47 compute-0 sudo[447528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:47 compute-0 sudo[447528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:47 compute-0 sudo[447528]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:47 compute-0 sudo[447553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:30:47 compute-0 sudo[447553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:47 compute-0 sudo[447553]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:47 compute-0 sudo[447578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:47 compute-0 sudo[447578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:47 compute-0 sudo[447578]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:47 compute-0 sudo[447603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:30:47 compute-0 sudo[447603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:47 compute-0 sudo[447603]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:30:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:30:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:30:47 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:30:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:30:47 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:30:47 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 47d8df75-182c-43d1-afd1-d89fb78c5612 does not exist
Oct 02 09:30:47 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 167e5c71-fd02-47fb-a933-e7d36d981d1a does not exist
Oct 02 09:30:47 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 705c6b4b-c575-4c28-b9c6-564b7b332da9 does not exist
Oct 02 09:30:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:30:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:30:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:30:47 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:30:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:30:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:30:47 compute-0 sudo[447660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:47 compute-0 sudo[447660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:47 compute-0 sudo[447660]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:47 compute-0 sudo[447685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:30:47 compute-0 sudo[447685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:47 compute-0 sudo[447685]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:30:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:30:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:30:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:30:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:30:47 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:30:47 compute-0 sudo[447710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:47 compute-0 sudo[447710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:47 compute-0 sudo[447710]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:48 compute-0 sudo[447735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:30:48 compute-0 sudo[447735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:48 compute-0 podman[447800]: 2025-10-02 09:30:48.32888923 +0000 UTC m=+0.051041845 container create 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 09:30:48 compute-0 systemd[1]: Started libpod-conmon-6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b.scope.
Oct 02 09:30:48 compute-0 podman[447800]: 2025-10-02 09:30:48.30198585 +0000 UTC m=+0.024138485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:30:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:30:48 compute-0 podman[447800]: 2025-10-02 09:30:48.439063861 +0000 UTC m=+0.161216496 container init 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:30:48 compute-0 podman[447800]: 2025-10-02 09:30:48.447705291 +0000 UTC m=+0.169857906 container start 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:30:48 compute-0 podman[447800]: 2025-10-02 09:30:48.454163373 +0000 UTC m=+0.176315988 container attach 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:30:48 compute-0 interesting_borg[447816]: 167 167
Oct 02 09:30:48 compute-0 systemd[1]: libpod-6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b.scope: Deactivated successfully.
Oct 02 09:30:48 compute-0 podman[447800]: 2025-10-02 09:30:48.455999581 +0000 UTC m=+0.178152196 container died 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7bbda7d2d4d4141faa19ac409dacf2e969e5bd0d058ed6451a5d13420024a93-merged.mount: Deactivated successfully.
Oct 02 09:30:48 compute-0 podman[447800]: 2025-10-02 09:30:48.547271991 +0000 UTC m=+0.269424606 container remove 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:30:48 compute-0 systemd[1]: libpod-conmon-6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b.scope: Deactivated successfully.
Oct 02 09:30:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:48 compute-0 podman[447840]: 2025-10-02 09:30:48.728335186 +0000 UTC m=+0.056940379 container create e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:30:48 compute-0 podman[447840]: 2025-10-02 09:30:48.693827759 +0000 UTC m=+0.022432982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:30:48 compute-0 systemd[1]: Started libpod-conmon-e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84.scope.
Oct 02 09:30:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:48 compute-0 podman[447840]: 2025-10-02 09:30:48.835982889 +0000 UTC m=+0.164588112 container init e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 09:30:48 compute-0 podman[447840]: 2025-10-02 09:30:48.843201905 +0000 UTC m=+0.171807108 container start e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 09:30:48 compute-0 podman[447840]: 2025-10-02 09:30:48.850380599 +0000 UTC m=+0.178985782 container attach e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:30:48 compute-0 ceph-mon[74477]: pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:48 compute-0 nova_compute[260603]: 2025-10-02 09:30:48.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:49 compute-0 sleepy_pike[447856]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:30:49 compute-0 sleepy_pike[447856]: --> relative data size: 1.0
Oct 02 09:30:49 compute-0 sleepy_pike[447856]: --> All data devices are unavailable
Oct 02 09:30:49 compute-0 nova_compute[260603]: 2025-10-02 09:30:49.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:49 compute-0 systemd[1]: libpod-e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84.scope: Deactivated successfully.
Oct 02 09:30:49 compute-0 podman[447840]: 2025-10-02 09:30:49.887667948 +0000 UTC m=+1.216273151 container died e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733-merged.mount: Deactivated successfully.
Oct 02 09:30:50 compute-0 podman[447840]: 2025-10-02 09:30:50.034222195 +0000 UTC m=+1.362827428 container remove e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:30:50 compute-0 systemd[1]: libpod-conmon-e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84.scope: Deactivated successfully.
Oct 02 09:30:50 compute-0 sudo[447735]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:50 compute-0 podman[447900]: 2025-10-02 09:30:50.138596016 +0000 UTC m=+0.056063872 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:30:50 compute-0 sudo[447912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:50 compute-0 sudo[447912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:50 compute-0 sudo[447912]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:50 compute-0 podman[447899]: 2025-10-02 09:30:50.179801703 +0000 UTC m=+0.104066462 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:30:50 compute-0 sudo[447968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:30:50 compute-0 sudo[447968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:50 compute-0 sudo[447968]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:50 compute-0 sudo[447993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:50 compute-0 sudo[447993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:50 compute-0 sudo[447993]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:50 compute-0 sudo[448018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:30:50 compute-0 sudo[448018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:50 compute-0 podman[448085]: 2025-10-02 09:30:50.675078623 +0000 UTC m=+0.024679582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:30:50 compute-0 podman[448085]: 2025-10-02 09:30:50.772512226 +0000 UTC m=+0.122113175 container create eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:30:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:50 compute-0 systemd[1]: Started libpod-conmon-eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7.scope.
Oct 02 09:30:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:30:50 compute-0 podman[448085]: 2025-10-02 09:30:50.956982187 +0000 UTC m=+0.306583166 container init eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:30:50 compute-0 podman[448085]: 2025-10-02 09:30:50.966313119 +0000 UTC m=+0.315914068 container start eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 09:30:50 compute-0 dazzling_leavitt[448101]: 167 167
Oct 02 09:30:50 compute-0 systemd[1]: libpod-eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7.scope: Deactivated successfully.
Oct 02 09:30:50 compute-0 conmon[448101]: conmon eea79f9b02a764eac238 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7.scope/container/memory.events
Oct 02 09:30:50 compute-0 podman[448085]: 2025-10-02 09:30:50.991591788 +0000 UTC m=+0.341192767 container attach eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:30:50 compute-0 podman[448085]: 2025-10-02 09:30:50.993087985 +0000 UTC m=+0.342688944 container died eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dfb8fbf1a2ea90ec817ed240354d0d00348ef98b127a6719ce468c017ea8858-merged.mount: Deactivated successfully.
Oct 02 09:30:51 compute-0 podman[448085]: 2025-10-02 09:30:51.188656964 +0000 UTC m=+0.538257953 container remove eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:30:51 compute-0 systemd[1]: libpod-conmon-eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7.scope: Deactivated successfully.
Oct 02 09:30:51 compute-0 podman[448124]: 2025-10-02 09:30:51.435314898 +0000 UTC m=+0.090856499 container create c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:30:51 compute-0 podman[448124]: 2025-10-02 09:30:51.379524115 +0000 UTC m=+0.035065766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:30:51 compute-0 systemd[1]: Started libpod-conmon-c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b.scope.
Oct 02 09:30:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:51 compute-0 podman[448124]: 2025-10-02 09:30:51.592944292 +0000 UTC m=+0.248485923 container init c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:30:51 compute-0 podman[448124]: 2025-10-02 09:30:51.608179538 +0000 UTC m=+0.263721129 container start c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 09:30:51 compute-0 podman[448124]: 2025-10-02 09:30:51.630584947 +0000 UTC m=+0.286126578 container attach c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:30:51 compute-0 ceph-mon[74477]: pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]: {
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:     "0": [
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:         {
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "devices": [
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "/dev/loop3"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             ],
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_name": "ceph_lv0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_size": "21470642176",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "name": "ceph_lv0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "tags": {
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cluster_name": "ceph",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.crush_device_class": "",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.encrypted": "0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osd_id": "0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.type": "block",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.vdo": "0"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             },
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "type": "block",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "vg_name": "ceph_vg0"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:         }
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:     ],
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:     "1": [
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:         {
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "devices": [
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "/dev/loop4"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             ],
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_name": "ceph_lv1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_size": "21470642176",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "name": "ceph_lv1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "tags": {
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cluster_name": "ceph",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.crush_device_class": "",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.encrypted": "0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osd_id": "1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.type": "block",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.vdo": "0"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             },
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "type": "block",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "vg_name": "ceph_vg1"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:         }
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:     ],
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:     "2": [
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:         {
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "devices": [
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "/dev/loop5"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             ],
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_name": "ceph_lv2",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_size": "21470642176",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "name": "ceph_lv2",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "tags": {
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.cluster_name": "ceph",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.crush_device_class": "",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.encrypted": "0",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osd_id": "2",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.type": "block",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:                 "ceph.vdo": "0"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             },
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "type": "block",
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:             "vg_name": "ceph_vg2"
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:         }
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]:     ]
Oct 02 09:30:52 compute-0 sharp_bhaskara[448140]: }
Oct 02 09:30:52 compute-0 systemd[1]: libpod-c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b.scope: Deactivated successfully.
Oct 02 09:30:52 compute-0 podman[448149]: 2025-10-02 09:30:52.452902012 +0000 UTC m=+0.028684827 container died c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:30:52 compute-0 nova_compute[260603]: 2025-10-02 09:30:52.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:30:52 compute-0 nova_compute[260603]: 2025-10-02 09:30:52.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:30:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c-merged.mount: Deactivated successfully.
Oct 02 09:30:53 compute-0 ceph-mon[74477]: pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:53 compute-0 podman[448149]: 2025-10-02 09:30:53.216460302 +0000 UTC m=+0.792243107 container remove c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:30:53 compute-0 systemd[1]: libpod-conmon-c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b.scope: Deactivated successfully.
Oct 02 09:30:53 compute-0 sudo[448018]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:53 compute-0 sudo[448164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:53 compute-0 sudo[448164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:53 compute-0 sudo[448164]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:53 compute-0 sudo[448189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:30:53 compute-0 sudo[448189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:53 compute-0 sudo[448189]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:53 compute-0 sudo[448214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:53 compute-0 sudo[448214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:53 compute-0 sudo[448214]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:53 compute-0 sudo[448239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:30:53 compute-0 sudo[448239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:53 compute-0 nova_compute[260603]: 2025-10-02 09:30:53.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:54 compute-0 podman[448306]: 2025-10-02 09:30:54.010249915 +0000 UTC m=+0.101446759 container create 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:30:54 compute-0 podman[448306]: 2025-10-02 09:30:53.946346579 +0000 UTC m=+0.037543463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:30:54 compute-0 systemd[1]: Started libpod-conmon-85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f.scope.
Oct 02 09:30:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:30:54 compute-0 podman[448306]: 2025-10-02 09:30:54.254597187 +0000 UTC m=+0.345794091 container init 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:30:54 compute-0 podman[448306]: 2025-10-02 09:30:54.263683681 +0000 UTC m=+0.354880525 container start 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:30:54 compute-0 youthful_galileo[448322]: 167 167
Oct 02 09:30:54 compute-0 systemd[1]: libpod-85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f.scope: Deactivated successfully.
Oct 02 09:30:54 compute-0 podman[448306]: 2025-10-02 09:30:54.331444657 +0000 UTC m=+0.422641471 container attach 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:30:54 compute-0 podman[448306]: 2025-10-02 09:30:54.332454239 +0000 UTC m=+0.423651053 container died 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 09:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8590b6b36f19ad5e399c47b90ca06255701049fe2f4c3cecce3e256ec96821a5-merged.mount: Deactivated successfully.
Oct 02 09:30:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:54 compute-0 nova_compute[260603]: 2025-10-02 09:30:54.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:54 compute-0 podman[448306]: 2025-10-02 09:30:54.879482236 +0000 UTC m=+0.970679040 container remove 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:30:54 compute-0 systemd[1]: libpod-conmon-85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f.scope: Deactivated successfully.
Oct 02 09:30:54 compute-0 ceph-mon[74477]: pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:55 compute-0 podman[448347]: 2025-10-02 09:30:55.093976725 +0000 UTC m=+0.076747028 container create 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 09:30:55 compute-0 podman[448347]: 2025-10-02 09:30:55.043567851 +0000 UTC m=+0.026338244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:30:55 compute-0 systemd[1]: Started libpod-conmon-7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14.scope.
Oct 02 09:30:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:30:55 compute-0 podman[448347]: 2025-10-02 09:30:55.291806505 +0000 UTC m=+0.274576808 container init 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:30:55 compute-0 podman[448347]: 2025-10-02 09:30:55.300422054 +0000 UTC m=+0.283192357 container start 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 09:30:55 compute-0 podman[448347]: 2025-10-02 09:30:55.31312121 +0000 UTC m=+0.295891533 container attach 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 09:30:55 compute-0 podman[448362]: 2025-10-02 09:30:55.328996776 +0000 UTC m=+0.193302018 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 09:30:55 compute-0 podman[448363]: 2025-10-02 09:30:55.329037947 +0000 UTC m=+0.187601511 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:30:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]: {
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "osd_id": 2,
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "type": "bluestore"
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:     },
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "osd_id": 1,
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "type": "bluestore"
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:     },
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "osd_id": 0,
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:         "type": "bluestore"
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]:     }
Oct 02 09:30:56 compute-0 nostalgic_brattain[448384]: }
Oct 02 09:30:56 compute-0 systemd[1]: libpod-7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14.scope: Deactivated successfully.
Oct 02 09:30:56 compute-0 podman[448347]: 2025-10-02 09:30:56.312165905 +0000 UTC m=+1.294936208 container died 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 09:30:56 compute-0 systemd[1]: libpod-7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14.scope: Consumed 1.017s CPU time.
Oct 02 09:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a-merged.mount: Deactivated successfully.
Oct 02 09:30:56 compute-0 podman[448347]: 2025-10-02 09:30:56.485153718 +0000 UTC m=+1.467924021 container remove 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:30:56 compute-0 systemd[1]: libpod-conmon-7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14.scope: Deactivated successfully.
Oct 02 09:30:56 compute-0 sudo[448239]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:30:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:30:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:30:56 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:30:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0345baff-f42a-4282-80d6-2f3e05016b64 does not exist
Oct 02 09:30:56 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4219c4e5-4fcf-4adb-9203-f42b05013cbd does not exist
Oct 02 09:30:56 compute-0 sudo[448452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:30:56 compute-0 sudo[448452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:56 compute-0 sudo[448452]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:56 compute-0 sudo[448477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:30:56 compute-0 sudo[448477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:30:56 compute-0 sudo[448477]: pam_unix(sudo:session): session closed for user root
Oct 02 09:30:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:30:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:30:57 compute-0 ceph-mon[74477]: pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:30:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:30:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:58 compute-0 nova_compute[260603]: 2025-10-02 09:30:58.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:30:59 compute-0 ceph-mon[74477]: pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:30:59 compute-0 nova_compute[260603]: 2025-10-02 09:30:59.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:01 compute-0 ceph-mon[74477]: pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:03 compute-0 ceph-mon[74477]: pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:03 compute-0 nova_compute[260603]: 2025-10-02 09:31:03.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:04 compute-0 nova_compute[260603]: 2025-10-02 09:31:04.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:04 compute-0 ceph-mon[74477]: pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:06 compute-0 nova_compute[260603]: 2025-10-02 09:31:06.940 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:07 compute-0 ceph-mon[74477]: pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:08 compute-0 nova_compute[260603]: 2025-10-02 09:31:08.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:09 compute-0 nova_compute[260603]: 2025-10-02 09:31:09.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:09 compute-0 ceph-mon[74477]: pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:11 compute-0 ceph-mon[74477]: pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:13 compute-0 ceph-mon[74477]: pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:13 compute-0 nova_compute[260603]: 2025-10-02 09:31:13.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:14 compute-0 nova_compute[260603]: 2025-10-02 09:31:14.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:15 compute-0 ceph-mon[74477]: pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:17 compute-0 ceph-mon[74477]: pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:18 compute-0 nova_compute[260603]: 2025-10-02 09:31:18.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:19 compute-0 nova_compute[260603]: 2025-10-02 09:31:19.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:19 compute-0 ceph-mon[74477]: pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:20 compute-0 nova_compute[260603]: 2025-10-02 09:31:20.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:21 compute-0 podman[448503]: 2025-10-02 09:31:21.045386156 +0000 UTC m=+0.080956770 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 09:31:21 compute-0 ceph-mon[74477]: pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:21 compute-0 podman[448502]: 2025-10-02 09:31:21.12205078 +0000 UTC m=+0.164774377 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 09:31:21 compute-0 nova_compute[260603]: 2025-10-02 09:31:21.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:31:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/575152297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:31:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:31:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/575152297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:31:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/575152297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:31:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/575152297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:31:22 compute-0 nova_compute[260603]: 2025-10-02 09:31:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:23 compute-0 ceph-mon[74477]: pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:23 compute-0 nova_compute[260603]: 2025-10-02 09:31:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:23 compute-0 nova_compute[260603]: 2025-10-02 09:31:23.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:31:23 compute-0 nova_compute[260603]: 2025-10-02 09:31:23.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:31:23 compute-0 nova_compute[260603]: 2025-10-02 09:31:23.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:31:23 compute-0 nova_compute[260603]: 2025-10-02 09:31:23.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:24 compute-0 nova_compute[260603]: 2025-10-02 09:31:24.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:24 compute-0 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:31:24 compute-0 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:31:24 compute-0 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:31:24 compute-0 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:31:24 compute-0 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:31:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:31:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031284939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:31:24 compute-0 nova_compute[260603]: 2025-10-02 09:31:24.993 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:31:25 compute-0 nova_compute[260603]: 2025-10-02 09:31:25.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:25 compute-0 nova_compute[260603]: 2025-10-02 09:31:25.151 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:31:25 compute-0 nova_compute[260603]: 2025-10-02 09:31:25.152 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3572MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:31:25 compute-0 nova_compute[260603]: 2025-10-02 09:31:25.152 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:31:25 compute-0 nova_compute[260603]: 2025-10-02 09:31:25.153 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:31:25 compute-0 nova_compute[260603]: 2025-10-02 09:31:25.578 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:31:25 compute-0 nova_compute[260603]: 2025-10-02 09:31:25.578 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:31:25 compute-0 nova_compute[260603]: 2025-10-02 09:31:25.599 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:31:25 compute-0 ceph-mon[74477]: pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:25 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4031284939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:31:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:26 compute-0 podman[448590]: 2025-10-02 09:31:26.010618413 +0000 UTC m=+0.067547060 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 09:31:26 compute-0 podman[448589]: 2025-10-02 09:31:26.029360518 +0000 UTC m=+0.085033626 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:31:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:31:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888347941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:31:26 compute-0 nova_compute[260603]: 2025-10-02 09:31:26.099 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:31:26 compute-0 nova_compute[260603]: 2025-10-02 09:31:26.105 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:31:26 compute-0 nova_compute[260603]: 2025-10-02 09:31:26.130 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:31:26 compute-0 nova_compute[260603]: 2025-10-02 09:31:26.131 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:31:26 compute-0 nova_compute[260603]: 2025-10-02 09:31:26.132 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:31:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3888347941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:31:27 compute-0 ceph-mon[74477]: pgmap v3409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:31:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:31:28
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta']
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:31:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:31:28 compute-0 nova_compute[260603]: 2025-10-02 09:31:28.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:29 compute-0 ceph-mon[74477]: pgmap v3410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:30 compute-0 nova_compute[260603]: 2025-10-02 09:31:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:31 compute-0 nova_compute[260603]: 2025-10-02 09:31:31.128 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:31 compute-0 nova_compute[260603]: 2025-10-02 09:31:31.128 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:31 compute-0 ceph-mon[74477]: pgmap v3411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:32 compute-0 nova_compute[260603]: 2025-10-02 09:31:32.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:33 compute-0 ceph-mon[74477]: pgmap v3412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:33 compute-0 nova_compute[260603]: 2025-10-02 09:31:33.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:31:34.870 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:31:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:31:34.871 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:31:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:31:34.871 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:31:35 compute-0 nova_compute[260603]: 2025-10-02 09:31:35.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:35 compute-0 ceph-mon[74477]: pgmap v3413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:37 compute-0 ceph-mon[74477]: pgmap v3414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:38 compute-0 nova_compute[260603]: 2025-10-02 09:31:38.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:31:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:31:39 compute-0 nova_compute[260603]: 2025-10-02 09:31:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:39 compute-0 ceph-mon[74477]: pgmap v3415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:40 compute-0 nova_compute[260603]: 2025-10-02 09:31:40.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:41 compute-0 ceph-mon[74477]: pgmap v3416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:42 compute-0 ceph-mon[74477]: pgmap v3417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:43 compute-0 nova_compute[260603]: 2025-10-02 09:31:43.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:45 compute-0 nova_compute[260603]: 2025-10-02 09:31:45.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:45 compute-0 ceph-mon[74477]: pgmap v3418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:47 compute-0 ceph-mon[74477]: pgmap v3419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:48 compute-0 nova_compute[260603]: 2025-10-02 09:31:48.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:49 compute-0 ceph-mon[74477]: pgmap v3420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:50 compute-0 nova_compute[260603]: 2025-10-02 09:31:50.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:51 compute-0 ceph-mon[74477]: pgmap v3421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:52 compute-0 podman[448628]: 2025-10-02 09:31:52.050889648 +0000 UTC m=+0.097055412 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:31:52 compute-0 podman[448627]: 2025-10-02 09:31:52.070741948 +0000 UTC m=+0.131289611 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:31:52 compute-0 nova_compute[260603]: 2025-10-02 09:31:52.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:31:52 compute-0 nova_compute[260603]: 2025-10-02 09:31:52.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:31:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:53 compute-0 ceph-mon[74477]: pgmap v3422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:53 compute-0 nova_compute[260603]: 2025-10-02 09:31:53.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:55 compute-0 nova_compute[260603]: 2025-10-02 09:31:55.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:55 compute-0 ceph-mon[74477]: pgmap v3423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:31:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:56 compute-0 sudo[448669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:31:56 compute-0 sudo[448669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:56 compute-0 sudo[448669]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:56 compute-0 podman[448693]: 2025-10-02 09:31:56.822025211 +0000 UTC m=+0.046940687 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:31:56 compute-0 sudo[448706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:31:56 compute-0 sudo[448706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:56 compute-0 sudo[448706]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:56 compute-0 podman[448694]: 2025-10-02 09:31:56.857639383 +0000 UTC m=+0.078372209 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 09:31:56 compute-0 sudo[448757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:31:56 compute-0 sudo[448757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:56 compute-0 sudo[448757]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:56 compute-0 sudo[448782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 09:31:56 compute-0 sudo[448782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:57 compute-0 sudo[448782]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:31:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:31:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:31:57 compute-0 sudo[448827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:31:57 compute-0 sudo[448827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:57 compute-0 sudo[448827]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:57 compute-0 sudo[448852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:31:57 compute-0 sudo[448852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:57 compute-0 sudo[448852]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:57 compute-0 sudo[448877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:31:57 compute-0 sudo[448877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:57 compute-0 sudo[448877]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:57 compute-0 sudo[448902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:31:57 compute-0 sudo[448902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:57 compute-0 ceph-mon[74477]: pgmap v3424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:31:57 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:31:57 compute-0 sudo[448902]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:31:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:31:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:31:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 4664fca8-489b-4a1c-925a-ca83df12e830 does not exist
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1b31f815-7123-40e4-b2e0-23ee782f58e0 does not exist
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 92d32189-6ef2-4a09-ac62-ff989416f6b1 does not exist
Oct 02 09:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:31:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:31:57 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:31:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:31:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:31:57 compute-0 sudo[448957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:31:57 compute-0 sudo[448957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:57 compute-0 sudo[448957]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:31:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:31:58 compute-0 sudo[448982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:31:58 compute-0 sudo[448982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:58 compute-0 sudo[448982]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:58 compute-0 sudo[449007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:31:58 compute-0 sudo[449007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:58 compute-0 sudo[449007]: pam_unix(sudo:session): session closed for user root
Oct 02 09:31:58 compute-0 sudo[449032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:31:58 compute-0 sudo[449032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:31:58 compute-0 podman[449097]: 2025-10-02 09:31:58.426209527 +0000 UTC m=+0.036465010 container create e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:31:58 compute-0 systemd[1]: Started libpod-conmon-e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9.scope.
Oct 02 09:31:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:31:58 compute-0 podman[449097]: 2025-10-02 09:31:58.496975536 +0000 UTC m=+0.107231029 container init e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:31:58 compute-0 podman[449097]: 2025-10-02 09:31:58.503468349 +0000 UTC m=+0.113723822 container start e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:31:58 compute-0 podman[449097]: 2025-10-02 09:31:58.410800446 +0000 UTC m=+0.021055959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:31:58 compute-0 romantic_swartz[449114]: 167 167
Oct 02 09:31:58 compute-0 systemd[1]: libpod-e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9.scope: Deactivated successfully.
Oct 02 09:31:58 compute-0 podman[449097]: 2025-10-02 09:31:58.510311134 +0000 UTC m=+0.120566617 container attach e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 09:31:58 compute-0 podman[449097]: 2025-10-02 09:31:58.510597372 +0000 UTC m=+0.120852865 container died e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-24b8e8a8c753af238d85e46035dfd91b85fc80bb24a0d191367a4d34148e3bed-merged.mount: Deactivated successfully.
Oct 02 09:31:58 compute-0 podman[449097]: 2025-10-02 09:31:58.555550236 +0000 UTC m=+0.165805719 container remove e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:31:58 compute-0 systemd[1]: libpod-conmon-e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9.scope: Deactivated successfully.
Oct 02 09:31:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:58 compute-0 podman[449136]: 2025-10-02 09:31:58.699640717 +0000 UTC m=+0.036344436 container create df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:31:58 compute-0 systemd[1]: Started libpod-conmon-df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3.scope.
Oct 02 09:31:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:31:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:31:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:31:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:31:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:31:58 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:31:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:31:58 compute-0 podman[449136]: 2025-10-02 09:31:58.683253665 +0000 UTC m=+0.019957384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:31:58 compute-0 podman[449136]: 2025-10-02 09:31:58.782107543 +0000 UTC m=+0.118811282 container init df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:31:58 compute-0 podman[449136]: 2025-10-02 09:31:58.79162359 +0000 UTC m=+0.128327299 container start df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:31:58 compute-0 podman[449136]: 2025-10-02 09:31:58.795543933 +0000 UTC m=+0.132247652 container attach df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:31:58 compute-0 nova_compute[260603]: 2025-10-02 09:31:58.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:31:59 compute-0 ceph-mon[74477]: pgmap v3425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:31:59 compute-0 vibrant_saha[449153]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:31:59 compute-0 vibrant_saha[449153]: --> relative data size: 1.0
Oct 02 09:31:59 compute-0 vibrant_saha[449153]: --> All data devices are unavailable
Oct 02 09:31:59 compute-0 systemd[1]: libpod-df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3.scope: Deactivated successfully.
Oct 02 09:31:59 compute-0 podman[449136]: 2025-10-02 09:31:59.878387914 +0000 UTC m=+1.215091613 container died df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 09:31:59 compute-0 systemd[1]: libpod-df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3.scope: Consumed 1.031s CPU time.
Oct 02 09:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866-merged.mount: Deactivated successfully.
Oct 02 09:31:59 compute-0 podman[449136]: 2025-10-02 09:31:59.945552992 +0000 UTC m=+1.282256691 container remove df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 09:31:59 compute-0 systemd[1]: libpod-conmon-df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3.scope: Deactivated successfully.
Oct 02 09:31:59 compute-0 sudo[449032]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:00 compute-0 sudo[449197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:32:00 compute-0 sudo[449197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:00 compute-0 sudo[449197]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:00 compute-0 nova_compute[260603]: 2025-10-02 09:32:00.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:00 compute-0 sudo[449222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:32:00 compute-0 sudo[449222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:00 compute-0 sudo[449222]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:00 compute-0 sudo[449247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:32:00 compute-0 sudo[449247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:00 compute-0 sudo[449247]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:00 compute-0 sudo[449272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:32:00 compute-0 sudo[449272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:00 compute-0 podman[449338]: 2025-10-02 09:32:00.52592868 +0000 UTC m=+0.041759485 container create 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:32:00 compute-0 systemd[1]: Started libpod-conmon-942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32.scope.
Oct 02 09:32:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:32:00 compute-0 podman[449338]: 2025-10-02 09:32:00.504991456 +0000 UTC m=+0.020822291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:32:00 compute-0 podman[449338]: 2025-10-02 09:32:00.610167301 +0000 UTC m=+0.125998146 container init 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:32:00 compute-0 podman[449338]: 2025-10-02 09:32:00.617661325 +0000 UTC m=+0.133492130 container start 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:32:00 compute-0 podman[449338]: 2025-10-02 09:32:00.620490904 +0000 UTC m=+0.136321709 container attach 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:32:00 compute-0 hungry_jackson[449354]: 167 167
Oct 02 09:32:00 compute-0 podman[449338]: 2025-10-02 09:32:00.624789528 +0000 UTC m=+0.140620343 container died 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:32:00 compute-0 systemd[1]: libpod-942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32.scope: Deactivated successfully.
Oct 02 09:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0b658e876c1ab77bd38e08cb7b6f75f939a01364e92cd53baee529b4c9d194-merged.mount: Deactivated successfully.
Oct 02 09:32:00 compute-0 podman[449338]: 2025-10-02 09:32:00.665561362 +0000 UTC m=+0.181392187 container remove 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:32:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:00 compute-0 systemd[1]: libpod-conmon-942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32.scope: Deactivated successfully.
Oct 02 09:32:00 compute-0 podman[449379]: 2025-10-02 09:32:00.844453059 +0000 UTC m=+0.048127684 container create 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:32:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:00 compute-0 systemd[1]: Started libpod-conmon-82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255.scope.
Oct 02 09:32:00 compute-0 podman[449379]: 2025-10-02 09:32:00.828371087 +0000 UTC m=+0.032045702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:32:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:32:00 compute-0 podman[449379]: 2025-10-02 09:32:00.970386262 +0000 UTC m=+0.174060937 container init 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:32:00 compute-0 podman[449379]: 2025-10-02 09:32:00.978404183 +0000 UTC m=+0.182078808 container start 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:32:00 compute-0 podman[449379]: 2025-10-02 09:32:00.982020906 +0000 UTC m=+0.185695581 container attach 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]: {
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:     "0": [
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:         {
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "devices": [
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "/dev/loop3"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             ],
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_name": "ceph_lv0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_size": "21470642176",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "name": "ceph_lv0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "tags": {
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cluster_name": "ceph",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.crush_device_class": "",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.encrypted": "0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osd_id": "0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.type": "block",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.vdo": "0"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             },
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "type": "block",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "vg_name": "ceph_vg0"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:         }
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:     ],
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:     "1": [
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:         {
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "devices": [
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "/dev/loop4"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             ],
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_name": "ceph_lv1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_size": "21470642176",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "name": "ceph_lv1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "tags": {
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cluster_name": "ceph",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.crush_device_class": "",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.encrypted": "0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osd_id": "1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.type": "block",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.vdo": "0"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             },
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "type": "block",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "vg_name": "ceph_vg1"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:         }
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:     ],
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:     "2": [
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:         {
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "devices": [
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "/dev/loop5"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             ],
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_name": "ceph_lv2",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_size": "21470642176",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "name": "ceph_lv2",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "tags": {
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.cluster_name": "ceph",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.crush_device_class": "",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.encrypted": "0",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osd_id": "2",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.type": "block",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:                 "ceph.vdo": "0"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             },
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "type": "block",
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:             "vg_name": "ceph_vg2"
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:         }
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]:     ]
Oct 02 09:32:01 compute-0 busy_ptolemy[449395]: }
Oct 02 09:32:01 compute-0 systemd[1]: libpod-82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255.scope: Deactivated successfully.
Oct 02 09:32:01 compute-0 podman[449379]: 2025-10-02 09:32:01.730127943 +0000 UTC m=+0.933802538 container died 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:32:01 compute-0 ceph-mon[74477]: pgmap v3426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8-merged.mount: Deactivated successfully.
Oct 02 09:32:01 compute-0 podman[449379]: 2025-10-02 09:32:01.791240882 +0000 UTC m=+0.994915487 container remove 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:32:01 compute-0 systemd[1]: libpod-conmon-82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255.scope: Deactivated successfully.
Oct 02 09:32:01 compute-0 sudo[449272]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:01 compute-0 sudo[449415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:32:01 compute-0 sudo[449415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:01 compute-0 sudo[449415]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:01 compute-0 sudo[449440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:32:01 compute-0 sudo[449440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:01 compute-0 sudo[449440]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:02 compute-0 sudo[449465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:32:02 compute-0 sudo[449465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:02 compute-0 sudo[449465]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:02 compute-0 sudo[449490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:32:02 compute-0 sudo[449490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:02 compute-0 podman[449555]: 2025-10-02 09:32:02.416726709 +0000 UTC m=+0.037668938 container create cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:32:02 compute-0 systemd[1]: Started libpod-conmon-cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a.scope.
Oct 02 09:32:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:32:02 compute-0 podman[449555]: 2025-10-02 09:32:02.488929944 +0000 UTC m=+0.109872193 container init cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:32:02 compute-0 podman[449555]: 2025-10-02 09:32:02.496015365 +0000 UTC m=+0.116957584 container start cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 09:32:02 compute-0 podman[449555]: 2025-10-02 09:32:02.402097481 +0000 UTC m=+0.023039720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:32:02 compute-0 podman[449555]: 2025-10-02 09:32:02.498923556 +0000 UTC m=+0.119865785 container attach cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:32:02 compute-0 funny_mirzakhani[449571]: 167 167
Oct 02 09:32:02 compute-0 systemd[1]: libpod-cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a.scope: Deactivated successfully.
Oct 02 09:32:02 compute-0 podman[449555]: 2025-10-02 09:32:02.502029572 +0000 UTC m=+0.122971801 container died cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5384a1214e79141930b06481cfc4f2460bdc605884a25e8d691f5a08eb95ad8-merged.mount: Deactivated successfully.
Oct 02 09:32:02 compute-0 podman[449555]: 2025-10-02 09:32:02.534503357 +0000 UTC m=+0.155445576 container remove cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:32:02 compute-0 systemd[1]: libpod-conmon-cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a.scope: Deactivated successfully.
Oct 02 09:32:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:02 compute-0 podman[449596]: 2025-10-02 09:32:02.689562331 +0000 UTC m=+0.039312210 container create bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 09:32:02 compute-0 systemd[1]: Started libpod-conmon-bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a.scope.
Oct 02 09:32:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:32:02 compute-0 podman[449596]: 2025-10-02 09:32:02.674168849 +0000 UTC m=+0.023918748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:32:02 compute-0 podman[449596]: 2025-10-02 09:32:02.774126721 +0000 UTC m=+0.123876640 container init bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 02 09:32:02 compute-0 podman[449596]: 2025-10-02 09:32:02.78271023 +0000 UTC m=+0.132460109 container start bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:32:02 compute-0 podman[449596]: 2025-10-02 09:32:02.786197379 +0000 UTC m=+0.135947338 container attach bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]: {
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "osd_id": 2,
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "type": "bluestore"
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:     },
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "osd_id": 1,
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "type": "bluestore"
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:     },
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "osd_id": 0,
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:         "type": "bluestore"
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]:     }
Oct 02 09:32:03 compute-0 flamboyant_poincare[449612]: }
Oct 02 09:32:03 compute-0 ceph-mon[74477]: pgmap v3427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:03 compute-0 systemd[1]: libpod-bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a.scope: Deactivated successfully.
Oct 02 09:32:03 compute-0 conmon[449612]: conmon bffcb5fc646ab0db91e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a.scope/container/memory.events
Oct 02 09:32:03 compute-0 podman[449596]: 2025-10-02 09:32:03.764472565 +0000 UTC m=+1.114222454 container died bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 09:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e-merged.mount: Deactivated successfully.
Oct 02 09:32:03 compute-0 podman[449596]: 2025-10-02 09:32:03.816278803 +0000 UTC m=+1.166028682 container remove bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:32:03 compute-0 systemd[1]: libpod-conmon-bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a.scope: Deactivated successfully.
Oct 02 09:32:03 compute-0 sudo[449490]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:32:03 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:32:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:32:03 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:32:03 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev aa25b502-1085-4ece-9179-ea30e7234369 does not exist
Oct 02 09:32:03 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b0118f18-8659-41f6-90fe-bb94a9936844 does not exist
Oct 02 09:32:03 compute-0 sudo[449658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:32:03 compute-0 sudo[449658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:03 compute-0 sudo[449658]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:03 compute-0 sudo[449683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:32:03 compute-0 sudo[449683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:32:03 compute-0 sudo[449683]: pam_unix(sudo:session): session closed for user root
Oct 02 09:32:04 compute-0 nova_compute[260603]: 2025-10-02 09:32:03.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:04 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:32:04 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:32:04 compute-0 ceph-mon[74477]: pgmap v3428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:05 compute-0 nova_compute[260603]: 2025-10-02 09:32:05.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:07 compute-0 ceph-mon[74477]: pgmap v3429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:09 compute-0 nova_compute[260603]: 2025-10-02 09:32:09.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:09 compute-0 ceph-mon[74477]: pgmap v3430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:10 compute-0 nova_compute[260603]: 2025-10-02 09:32:10.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:11 compute-0 ceph-mon[74477]: pgmap v3431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:13 compute-0 ceph-mon[74477]: pgmap v3432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:14 compute-0 nova_compute[260603]: 2025-10-02 09:32:14.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:14 compute-0 ceph-mon[74477]: pgmap v3433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:15 compute-0 nova_compute[260603]: 2025-10-02 09:32:15.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:17 compute-0 ceph-mon[74477]: pgmap v3434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:19 compute-0 nova_compute[260603]: 2025-10-02 09:32:19.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:19 compute-0 ceph-mon[74477]: pgmap v3435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:20 compute-0 nova_compute[260603]: 2025-10-02 09:32:20.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:20 compute-0 sshd-session[449708]: Invalid user ps from 167.71.248.239 port 53080
Oct 02 09:32:20 compute-0 sshd-session[449708]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 09:32:20 compute-0 sshd-session[449708]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.71.248.239
Oct 02 09:32:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:21 compute-0 nova_compute[260603]: 2025-10-02 09:32:21.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:21 compute-0 ceph-mon[74477]: pgmap v3436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:32:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936917552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:32:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:32:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936917552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:32:22 compute-0 sshd-session[449708]: Failed password for invalid user ps from 167.71.248.239 port 53080 ssh2
Oct 02 09:32:22 compute-0 nova_compute[260603]: 2025-10-02 09:32:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:22 compute-0 nova_compute[260603]: 2025-10-02 09:32:22.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:22 compute-0 sshd-session[449708]: Connection closed by invalid user ps 167.71.248.239 port 53080 [preauth]
Oct 02 09:32:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/936917552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:32:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/936917552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:32:22 compute-0 podman[449711]: 2025-10-02 09:32:22.988574189 +0000 UTC m=+0.055807994 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:32:23 compute-0 podman[449710]: 2025-10-02 09:32:23.018720601 +0000 UTC m=+0.085937135 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 09:32:23 compute-0 ceph-mon[74477]: pgmap v3437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:24 compute-0 nova_compute[260603]: 2025-10-02 09:32:24.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:24 compute-0 ceph-mon[74477]: pgmap v3438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:25 compute-0 nova_compute[260603]: 2025-10-02 09:32:25.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:25 compute-0 nova_compute[260603]: 2025-10-02 09:32:25.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:25 compute-0 nova_compute[260603]: 2025-10-02 09:32:25.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:32:25 compute-0 nova_compute[260603]: 2025-10-02 09:32:25.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:32:25 compute-0 nova_compute[260603]: 2025-10-02 09:32:25.645 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:32:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:26 compute-0 nova_compute[260603]: 2025-10-02 09:32:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:26 compute-0 nova_compute[260603]: 2025-10-02 09:32:26.608 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:32:26 compute-0 nova_compute[260603]: 2025-10-02 09:32:26.608 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:32:26 compute-0 nova_compute[260603]: 2025-10-02 09:32:26.609 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:32:26 compute-0 nova_compute[260603]: 2025-10-02 09:32:26.609 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:32:26 compute-0 nova_compute[260603]: 2025-10-02 09:32:26.609 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:32:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:26 compute-0 podman[449775]: 2025-10-02 09:32:26.992114997 +0000 UTC m=+0.060145600 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:32:27 compute-0 podman[449774]: 2025-10-02 09:32:27.0139813 +0000 UTC m=+0.084048726 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:32:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:32:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2492896351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.037 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.180 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.181 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3568MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.182 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.182 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.390 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.391 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.413 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:32:27 compute-0 ceph-mon[74477]: pgmap v3439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2492896351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:32:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:32:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3435835629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.845 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.849 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.877 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.879 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:32:27 compute-0 nova_compute[260603]: 2025-10-02 09:32:27.880 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:32:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:32:28
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'volumes']
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:32:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:32:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3435835629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:32:29 compute-0 nova_compute[260603]: 2025-10-02 09:32:29.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:29 compute-0 ceph-mon[74477]: pgmap v3440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:30 compute-0 nova_compute[260603]: 2025-10-02 09:32:30.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:30 compute-0 nova_compute[260603]: 2025-10-02 09:32:30.877 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:31 compute-0 ceph-mon[74477]: pgmap v3441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:32 compute-0 nova_compute[260603]: 2025-10-02 09:32:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:33 compute-0 ceph-mon[74477]: pgmap v3442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:34 compute-0 nova_compute[260603]: 2025-10-02 09:32:34.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:32:34.871 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:32:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:32:34.872 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:32:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:32:34.872 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:32:35 compute-0 nova_compute[260603]: 2025-10-02 09:32:35.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:35 compute-0 ceph-mon[74477]: pgmap v3443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:37 compute-0 ceph-mon[74477]: pgmap v3444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:39 compute-0 nova_compute[260603]: 2025-10-02 09:32:39.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:32:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:32:39 compute-0 ceph-mon[74477]: pgmap v3445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:40 compute-0 nova_compute[260603]: 2025-10-02 09:32:40.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:41 compute-0 nova_compute[260603]: 2025-10-02 09:32:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:41 compute-0 ceph-mon[74477]: pgmap v3446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:43 compute-0 ceph-mon[74477]: pgmap v3447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:44 compute-0 nova_compute[260603]: 2025-10-02 09:32:44.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:44 compute-0 ceph-mon[74477]: pgmap v3448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:45 compute-0 nova_compute[260603]: 2025-10-02 09:32:45.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:47 compute-0 ceph-mon[74477]: pgmap v3449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:48 compute-0 systemd[1]: Starting dnf makecache...
Oct 02 09:32:49 compute-0 nova_compute[260603]: 2025-10-02 09:32:49.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:49 compute-0 dnf[449838]: Metadata cache refreshed recently.
Oct 02 09:32:49 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 09:32:49 compute-0 systemd[1]: Finished dnf makecache.
Oct 02 09:32:49 compute-0 ceph-mon[74477]: pgmap v3450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:50 compute-0 nova_compute[260603]: 2025-10-02 09:32:50.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:51 compute-0 ceph-mon[74477]: pgmap v3451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:52 compute-0 ceph-mon[74477]: pgmap v3452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:54 compute-0 podman[449840]: 2025-10-02 09:32:54.017697507 +0000 UTC m=+0.067281432 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:32:54 compute-0 nova_compute[260603]: 2025-10-02 09:32:54.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:54 compute-0 podman[449839]: 2025-10-02 09:32:54.112874469 +0000 UTC m=+0.165031205 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 09:32:54 compute-0 nova_compute[260603]: 2025-10-02 09:32:54.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:32:54 compute-0 nova_compute[260603]: 2025-10-02 09:32:54.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:32:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:55 compute-0 ceph-mon[74477]: pgmap v3453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:55 compute-0 nova_compute[260603]: 2025-10-02 09:32:55.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:32:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:57 compute-0 ceph-mon[74477]: pgmap v3454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:32:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:32:57 compute-0 podman[449883]: 2025-10-02 09:32:57.990739683 +0000 UTC m=+0.054892925 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 09:32:57 compute-0 podman[449882]: 2025-10-02 09:32:57.996367799 +0000 UTC m=+0.063970220 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 09:32:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:32:59 compute-0 nova_compute[260603]: 2025-10-02 09:32:59.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:32:59 compute-0 ceph-mon[74477]: pgmap v3455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:00 compute-0 nova_compute[260603]: 2025-10-02 09:33:00.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:01 compute-0 ceph-mon[74477]: pgmap v3456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:03 compute-0 ceph-mon[74477]: pgmap v3457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:04 compute-0 sudo[449920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:04 compute-0 sudo[449920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:04 compute-0 sudo[449920]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:04 compute-0 sudo[449945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:33:04 compute-0 sudo[449945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:04 compute-0 sudo[449945]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:04 compute-0 nova_compute[260603]: 2025-10-02 09:33:04.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:04 compute-0 sudo[449970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:04 compute-0 sudo[449970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:04 compute-0 sudo[449970]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:04 compute-0 sudo[449995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 09:33:04 compute-0 sudo[449995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:04 compute-0 podman[450093]: 2025-10-02 09:33:04.722352192 +0000 UTC m=+0.063209056 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:33:04 compute-0 podman[450093]: 2025-10-02 09:33:04.830412227 +0000 UTC m=+0.171269081 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:33:05 compute-0 nova_compute[260603]: 2025-10-02 09:33:05.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:05 compute-0 sudo[449995]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:33:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:33:05 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:05 compute-0 sudo[450251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:05 compute-0 sudo[450251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:05 compute-0 sudo[450251]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:05 compute-0 sudo[450276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:33:05 compute-0 sudo[450276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:05 compute-0 sudo[450276]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:05 compute-0 sudo[450301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:05 compute-0 sudo[450301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:05 compute-0 sudo[450301]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:05 compute-0 sudo[450326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:33:05 compute-0 sudo[450326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:05 compute-0 ceph-mon[74477]: pgmap v3458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:05 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:06 compute-0 sudo[450326]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 09:33:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:33:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:33:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:33:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:06 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 473cd2f8-84fd-4f08-a352-0307668c1e5a does not exist
Oct 02 09:33:06 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 94143f1a-ebce-4d09-bff0-55ba09d715b2 does not exist
Oct 02 09:33:06 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev cef387da-af16-44b5-9258-27c3cc35fc52 does not exist
Oct 02 09:33:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:33:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:33:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:33:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:33:06 compute-0 sudo[450381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:06 compute-0 sudo[450381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:06 compute-0 sudo[450381]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:06 compute-0 sudo[450406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:33:06 compute-0 sudo[450406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:06 compute-0 sudo[450406]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:06 compute-0 sudo[450431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:06 compute-0 sudo[450431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:06 compute-0 sudo[450431]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:06 compute-0 sudo[450456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:33:06 compute-0 sudo[450456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:06 compute-0 podman[450522]: 2025-10-02 09:33:06.825958137 +0000 UTC m=+0.044999106 container create b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:33:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:33:06 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:33:06 compute-0 systemd[1]: Started libpod-conmon-b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367.scope.
Oct 02 09:33:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:33:06 compute-0 podman[450522]: 2025-10-02 09:33:06.805678414 +0000 UTC m=+0.024719463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:33:06 compute-0 podman[450522]: 2025-10-02 09:33:06.910666923 +0000 UTC m=+0.129707902 container init b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:33:06 compute-0 podman[450522]: 2025-10-02 09:33:06.918993963 +0000 UTC m=+0.138034932 container start b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:33:06 compute-0 podman[450522]: 2025-10-02 09:33:06.922535504 +0000 UTC m=+0.141576493 container attach b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:33:06 compute-0 cranky_einstein[450539]: 167 167
Oct 02 09:33:06 compute-0 systemd[1]: libpod-b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367.scope: Deactivated successfully.
Oct 02 09:33:06 compute-0 podman[450522]: 2025-10-02 09:33:06.92659259 +0000 UTC m=+0.145633579 container died b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae09a9f6c9e7943c5555e4e07157c780b7877831473a0da33a8b1901803c3ba6-merged.mount: Deactivated successfully.
Oct 02 09:33:06 compute-0 podman[450522]: 2025-10-02 09:33:06.974701773 +0000 UTC m=+0.193742752 container remove b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:33:07 compute-0 systemd[1]: libpod-conmon-b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367.scope: Deactivated successfully.
Oct 02 09:33:07 compute-0 podman[450563]: 2025-10-02 09:33:07.151251538 +0000 UTC m=+0.057914580 container create d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:33:07 compute-0 systemd[1]: Started libpod-conmon-d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae.scope.
Oct 02 09:33:07 compute-0 podman[450563]: 2025-10-02 09:33:07.121017523 +0000 UTC m=+0.027680625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:33:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:07 compute-0 podman[450563]: 2025-10-02 09:33:07.254839553 +0000 UTC m=+0.161502575 container init d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:33:07 compute-0 podman[450563]: 2025-10-02 09:33:07.263939268 +0000 UTC m=+0.170602290 container start d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 09:33:07 compute-0 podman[450563]: 2025-10-02 09:33:07.267398925 +0000 UTC m=+0.174061967 container attach d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:33:07 compute-0 ceph-mon[74477]: pgmap v3459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:08 compute-0 awesome_sutherland[450579]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:33:08 compute-0 awesome_sutherland[450579]: --> relative data size: 1.0
Oct 02 09:33:08 compute-0 awesome_sutherland[450579]: --> All data devices are unavailable
Oct 02 09:33:08 compute-0 systemd[1]: libpod-d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae.scope: Deactivated successfully.
Oct 02 09:33:08 compute-0 podman[450563]: 2025-10-02 09:33:08.28823459 +0000 UTC m=+1.194897622 container died d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320-merged.mount: Deactivated successfully.
Oct 02 09:33:08 compute-0 podman[450563]: 2025-10-02 09:33:08.347680717 +0000 UTC m=+1.254343729 container remove d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 09:33:08 compute-0 systemd[1]: libpod-conmon-d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae.scope: Deactivated successfully.
Oct 02 09:33:08 compute-0 sudo[450456]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:08 compute-0 sudo[450623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:08 compute-0 sudo[450623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:08 compute-0 sudo[450623]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:08 compute-0 sudo[450648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:33:08 compute-0 sudo[450648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:08 compute-0 sudo[450648]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:08 compute-0 sudo[450673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:08 compute-0 sudo[450673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:08 compute-0 sudo[450673]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:08 compute-0 sudo[450698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:33:08 compute-0 sudo[450698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:08 compute-0 podman[450764]: 2025-10-02 09:33:08.951394435 +0000 UTC m=+0.036401288 container create 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:33:08 compute-0 systemd[1]: Started libpod-conmon-75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d.scope.
Oct 02 09:33:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:33:09 compute-0 podman[450764]: 2025-10-02 09:33:09.016929211 +0000 UTC m=+0.101936064 container init 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:33:09 compute-0 podman[450764]: 2025-10-02 09:33:09.025807069 +0000 UTC m=+0.110813922 container start 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:33:09 compute-0 strange_jemison[450780]: 167 167
Oct 02 09:33:09 compute-0 podman[450764]: 2025-10-02 09:33:09.028761871 +0000 UTC m=+0.113768754 container attach 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:33:09 compute-0 podman[450764]: 2025-10-02 09:33:09.029014868 +0000 UTC m=+0.114021721 container died 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:33:09 compute-0 systemd[1]: libpod-75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d.scope: Deactivated successfully.
Oct 02 09:33:09 compute-0 podman[450764]: 2025-10-02 09:33:08.935615192 +0000 UTC m=+0.020622065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0c68dc87df7f35a78391ec517eaeeb69759aed3ef4c2ec5317fd8b4dbfe03b7-merged.mount: Deactivated successfully.
Oct 02 09:33:09 compute-0 podman[450764]: 2025-10-02 09:33:09.058357985 +0000 UTC m=+0.143364838 container remove 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 09:33:09 compute-0 systemd[1]: libpod-conmon-75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d.scope: Deactivated successfully.
Oct 02 09:33:09 compute-0 nova_compute[260603]: 2025-10-02 09:33:09.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:09 compute-0 podman[450807]: 2025-10-02 09:33:09.222304376 +0000 UTC m=+0.045243254 container create 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 09:33:09 compute-0 systemd[1]: Started libpod-conmon-8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4.scope.
Oct 02 09:33:09 compute-0 podman[450807]: 2025-10-02 09:33:09.199576966 +0000 UTC m=+0.022515824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:33:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:09 compute-0 podman[450807]: 2025-10-02 09:33:09.321922397 +0000 UTC m=+0.144861315 container init 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:33:09 compute-0 podman[450807]: 2025-10-02 09:33:09.334106888 +0000 UTC m=+0.157045726 container start 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 09:33:09 compute-0 podman[450807]: 2025-10-02 09:33:09.337405321 +0000 UTC m=+0.160344219 container attach 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:33:09 compute-0 ceph-mon[74477]: pgmap v3460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]: {
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:     "0": [
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:         {
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "devices": [
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "/dev/loop3"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             ],
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_name": "ceph_lv0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_size": "21470642176",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "name": "ceph_lv0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "tags": {
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cluster_name": "ceph",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.crush_device_class": "",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.encrypted": "0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osd_id": "0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.type": "block",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.vdo": "0"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             },
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "type": "block",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "vg_name": "ceph_vg0"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:         }
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:     ],
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:     "1": [
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:         {
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "devices": [
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "/dev/loop4"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             ],
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_name": "ceph_lv1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_size": "21470642176",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "name": "ceph_lv1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "tags": {
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cluster_name": "ceph",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.crush_device_class": "",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.encrypted": "0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osd_id": "1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.type": "block",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.vdo": "0"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             },
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "type": "block",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "vg_name": "ceph_vg1"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:         }
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:     ],
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:     "2": [
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:         {
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "devices": [
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "/dev/loop5"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             ],
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_name": "ceph_lv2",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_size": "21470642176",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "name": "ceph_lv2",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "tags": {
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.cluster_name": "ceph",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.crush_device_class": "",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.encrypted": "0",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osd_id": "2",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.type": "block",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:                 "ceph.vdo": "0"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             },
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "type": "block",
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:             "vg_name": "ceph_vg2"
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:         }
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]:     ]
Oct 02 09:33:10 compute-0 dreamy_hamilton[450824]: }
Oct 02 09:33:10 compute-0 systemd[1]: libpod-8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4.scope: Deactivated successfully.
Oct 02 09:33:10 compute-0 podman[450807]: 2025-10-02 09:33:10.089708629 +0000 UTC m=+0.912647467 container died 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875-merged.mount: Deactivated successfully.
Oct 02 09:33:10 compute-0 podman[450807]: 2025-10-02 09:33:10.146520294 +0000 UTC m=+0.969459132 container remove 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:33:10 compute-0 systemd[1]: libpod-conmon-8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4.scope: Deactivated successfully.
Oct 02 09:33:10 compute-0 sudo[450698]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:10 compute-0 sudo[450846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:10 compute-0 sudo[450846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:10 compute-0 sudo[450846]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:10 compute-0 nova_compute[260603]: 2025-10-02 09:33:10.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:10 compute-0 sudo[450871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:33:10 compute-0 sudo[450871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:10 compute-0 sudo[450871]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:10 compute-0 sudo[450896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:10 compute-0 sudo[450896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:10 compute-0 sudo[450896]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:10 compute-0 sudo[450921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:33:10 compute-0 sudo[450921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:10 compute-0 podman[450988]: 2025-10-02 09:33:10.912872481 +0000 UTC m=+0.048133985 container create 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 09:33:10 compute-0 systemd[1]: Started libpod-conmon-244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2.scope.
Oct 02 09:33:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:33:10 compute-0 podman[450988]: 2025-10-02 09:33:10.89300743 +0000 UTC m=+0.028268954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:33:10 compute-0 podman[450988]: 2025-10-02 09:33:10.993467057 +0000 UTC m=+0.128728591 container init 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:33:11 compute-0 podman[450988]: 2025-10-02 09:33:11.000226389 +0000 UTC m=+0.135487903 container start 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:33:11 compute-0 podman[450988]: 2025-10-02 09:33:11.003232362 +0000 UTC m=+0.138493896 container attach 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:33:11 compute-0 thirsty_herschel[451004]: 167 167
Oct 02 09:33:11 compute-0 systemd[1]: libpod-244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2.scope: Deactivated successfully.
Oct 02 09:33:11 compute-0 podman[451009]: 2025-10-02 09:33:11.053596366 +0000 UTC m=+0.031198915 container died 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db4d3e65b4f7132f29bddf27404bcbaeb589603b8a837e04a78081c194c5618-merged.mount: Deactivated successfully.
Oct 02 09:33:11 compute-0 podman[451009]: 2025-10-02 09:33:11.094193874 +0000 UTC m=+0.071796413 container remove 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:33:11 compute-0 systemd[1]: libpod-conmon-244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2.scope: Deactivated successfully.
Oct 02 09:33:11 compute-0 podman[451029]: 2025-10-02 09:33:11.307052293 +0000 UTC m=+0.063398282 container create 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:33:11 compute-0 systemd[1]: Started libpod-conmon-32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c.scope.
Oct 02 09:33:11 compute-0 podman[451029]: 2025-10-02 09:33:11.274188206 +0000 UTC m=+0.030534295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:33:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:33:11 compute-0 podman[451029]: 2025-10-02 09:33:11.389999513 +0000 UTC m=+0.146345502 container init 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 09:33:11 compute-0 podman[451029]: 2025-10-02 09:33:11.39599832 +0000 UTC m=+0.152344299 container start 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:33:11 compute-0 podman[451029]: 2025-10-02 09:33:11.399481139 +0000 UTC m=+0.155827108 container attach 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:33:11 compute-0 ceph-mon[74477]: pgmap v3461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]: {
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "osd_id": 2,
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "type": "bluestore"
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:     },
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "osd_id": 1,
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "type": "bluestore"
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:     },
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "osd_id": 0,
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:         "type": "bluestore"
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]:     }
Oct 02 09:33:12 compute-0 dazzling_lamarr[451046]: }
Oct 02 09:33:12 compute-0 systemd[1]: libpod-32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c.scope: Deactivated successfully.
Oct 02 09:33:12 compute-0 systemd[1]: libpod-32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c.scope: Consumed 1.028s CPU time.
Oct 02 09:33:12 compute-0 podman[451029]: 2025-10-02 09:33:12.420898572 +0000 UTC m=+1.177244581 container died 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 09:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0-merged.mount: Deactivated successfully.
Oct 02 09:33:12 compute-0 podman[451029]: 2025-10-02 09:33:12.480107352 +0000 UTC m=+1.236453331 container remove 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:33:12 compute-0 systemd[1]: libpod-conmon-32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c.scope: Deactivated successfully.
Oct 02 09:33:12 compute-0 sudo[450921]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:33:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:33:12 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:12 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d516ae73-b727-435d-b808-00001310e754 does not exist
Oct 02 09:33:12 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 72088a30-55df-4415-b04a-df572faa848d does not exist
Oct 02 09:33:12 compute-0 sudo[451091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:33:12 compute-0 sudo[451091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:12 compute-0 sudo[451091]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:12 compute-0 sudo[451116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:33:12 compute-0 sudo[451116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:33:12 compute-0 sudo[451116]: pam_unix(sudo:session): session closed for user root
Oct 02 09:33:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:33:13 compute-0 ceph-mon[74477]: pgmap v3462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:14 compute-0 nova_compute[260603]: 2025-10-02 09:33:14.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:15 compute-0 nova_compute[260603]: 2025-10-02 09:33:15.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:15 compute-0 ceph-mon[74477]: pgmap v3463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:17 compute-0 ceph-mon[74477]: pgmap v3464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:19 compute-0 nova_compute[260603]: 2025-10-02 09:33:19.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:19 compute-0 ceph-mon[74477]: pgmap v3465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:20 compute-0 nova_compute[260603]: 2025-10-02 09:33:20.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:21 compute-0 nova_compute[260603]: 2025-10-02 09:33:21.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:21 compute-0 ceph-mon[74477]: pgmap v3466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:33:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253008149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:33:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:33:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253008149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:33:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3253008149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:33:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3253008149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:33:23 compute-0 nova_compute[260603]: 2025-10-02 09:33:23.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:23 compute-0 ceph-mon[74477]: pgmap v3467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:24 compute-0 nova_compute[260603]: 2025-10-02 09:33:24.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:24 compute-0 nova_compute[260603]: 2025-10-02 09:33:24.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:25 compute-0 podman[451142]: 2025-10-02 09:33:25.00948936 +0000 UTC m=+0.067583602 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 09:33:25 compute-0 podman[451141]: 2025-10-02 09:33:25.06549737 +0000 UTC m=+0.127691700 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 09:33:25 compute-0 nova_compute[260603]: 2025-10-02 09:33:25.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:25 compute-0 nova_compute[260603]: 2025-10-02 09:33:25.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:25 compute-0 nova_compute[260603]: 2025-10-02 09:33:25.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:33:25 compute-0 nova_compute[260603]: 2025-10-02 09:33:25.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:33:25 compute-0 nova_compute[260603]: 2025-10-02 09:33:25.589 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:33:25 compute-0 ceph-mon[74477]: pgmap v3468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:26 compute-0 ceph-mon[74477]: pgmap v3469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:27 compute-0 nova_compute[260603]: 2025-10-02 09:33:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:27 compute-0 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:33:27 compute-0 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:33:27 compute-0 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:33:27 compute-0 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:33:27 compute-0 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:33:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:33:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56989600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:33:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:33:27 compute-0 nova_compute[260603]: 2025-10-02 09:33:27.995 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:33:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/56989600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.142 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.144 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.145 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.145 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:33:28
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups']
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.224 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.225 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.354 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:33:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:33:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094738245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.769 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.774 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.812 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.814 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:33:28 compute-0 nova_compute[260603]: 2025-10-02 09:33:28.814 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:33:28 compute-0 podman[451230]: 2025-10-02 09:33:28.988597095 +0000 UTC m=+0.050090696 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 02 09:33:28 compute-0 podman[451229]: 2025-10-02 09:33:28.990124603 +0000 UTC m=+0.057965642 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:33:29 compute-0 ceph-mon[74477]: pgmap v3470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3094738245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:33:29 compute-0 nova_compute[260603]: 2025-10-02 09:33:29.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:30 compute-0 nova_compute[260603]: 2025-10-02 09:33:30.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:30 compute-0 nova_compute[260603]: 2025-10-02 09:33:30.811 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:31 compute-0 nova_compute[260603]: 2025-10-02 09:33:31.632 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:31 compute-0 ceph-mon[74477]: pgmap v3471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:33 compute-0 nova_compute[260603]: 2025-10-02 09:33:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:33 compute-0 ceph-mon[74477]: pgmap v3472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:34 compute-0 nova_compute[260603]: 2025-10-02 09:33:34.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:33:34.872 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:33:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:33:34.873 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:33:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:33:34.873 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:33:35 compute-0 nova_compute[260603]: 2025-10-02 09:33:35.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:35 compute-0 ceph-mon[74477]: pgmap v3473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:35 compute-0 sshd-session[451269]: Accepted publickey for zuul from 192.168.122.30 port 53334 ssh2: ECDSA SHA256:QEnwbgBR1jglQLPp4vwsTS2MMzDakrR2dLJ/eEaCKUI
Oct 02 09:33:35 compute-0 systemd-logind[787]: New session 54 of user zuul.
Oct 02 09:33:35 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 02 09:33:35 compute-0 sshd-session[451269]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 09:33:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:37 compute-0 ceph-mon[74477]: pgmap v3474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:38 compute-0 nova_compute[260603]: 2025-10-02 09:33:38.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:33:38.580 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:33:38 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:33:38.582 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:33:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:39 compute-0 nova_compute[260603]: 2025-10-02 09:33:39.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:33:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:33:39 compute-0 ceph-mon[74477]: pgmap v3475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:40 compute-0 nova_compute[260603]: 2025-10-02 09:33:40.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:41 compute-0 nova_compute[260603]: 2025-10-02 09:33:41.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:41 compute-0 ceph-mon[74477]: pgmap v3476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:42 compute-0 sshd-session[451272]: Connection closed by 192.168.122.30 port 53334
Oct 02 09:33:42 compute-0 sshd-session[451269]: pam_unix(sshd:session): session closed for user zuul
Oct 02 09:33:42 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 02 09:33:42 compute-0 systemd-logind[787]: Session 54 logged out. Waiting for processes to exit.
Oct 02 09:33:42 compute-0 systemd-logind[787]: Removed session 54.
Oct 02 09:33:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:43 compute-0 ceph-mon[74477]: pgmap v3477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:44 compute-0 nova_compute[260603]: 2025-10-02 09:33:44.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:44 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:33:44.584 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:33:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:45 compute-0 nova_compute[260603]: 2025-10-02 09:33:45.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:45 compute-0 ceph-mon[74477]: pgmap v3478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:47 compute-0 ceph-mon[74477]: pgmap v3479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:48 compute-0 ceph-mon[74477]: pgmap v3480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:49 compute-0 nova_compute[260603]: 2025-10-02 09:33:49.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.254913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629254966, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1769, "num_deletes": 251, "total_data_size": 2894977, "memory_usage": 2937568, "flush_reason": "Manual Compaction"}
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629417353, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2823286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70847, "largest_seqno": 72615, "table_properties": {"data_size": 2815151, "index_size": 4948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16461, "raw_average_key_size": 19, "raw_value_size": 2798965, "raw_average_value_size": 3396, "num_data_blocks": 221, "num_entries": 824, "num_filter_entries": 824, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397441, "oldest_key_time": 1759397441, "file_creation_time": 1759397629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 162484 microseconds, and 11048 cpu microseconds.
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.417397) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2823286 bytes OK
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.417419) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.523043) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.523093) EVENT_LOG_v1 {"time_micros": 1759397629523082, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.523123) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2887431, prev total WAL file size 2887431, number of live WAL files 2.
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.524727) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2757KB)], [170(9684KB)]
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629524803, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12739806, "oldest_snapshot_seqno": -1}
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8859 keys, 11023086 bytes, temperature: kUnknown
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629654961, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11023086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10966180, "index_size": 33625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 232827, "raw_average_key_size": 26, "raw_value_size": 10810505, "raw_average_value_size": 1220, "num_data_blocks": 1301, "num_entries": 8859, "num_filter_entries": 8859, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.655236) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11023086 bytes
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.656733) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.8 rd, 84.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.5 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 9373, records dropped: 514 output_compression: NoCompression
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.656770) EVENT_LOG_v1 {"time_micros": 1759397629656760, "job": 106, "event": "compaction_finished", "compaction_time_micros": 130261, "compaction_time_cpu_micros": 50189, "output_level": 6, "num_output_files": 1, "total_output_size": 11023086, "num_input_records": 9373, "num_output_records": 8859, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629657641, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629660005, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.524654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:33:49 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:33:50 compute-0 nova_compute[260603]: 2025-10-02 09:33:50.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:51 compute-0 ceph-mon[74477]: pgmap v3481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:53 compute-0 ceph-mon[74477]: pgmap v3482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:54 compute-0 nova_compute[260603]: 2025-10-02 09:33:54.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:54 compute-0 nova_compute[260603]: 2025-10-02 09:33:54.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:33:54 compute-0 nova_compute[260603]: 2025-10-02 09:33:54.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:33:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:55 compute-0 nova_compute[260603]: 2025-10-02 09:33:55.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:55 compute-0 ceph-mon[74477]: pgmap v3483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:33:55 compute-0 podman[451527]: 2025-10-02 09:33:55.990575433 +0000 UTC m=+0.051292293 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:33:56 compute-0 podman[451526]: 2025-10-02 09:33:56.013776837 +0000 UTC m=+0.077117180 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 09:33:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:57 compute-0 ceph-mon[74477]: pgmap v3484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:33:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:33:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:33:59 compute-0 nova_compute[260603]: 2025-10-02 09:33:59.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:33:59 compute-0 ceph-mon[74477]: pgmap v3485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:00 compute-0 podman[451569]: 2025-10-02 09:34:00.005462185 +0000 UTC m=+0.063044960 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:34:00 compute-0 podman[451570]: 2025-10-02 09:34:00.018068029 +0000 UTC m=+0.078470992 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 09:34:00 compute-0 nova_compute[260603]: 2025-10-02 09:34:00.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:01 compute-0 ceph-mon[74477]: pgmap v3486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:03 compute-0 ceph-mon[74477]: pgmap v3487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:04 compute-0 nova_compute[260603]: 2025-10-02 09:34:04.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:05 compute-0 nova_compute[260603]: 2025-10-02 09:34:05.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:05 compute-0 ceph-mon[74477]: pgmap v3488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:06 compute-0 ceph-mon[74477]: pgmap v3489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:09 compute-0 nova_compute[260603]: 2025-10-02 09:34:09.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:10 compute-0 ceph-mon[74477]: pgmap v3490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:10 compute-0 nova_compute[260603]: 2025-10-02 09:34:10.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:11 compute-0 ceph-mon[74477]: pgmap v3491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:12 compute-0 sudo[451610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:12 compute-0 sudo[451610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:12 compute-0 sudo[451610]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:12 compute-0 sudo[451635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:34:12 compute-0 sudo[451635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:12 compute-0 sudo[451635]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:12 compute-0 sudo[451660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:12 compute-0 sudo[451660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:12 compute-0 sudo[451660]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:12 compute-0 sudo[451685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:34:12 compute-0 sudo[451685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:13 compute-0 sudo[451685]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:34:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:34:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:34:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:34:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:34:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:34:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2d215fff-7329-4955-9e0c-10627c5a07a0 does not exist
Oct 02 09:34:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a4c1bfa7-ab4e-47c1-b414-4b605c236a09 does not exist
Oct 02 09:34:13 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev bbacac02-c5f8-4e59-b2a6-6354c00e0117 does not exist
Oct 02 09:34:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:34:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:34:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:34:13 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:34:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:34:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:34:13 compute-0 sudo[451741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:13 compute-0 sudo[451741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:13 compute-0 sudo[451741]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:13 compute-0 sudo[451766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:34:13 compute-0 sudo[451766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:13 compute-0 sudo[451766]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:13 compute-0 sudo[451791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:13 compute-0 sudo[451791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:13 compute-0 sudo[451791]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:13 compute-0 sudo[451816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:34:13 compute-0 sudo[451816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:13 compute-0 ceph-mon[74477]: pgmap v3492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:34:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:34:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:34:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:34:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:34:13 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:34:14 compute-0 podman[451884]: 2025-10-02 09:34:14.092151956 +0000 UTC m=+0.115121757 container create 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 09:34:14 compute-0 podman[451884]: 2025-10-02 09:34:14.002239288 +0000 UTC m=+0.025209109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:34:14 compute-0 nova_compute[260603]: 2025-10-02 09:34:14.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:14 compute-0 systemd[1]: Started libpod-conmon-9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5.scope.
Oct 02 09:34:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:34:14 compute-0 podman[451884]: 2025-10-02 09:34:14.237098744 +0000 UTC m=+0.260068555 container init 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:34:14 compute-0 podman[451884]: 2025-10-02 09:34:14.24595004 +0000 UTC m=+0.268919831 container start 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:34:14 compute-0 tender_jang[451901]: 167 167
Oct 02 09:34:14 compute-0 systemd[1]: libpod-9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5.scope: Deactivated successfully.
Oct 02 09:34:14 compute-0 podman[451884]: 2025-10-02 09:34:14.260203255 +0000 UTC m=+0.283173066 container attach 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 09:34:14 compute-0 podman[451884]: 2025-10-02 09:34:14.260706191 +0000 UTC m=+0.283675982 container died 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b94b8253487b55c725807ecaef1cbeb9890902276c796638ad1418de45e39cea-merged.mount: Deactivated successfully.
Oct 02 09:34:14 compute-0 podman[451884]: 2025-10-02 09:34:14.462218525 +0000 UTC m=+0.485188316 container remove 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:34:14 compute-0 systemd[1]: libpod-conmon-9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5.scope: Deactivated successfully.
Oct 02 09:34:14 compute-0 podman[451927]: 2025-10-02 09:34:14.625594028 +0000 UTC m=+0.023074232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:34:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:14 compute-0 podman[451927]: 2025-10-02 09:34:14.727239693 +0000 UTC m=+0.124719897 container create 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:34:14 compute-0 systemd[1]: Started libpod-conmon-1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e.scope.
Oct 02 09:34:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:14 compute-0 podman[451927]: 2025-10-02 09:34:14.94231168 +0000 UTC m=+0.339791904 container init 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:34:14 compute-0 podman[451927]: 2025-10-02 09:34:14.950391273 +0000 UTC m=+0.347871467 container start 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 09:34:15 compute-0 ceph-mon[74477]: pgmap v3493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:15 compute-0 podman[451927]: 2025-10-02 09:34:15.04699636 +0000 UTC m=+0.444476574 container attach 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:34:15 compute-0 nova_compute[260603]: 2025-10-02 09:34:15.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:15 compute-0 focused_cerf[451944]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:34:15 compute-0 focused_cerf[451944]: --> relative data size: 1.0
Oct 02 09:34:15 compute-0 focused_cerf[451944]: --> All data devices are unavailable
Oct 02 09:34:15 compute-0 systemd[1]: libpod-1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e.scope: Deactivated successfully.
Oct 02 09:34:15 compute-0 podman[451927]: 2025-10-02 09:34:15.950731198 +0000 UTC m=+1.348211402 container died 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:34:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4-merged.mount: Deactivated successfully.
Oct 02 09:34:16 compute-0 podman[451927]: 2025-10-02 09:34:16.053979933 +0000 UTC m=+1.451460147 container remove 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:34:16 compute-0 systemd[1]: libpod-conmon-1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e.scope: Deactivated successfully.
Oct 02 09:34:16 compute-0 sudo[451816]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:16 compute-0 sudo[451987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:16 compute-0 sudo[451987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:16 compute-0 sudo[451987]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:16 compute-0 sudo[452012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:34:16 compute-0 sudo[452012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:16 compute-0 sudo[452012]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:16 compute-0 sudo[452037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:16 compute-0 sudo[452037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:16 compute-0 sudo[452037]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:16 compute-0 sudo[452062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:34:16 compute-0 sudo[452062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:16 compute-0 podman[452128]: 2025-10-02 09:34:16.708245019 +0000 UTC m=+0.043132798 container create 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 09:34:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:16 compute-0 systemd[1]: Started libpod-conmon-2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137.scope.
Oct 02 09:34:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:34:16 compute-0 podman[452128]: 2025-10-02 09:34:16.784922424 +0000 UTC m=+0.119810203 container init 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:34:16 compute-0 podman[452128]: 2025-10-02 09:34:16.691434203 +0000 UTC m=+0.026322002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:34:16 compute-0 podman[452128]: 2025-10-02 09:34:16.791599392 +0000 UTC m=+0.126487171 container start 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:34:16 compute-0 wonderful_pascal[452145]: 167 167
Oct 02 09:34:16 compute-0 podman[452128]: 2025-10-02 09:34:16.79474136 +0000 UTC m=+0.129629159 container attach 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:34:16 compute-0 systemd[1]: libpod-2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137.scope: Deactivated successfully.
Oct 02 09:34:16 compute-0 podman[452128]: 2025-10-02 09:34:16.797053523 +0000 UTC m=+0.131941302 container died 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:34:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-66e1b03d6eac417d121196692221215d785c4ae72a9c0a5d9a4a02772b510fd0-merged.mount: Deactivated successfully.
Oct 02 09:34:16 compute-0 podman[452128]: 2025-10-02 09:34:16.836473783 +0000 UTC m=+0.171361562 container remove 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:34:16 compute-0 systemd[1]: libpod-conmon-2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137.scope: Deactivated successfully.
Oct 02 09:34:17 compute-0 podman[452169]: 2025-10-02 09:34:17.008152836 +0000 UTC m=+0.043815499 container create 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 09:34:17 compute-0 systemd[1]: Started libpod-conmon-8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62.scope.
Oct 02 09:34:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:17 compute-0 podman[452169]: 2025-10-02 09:34:16.98968808 +0000 UTC m=+0.025350763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:34:17 compute-0 podman[452169]: 2025-10-02 09:34:17.091133858 +0000 UTC m=+0.126796541 container init 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:34:17 compute-0 podman[452169]: 2025-10-02 09:34:17.098491858 +0000 UTC m=+0.134154521 container start 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:34:17 compute-0 podman[452169]: 2025-10-02 09:34:17.101848192 +0000 UTC m=+0.137510885 container attach 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:34:17 compute-0 ceph-mon[74477]: pgmap v3494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:17 compute-0 awesome_ellis[452185]: {
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:     "0": [
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:         {
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "devices": [
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "/dev/loop3"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             ],
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_name": "ceph_lv0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_size": "21470642176",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "name": "ceph_lv0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "tags": {
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cluster_name": "ceph",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.crush_device_class": "",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.encrypted": "0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osd_id": "0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.type": "block",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.vdo": "0"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             },
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "type": "block",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "vg_name": "ceph_vg0"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:         }
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:     ],
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:     "1": [
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:         {
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "devices": [
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "/dev/loop4"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             ],
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_name": "ceph_lv1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_size": "21470642176",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "name": "ceph_lv1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "tags": {
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cluster_name": "ceph",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.crush_device_class": "",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.encrypted": "0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osd_id": "1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.type": "block",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.vdo": "0"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             },
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "type": "block",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "vg_name": "ceph_vg1"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:         }
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:     ],
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:     "2": [
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:         {
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "devices": [
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "/dev/loop5"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             ],
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_name": "ceph_lv2",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_size": "21470642176",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "name": "ceph_lv2",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "tags": {
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.cluster_name": "ceph",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.crush_device_class": "",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.encrypted": "0",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osd_id": "2",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.type": "block",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:                 "ceph.vdo": "0"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             },
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "type": "block",
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:             "vg_name": "ceph_vg2"
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:         }
Oct 02 09:34:17 compute-0 awesome_ellis[452185]:     ]
Oct 02 09:34:17 compute-0 awesome_ellis[452185]: }
Oct 02 09:34:17 compute-0 systemd[1]: libpod-8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62.scope: Deactivated successfully.
Oct 02 09:34:17 compute-0 podman[452169]: 2025-10-02 09:34:17.862573663 +0000 UTC m=+0.898236336 container died 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 09:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1-merged.mount: Deactivated successfully.
Oct 02 09:34:17 compute-0 podman[452169]: 2025-10-02 09:34:17.916158728 +0000 UTC m=+0.951821391 container remove 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:34:17 compute-0 systemd[1]: libpod-conmon-8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62.scope: Deactivated successfully.
Oct 02 09:34:17 compute-0 sudo[452062]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:18 compute-0 sudo[452206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:18 compute-0 sudo[452206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:18 compute-0 sudo[452206]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:18 compute-0 sudo[452231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:34:18 compute-0 sudo[452231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:18 compute-0 sudo[452231]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:18 compute-0 sudo[452256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:18 compute-0 sudo[452256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:18 compute-0 sudo[452256]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:18 compute-0 sudo[452281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:34:18 compute-0 sudo[452281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:18 compute-0 podman[452346]: 2025-10-02 09:34:18.509112868 +0000 UTC m=+0.042629333 container create 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:34:18 compute-0 systemd[1]: Started libpod-conmon-503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb.scope.
Oct 02 09:34:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:34:18 compute-0 podman[452346]: 2025-10-02 09:34:18.491964993 +0000 UTC m=+0.025481478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:34:18 compute-0 podman[452346]: 2025-10-02 09:34:18.595516737 +0000 UTC m=+0.129033212 container init 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 09:34:18 compute-0 podman[452346]: 2025-10-02 09:34:18.602699921 +0000 UTC m=+0.136216386 container start 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 09:34:18 compute-0 ecstatic_wilson[452362]: 167 167
Oct 02 09:34:18 compute-0 podman[452346]: 2025-10-02 09:34:18.609799133 +0000 UTC m=+0.143315598 container attach 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 02 09:34:18 compute-0 systemd[1]: libpod-503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb.scope: Deactivated successfully.
Oct 02 09:34:18 compute-0 podman[452346]: 2025-10-02 09:34:18.610270248 +0000 UTC m=+0.143786723 container died 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-908585e7c5114e8ec5f58af0386fabea4ef3301af6b083c5e8b92182cea0cb7c-merged.mount: Deactivated successfully.
Oct 02 09:34:18 compute-0 podman[452346]: 2025-10-02 09:34:18.652524947 +0000 UTC m=+0.186041412 container remove 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:34:18 compute-0 systemd[1]: libpod-conmon-503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb.scope: Deactivated successfully.
Oct 02 09:34:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:18 compute-0 podman[452384]: 2025-10-02 09:34:18.808821819 +0000 UTC m=+0.037815522 container create 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 09:34:18 compute-0 systemd[1]: Started libpod-conmon-482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da.scope.
Oct 02 09:34:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:34:18 compute-0 podman[452384]: 2025-10-02 09:34:18.888121176 +0000 UTC m=+0.117114899 container init 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:34:18 compute-0 podman[452384]: 2025-10-02 09:34:18.793684036 +0000 UTC m=+0.022677759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:34:18 compute-0 podman[452384]: 2025-10-02 09:34:18.896706884 +0000 UTC m=+0.125700587 container start 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:34:18 compute-0 podman[452384]: 2025-10-02 09:34:18.899959106 +0000 UTC m=+0.128952809 container attach 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:34:19 compute-0 nova_compute[260603]: 2025-10-02 09:34:19.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:19 compute-0 ceph-mon[74477]: pgmap v3495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:19 compute-0 quirky_kepler[452400]: {
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "osd_id": 2,
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "type": "bluestore"
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:     },
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "osd_id": 1,
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "type": "bluestore"
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:     },
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "osd_id": 0,
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:         "type": "bluestore"
Oct 02 09:34:19 compute-0 quirky_kepler[452400]:     }
Oct 02 09:34:19 compute-0 quirky_kepler[452400]: }
Oct 02 09:34:19 compute-0 systemd[1]: libpod-482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da.scope: Deactivated successfully.
Oct 02 09:34:19 compute-0 podman[452433]: 2025-10-02 09:34:19.917607051 +0000 UTC m=+0.022351719 container died 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8-merged.mount: Deactivated successfully.
Oct 02 09:34:19 compute-0 podman[452433]: 2025-10-02 09:34:19.962834734 +0000 UTC m=+0.067579382 container remove 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:34:19 compute-0 systemd[1]: libpod-conmon-482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da.scope: Deactivated successfully.
Oct 02 09:34:20 compute-0 sudo[452281]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:34:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:34:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:34:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:34:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c71643c5-2d88-442e-b92e-9ad9cf6f8aec does not exist
Oct 02 09:34:20 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c33790ac-f923-41ba-b7bf-cecb07385bba does not exist
Oct 02 09:34:20 compute-0 sudo[452448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:34:20 compute-0 sudo[452448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:20 compute-0 sudo[452448]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:20 compute-0 sudo[452473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:34:20 compute-0 sudo[452473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:34:20 compute-0 sudo[452473]: pam_unix(sudo:session): session closed for user root
Oct 02 09:34:20 compute-0 nova_compute[260603]: 2025-10-02 09:34:20.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:34:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:34:21 compute-0 ceph-mon[74477]: pgmap v3496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:34:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141865596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:34:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:34:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141865596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:34:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2141865596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:34:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2141865596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:34:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:23 compute-0 ceph-mon[74477]: pgmap v3497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:23 compute-0 nova_compute[260603]: 2025-10-02 09:34:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:24 compute-0 nova_compute[260603]: 2025-10-02 09:34:24.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:24 compute-0 nova_compute[260603]: 2025-10-02 09:34:24.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:25 compute-0 nova_compute[260603]: 2025-10-02 09:34:25.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:25 compute-0 ceph-mon[74477]: pgmap v3498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:26 compute-0 nova_compute[260603]: 2025-10-02 09:34:26.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:27 compute-0 podman[452499]: 2025-10-02 09:34:27.014702445 +0000 UTC m=+0.072318450 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:34:27 compute-0 podman[452498]: 2025-10-02 09:34:27.054650083 +0000 UTC m=+0.111692590 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 02 09:34:27 compute-0 nova_compute[260603]: 2025-10-02 09:34:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:27 compute-0 nova_compute[260603]: 2025-10-02 09:34:27.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:34:27 compute-0 nova_compute[260603]: 2025-10-02 09:34:27.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:34:27 compute-0 nova_compute[260603]: 2025-10-02 09:34:27.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:34:27 compute-0 ceph-mon[74477]: pgmap v3499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:34:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:34:28
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:34:28 compute-0 nova_compute[260603]: 2025-10-02 09:34:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:28 compute-0 nova_compute[260603]: 2025-10-02 09:34:28.567 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:34:28 compute-0 nova_compute[260603]: 2025-10-02 09:34:28.568 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:34:28 compute-0 nova_compute[260603]: 2025-10-02 09:34:28.569 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:34:28 compute-0 nova_compute[260603]: 2025-10-02 09:34:28.569 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:34:28 compute-0 nova_compute[260603]: 2025-10-02 09:34:28.570 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:34:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:28 compute-0 ceph-mon[74477]: pgmap v3500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:34:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2411532253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.070 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.223 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.224 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.224 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.224 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.571 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.572 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:34:29 compute-0 nova_compute[260603]: 2025-10-02 09:34:29.620 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:34:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:34:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601945178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:34:30 compute-0 nova_compute[260603]: 2025-10-02 09:34:30.044 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:34:30 compute-0 nova_compute[260603]: 2025-10-02 09:34:30.065 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:34:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2411532253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:34:30 compute-0 nova_compute[260603]: 2025-10-02 09:34:30.097 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:34:30 compute-0 nova_compute[260603]: 2025-10-02 09:34:30.099 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:34:30 compute-0 nova_compute[260603]: 2025-10-02 09:34:30.099 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:34:30 compute-0 nova_compute[260603]: 2025-10-02 09:34:30.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:31 compute-0 podman[452586]: 2025-10-02 09:34:31.001780598 +0000 UTC m=+0.053579574 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:34:31 compute-0 podman[452585]: 2025-10-02 09:34:31.003320006 +0000 UTC m=+0.058178128 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 09:34:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/601945178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:34:31 compute-0 ceph-mon[74477]: pgmap v3501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:33 compute-0 ceph-mon[74477]: pgmap v3502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:34 compute-0 nova_compute[260603]: 2025-10-02 09:34:34.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:34:34.873 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:34:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:34:34.873 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:34:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:34:34.874 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:34:35 compute-0 nova_compute[260603]: 2025-10-02 09:34:35.096 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:35 compute-0 nova_compute[260603]: 2025-10-02 09:34:35.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:35 compute-0 nova_compute[260603]: 2025-10-02 09:34:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:35 compute-0 ceph-mon[74477]: pgmap v3503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:37 compute-0 ceph-mon[74477]: pgmap v3504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:37 compute-0 nova_compute[260603]: 2025-10-02 09:34:37.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:37 compute-0 nova_compute[260603]: 2025-10-02 09:34:37.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:34:37 compute-0 nova_compute[260603]: 2025-10-02 09:34:37.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:34:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:39 compute-0 nova_compute[260603]: 2025-10-02 09:34:39.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:34:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:34:39 compute-0 ceph-mon[74477]: pgmap v3505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:40 compute-0 nova_compute[260603]: 2025-10-02 09:34:40.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:41 compute-0 nova_compute[260603]: 2025-10-02 09:34:41.547 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:41 compute-0 ceph-mon[74477]: pgmap v3506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:43 compute-0 ceph-mon[74477]: pgmap v3507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:44 compute-0 nova_compute[260603]: 2025-10-02 09:34:44.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:45 compute-0 nova_compute[260603]: 2025-10-02 09:34:45.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:45 compute-0 ceph-mon[74477]: pgmap v3508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:47 compute-0 ceph-mon[74477]: pgmap v3509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:49 compute-0 nova_compute[260603]: 2025-10-02 09:34:49.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:49 compute-0 ceph-mon[74477]: pgmap v3510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:50 compute-0 nova_compute[260603]: 2025-10-02 09:34:50.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:51 compute-0 ceph-mon[74477]: pgmap v3511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:53 compute-0 ceph-mon[74477]: pgmap v3512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:54 compute-0 nova_compute[260603]: 2025-10-02 09:34:54.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:55 compute-0 nova_compute[260603]: 2025-10-02 09:34:55.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:55 compute-0 nova_compute[260603]: 2025-10-02 09:34:55.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:55 compute-0 nova_compute[260603]: 2025-10-02 09:34:55.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:34:55 compute-0 nova_compute[260603]: 2025-10-02 09:34:55.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:34:55 compute-0 ceph-mon[74477]: pgmap v3513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:34:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:57 compute-0 ceph-mon[74477]: pgmap v3514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:34:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:34:57 compute-0 podman[452623]: 2025-10-02 09:34:57.990411561 +0000 UTC m=+0.059100096 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:34:58 compute-0 podman[452622]: 2025-10-02 09:34:58.04509482 +0000 UTC m=+0.109360327 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:34:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:34:59 compute-0 nova_compute[260603]: 2025-10-02 09:34:59.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:34:59 compute-0 ceph-mon[74477]: pgmap v3515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:00 compute-0 nova_compute[260603]: 2025-10-02 09:35:00.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:00 compute-0 ceph-mon[74477]: pgmap v3516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:01 compute-0 podman[452668]: 2025-10-02 09:35:01.986177958 +0000 UTC m=+0.054595827 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 09:35:01 compute-0 podman[452667]: 2025-10-02 09:35:01.998898235 +0000 UTC m=+0.067154998 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true)
Oct 02 09:35:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:03 compute-0 ceph-mon[74477]: pgmap v3517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:04 compute-0 nova_compute[260603]: 2025-10-02 09:35:04.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:05 compute-0 ceph-mon[74477]: pgmap v3518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:05 compute-0 nova_compute[260603]: 2025-10-02 09:35:05.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:06 compute-0 nova_compute[260603]: 2025-10-02 09:35:06.991 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:06 compute-0 nova_compute[260603]: 2025-10-02 09:35:06.992 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:35:07 compute-0 ceph-mon[74477]: pgmap v3519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:09 compute-0 nova_compute[260603]: 2025-10-02 09:35:09.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:09 compute-0 ceph-mon[74477]: pgmap v3520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:10 compute-0 nova_compute[260603]: 2025-10-02 09:35:10.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:11 compute-0 ceph-mon[74477]: pgmap v3521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:13 compute-0 ceph-mon[74477]: pgmap v3522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:14 compute-0 nova_compute[260603]: 2025-10-02 09:35:14.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:14 compute-0 ceph-mon[74477]: pgmap v3523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:15 compute-0 nova_compute[260603]: 2025-10-02 09:35:15.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:17 compute-0 ceph-mon[74477]: pgmap v3524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:19 compute-0 nova_compute[260603]: 2025-10-02 09:35:19.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:19 compute-0 ceph-mon[74477]: pgmap v3525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:20 compute-0 sudo[452708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:20 compute-0 sudo[452708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:20 compute-0 sudo[452708]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:20 compute-0 sudo[452733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:35:20 compute-0 sudo[452733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:20 compute-0 sudo[452733]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:20 compute-0 sudo[452758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:20 compute-0 sudo[452758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:20 compute-0 sudo[452758]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:20 compute-0 sudo[452783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:35:20 compute-0 sudo[452783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:20 compute-0 nova_compute[260603]: 2025-10-02 09:35:20.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:20 compute-0 sudo[452783]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:35:20 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:35:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:35:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:35:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:35:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:35:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e76e2339-0dfb-449e-a0cb-e01ce31433c8 does not exist
Oct 02 09:35:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 85749207-8b0e-45e9-8bf6-67df3a6326d4 does not exist
Oct 02 09:35:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b201a201-87a1-4a65-832a-d1ce62870df2 does not exist
Oct 02 09:35:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:35:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:35:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:35:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:35:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:35:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:35:21 compute-0 sudo[452840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:21 compute-0 sudo[452840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:21 compute-0 sudo[452840]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:21 compute-0 sudo[452865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:35:21 compute-0 sudo[452865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:21 compute-0 sudo[452865]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:21 compute-0 sudo[452890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:21 compute-0 sudo[452890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:21 compute-0 sudo[452890]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:21 compute-0 sudo[452915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:35:21 compute-0 sudo[452915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:21 compute-0 podman[452980]: 2025-10-02 09:35:21.574557453 +0000 UTC m=+0.037206453 container create ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:35:21 compute-0 systemd[1]: Started libpod-conmon-ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a.scope.
Oct 02 09:35:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:35:21 compute-0 podman[452980]: 2025-10-02 09:35:21.645940723 +0000 UTC m=+0.108589813 container init ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 09:35:21 compute-0 podman[452980]: 2025-10-02 09:35:21.653726066 +0000 UTC m=+0.116375066 container start ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:35:21 compute-0 podman[452980]: 2025-10-02 09:35:21.559316347 +0000 UTC m=+0.021965377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:35:21 compute-0 podman[452980]: 2025-10-02 09:35:21.6570289 +0000 UTC m=+0.119677920 container attach ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:35:21 compute-0 vigilant_morse[452997]: 167 167
Oct 02 09:35:21 compute-0 systemd[1]: libpod-ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a.scope: Deactivated successfully.
Oct 02 09:35:21 compute-0 podman[452980]: 2025-10-02 09:35:21.658347091 +0000 UTC m=+0.120996101 container died ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 09:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e3de6dc1ccc7838e91d482230c66278116cb48bb4088bac92b34ae8db20b6e3-merged.mount: Deactivated successfully.
Oct 02 09:35:21 compute-0 podman[452980]: 2025-10-02 09:35:21.69611673 +0000 UTC m=+0.158765730 container remove ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 02 09:35:21 compute-0 systemd[1]: libpod-conmon-ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a.scope: Deactivated successfully.
Oct 02 09:35:21 compute-0 ceph-mon[74477]: pgmap v3526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:35:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:35:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:35:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:35:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:35:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:35:21 compute-0 podman[453022]: 2025-10-02 09:35:21.839716005 +0000 UTC m=+0.039218525 container create 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:35:21 compute-0 systemd[1]: Started libpod-conmon-80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765.scope.
Oct 02 09:35:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:21 compute-0 podman[453022]: 2025-10-02 09:35:21.823759817 +0000 UTC m=+0.023262357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:35:21 compute-0 podman[453022]: 2025-10-02 09:35:21.927455356 +0000 UTC m=+0.126957876 container init 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 09:35:21 compute-0 podman[453022]: 2025-10-02 09:35:21.933362011 +0000 UTC m=+0.132864531 container start 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:35:21 compute-0 podman[453022]: 2025-10-02 09:35:21.942285519 +0000 UTC m=+0.141788059 container attach 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 09:35:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:35:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/324418563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:35:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:35:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/324418563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:35:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/324418563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:35:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/324418563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:35:22 compute-0 infallible_dubinsky[453039]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:35:22 compute-0 infallible_dubinsky[453039]: --> relative data size: 1.0
Oct 02 09:35:22 compute-0 infallible_dubinsky[453039]: --> All data devices are unavailable
Oct 02 09:35:22 compute-0 systemd[1]: libpod-80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765.scope: Deactivated successfully.
Oct 02 09:35:22 compute-0 podman[453068]: 2025-10-02 09:35:22.980043173 +0000 UTC m=+0.024593849 container died 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3-merged.mount: Deactivated successfully.
Oct 02 09:35:23 compute-0 podman[453068]: 2025-10-02 09:35:23.067152354 +0000 UTC m=+0.111703010 container remove 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 09:35:23 compute-0 systemd[1]: libpod-conmon-80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765.scope: Deactivated successfully.
Oct 02 09:35:23 compute-0 sudo[452915]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:23 compute-0 sudo[453083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:23 compute-0 sudo[453083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:23 compute-0 sudo[453083]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:23 compute-0 sudo[453108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:35:23 compute-0 sudo[453108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:23 compute-0 sudo[453108]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:23 compute-0 sudo[453133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:23 compute-0 sudo[453133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:23 compute-0 sudo[453133]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:23 compute-0 sudo[453158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:35:23 compute-0 sudo[453158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:23 compute-0 podman[453224]: 2025-10-02 09:35:23.792306943 +0000 UTC m=+0.075047984 container create 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:35:23 compute-0 podman[453224]: 2025-10-02 09:35:23.746280006 +0000 UTC m=+0.029021067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:35:23 compute-0 systemd[1]: Started libpod-conmon-7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6.scope.
Oct 02 09:35:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:35:23 compute-0 ceph-mon[74477]: pgmap v3527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:24 compute-0 podman[453224]: 2025-10-02 09:35:24.003912743 +0000 UTC m=+0.286653814 container init 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:35:24 compute-0 podman[453224]: 2025-10-02 09:35:24.012494552 +0000 UTC m=+0.295235593 container start 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 09:35:24 compute-0 hungry_carson[453241]: 167 167
Oct 02 09:35:24 compute-0 systemd[1]: libpod-7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6.scope: Deactivated successfully.
Oct 02 09:35:24 compute-0 podman[453224]: 2025-10-02 09:35:24.088361441 +0000 UTC m=+0.371102502 container attach 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:35:24 compute-0 podman[453224]: 2025-10-02 09:35:24.0902337 +0000 UTC m=+0.372974781 container died 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d047a76148bd1661be28a6077230d891a68a8c5e5f319229e3acc546d5c5e291-merged.mount: Deactivated successfully.
Oct 02 09:35:24 compute-0 podman[453224]: 2025-10-02 09:35:24.167571015 +0000 UTC m=+0.450312066 container remove 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 09:35:24 compute-0 systemd[1]: libpod-conmon-7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6.scope: Deactivated successfully.
Oct 02 09:35:24 compute-0 nova_compute[260603]: 2025-10-02 09:35:24.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:24 compute-0 podman[453267]: 2025-10-02 09:35:24.368054187 +0000 UTC m=+0.050102696 container create 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:35:24 compute-0 systemd[1]: Started libpod-conmon-70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09.scope.
Oct 02 09:35:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:24 compute-0 podman[453267]: 2025-10-02 09:35:24.44050014 +0000 UTC m=+0.122548689 container init 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:35:24 compute-0 podman[453267]: 2025-10-02 09:35:24.349139237 +0000 UTC m=+0.031187796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:35:24 compute-0 podman[453267]: 2025-10-02 09:35:24.448945424 +0000 UTC m=+0.130993933 container start 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 09:35:24 compute-0 podman[453267]: 2025-10-02 09:35:24.451588426 +0000 UTC m=+0.133636965 container attach 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:35:24 compute-0 nova_compute[260603]: 2025-10-02 09:35:24.542 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:35:24.937 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:35:24 compute-0 nova_compute[260603]: 2025-10-02 09:35:24.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:24 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:35:24.941 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:35:24 compute-0 sshd-session[453288]: Accepted publickey for zuul from 192.168.122.30 port 51664 ssh2: ECDSA SHA256:QEnwbgBR1jglQLPp4vwsTS2MMzDakrR2dLJ/eEaCKUI
Oct 02 09:35:24 compute-0 ceph-mon[74477]: pgmap v3528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:24 compute-0 systemd-logind[787]: New session 55 of user zuul.
Oct 02 09:35:25 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 02 09:35:25 compute-0 sshd-session[453288]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 09:35:25 compute-0 cool_bartik[453283]: {
Oct 02 09:35:25 compute-0 cool_bartik[453283]:     "0": [
Oct 02 09:35:25 compute-0 cool_bartik[453283]:         {
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "devices": [
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "/dev/loop3"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             ],
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_name": "ceph_lv0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_size": "21470642176",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "name": "ceph_lv0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "tags": {
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cluster_name": "ceph",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.crush_device_class": "",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.encrypted": "0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osd_id": "0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.type": "block",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.vdo": "0"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             },
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "type": "block",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "vg_name": "ceph_vg0"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:         }
Oct 02 09:35:25 compute-0 cool_bartik[453283]:     ],
Oct 02 09:35:25 compute-0 cool_bartik[453283]:     "1": [
Oct 02 09:35:25 compute-0 cool_bartik[453283]:         {
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "devices": [
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "/dev/loop4"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             ],
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_name": "ceph_lv1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_size": "21470642176",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "name": "ceph_lv1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "tags": {
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cluster_name": "ceph",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.crush_device_class": "",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.encrypted": "0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osd_id": "1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.type": "block",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.vdo": "0"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             },
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "type": "block",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "vg_name": "ceph_vg1"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:         }
Oct 02 09:35:25 compute-0 cool_bartik[453283]:     ],
Oct 02 09:35:25 compute-0 cool_bartik[453283]:     "2": [
Oct 02 09:35:25 compute-0 cool_bartik[453283]:         {
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "devices": [
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "/dev/loop5"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             ],
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_name": "ceph_lv2",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_size": "21470642176",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "name": "ceph_lv2",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "tags": {
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.cluster_name": "ceph",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.crush_device_class": "",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.encrypted": "0",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osd_id": "2",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.type": "block",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:                 "ceph.vdo": "0"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             },
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "type": "block",
Oct 02 09:35:25 compute-0 cool_bartik[453283]:             "vg_name": "ceph_vg2"
Oct 02 09:35:25 compute-0 cool_bartik[453283]:         }
Oct 02 09:35:25 compute-0 cool_bartik[453283]:     ]
Oct 02 09:35:25 compute-0 cool_bartik[453283]: }
Oct 02 09:35:25 compute-0 systemd[1]: libpod-70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09.scope: Deactivated successfully.
Oct 02 09:35:25 compute-0 podman[453267]: 2025-10-02 09:35:25.222432343 +0000 UTC m=+0.904480872 container died 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0-merged.mount: Deactivated successfully.
Oct 02 09:35:25 compute-0 podman[453267]: 2025-10-02 09:35:25.308620216 +0000 UTC m=+0.990668725 container remove 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:35:25 compute-0 sudo[453376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Oct 02 09:35:25 compute-0 sudo[453376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 systemd[1]: libpod-conmon-70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09.scope: Deactivated successfully.
Oct 02 09:35:25 compute-0 sudo[453376]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453158]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:25 compute-0 sudo[453379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:25 compute-0 sudo[453379]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Oct 02 09:35:25 compute-0 sudo[453425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 groupadd[453430]: group added to /etc/group: name=podman, GID=42479
Oct 02 09:35:25 compute-0 groupadd[453430]: group added to /etc/gshadow: name=podman
Oct 02 09:35:25 compute-0 groupadd[453430]: new group: name=podman, GID=42479
Oct 02 09:35:25 compute-0 sudo[453425]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:35:25 compute-0 sudo[453429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:25 compute-0 sudo[453429]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453459]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Oct 02 09:35:25 compute-0 sudo[453459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 usermod[453466]: add 'zuul' to group 'podman'
Oct 02 09:35:25 compute-0 usermod[453466]: add 'zuul' to shadow group 'podman'
Oct 02 09:35:25 compute-0 nova_compute[260603]: 2025-10-02 09:35:25.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:25 compute-0 nova_compute[260603]: 2025-10-02 09:35:25.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:25 compute-0 sudo[453459]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:25 compute-0 sudo[453461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:25 compute-0 sudo[453461]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Oct 02 09:35:25 compute-0 sudo[453493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 sudo[453493]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Oct 02 09:35:25 compute-0 sudo[453502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 sudo[453502]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:35:25 compute-0 sudo[453496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:25 compute-0 sudo[453523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Oct 02 09:35:25 compute-0 sudo[453523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 sudo[453523]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453528]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Oct 02 09:35:25 compute-0 sudo[453528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 sudo[453528]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Oct 02 09:35:25 compute-0 sudo[453531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 sudo[453531]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:25 compute-0 sudo[453560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Oct 02 09:35:25 compute-0 sudo[453560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:25 compute-0 systemd[1]: Reloading.
Oct 02 09:35:25 compute-0 podman[453577]: 2025-10-02 09:35:25.910461833 +0000 UTC m=+0.041372972 container create 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:35:25 compute-0 systemd-sysv-generator[453620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 09:35:25 compute-0 systemd-rc-local-generator[453614]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 09:35:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:25 compute-0 podman[453577]: 2025-10-02 09:35:25.895373492 +0000 UTC m=+0.026284661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:35:26 compute-0 sudo[453560]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:26 compute-0 systemd[1]: Started libpod-conmon-8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0.scope.
Oct 02 09:35:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:35:26 compute-0 sudo[453627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Oct 02 09:35:26 compute-0 sudo[453627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:26 compute-0 podman[453577]: 2025-10-02 09:35:26.282739381 +0000 UTC m=+0.413650570 container init 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:35:26 compute-0 podman[453577]: 2025-10-02 09:35:26.294673484 +0000 UTC m=+0.425584623 container start 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:35:26 compute-0 podman[453577]: 2025-10-02 09:35:26.297386009 +0000 UTC m=+0.428297168 container attach 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 09:35:26 compute-0 eager_cerf[453628]: 167 167
Oct 02 09:35:26 compute-0 systemd[1]: libpod-8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0.scope: Deactivated successfully.
Oct 02 09:35:26 compute-0 podman[453577]: 2025-10-02 09:35:26.300724403 +0000 UTC m=+0.431635542 container died 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e9349e0f8777708b5ad3d61bfcd99cb21312072bbc01ce249904e1f4936db1b-merged.mount: Deactivated successfully.
Oct 02 09:35:26 compute-0 podman[453577]: 2025-10-02 09:35:26.338967318 +0000 UTC m=+0.469878457 container remove 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:35:26 compute-0 systemd[1]: libpod-conmon-8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0.scope: Deactivated successfully.
Oct 02 09:35:26 compute-0 sudo[453627]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:26 compute-0 sudo[453649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Oct 02 09:35:26 compute-0 sudo[453649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:26 compute-0 systemd[1]: Reloading.
Oct 02 09:35:26 compute-0 podman[453655]: 2025-10-02 09:35:26.508340858 +0000 UTC m=+0.038904756 container create c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:35:26 compute-0 systemd-rc-local-generator[453694]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 09:35:26 compute-0 systemd-sysv-generator[453699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 09:35:26 compute-0 podman[453655]: 2025-10-02 09:35:26.492250676 +0000 UTC m=+0.022814574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:35:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:26 compute-0 systemd[1]: Started libpod-conmon-c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502.scope.
Oct 02 09:35:26 compute-0 systemd[1]: Starting Podman API Socket...
Oct 02 09:35:26 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 02 09:35:26 compute-0 sudo[453649]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:35:26 compute-0 podman[453655]: 2025-10-02 09:35:26.847612425 +0000 UTC m=+0.378176323 container init c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:35:26 compute-0 sudo[453710]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Oct 02 09:35:26 compute-0 sudo[453710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:26 compute-0 sudo[453710]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:26 compute-0 podman[453655]: 2025-10-02 09:35:26.858137784 +0000 UTC m=+0.388701662 container start c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:35:26 compute-0 podman[453655]: 2025-10-02 09:35:26.860990493 +0000 UTC m=+0.391554421 container attach c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 09:35:26 compute-0 sudo[453714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Oct 02 09:35:26 compute-0 sudo[453714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:26 compute-0 sudo[453714]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:26 compute-0 sudo[453718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Oct 02 09:35:26 compute-0 sudo[453718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:26 compute-0 sudo[453718]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:26 compute-0 sudo[453721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Oct 02 09:35:26 compute-0 sudo[453721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:26 compute-0 sudo[453721]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:26 compute-0 sudo[453724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Oct 02 09:35:26 compute-0 sudo[453724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:26 compute-0 sudo[453724]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:26 compute-0 sudo[453727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Oct 02 09:35:26 compute-0 dbus-broker-launch[772]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct 02 09:35:26 compute-0 sudo[453727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:26 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Oct 02 09:35:26 compute-0 systemd[1]: Closed Podman API Socket.
Oct 02 09:35:26 compute-0 systemd[1]: Stopping Podman API Socket...
Oct 02 09:35:26 compute-0 systemd[1]: Starting Podman API Socket...
Oct 02 09:35:27 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 02 09:35:27 compute-0 sudo[453727]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:27 compute-0 sudo[453382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Oct 02 09:35:27 compute-0 sudo[453382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:27 compute-0 sudo[453382]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:27 compute-0 sshd-session[453733]: Accepted publickey for zuul from 192.168.122.30 port 36110 ssh2: ECDSA SHA256:QEnwbgBR1jglQLPp4vwsTS2MMzDakrR2dLJ/eEaCKUI
Oct 02 09:35:27 compute-0 systemd-logind[787]: New session 56 of user zuul.
Oct 02 09:35:27 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct 02 09:35:27 compute-0 sshd-session[453733]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 09:35:27 compute-0 systemd[1]: Starting Podman API Service...
Oct 02 09:35:27 compute-0 systemd[1]: Started Podman API Service.
Oct 02 09:35:27 compute-0 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 02 09:35:27 compute-0 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="Setting parallel job count to 25"
Oct 02 09:35:27 compute-0 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="Using sqlite as database backend"
Oct 02 09:35:27 compute-0 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 02 09:35:27 compute-0 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 02 09:35:27 compute-0 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 02 09:35:27 compute-0 podman[453737]: @ - - [02/Oct/2025:09:35:27 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 02 09:35:27 compute-0 podman[453737]: @ - - [02/Oct/2025:09:35:27 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 29027 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 02 09:35:27 compute-0 nova_compute[260603]: 2025-10-02 09:35:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:27 compute-0 nova_compute[260603]: 2025-10-02 09:35:27.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:35:27 compute-0 nova_compute[260603]: 2025-10-02 09:35:27.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:35:27 compute-0 nova_compute[260603]: 2025-10-02 09:35:27.596 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:35:27 compute-0 ceph-mon[74477]: pgmap v3529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:27 compute-0 busy_lalande[453706]: {
Oct 02 09:35:27 compute-0 busy_lalande[453706]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "osd_id": 2,
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "type": "bluestore"
Oct 02 09:35:27 compute-0 busy_lalande[453706]:     },
Oct 02 09:35:27 compute-0 busy_lalande[453706]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "osd_id": 1,
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "type": "bluestore"
Oct 02 09:35:27 compute-0 busy_lalande[453706]:     },
Oct 02 09:35:27 compute-0 busy_lalande[453706]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "osd_id": 0,
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:35:27 compute-0 busy_lalande[453706]:         "type": "bluestore"
Oct 02 09:35:27 compute-0 busy_lalande[453706]:     }
Oct 02 09:35:27 compute-0 busy_lalande[453706]: }
Oct 02 09:35:27 compute-0 systemd[1]: libpod-c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502.scope: Deactivated successfully.
Oct 02 09:35:27 compute-0 systemd[1]: libpod-c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502.scope: Consumed 1.028s CPU time.
Oct 02 09:35:27 compute-0 podman[453655]: 2025-10-02 09:35:27.895274418 +0000 UTC m=+1.425838356 container died c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 09:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2-merged.mount: Deactivated successfully.
Oct 02 09:35:27 compute-0 podman[453655]: 2025-10-02 09:35:27.961098324 +0000 UTC m=+1.491662192 container remove c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:35:27 compute-0 systemd[1]: libpod-conmon-c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502.scope: Deactivated successfully.
Oct 02 09:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:35:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:35:27 compute-0 sudo[453496]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:35:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:35:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:35:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d3513ca8-b3b6-4f7f-9508-e0268f6b418a does not exist
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 27d81dc5-0665-4523-ae30-8ea9e84e4bb4 does not exist
Oct 02 09:35:28 compute-0 sudo[453792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:35:28 compute-0 sudo[453792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:28 compute-0 sudo[453792]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:35:28
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'volumes', 'backups', 'cephfs.cephfs.meta']
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:35:28 compute-0 podman[453817]: 2025-10-02 09:35:28.165473217 +0000 UTC m=+0.061037037 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:35:28 compute-0 sudo[453828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:35:28 compute-0 sudo[453828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:35:28 compute-0 sudo[453828]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:28 compute-0 podman[453816]: 2025-10-02 09:35:28.183642056 +0000 UTC m=+0.097085074 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 09:35:28 compute-0 nova_compute[260603]: 2025-10-02 09:35:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:35:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:35:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:35:29 compute-0 ceph-mon[74477]: pgmap v3530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:29 compute-0 nova_compute[260603]: 2025-10-02 09:35:29.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:29 compute-0 nova_compute[260603]: 2025-10-02 09:35:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:29 compute-0 nova_compute[260603]: 2025-10-02 09:35:29.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:35:29 compute-0 nova_compute[260603]: 2025-10-02 09:35:29.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:35:29 compute-0 nova_compute[260603]: 2025-10-02 09:35:29.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:35:29 compute-0 nova_compute[260603]: 2025-10-02 09:35:29.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:35:29 compute-0 nova_compute[260603]: 2025-10-02 09:35:29.558 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:35:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:35:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639851267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.094 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:35:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3639851267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.302 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.303 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3536MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.303 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.304 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.483 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.485 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.539 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.579 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.580 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.608 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.644 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:35:30 compute-0 nova_compute[260603]: 2025-10-02 09:35:30.677 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:35:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:35:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797151839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:35:31 compute-0 nova_compute[260603]: 2025-10-02 09:35:31.207 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:35:31 compute-0 nova_compute[260603]: 2025-10-02 09:35:31.213 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:35:31 compute-0 nova_compute[260603]: 2025-10-02 09:35:31.253 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:35:31 compute-0 nova_compute[260603]: 2025-10-02 09:35:31.254 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:35:31 compute-0 nova_compute[260603]: 2025-10-02 09:35:31.254 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:35:31 compute-0 ceph-mon[74477]: pgmap v3531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1797151839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:35:32 compute-0 nova_compute[260603]: 2025-10-02 09:35:32.252 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:32 compute-0 ceph-mon[74477]: pgmap v3532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:32 compute-0 podman[453954]: 2025-10-02 09:35:32.996988079 +0000 UTC m=+0.058209419 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:35:33 compute-0 podman[453953]: 2025-10-02 09:35:33.005697241 +0000 UTC m=+0.071478633 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:35:34 compute-0 nova_compute[260603]: 2025-10-02 09:35:34.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:34 compute-0 nova_compute[260603]: 2025-10-02 09:35:34.625 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:35:34.874 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:35:34.874 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:35:34.874 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:35:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:35:34.943 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:35:35 compute-0 nova_compute[260603]: 2025-10-02 09:35:35.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:36 compute-0 ceph-mon[74477]: pgmap v3533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:36 compute-0 nova_compute[260603]: 2025-10-02 09:35:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:37 compute-0 ceph-mon[74477]: pgmap v3534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:39 compute-0 nova_compute[260603]: 2025-10-02 09:35:39.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:35:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:35:39 compute-0 ceph-mon[74477]: pgmap v3535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:40 compute-0 nova_compute[260603]: 2025-10-02 09:35:40.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:41 compute-0 ceph-mon[74477]: pgmap v3536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:42 compute-0 podman[453737]: time="2025-10-02T09:35:42Z" level=info msg="Received shutdown.Stop(), terminating!" PID=453737
Oct 02 09:35:42 compute-0 systemd[1]: podman.service: Deactivated successfully.
Oct 02 09:35:42 compute-0 nova_compute[260603]: 2025-10-02 09:35:42.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:43 compute-0 ceph-mon[74477]: pgmap v3537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:44 compute-0 nova_compute[260603]: 2025-10-02 09:35:44.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:44 compute-0 ceph-mon[74477]: pgmap v3538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:45 compute-0 nova_compute[260603]: 2025-10-02 09:35:45.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:47 compute-0 ceph-mon[74477]: pgmap v3539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:49 compute-0 nova_compute[260603]: 2025-10-02 09:35:49.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:49 compute-0 ceph-mon[74477]: pgmap v3540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:50 compute-0 nova_compute[260603]: 2025-10-02 09:35:50.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:50 compute-0 sudo[453991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Oct 02 09:35:50 compute-0 sudo[453991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:50 compute-0 sudo[453991]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:50 compute-0 sudo[454016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Oct 02 09:35:50 compute-0 sudo[454016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:35:50 compute-0 sudo[454016]: pam_unix(sudo:session): session closed for user root
Oct 02 09:35:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:51 compute-0 ceph-mon[74477]: pgmap v3541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:53 compute-0 ceph-mon[74477]: pgmap v3542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:54 compute-0 nova_compute[260603]: 2025-10-02 09:35:54.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:55 compute-0 nova_compute[260603]: 2025-10-02 09:35:55.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:55 compute-0 ceph-mon[74477]: pgmap v3543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:35:56 compute-0 nova_compute[260603]: 2025-10-02 09:35:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:35:56 compute-0 nova_compute[260603]: 2025-10-02 09:35:56.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:35:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:57 compute-0 ceph-mon[74477]: pgmap v3544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:35:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:35:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:35:59 compute-0 podman[454042]: 2025-10-02 09:35:59.048628549 +0000 UTC m=+0.104343390 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 02 09:35:59 compute-0 podman[454041]: 2025-10-02 09:35:59.122507157 +0000 UTC m=+0.176504055 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:35:59 compute-0 nova_compute[260603]: 2025-10-02 09:35:59.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:35:59 compute-0 ceph-mon[74477]: pgmap v3545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:00 compute-0 nova_compute[260603]: 2025-10-02 09:36:00.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:01 compute-0 ceph-mon[74477]: pgmap v3546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:02 compute-0 sshd-session[453291]: Connection closed by 192.168.122.30 port 51664
Oct 02 09:36:02 compute-0 sshd-session[453288]: pam_unix(sshd:session): session closed for user zuul
Oct 02 09:36:02 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 02 09:36:02 compute-0 systemd[1]: session-55.scope: Consumed 1.173s CPU time.
Oct 02 09:36:02 compute-0 systemd-logind[787]: Session 55 logged out. Waiting for processes to exit.
Oct 02 09:36:02 compute-0 systemd-logind[787]: Removed session 55.
Oct 02 09:36:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:03 compute-0 ceph-mon[74477]: pgmap v3547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:03 compute-0 sshd-session[453736]: Connection closed by 192.168.122.30 port 36110
Oct 02 09:36:03 compute-0 sshd-session[453733]: pam_unix(sshd:session): session closed for user zuul
Oct 02 09:36:03 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct 02 09:36:03 compute-0 systemd-logind[787]: Session 56 logged out. Waiting for processes to exit.
Oct 02 09:36:03 compute-0 systemd-logind[787]: Removed session 56.
Oct 02 09:36:03 compute-0 podman[454083]: 2025-10-02 09:36:03.992116867 +0000 UTC m=+0.056523806 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 09:36:03 compute-0 podman[454084]: 2025-10-02 09:36:03.993270584 +0000 UTC m=+0.054943867 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 09:36:04 compute-0 nova_compute[260603]: 2025-10-02 09:36:04.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:05 compute-0 nova_compute[260603]: 2025-10-02 09:36:05.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:05 compute-0 ceph-mon[74477]: pgmap v3548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:07 compute-0 ceph-mon[74477]: pgmap v3549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:09 compute-0 nova_compute[260603]: 2025-10-02 09:36:09.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:09 compute-0 ceph-mon[74477]: pgmap v3550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:10 compute-0 nova_compute[260603]: 2025-10-02 09:36:10.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:11 compute-0 ceph-mon[74477]: pgmap v3551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:13 compute-0 ceph-mon[74477]: pgmap v3552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:14 compute-0 nova_compute[260603]: 2025-10-02 09:36:14.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:14 compute-0 ceph-mon[74477]: pgmap v3553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:15 compute-0 nova_compute[260603]: 2025-10-02 09:36:15.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:17 compute-0 ceph-mon[74477]: pgmap v3554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:19 compute-0 nova_compute[260603]: 2025-10-02 09:36:19.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:19 compute-0 ceph-mon[74477]: pgmap v3555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:20 compute-0 nova_compute[260603]: 2025-10-02 09:36:20.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:20 compute-0 ceph-mon[74477]: pgmap v3556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:36:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078246163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:36:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:36:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078246163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:36:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2078246163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:36:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2078246163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:36:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:23 compute-0 ceph-mon[74477]: pgmap v3557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:24 compute-0 nova_compute[260603]: 2025-10-02 09:36:24.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:25 compute-0 nova_compute[260603]: 2025-10-02 09:36:25.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:25 compute-0 ceph-mon[74477]: pgmap v3558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.010511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786010551, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1449, "num_deletes": 250, "total_data_size": 2307191, "memory_usage": 2343536, "flush_reason": "Manual Compaction"}
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786026315, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1325255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72616, "largest_seqno": 74064, "table_properties": {"data_size": 1320273, "index_size": 2313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13040, "raw_average_key_size": 20, "raw_value_size": 1309310, "raw_average_value_size": 2074, "num_data_blocks": 106, "num_entries": 631, "num_filter_entries": 631, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397630, "oldest_key_time": 1759397630, "file_creation_time": 1759397786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 15848 microseconds, and 3945 cpu microseconds.
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.026356) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1325255 bytes OK
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.026375) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028026) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028038) EVENT_LOG_v1 {"time_micros": 1759397786028034, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028056) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2300830, prev total WAL file size 2300830, number of live WAL files 2.
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028714) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303033' seq:72057594037927935, type:22 .. '6D6772737461740033323534' seq:0, type:0; will stop at (end)
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1294KB)], [173(10MB)]
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786028778, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12348341, "oldest_snapshot_seqno": -1}
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9050 keys, 9994484 bytes, temperature: kUnknown
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786085082, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 9994484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9939054, "index_size": 31701, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 236914, "raw_average_key_size": 26, "raw_value_size": 9782658, "raw_average_value_size": 1080, "num_data_blocks": 1226, "num_entries": 9050, "num_filter_entries": 9050, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.085390) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 9994484 bytes
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.086538) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.9 rd, 177.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(16.9) write-amplify(7.5) OK, records in: 9490, records dropped: 440 output_compression: NoCompression
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.086566) EVENT_LOG_v1 {"time_micros": 1759397786086553, "job": 108, "event": "compaction_finished", "compaction_time_micros": 56398, "compaction_time_cpu_micros": 26101, "output_level": 6, "num_output_files": 1, "total_output_size": 9994484, "num_input_records": 9490, "num_output_records": 9050, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786087166, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786090807, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:36:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:36:26 compute-0 nova_compute[260603]: 2025-10-02 09:36:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:26 compute-0 nova_compute[260603]: 2025-10-02 09:36:26.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:27 compute-0 ceph-mon[74477]: pgmap v3559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:36:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:36:28
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:36:28 compute-0 sudo[454126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:28 compute-0 sudo[454126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:28 compute-0 sudo[454126]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:28 compute-0 sudo[454151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:36:28 compute-0 sudo[454151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:28 compute-0 sudo[454151]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:28 compute-0 sudo[454176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:28 compute-0 sudo[454176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:28 compute-0 sudo[454176]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:28 compute-0 sudo[454201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:36:28 compute-0 sudo[454201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:36:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:28 compute-0 sudo[454201]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:36:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:36:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:36:29 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:36:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:36:29 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:36:29 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d9cdc942-8dd5-45a3-877b-d41652b29fc8 does not exist
Oct 02 09:36:29 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 9baea1fc-0a59-4b88-8495-efc78f0f9115 does not exist
Oct 02 09:36:29 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b0f0a5af-613b-456d-9e9c-d6f4ad404315 does not exist
Oct 02 09:36:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:36:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:36:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:36:29 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:36:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:36:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:36:29 compute-0 sudo[454256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:29 compute-0 sudo[454256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:29 compute-0 sudo[454256]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:29 compute-0 podman[454280]: 2025-10-02 09:36:29.188932739 +0000 UTC m=+0.065444735 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 09:36:29 compute-0 sudo[454287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:36:29 compute-0 sudo[454287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:29 compute-0 sudo[454287]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:29 compute-0 sudo[454333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:29 compute-0 sudo[454333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:29 compute-0 sudo[454333]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:29 compute-0 podman[454325]: 2025-10-02 09:36:29.334030331 +0000 UTC m=+0.119023099 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 09:36:29 compute-0 sudo[454373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:29 compute-0 sudo[454373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.543 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.570 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.570 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.570 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.571 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:36:29 compute-0 nova_compute[260603]: 2025-10-02 09:36:29.571 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:36:29 compute-0 podman[454454]: 2025-10-02 09:36:29.763572788 +0000 UTC m=+0.044124600 container create ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:36:29 compute-0 systemd[1]: Started libpod-conmon-ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744.scope.
Oct 02 09:36:29 compute-0 ceph-mon[74477]: pgmap v3560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:36:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:36:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:36:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:36:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:36:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:36:29 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:36:29 compute-0 podman[454454]: 2025-10-02 09:36:29.746859376 +0000 UTC m=+0.027411208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:36:29 compute-0 podman[454454]: 2025-10-02 09:36:29.852034781 +0000 UTC m=+0.132586673 container init ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:36:29 compute-0 podman[454454]: 2025-10-02 09:36:29.859860265 +0000 UTC m=+0.140412077 container start ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:36:29 compute-0 podman[454454]: 2025-10-02 09:36:29.863058475 +0000 UTC m=+0.143610327 container attach ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 09:36:29 compute-0 festive_edison[454479]: 167 167
Oct 02 09:36:29 compute-0 systemd[1]: libpod-ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744.scope: Deactivated successfully.
Oct 02 09:36:29 compute-0 podman[454454]: 2025-10-02 09:36:29.866176563 +0000 UTC m=+0.146728365 container died ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:36:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-387e5eae8d3a0ce780c8df3a7bcea16d867333014bed0287208f957e081bb936-merged.mount: Deactivated successfully.
Oct 02 09:36:29 compute-0 podman[454454]: 2025-10-02 09:36:29.905344726 +0000 UTC m=+0.185896538 container remove ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:36:29 compute-0 systemd[1]: libpod-conmon-ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744.scope: Deactivated successfully.
Oct 02 09:36:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:36:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289469839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.047 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:36:30 compute-0 podman[454506]: 2025-10-02 09:36:30.141576934 +0000 UTC m=+0.068316715 container create d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:36:30 compute-0 systemd[1]: Started libpod-conmon-d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580.scope.
Oct 02 09:36:30 compute-0 podman[454506]: 2025-10-02 09:36:30.112904959 +0000 UTC m=+0.039644820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.217 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.219 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.219 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.219 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:36:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:30 compute-0 podman[454506]: 2025-10-02 09:36:30.245260063 +0000 UTC m=+0.171999844 container init d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:36:30 compute-0 podman[454506]: 2025-10-02 09:36:30.252702505 +0000 UTC m=+0.179442286 container start d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:36:30 compute-0 podman[454506]: 2025-10-02 09:36:30.256129812 +0000 UTC m=+0.182869593 container attach d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.290 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.291 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.310 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:36:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2342216201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.756 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.761 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:36:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.782 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.786 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:36:30 compute-0 nova_compute[260603]: 2025-10-02 09:36:30.787 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:36:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2289469839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:36:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2342216201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:36:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:31 compute-0 hungry_poincare[454523]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:36:31 compute-0 hungry_poincare[454523]: --> relative data size: 1.0
Oct 02 09:36:31 compute-0 hungry_poincare[454523]: --> All data devices are unavailable
Oct 02 09:36:31 compute-0 systemd[1]: libpod-d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580.scope: Deactivated successfully.
Oct 02 09:36:31 compute-0 podman[454574]: 2025-10-02 09:36:31.301869235 +0000 UTC m=+0.024453514 container died d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a-merged.mount: Deactivated successfully.
Oct 02 09:36:31 compute-0 podman[454574]: 2025-10-02 09:36:31.38393837 +0000 UTC m=+0.106522639 container remove d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:36:31 compute-0 systemd[1]: libpod-conmon-d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580.scope: Deactivated successfully.
Oct 02 09:36:31 compute-0 sudo[454373]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:31 compute-0 sudo[454589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:31 compute-0 sudo[454589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:31 compute-0 sudo[454589]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:31 compute-0 sudo[454614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:36:31 compute-0 sudo[454614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:31 compute-0 sudo[454614]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:31 compute-0 sudo[454639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:31 compute-0 sudo[454639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:31 compute-0 sudo[454639]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:31 compute-0 sudo[454664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:36:31 compute-0 sudo[454664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:31 compute-0 ceph-mon[74477]: pgmap v3561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:32 compute-0 podman[454725]: 2025-10-02 09:36:32.002284773 +0000 UTC m=+0.045574415 container create 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 09:36:32 compute-0 systemd[1]: Started libpod-conmon-5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d.scope.
Oct 02 09:36:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:36:32 compute-0 podman[454725]: 2025-10-02 09:36:31.979415718 +0000 UTC m=+0.022705410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:36:32 compute-0 podman[454725]: 2025-10-02 09:36:32.091557721 +0000 UTC m=+0.134847433 container init 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:36:32 compute-0 podman[454725]: 2025-10-02 09:36:32.09920822 +0000 UTC m=+0.142497862 container start 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:36:32 compute-0 compassionate_elgamal[454741]: 167 167
Oct 02 09:36:32 compute-0 podman[454725]: 2025-10-02 09:36:32.104247718 +0000 UTC m=+0.147537380 container attach 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:36:32 compute-0 systemd[1]: libpod-5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d.scope: Deactivated successfully.
Oct 02 09:36:32 compute-0 podman[454725]: 2025-10-02 09:36:32.105356682 +0000 UTC m=+0.148646344 container died 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 09:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-bac952b325ce062a7232c736001858a2b6620db5be1f8574137ca31c14aa908f-merged.mount: Deactivated successfully.
Oct 02 09:36:32 compute-0 podman[454725]: 2025-10-02 09:36:32.149673797 +0000 UTC m=+0.192963439 container remove 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:36:32 compute-0 systemd[1]: libpod-conmon-5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d.scope: Deactivated successfully.
Oct 02 09:36:32 compute-0 podman[454765]: 2025-10-02 09:36:32.34444192 +0000 UTC m=+0.043972905 container create c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:36:32 compute-0 systemd[1]: Started libpod-conmon-c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b.scope.
Oct 02 09:36:32 compute-0 podman[454765]: 2025-10-02 09:36:32.328515913 +0000 UTC m=+0.028046918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:36:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:32 compute-0 podman[454765]: 2025-10-02 09:36:32.460191265 +0000 UTC m=+0.159722340 container init c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:36:32 compute-0 podman[454765]: 2025-10-02 09:36:32.473553332 +0000 UTC m=+0.173084357 container start c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 09:36:32 compute-0 podman[454765]: 2025-10-02 09:36:32.478536178 +0000 UTC m=+0.178067173 container attach c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 09:36:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:33 compute-0 adoring_curie[454781]: {
Oct 02 09:36:33 compute-0 adoring_curie[454781]:     "0": [
Oct 02 09:36:33 compute-0 adoring_curie[454781]:         {
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "devices": [
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "/dev/loop3"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             ],
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_name": "ceph_lv0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_size": "21470642176",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "name": "ceph_lv0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "tags": {
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cluster_name": "ceph",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.crush_device_class": "",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.encrypted": "0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osd_id": "0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.type": "block",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.vdo": "0"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             },
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "type": "block",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "vg_name": "ceph_vg0"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:         }
Oct 02 09:36:33 compute-0 adoring_curie[454781]:     ],
Oct 02 09:36:33 compute-0 adoring_curie[454781]:     "1": [
Oct 02 09:36:33 compute-0 adoring_curie[454781]:         {
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "devices": [
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "/dev/loop4"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             ],
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_name": "ceph_lv1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_size": "21470642176",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "name": "ceph_lv1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "tags": {
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cluster_name": "ceph",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.crush_device_class": "",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.encrypted": "0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osd_id": "1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.type": "block",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.vdo": "0"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             },
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "type": "block",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "vg_name": "ceph_vg1"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:         }
Oct 02 09:36:33 compute-0 adoring_curie[454781]:     ],
Oct 02 09:36:33 compute-0 adoring_curie[454781]:     "2": [
Oct 02 09:36:33 compute-0 adoring_curie[454781]:         {
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "devices": [
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "/dev/loop5"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             ],
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_name": "ceph_lv2",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_size": "21470642176",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "name": "ceph_lv2",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "tags": {
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.cluster_name": "ceph",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.crush_device_class": "",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.encrypted": "0",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osd_id": "2",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.type": "block",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:                 "ceph.vdo": "0"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             },
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "type": "block",
Oct 02 09:36:33 compute-0 adoring_curie[454781]:             "vg_name": "ceph_vg2"
Oct 02 09:36:33 compute-0 adoring_curie[454781]:         }
Oct 02 09:36:33 compute-0 adoring_curie[454781]:     ]
Oct 02 09:36:33 compute-0 adoring_curie[454781]: }
Oct 02 09:36:33 compute-0 systemd[1]: libpod-c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b.scope: Deactivated successfully.
Oct 02 09:36:33 compute-0 podman[454765]: 2025-10-02 09:36:33.206405593 +0000 UTC m=+0.905936608 container died c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 02 09:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4-merged.mount: Deactivated successfully.
Oct 02 09:36:33 compute-0 podman[454765]: 2025-10-02 09:36:33.317904296 +0000 UTC m=+1.017435281 container remove c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:36:33 compute-0 systemd[1]: libpod-conmon-c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b.scope: Deactivated successfully.
Oct 02 09:36:33 compute-0 sudo[454664]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:33 compute-0 sudo[454802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:33 compute-0 sudo[454802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:33 compute-0 sudo[454802]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:33 compute-0 sudo[454827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:36:33 compute-0 sudo[454827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:33 compute-0 sudo[454827]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:33 compute-0 sudo[454852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:33 compute-0 sudo[454852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:33 compute-0 sudo[454852]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:33 compute-0 sudo[454877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:36:33 compute-0 sudo[454877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:33 compute-0 ceph-mon[74477]: pgmap v3562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:34 compute-0 podman[454943]: 2025-10-02 09:36:34.010910772 +0000 UTC m=+0.050286773 container create 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:36:34 compute-0 systemd[1]: Started libpod-conmon-79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474.scope.
Oct 02 09:36:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:36:34 compute-0 podman[454943]: 2025-10-02 09:36:33.985342273 +0000 UTC m=+0.024718264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:36:34 compute-0 podman[454943]: 2025-10-02 09:36:34.0973177 +0000 UTC m=+0.136693681 container init 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:36:34 compute-0 podman[454943]: 2025-10-02 09:36:34.106787626 +0000 UTC m=+0.146163587 container start 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 09:36:34 compute-0 youthful_burnell[454961]: 167 167
Oct 02 09:36:34 compute-0 systemd[1]: libpod-79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474.scope: Deactivated successfully.
Oct 02 09:36:34 compute-0 conmon[454961]: conmon 79499508fd7981656b0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474.scope/container/memory.events
Oct 02 09:36:34 compute-0 podman[454943]: 2025-10-02 09:36:34.12610998 +0000 UTC m=+0.165485991 container attach 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 02 09:36:34 compute-0 podman[454943]: 2025-10-02 09:36:34.128064191 +0000 UTC m=+0.167440172 container died 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 09:36:34 compute-0 podman[454957]: 2025-10-02 09:36:34.144064201 +0000 UTC m=+0.088506776 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 09:36:34 compute-0 podman[454960]: 2025-10-02 09:36:34.15524826 +0000 UTC m=+0.099682765 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 09:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4f841618327cec25e04f596acd32e814a6fccea54e72e134f4ad5507b76753-merged.mount: Deactivated successfully.
Oct 02 09:36:34 compute-0 podman[454943]: 2025-10-02 09:36:34.177509045 +0000 UTC m=+0.216885006 container remove 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 09:36:34 compute-0 systemd[1]: libpod-conmon-79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474.scope: Deactivated successfully.
Oct 02 09:36:34 compute-0 podman[455024]: 2025-10-02 09:36:34.387656879 +0000 UTC m=+0.065349132 container create 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:36:34 compute-0 nova_compute[260603]: 2025-10-02 09:36:34.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:34 compute-0 systemd[1]: Started libpod-conmon-241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec.scope.
Oct 02 09:36:34 compute-0 podman[455024]: 2025-10-02 09:36:34.356861598 +0000 UTC m=+0.034553901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:36:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:36:34 compute-0 podman[455024]: 2025-10-02 09:36:34.48951512 +0000 UTC m=+0.167207343 container init 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:36:34 compute-0 podman[455024]: 2025-10-02 09:36:34.498249604 +0000 UTC m=+0.175941827 container start 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:36:34 compute-0 podman[455024]: 2025-10-02 09:36:34.503413475 +0000 UTC m=+0.181105738 container attach 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:36:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:36:34.876 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:36:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:36:34.878 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:36:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:36:34.879 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]: {
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "osd_id": 2,
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "type": "bluestore"
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:     },
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "osd_id": 1,
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "type": "bluestore"
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:     },
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "osd_id": 0,
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:         "type": "bluestore"
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]:     }
Oct 02 09:36:35 compute-0 beautiful_hugle[455042]: }
Oct 02 09:36:35 compute-0 nova_compute[260603]: 2025-10-02 09:36:35.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:35 compute-0 systemd[1]: libpod-241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec.scope: Deactivated successfully.
Oct 02 09:36:35 compute-0 podman[455024]: 2025-10-02 09:36:35.615143879 +0000 UTC m=+1.292836102 container died 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:36:35 compute-0 systemd[1]: libpod-241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec.scope: Consumed 1.127s CPU time.
Oct 02 09:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b-merged.mount: Deactivated successfully.
Oct 02 09:36:35 compute-0 podman[455024]: 2025-10-02 09:36:35.685314801 +0000 UTC m=+1.363007034 container remove 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:36:35 compute-0 systemd[1]: libpod-conmon-241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec.scope: Deactivated successfully.
Oct 02 09:36:35 compute-0 sudo[454877]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:36:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:36:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:36:35 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:36:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2abc1c95-8a04-4b7c-ab9d-7f34caf6423d does not exist
Oct 02 09:36:35 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 74202bd4-0ee3-48ae-83d1-ff14e0e30b9a does not exist
Oct 02 09:36:35 compute-0 nova_compute[260603]: 2025-10-02 09:36:35.784 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:35 compute-0 sudo[455087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:36:35 compute-0 sudo[455087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:35 compute-0 sudo[455087]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:35 compute-0 ceph-mon[74477]: pgmap v3563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:35 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:36:35 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:36:35 compute-0 sudo[455112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:36:35 compute-0 sudo[455112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:36:35 compute-0 sudo[455112]: pam_unix(sudo:session): session closed for user root
Oct 02 09:36:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:36 compute-0 nova_compute[260603]: 2025-10-02 09:36:36.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:37 compute-0 ceph-mon[74477]: pgmap v3564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:38 compute-0 ceph-mon[74477]: pgmap v3565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:39 compute-0 nova_compute[260603]: 2025-10-02 09:36:39.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:36:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:36:40 compute-0 nova_compute[260603]: 2025-10-02 09:36:40.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:36:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 74K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1346 writes, 6079 keys, 1346 commit groups, 1.0 writes per commit group, ingest: 8.71 MB, 0.01 MB/s
                                           Interval WAL: 1346 writes, 1346 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.9      1.26              0.35        54    0.023       0      0       0.0       0.0
                                             L6      1/0    9.53 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    127.7    108.2      4.23              1.66        53    0.080    364K    28K       0.0       0.0
                                            Sum      1/0    9.53 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     98.4    100.1      5.49              2.01       107    0.051    364K    28K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.7     76.7     75.1      0.73              0.23        10    0.073     46K   2470       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    127.7    108.2      4.23              1.66        53    0.080    364K    28K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     73.2      1.26              0.35        53    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.090, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.54 GB write, 0.08 MB/s write, 0.53 GB read, 0.08 MB/s read, 5.5 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 60.11 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000513 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3863,57.50 MB,18.913%) FilterBlock(108,1019.11 KB,0.327376%) IndexBlock(108,1.62 MB,0.531463%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 09:36:41 compute-0 ceph-mon[74477]: pgmap v3566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:42 compute-0 ceph-mon[74477]: pgmap v3567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:44 compute-0 nova_compute[260603]: 2025-10-02 09:36:44.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:44 compute-0 nova_compute[260603]: 2025-10-02 09:36:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:45 compute-0 ceph-mon[74477]: pgmap v3568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:45 compute-0 nova_compute[260603]: 2025-10-02 09:36:45.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:47 compute-0 ceph-mon[74477]: pgmap v3569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:49 compute-0 nova_compute[260603]: 2025-10-02 09:36:49.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:49 compute-0 ceph-mon[74477]: pgmap v3570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:50 compute-0 nova_compute[260603]: 2025-10-02 09:36:50.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:51 compute-0 ceph-mon[74477]: pgmap v3571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:53 compute-0 ceph-mon[74477]: pgmap v3572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:54 compute-0 nova_compute[260603]: 2025-10-02 09:36:54.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:55 compute-0 ceph-mon[74477]: pgmap v3573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:55 compute-0 nova_compute[260603]: 2025-10-02 09:36:55.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:36:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:57 compute-0 nova_compute[260603]: 2025-10-02 09:36:57.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:36:57 compute-0 nova_compute[260603]: 2025-10-02 09:36:57.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:36:57 compute-0 ceph-mon[74477]: pgmap v3574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:36:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:36:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:36:59 compute-0 nova_compute[260603]: 2025-10-02 09:36:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:36:59 compute-0 ceph-mon[74477]: pgmap v3575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:00 compute-0 podman[455138]: 2025-10-02 09:37:00.008389691 +0000 UTC m=+0.073827446 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 09:37:00 compute-0 podman[455137]: 2025-10-02 09:37:00.088144713 +0000 UTC m=+0.147441226 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 09:37:00 compute-0 nova_compute[260603]: 2025-10-02 09:37:00.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:01 compute-0 ceph-mon[74477]: pgmap v3576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:03 compute-0 ceph-mon[74477]: pgmap v3577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:04 compute-0 nova_compute[260603]: 2025-10-02 09:37:04.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:04 compute-0 ceph-mon[74477]: pgmap v3578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:04 compute-0 podman[455180]: 2025-10-02 09:37:04.999391112 +0000 UTC m=+0.069307295 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Oct 02 09:37:05 compute-0 podman[455181]: 2025-10-02 09:37:05.006550956 +0000 UTC m=+0.075745137 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 02 09:37:05 compute-0 nova_compute[260603]: 2025-10-02 09:37:05.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:07 compute-0 ceph-mon[74477]: pgmap v3579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:09 compute-0 nova_compute[260603]: 2025-10-02 09:37:09.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:09 compute-0 ceph-mon[74477]: pgmap v3580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:10 compute-0 nova_compute[260603]: 2025-10-02 09:37:10.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:11 compute-0 ceph-mon[74477]: pgmap v3581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:13 compute-0 ceph-mon[74477]: pgmap v3582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:14 compute-0 nova_compute[260603]: 2025-10-02 09:37:14.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:15 compute-0 nova_compute[260603]: 2025-10-02 09:37:15.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:15 compute-0 ceph-mon[74477]: pgmap v3583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:16 compute-0 ceph-mon[74477]: pgmap v3584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:19 compute-0 nova_compute[260603]: 2025-10-02 09:37:19.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:19 compute-0 ceph-mon[74477]: pgmap v3585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:20 compute-0 nova_compute[260603]: 2025-10-02 09:37:20.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:21 compute-0 ceph-mon[74477]: pgmap v3586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:37:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2099761710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:37:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:37:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2099761710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:37:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2099761710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:37:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2099761710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:37:23 compute-0 ceph-mon[74477]: pgmap v3587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:24 compute-0 nova_compute[260603]: 2025-10-02 09:37:24.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:25 compute-0 nova_compute[260603]: 2025-10-02 09:37:25.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:25 compute-0 ceph-mon[74477]: pgmap v3588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:27 compute-0 nova_compute[260603]: 2025-10-02 09:37:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:27 compute-0 ceph-mon[74477]: pgmap v3589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:37:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:37:28
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log']
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:37:28 compute-0 nova_compute[260603]: 2025-10-02 09:37:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:37:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.534 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.561 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:37:29 compute-0 nova_compute[260603]: 2025-10-02 09:37:29.562 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:37:29 compute-0 ceph-mon[74477]: pgmap v3590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:37:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781629694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.475 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.615 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.616 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.616 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.617 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.674 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.674 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.691 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:37:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1781629694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:37:30 compute-0 nova_compute[260603]: 2025-10-02 09:37:30.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:30 compute-0 podman[455261]: 2025-10-02 09:37:30.981375973 +0000 UTC m=+0.049071524 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 09:37:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:31 compute-0 podman[455260]: 2025-10-02 09:37:31.036373291 +0000 UTC m=+0.104934299 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 09:37:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:37:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/103037187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:37:31 compute-0 nova_compute[260603]: 2025-10-02 09:37:31.125 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:37:31 compute-0 nova_compute[260603]: 2025-10-02 09:37:31.131 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:37:31 compute-0 nova_compute[260603]: 2025-10-02 09:37:31.147 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:37:31 compute-0 nova_compute[260603]: 2025-10-02 09:37:31.149 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:37:31 compute-0 nova_compute[260603]: 2025-10-02 09:37:31.149 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:37:31 compute-0 ceph-mon[74477]: pgmap v3591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/103037187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:37:32 compute-0 nova_compute[260603]: 2025-10-02 09:37:32.134 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:33 compute-0 nova_compute[260603]: 2025-10-02 09:37:33.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:33 compute-0 ceph-mon[74477]: pgmap v3592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:34 compute-0 nova_compute[260603]: 2025-10-02 09:37:34.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:34 compute-0 nova_compute[260603]: 2025-10-02 09:37:34.531 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:37:34.877 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:37:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:37:34.878 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:37:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:37:34.878 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:37:35 compute-0 ceph-mon[74477]: pgmap v3593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:35 compute-0 nova_compute[260603]: 2025-10-02 09:37:35.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:35 compute-0 sudo[455308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:35 compute-0 sudo[455308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:35 compute-0 sudo[455308]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:35 compute-0 podman[455307]: 2025-10-02 09:37:35.99371851 +0000 UTC m=+0.057536008 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:36 compute-0 podman[455316]: 2025-10-02 09:37:36.023466579 +0000 UTC m=+0.082217079 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 09:37:36 compute-0 sudo[455361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:37:36 compute-0 sudo[455361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:36 compute-0 sudo[455361]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:36 compute-0 sudo[455396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:36 compute-0 sudo[455396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:36 compute-0 sudo[455396]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:36 compute-0 sudo[455421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:37:36 compute-0 sudo[455421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:36 compute-0 sudo[455421]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:37:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:37:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:37:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:37:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev bc4956c2-c904-42b5-ae0e-a09308514c6c does not exist
Oct 02 09:37:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 46b726d6-ddac-42a3-803e-797e6322c8cc does not exist
Oct 02 09:37:36 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev fa791a51-c999-42d8-b854-cf9d8a87fa66 does not exist
Oct 02 09:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:37:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:37:36 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:37:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:37:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:37:36 compute-0 sudo[455477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:36 compute-0 sudo[455477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:37:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:37:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:37:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:37:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:37:36 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:37:36 compute-0 sudo[455477]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:36 compute-0 sudo[455502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:37:36 compute-0 sudo[455502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:36 compute-0 sudo[455502]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:36 compute-0 sudo[455527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:36 compute-0 sudo[455527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:36 compute-0 sudo[455527]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:36 compute-0 sudo[455552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:37:36 compute-0 sudo[455552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:37 compute-0 podman[455618]: 2025-10-02 09:37:37.181482369 +0000 UTC m=+0.051657065 container create 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:37:37 compute-0 systemd[1]: Started libpod-conmon-366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b.scope.
Oct 02 09:37:37 compute-0 podman[455618]: 2025-10-02 09:37:37.157454199 +0000 UTC m=+0.027628915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:37:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:37:37 compute-0 podman[455618]: 2025-10-02 09:37:37.272699398 +0000 UTC m=+0.142874114 container init 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:37:37 compute-0 podman[455618]: 2025-10-02 09:37:37.282623509 +0000 UTC m=+0.152798205 container start 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:37:37 compute-0 blissful_cartwright[455635]: 167 167
Oct 02 09:37:37 compute-0 systemd[1]: libpod-366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b.scope: Deactivated successfully.
Oct 02 09:37:37 compute-0 podman[455618]: 2025-10-02 09:37:37.290114462 +0000 UTC m=+0.160289148 container attach 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:37:37 compute-0 podman[455618]: 2025-10-02 09:37:37.292592589 +0000 UTC m=+0.162767275 container died 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0061ddd83d41609e3f87091a5ce4b520b27420ed84c3818469a522cf4cc9eb6-merged.mount: Deactivated successfully.
Oct 02 09:37:37 compute-0 podman[455618]: 2025-10-02 09:37:37.328279915 +0000 UTC m=+0.198454601 container remove 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 09:37:37 compute-0 systemd[1]: libpod-conmon-366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b.scope: Deactivated successfully.
Oct 02 09:37:37 compute-0 podman[455661]: 2025-10-02 09:37:37.479121536 +0000 UTC m=+0.041044013 container create 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:37:37 compute-0 systemd[1]: Started libpod-conmon-02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6.scope.
Oct 02 09:37:37 compute-0 nova_compute[260603]: 2025-10-02 09:37:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:37 compute-0 podman[455661]: 2025-10-02 09:37:37.46294567 +0000 UTC m=+0.024868167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:37:37 compute-0 podman[455661]: 2025-10-02 09:37:37.565475233 +0000 UTC m=+0.127397710 container init 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:37:37 compute-0 podman[455661]: 2025-10-02 09:37:37.572765001 +0000 UTC m=+0.134687478 container start 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:37:37 compute-0 podman[455661]: 2025-10-02 09:37:37.58202429 +0000 UTC m=+0.143946787 container attach 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:37:37 compute-0 ceph-mon[74477]: pgmap v3594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:38 compute-0 charming_bose[455677]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:37:38 compute-0 charming_bose[455677]: --> relative data size: 1.0
Oct 02 09:37:38 compute-0 charming_bose[455677]: --> All data devices are unavailable
Oct 02 09:37:38 compute-0 systemd[1]: libpod-02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6.scope: Deactivated successfully.
Oct 02 09:37:38 compute-0 conmon[455677]: conmon 02768c8f3d08d9aeb2cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6.scope/container/memory.events
Oct 02 09:37:38 compute-0 podman[455661]: 2025-10-02 09:37:38.588849028 +0000 UTC m=+1.150771505 container died 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:37:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5-merged.mount: Deactivated successfully.
Oct 02 09:37:39 compute-0 ceph-mon[74477]: pgmap v3595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.082934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859083015, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 848, "num_deletes": 251, "total_data_size": 1109973, "memory_usage": 1125648, "flush_reason": "Manual Compaction"}
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859125724, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1099206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74065, "largest_seqno": 74912, "table_properties": {"data_size": 1094940, "index_size": 1981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9531, "raw_average_key_size": 19, "raw_value_size": 1086332, "raw_average_value_size": 2235, "num_data_blocks": 88, "num_entries": 486, "num_filter_entries": 486, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397787, "oldest_key_time": 1759397787, "file_creation_time": 1759397859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 42841 microseconds, and 4290 cpu microseconds.
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:37:39 compute-0 podman[455661]: 2025-10-02 09:37:39.125929184 +0000 UTC m=+1.687851661 container remove 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.125787) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1099206 bytes OK
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.125809) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.130912) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.130937) EVENT_LOG_v1 {"time_micros": 1759397859130929, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.130956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1105778, prev total WAL file size 1105778, number of live WAL files 2.
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.131463) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(1073KB)], [176(9760KB)]
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859131497, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11093690, "oldest_snapshot_seqno": -1}
Oct 02 09:37:39 compute-0 sudo[455552]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:39 compute-0 systemd[1]: libpod-conmon-02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6.scope: Deactivated successfully.
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9022 keys, 9356716 bytes, temperature: kUnknown
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859213913, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9356716, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9302092, "index_size": 30959, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22597, "raw_key_size": 237018, "raw_average_key_size": 26, "raw_value_size": 9146773, "raw_average_value_size": 1013, "num_data_blocks": 1187, "num_entries": 9022, "num_filter_entries": 9022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:37:39 compute-0 sudo[455718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.214350) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9356716 bytes
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.217191) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.3 rd, 113.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.5 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(18.6) write-amplify(8.5) OK, records in: 9536, records dropped: 514 output_compression: NoCompression
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.217212) EVENT_LOG_v1 {"time_micros": 1759397859217202, "job": 110, "event": "compaction_finished", "compaction_time_micros": 82631, "compaction_time_cpu_micros": 26846, "output_level": 6, "num_output_files": 1, "total_output_size": 9356716, "num_input_records": 9536, "num_output_records": 9022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859217537, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct 02 09:37:39 compute-0 sudo[455718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859219818, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.131388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:37:39 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:37:39 compute-0 sudo[455718]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:39 compute-0 sudo[455743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:37:39 compute-0 sudo[455743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:39 compute-0 sudo[455743]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:39 compute-0 sudo[455768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:39 compute-0 sudo[455768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:39 compute-0 sudo[455768]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:39 compute-0 sudo[455793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:37:39 compute-0 sudo[455793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:39 compute-0 nova_compute[260603]: 2025-10-02 09:37:39.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:37:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:37:39 compute-0 podman[455859]: 2025-10-02 09:37:39.719853944 +0000 UTC m=+0.045549503 container create 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 02 09:37:39 compute-0 podman[455859]: 2025-10-02 09:37:39.698434665 +0000 UTC m=+0.024130234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:37:39 compute-0 systemd[1]: Started libpod-conmon-70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f.scope.
Oct 02 09:37:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:37:39 compute-0 podman[455859]: 2025-10-02 09:37:39.856069949 +0000 UTC m=+0.181765528 container init 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 02 09:37:39 compute-0 podman[455859]: 2025-10-02 09:37:39.86667529 +0000 UTC m=+0.192370849 container start 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 09:37:39 compute-0 podman[455859]: 2025-10-02 09:37:39.869693754 +0000 UTC m=+0.195389373 container attach 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:37:39 compute-0 nostalgic_swanson[455875]: 167 167
Oct 02 09:37:39 compute-0 systemd[1]: libpod-70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f.scope: Deactivated successfully.
Oct 02 09:37:39 compute-0 podman[455859]: 2025-10-02 09:37:39.871954425 +0000 UTC m=+0.197649984 container died 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc91c4092c604054701d6cd8f4f7ed461c7046899de3bbd97e8c53f11ba048f8-merged.mount: Deactivated successfully.
Oct 02 09:37:39 compute-0 podman[455859]: 2025-10-02 09:37:39.908536048 +0000 UTC m=+0.234231607 container remove 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 09:37:39 compute-0 systemd[1]: libpod-conmon-70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f.scope: Deactivated successfully.
Oct 02 09:37:40 compute-0 podman[455898]: 2025-10-02 09:37:40.093810975 +0000 UTC m=+0.072127594 container create 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:37:40 compute-0 systemd[1]: Started libpod-conmon-74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d.scope.
Oct 02 09:37:40 compute-0 podman[455898]: 2025-10-02 09:37:40.06325159 +0000 UTC m=+0.041568269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:37:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:40 compute-0 podman[455898]: 2025-10-02 09:37:40.177431156 +0000 UTC m=+0.155747765 container init 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:37:40 compute-0 podman[455898]: 2025-10-02 09:37:40.191771124 +0000 UTC m=+0.170087713 container start 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:37:40 compute-0 podman[455898]: 2025-10-02 09:37:40.195028766 +0000 UTC m=+0.173345355 container attach 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:37:40 compute-0 nova_compute[260603]: 2025-10-02 09:37:40.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]: {
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:     "0": [
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:         {
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "devices": [
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "/dev/loop3"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             ],
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_name": "ceph_lv0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_size": "21470642176",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "name": "ceph_lv0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "tags": {
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cluster_name": "ceph",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.crush_device_class": "",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.encrypted": "0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osd_id": "0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.type": "block",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.vdo": "0"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             },
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "type": "block",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "vg_name": "ceph_vg0"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:         }
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:     ],
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:     "1": [
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:         {
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "devices": [
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "/dev/loop4"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             ],
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_name": "ceph_lv1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_size": "21470642176",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "name": "ceph_lv1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "tags": {
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cluster_name": "ceph",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.crush_device_class": "",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.encrypted": "0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osd_id": "1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.type": "block",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.vdo": "0"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             },
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "type": "block",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "vg_name": "ceph_vg1"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:         }
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:     ],
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:     "2": [
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:         {
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "devices": [
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "/dev/loop5"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             ],
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_name": "ceph_lv2",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_size": "21470642176",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "name": "ceph_lv2",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "tags": {
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.cluster_name": "ceph",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.crush_device_class": "",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.encrypted": "0",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osd_id": "2",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.type": "block",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:                 "ceph.vdo": "0"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             },
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "type": "block",
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:             "vg_name": "ceph_vg2"
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:         }
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]:     ]
Oct 02 09:37:40 compute-0 mystifying_khorana[455914]: }
Oct 02 09:37:40 compute-0 systemd[1]: libpod-74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d.scope: Deactivated successfully.
Oct 02 09:37:40 compute-0 podman[455898]: 2025-10-02 09:37:40.955821249 +0000 UTC m=+0.934137868 container died 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 02 09:37:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd-merged.mount: Deactivated successfully.
Oct 02 09:37:41 compute-0 podman[455898]: 2025-10-02 09:37:41.202586037 +0000 UTC m=+1.180902626 container remove 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:37:41 compute-0 systemd[1]: libpod-conmon-74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d.scope: Deactivated successfully.
Oct 02 09:37:41 compute-0 sudo[455793]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:41 compute-0 sudo[455935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:41 compute-0 sudo[455935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:41 compute-0 sudo[455935]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:41 compute-0 sudo[455960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:37:41 compute-0 sudo[455960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:41 compute-0 sudo[455960]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:41 compute-0 sudo[455985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:41 compute-0 sudo[455985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:41 compute-0 sudo[455985]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:41 compute-0 sudo[456010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:37:41 compute-0 sudo[456010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:41 compute-0 podman[456073]: 2025-10-02 09:37:41.813514919 +0000 UTC m=+0.056943140 container create dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:37:41 compute-0 systemd[1]: Started libpod-conmon-dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62.scope.
Oct 02 09:37:41 compute-0 ceph-mon[74477]: pgmap v3596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:41 compute-0 podman[456073]: 2025-10-02 09:37:41.777839364 +0000 UTC m=+0.021267605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:37:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:37:41 compute-0 podman[456073]: 2025-10-02 09:37:41.948721212 +0000 UTC m=+0.192149443 container init dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 09:37:41 compute-0 podman[456073]: 2025-10-02 09:37:41.955642698 +0000 UTC m=+0.199070929 container start dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:37:41 compute-0 podman[456073]: 2025-10-02 09:37:41.960345205 +0000 UTC m=+0.203773446 container attach dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 09:37:41 compute-0 awesome_cerf[456089]: 167 167
Oct 02 09:37:41 compute-0 systemd[1]: libpod-dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62.scope: Deactivated successfully.
Oct 02 09:37:41 compute-0 conmon[456089]: conmon dbd8e49d6b8ccd7732b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62.scope/container/memory.events
Oct 02 09:37:41 compute-0 podman[456073]: 2025-10-02 09:37:41.962400119 +0000 UTC m=+0.205828350 container died dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 09:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1d09a04cbc87023e8acdad9e79ac530923eaf75a566e7b11e293066e4bb93ba-merged.mount: Deactivated successfully.
Oct 02 09:37:42 compute-0 podman[456073]: 2025-10-02 09:37:42.075280695 +0000 UTC m=+0.318708906 container remove dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:37:42 compute-0 systemd[1]: libpod-conmon-dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62.scope: Deactivated successfully.
Oct 02 09:37:42 compute-0 podman[456112]: 2025-10-02 09:37:42.229938226 +0000 UTC m=+0.036778730 container create faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:37:42 compute-0 systemd[1]: Started libpod-conmon-faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d.scope.
Oct 02 09:37:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:37:42 compute-0 podman[456112]: 2025-10-02 09:37:42.292875971 +0000 UTC m=+0.099716505 container init faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:37:42 compute-0 podman[456112]: 2025-10-02 09:37:42.298809226 +0000 UTC m=+0.105649740 container start faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:37:42 compute-0 podman[456112]: 2025-10-02 09:37:42.301686006 +0000 UTC m=+0.108526540 container attach faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:37:42 compute-0 podman[456112]: 2025-10-02 09:37:42.213280006 +0000 UTC m=+0.020120550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:37:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]: {
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "osd_id": 2,
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "type": "bluestore"
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:     },
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "osd_id": 1,
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "type": "bluestore"
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:     },
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "osd_id": 0,
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:         "type": "bluestore"
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]:     }
Oct 02 09:37:43 compute-0 eloquent_blackwell[456128]: }
Oct 02 09:37:43 compute-0 systemd[1]: libpod-faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d.scope: Deactivated successfully.
Oct 02 09:37:43 compute-0 podman[456112]: 2025-10-02 09:37:43.21937988 +0000 UTC m=+1.026220394 container died faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 09:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822-merged.mount: Deactivated successfully.
Oct 02 09:37:43 compute-0 podman[456112]: 2025-10-02 09:37:43.26483608 +0000 UTC m=+1.071676594 container remove faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:37:43 compute-0 systemd[1]: libpod-conmon-faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d.scope: Deactivated successfully.
Oct 02 09:37:43 compute-0 sudo[456010]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:37:43 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:37:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:37:43 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:37:43 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 673050f3-1de8-4476-a834-0d7e5681b345 does not exist
Oct 02 09:37:43 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5077c134-ed9a-44d4-b172-a0f39ff50eef does not exist
Oct 02 09:37:43 compute-0 sudo[456175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:37:43 compute-0 sudo[456175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:43 compute-0 sudo[456175]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:43 compute-0 sudo[456200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:37:43 compute-0 sudo[456200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:37:43 compute-0 sudo[456200]: pam_unix(sudo:session): session closed for user root
Oct 02 09:37:43 compute-0 ceph-mon[74477]: pgmap v3597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:43 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:37:43 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:37:44 compute-0 nova_compute[260603]: 2025-10-02 09:37:44.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:45 compute-0 nova_compute[260603]: 2025-10-02 09:37:45.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:45 compute-0 ceph-mon[74477]: pgmap v3598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:46 compute-0 nova_compute[260603]: 2025-10-02 09:37:46.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:47 compute-0 ceph-mon[74477]: pgmap v3599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:49 compute-0 nova_compute[260603]: 2025-10-02 09:37:49.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:49 compute-0 ceph-mon[74477]: pgmap v3600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:50 compute-0 nova_compute[260603]: 2025-10-02 09:37:50.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:51 compute-0 ceph-mon[74477]: pgmap v3601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:53 compute-0 ceph-mon[74477]: pgmap v3602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:54 compute-0 nova_compute[260603]: 2025-10-02 09:37:54.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:54 compute-0 ceph-mon[74477]: pgmap v3603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:55 compute-0 nova_compute[260603]: 2025-10-02 09:37:55.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:37:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:57 compute-0 nova_compute[260603]: 2025-10-02 09:37:57.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:37:57 compute-0 nova_compute[260603]: 2025-10-02 09:37:57.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:37:57 compute-0 ceph-mon[74477]: pgmap v3604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:37:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:37:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:37:59 compute-0 nova_compute[260603]: 2025-10-02 09:37:59.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:37:59 compute-0 ceph-mon[74477]: pgmap v3605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:00 compute-0 nova_compute[260603]: 2025-10-02 09:38:00.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:01 compute-0 ceph-mon[74477]: pgmap v3606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:02 compute-0 podman[456226]: 2025-10-02 09:38:02.000595998 +0000 UTC m=+0.061083689 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 09:38:02 compute-0 podman[456225]: 2025-10-02 09:38:02.029835581 +0000 UTC m=+0.090936691 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 09:38:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:02 compute-0 ceph-mon[74477]: pgmap v3607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:04 compute-0 nova_compute[260603]: 2025-10-02 09:38:04.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:05 compute-0 ceph-mon[74477]: pgmap v3608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:05 compute-0 nova_compute[260603]: 2025-10-02 09:38:05.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:06 compute-0 podman[456269]: 2025-10-02 09:38:06.994346984 +0000 UTC m=+0.061020817 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 09:38:06 compute-0 podman[456270]: 2025-10-02 09:38:06.996106169 +0000 UTC m=+0.062985498 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:38:08 compute-0 ceph-mon[74477]: pgmap v3609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:09 compute-0 ceph-mon[74477]: pgmap v3610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:09 compute-0 nova_compute[260603]: 2025-10-02 09:38:09.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:10 compute-0 nova_compute[260603]: 2025-10-02 09:38:10.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:11 compute-0 ceph-mon[74477]: pgmap v3611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:13 compute-0 ceph-mon[74477]: pgmap v3612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:14 compute-0 nova_compute[260603]: 2025-10-02 09:38:14.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:14 compute-0 ceph-mon[74477]: pgmap v3613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:15 compute-0 nova_compute[260603]: 2025-10-02 09:38:15.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:17 compute-0 ceph-mon[74477]: pgmap v3614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:19 compute-0 nova_compute[260603]: 2025-10-02 09:38:19.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:19 compute-0 ceph-mon[74477]: pgmap v3615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:20 compute-0 nova_compute[260603]: 2025-10-02 09:38:20.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:21 compute-0 ceph-mon[74477]: pgmap v3616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:38:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 597 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 125 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:38:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:38:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334300868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:38:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:38:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334300868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:38:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/334300868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:38:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/334300868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:38:23 compute-0 ceph-mon[74477]: pgmap v3617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:24 compute-0 nova_compute[260603]: 2025-10-02 09:38:24.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:25 compute-0 ceph-mon[74477]: pgmap v3618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:26 compute-0 nova_compute[260603]: 2025-10-02 09:38:26.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.033317) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906033352, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 608, "num_deletes": 257, "total_data_size": 699141, "memory_usage": 711832, "flush_reason": "Manual Compaction"}
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906040100, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 693135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74913, "largest_seqno": 75520, "table_properties": {"data_size": 689830, "index_size": 1212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7280, "raw_average_key_size": 18, "raw_value_size": 683287, "raw_average_value_size": 1738, "num_data_blocks": 54, "num_entries": 393, "num_filter_entries": 393, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397860, "oldest_key_time": 1759397860, "file_creation_time": 1759397906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 6819 microseconds, and 3725 cpu microseconds.
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.040135) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 693135 bytes OK
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.040152) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.041582) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.041596) EVENT_LOG_v1 {"time_micros": 1759397906041591, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.041611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 695828, prev total WAL file size 695828, number of live WAL files 2.
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.042141) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323630' seq:72057594037927935, type:22 .. '6C6F676D0033353133' seq:0, type:0; will stop at (end)
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(676KB)], [179(9137KB)]
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906042222, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10049851, "oldest_snapshot_seqno": -1}
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 8891 keys, 9938348 bytes, temperature: kUnknown
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906134998, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 9938348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9883341, "index_size": 31662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 235195, "raw_average_key_size": 26, "raw_value_size": 9728963, "raw_average_value_size": 1094, "num_data_blocks": 1216, "num_entries": 8891, "num_filter_entries": 8891, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.135378) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 9938348 bytes
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.137116) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.2 rd, 107.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(28.8) write-amplify(14.3) OK, records in: 9415, records dropped: 524 output_compression: NoCompression
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.137139) EVENT_LOG_v1 {"time_micros": 1759397906137129, "job": 112, "event": "compaction_finished", "compaction_time_micros": 92863, "compaction_time_cpu_micros": 34468, "output_level": 6, "num_output_files": 1, "total_output_size": 9938348, "num_input_records": 9415, "num_output_records": 8891, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906137491, "job": 112, "event": "table_file_deletion", "file_number": 181}
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906139860, "job": 112, "event": "table_file_deletion", "file_number": 179}
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.041974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:38:26 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:38:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:38:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.71 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 278 writes, 624 keys, 278 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 278 writes, 128 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:38:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:27 compute-0 ceph-mon[74477]: pgmap v3619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:27 compute-0 nova_compute[260603]: 2025-10-02 09:38:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:38:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:38:28
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'backups', 'vms', 'volumes', 'default.rgw.log']
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:38:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:29 compute-0 nova_compute[260603]: 2025-10-02 09:38:29.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:29 compute-0 nova_compute[260603]: 2025-10-02 09:38:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:38:29 compute-0 nova_compute[260603]: 2025-10-02 09:38:29.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:38:29 compute-0 nova_compute[260603]: 2025-10-02 09:38:29.554 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:38:29 compute-0 nova_compute[260603]: 2025-10-02 09:38:29.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:30 compute-0 ceph-mon[74477]: pgmap v3620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:30 compute-0 nova_compute[260603]: 2025-10-02 09:38:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:31 compute-0 nova_compute[260603]: 2025-10-02 09:38:31.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:31 compute-0 ceph-mon[74477]: pgmap v3621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:38:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 145K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 338 writes, 741 keys, 338 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 338 writes, 158 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:38:31 compute-0 nova_compute[260603]: 2025-10-02 09:38:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:31 compute-0 nova_compute[260603]: 2025-10-02 09:38:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:31 compute-0 nova_compute[260603]: 2025-10-02 09:38:31.712 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:38:31 compute-0 nova_compute[260603]: 2025-10-02 09:38:31.713 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:38:31 compute-0 nova_compute[260603]: 2025-10-02 09:38:31.713 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:38:31 compute-0 nova_compute[260603]: 2025-10-02 09:38:31.713 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:38:31 compute-0 nova_compute[260603]: 2025-10-02 09:38:31.714 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:38:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:38:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768344955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:38:32 compute-0 nova_compute[260603]: 2025-10-02 09:38:32.149 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:38:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2768344955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:38:32 compute-0 nova_compute[260603]: 2025-10-02 09:38:32.344 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:38:32 compute-0 nova_compute[260603]: 2025-10-02 09:38:32.345 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:38:32 compute-0 nova_compute[260603]: 2025-10-02 09:38:32.345 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:38:32 compute-0 nova_compute[260603]: 2025-10-02 09:38:32.346 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:38:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 09:38:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:32 compute-0 nova_compute[260603]: 2025-10-02 09:38:32.925 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:38:32 compute-0 nova_compute[260603]: 2025-10-02 09:38:32.926 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:38:33 compute-0 nova_compute[260603]: 2025-10-02 09:38:33.010 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:38:33 compute-0 podman[456328]: 2025-10-02 09:38:33.012058427 +0000 UTC m=+0.073018502 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct 02 09:38:33 compute-0 podman[456329]: 2025-10-02 09:38:33.039803923 +0000 UTC m=+0.084978755 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:38:33 compute-0 ceph-mon[74477]: pgmap v3622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:38:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913189035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:38:33 compute-0 nova_compute[260603]: 2025-10-02 09:38:33.443 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:38:33 compute-0 nova_compute[260603]: 2025-10-02 09:38:33.447 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:38:33 compute-0 nova_compute[260603]: 2025-10-02 09:38:33.592 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:38:33 compute-0 nova_compute[260603]: 2025-10-02 09:38:33.594 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:38:33 compute-0 nova_compute[260603]: 2025-10-02 09:38:33.594 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:38:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1913189035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:38:34 compute-0 nova_compute[260603]: 2025-10-02 09:38:34.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:38:34.878 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:38:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:38:34.879 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:38:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:38:34.879 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:38:35 compute-0 ceph-mon[74477]: pgmap v3623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:36 compute-0 nova_compute[260603]: 2025-10-02 09:38:36.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:37 compute-0 nova_compute[260603]: 2025-10-02 09:38:37.590 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:37 compute-0 ceph-mon[74477]: pgmap v3624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:38 compute-0 podman[456396]: 2025-10-02 09:38:38.009661485 +0000 UTC m=+0.069736749 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 09:38:38 compute-0 podman[456395]: 2025-10-02 09:38:38.011328637 +0000 UTC m=+0.081331491 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 02 09:38:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:39 compute-0 nova_compute[260603]: 2025-10-02 09:38:39.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:38:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:38:39 compute-0 nova_compute[260603]: 2025-10-02 09:38:39.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:39 compute-0 ceph-mon[74477]: pgmap v3625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:41 compute-0 nova_compute[260603]: 2025-10-02 09:38:41.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:41 compute-0 ceph-mon[74477]: pgmap v3626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:43 compute-0 sudo[456436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:43 compute-0 sudo[456436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:43 compute-0 sudo[456436]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:43 compute-0 sudo[456461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:38:43 compute-0 sudo[456461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:43 compute-0 sudo[456461]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:43 compute-0 sudo[456486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:43 compute-0 sudo[456486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:43 compute-0 sudo[456486]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:43 compute-0 sudo[456511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:38:43 compute-0 sudo[456511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:43 compute-0 ceph-mon[74477]: pgmap v3627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:44 compute-0 sudo[456511]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:38:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:38:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:38:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:38:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev d428e1e7-b3b7-4b32-aa7f-d1ae8877e488 does not exist
Oct 02 09:38:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ed41cc08-eeda-461c-8745-178dcf9fc3de does not exist
Oct 02 09:38:44 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 31bfa002-d9ad-4d4e-b734-5f83fa37c39c does not exist
Oct 02 09:38:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:38:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:38:44 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:38:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:38:44 compute-0 sudo[456566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:44 compute-0 sudo[456566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:44 compute-0 sudo[456566]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:44 compute-0 sudo[456591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:38:44 compute-0 sudo[456591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:44 compute-0 sudo[456591]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:44 compute-0 sudo[456616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:44 compute-0 sudo[456616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:44 compute-0 sudo[456616]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:44 compute-0 sudo[456641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:38:44 compute-0 sudo[456641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:44 compute-0 nova_compute[260603]: 2025-10-02 09:38:44.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:44 compute-0 podman[456708]: 2025-10-02 09:38:44.805099677 +0000 UTC m=+0.035988015 container create 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:38:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:44 compute-0 systemd[1]: Started libpod-conmon-3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4.scope.
Oct 02 09:38:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:38:44 compute-0 podman[456708]: 2025-10-02 09:38:44.879667485 +0000 UTC m=+0.110555853 container init 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:38:44 compute-0 podman[456708]: 2025-10-02 09:38:44.789630264 +0000 UTC m=+0.020518622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:38:44 compute-0 podman[456708]: 2025-10-02 09:38:44.888527503 +0000 UTC m=+0.119415851 container start 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:38:44 compute-0 podman[456708]: 2025-10-02 09:38:44.891360311 +0000 UTC m=+0.122248649 container attach 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:38:44 compute-0 silly_galois[456725]: 167 167
Oct 02 09:38:44 compute-0 systemd[1]: libpod-3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4.scope: Deactivated successfully.
Oct 02 09:38:44 compute-0 podman[456708]: 2025-10-02 09:38:44.894469638 +0000 UTC m=+0.125357976 container died 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1edcec2658f371448fcbfadf4b8a67a58585492f11dcb7dfcad02beddc837dbe-merged.mount: Deactivated successfully.
Oct 02 09:38:44 compute-0 podman[456708]: 2025-10-02 09:38:44.933349773 +0000 UTC m=+0.164238131 container remove 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:38:44 compute-0 systemd[1]: libpod-conmon-3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4.scope: Deactivated successfully.
Oct 02 09:38:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:38:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:38:44 compute-0 ceph-mon[74477]: pgmap v3628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:45 compute-0 podman[456748]: 2025-10-02 09:38:45.097481359 +0000 UTC m=+0.036519361 container create 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:38:45 compute-0 systemd[1]: Started libpod-conmon-573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e.scope.
Oct 02 09:38:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:45 compute-0 podman[456748]: 2025-10-02 09:38:45.160204258 +0000 UTC m=+0.099242280 container init 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 09:38:45 compute-0 podman[456748]: 2025-10-02 09:38:45.167026712 +0000 UTC m=+0.106064714 container start 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 09:38:45 compute-0 podman[456748]: 2025-10-02 09:38:45.171030966 +0000 UTC m=+0.110068968 container attach 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 09:38:45 compute-0 podman[456748]: 2025-10-02 09:38:45.081734677 +0000 UTC m=+0.020772699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:38:46 compute-0 nova_compute[260603]: 2025-10-02 09:38:46.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:46 compute-0 charming_ride[456765]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:38:46 compute-0 charming_ride[456765]: --> relative data size: 1.0
Oct 02 09:38:46 compute-0 charming_ride[456765]: --> All data devices are unavailable
Oct 02 09:38:46 compute-0 systemd[1]: libpod-573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e.scope: Deactivated successfully.
Oct 02 09:38:46 compute-0 podman[456794]: 2025-10-02 09:38:46.254980453 +0000 UTC m=+0.046002537 container died 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a-merged.mount: Deactivated successfully.
Oct 02 09:38:46 compute-0 podman[456794]: 2025-10-02 09:38:46.316938459 +0000 UTC m=+0.107960563 container remove 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 09:38:46 compute-0 systemd[1]: libpod-conmon-573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e.scope: Deactivated successfully.
Oct 02 09:38:46 compute-0 sudo[456641]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:46 compute-0 sudo[456809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:46 compute-0 sudo[456809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:46 compute-0 sudo[456809]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:46 compute-0 sudo[456834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:38:46 compute-0 sudo[456834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:46 compute-0 sudo[456834]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:46 compute-0 sudo[456859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:46 compute-0 sudo[456859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:46 compute-0 sudo[456859]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:46 compute-0 sudo[456884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:38:46 compute-0 sudo[456884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:47 compute-0 podman[456949]: 2025-10-02 09:38:47.057781398 +0000 UTC m=+0.026722405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:38:47 compute-0 podman[456949]: 2025-10-02 09:38:47.160062083 +0000 UTC m=+0.129003070 container create d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:38:47 compute-0 systemd[1]: Started libpod-conmon-d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362.scope.
Oct 02 09:38:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:38:47 compute-0 podman[456949]: 2025-10-02 09:38:47.314734094 +0000 UTC m=+0.283675101 container init d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:38:47 compute-0 podman[456949]: 2025-10-02 09:38:47.320930177 +0000 UTC m=+0.289871154 container start d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:38:47 compute-0 flamboyant_antonelli[456966]: 167 167
Oct 02 09:38:47 compute-0 systemd[1]: libpod-d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362.scope: Deactivated successfully.
Oct 02 09:38:47 compute-0 podman[456949]: 2025-10-02 09:38:47.379240749 +0000 UTC m=+0.348181746 container attach d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:38:47 compute-0 podman[456949]: 2025-10-02 09:38:47.379909859 +0000 UTC m=+0.348850856 container died d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 09:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2dc370b4ecbf2587b83985f4387666a76efff99d332f92019c5ce1b4d3752ad-merged.mount: Deactivated successfully.
Oct 02 09:38:47 compute-0 podman[456949]: 2025-10-02 09:38:47.649257042 +0000 UTC m=+0.618198029 container remove d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 09:38:47 compute-0 systemd[1]: libpod-conmon-d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362.scope: Deactivated successfully.
Oct 02 09:38:47 compute-0 podman[456991]: 2025-10-02 09:38:47.827382366 +0000 UTC m=+0.029847592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:38:47 compute-0 podman[456991]: 2025-10-02 09:38:47.960952049 +0000 UTC m=+0.163417225 container create 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 09:38:48 compute-0 ceph-mon[74477]: pgmap v3629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:48 compute-0 systemd[1]: Started libpod-conmon-88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84.scope.
Oct 02 09:38:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:48 compute-0 podman[456991]: 2025-10-02 09:38:48.376364073 +0000 UTC m=+0.578829289 container init 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:38:48 compute-0 podman[456991]: 2025-10-02 09:38:48.388373759 +0000 UTC m=+0.590838955 container start 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:38:48 compute-0 podman[456991]: 2025-10-02 09:38:48.495732152 +0000 UTC m=+0.698197328 container attach 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:38:48 compute-0 nova_compute[260603]: 2025-10-02 09:38:48.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:49 compute-0 lucid_black[457008]: {
Oct 02 09:38:49 compute-0 lucid_black[457008]:     "0": [
Oct 02 09:38:49 compute-0 lucid_black[457008]:         {
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "devices": [
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "/dev/loop3"
Oct 02 09:38:49 compute-0 lucid_black[457008]:             ],
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_name": "ceph_lv0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_size": "21470642176",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "name": "ceph_lv0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "tags": {
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cluster_name": "ceph",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.crush_device_class": "",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.encrypted": "0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osd_id": "0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.type": "block",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.vdo": "0"
Oct 02 09:38:49 compute-0 lucid_black[457008]:             },
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "type": "block",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "vg_name": "ceph_vg0"
Oct 02 09:38:49 compute-0 lucid_black[457008]:         }
Oct 02 09:38:49 compute-0 lucid_black[457008]:     ],
Oct 02 09:38:49 compute-0 lucid_black[457008]:     "1": [
Oct 02 09:38:49 compute-0 lucid_black[457008]:         {
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "devices": [
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "/dev/loop4"
Oct 02 09:38:49 compute-0 lucid_black[457008]:             ],
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_name": "ceph_lv1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_size": "21470642176",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "name": "ceph_lv1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "tags": {
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cluster_name": "ceph",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.crush_device_class": "",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.encrypted": "0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osd_id": "1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.type": "block",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.vdo": "0"
Oct 02 09:38:49 compute-0 lucid_black[457008]:             },
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "type": "block",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "vg_name": "ceph_vg1"
Oct 02 09:38:49 compute-0 lucid_black[457008]:         }
Oct 02 09:38:49 compute-0 lucid_black[457008]:     ],
Oct 02 09:38:49 compute-0 lucid_black[457008]:     "2": [
Oct 02 09:38:49 compute-0 lucid_black[457008]:         {
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "devices": [
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "/dev/loop5"
Oct 02 09:38:49 compute-0 lucid_black[457008]:             ],
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_name": "ceph_lv2",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_size": "21470642176",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "name": "ceph_lv2",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "tags": {
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.cluster_name": "ceph",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.crush_device_class": "",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.encrypted": "0",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osd_id": "2",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.type": "block",
Oct 02 09:38:49 compute-0 lucid_black[457008]:                 "ceph.vdo": "0"
Oct 02 09:38:49 compute-0 lucid_black[457008]:             },
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "type": "block",
Oct 02 09:38:49 compute-0 lucid_black[457008]:             "vg_name": "ceph_vg2"
Oct 02 09:38:49 compute-0 lucid_black[457008]:         }
Oct 02 09:38:49 compute-0 lucid_black[457008]:     ]
Oct 02 09:38:49 compute-0 lucid_black[457008]: }
Oct 02 09:38:49 compute-0 systemd[1]: libpod-88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84.scope: Deactivated successfully.
Oct 02 09:38:49 compute-0 podman[456991]: 2025-10-02 09:38:49.177094494 +0000 UTC m=+1.379559700 container died 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:38:49 compute-0 ceph-mon[74477]: pgmap v3630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19-merged.mount: Deactivated successfully.
Oct 02 09:38:49 compute-0 podman[456991]: 2025-10-02 09:38:49.376825632 +0000 UTC m=+1.579290808 container remove 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:38:49 compute-0 systemd[1]: libpod-conmon-88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84.scope: Deactivated successfully.
Oct 02 09:38:49 compute-0 sudo[456884]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:49 compute-0 sudo[457030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:49 compute-0 sudo[457030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:49 compute-0 sudo[457030]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:49 compute-0 sudo[457055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:38:49 compute-0 sudo[457055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:49 compute-0 sudo[457055]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:49 compute-0 sudo[457080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:49 compute-0 sudo[457080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:49 compute-0 sudo[457080]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:49 compute-0 sudo[457105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:38:49 compute-0 sudo[457105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:49 compute-0 nova_compute[260603]: 2025-10-02 09:38:49.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:50 compute-0 podman[457171]: 2025-10-02 09:38:50.010822275 +0000 UTC m=+0.045488772 container create ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:38:50 compute-0 systemd[1]: Started libpod-conmon-ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455.scope.
Oct 02 09:38:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:38:50 compute-0 podman[457171]: 2025-10-02 09:38:49.995412964 +0000 UTC m=+0.030079491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:38:50 compute-0 podman[457171]: 2025-10-02 09:38:50.096625224 +0000 UTC m=+0.131291751 container init ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:38:50 compute-0 podman[457171]: 2025-10-02 09:38:50.104933355 +0000 UTC m=+0.139599862 container start ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 09:38:50 compute-0 podman[457171]: 2025-10-02 09:38:50.109000532 +0000 UTC m=+0.143667059 container attach ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:38:50 compute-0 trusting_mendeleev[457187]: 167 167
Oct 02 09:38:50 compute-0 systemd[1]: libpod-ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455.scope: Deactivated successfully.
Oct 02 09:38:50 compute-0 podman[457171]: 2025-10-02 09:38:50.111895712 +0000 UTC m=+0.146562219 container died ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc32a0800df2293f42aa62cf28b3900145c6bb7a2702bdfaedf39c73b215e442-merged.mount: Deactivated successfully.
Oct 02 09:38:50 compute-0 podman[457171]: 2025-10-02 09:38:50.144927134 +0000 UTC m=+0.179593641 container remove ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:38:50 compute-0 systemd[1]: libpod-conmon-ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455.scope: Deactivated successfully.
Oct 02 09:38:50 compute-0 podman[457211]: 2025-10-02 09:38:50.315860402 +0000 UTC m=+0.050300952 container create 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 09:38:50 compute-0 systemd[1]: Started libpod-conmon-990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a.scope.
Oct 02 09:38:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:38:50 compute-0 podman[457211]: 2025-10-02 09:38:50.297936223 +0000 UTC m=+0.032376823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:38:50 compute-0 podman[457211]: 2025-10-02 09:38:50.408967841 +0000 UTC m=+0.143408421 container init 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:38:50 compute-0 podman[457211]: 2025-10-02 09:38:50.423587868 +0000 UTC m=+0.158028418 container start 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:38:50 compute-0 podman[457211]: 2025-10-02 09:38:50.426416946 +0000 UTC m=+0.160857606 container attach 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:38:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:51 compute-0 nova_compute[260603]: 2025-10-02 09:38:51.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:51 compute-0 blissful_germain[457228]: {
Oct 02 09:38:51 compute-0 blissful_germain[457228]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "osd_id": 2,
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "type": "bluestore"
Oct 02 09:38:51 compute-0 blissful_germain[457228]:     },
Oct 02 09:38:51 compute-0 blissful_germain[457228]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "osd_id": 1,
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "type": "bluestore"
Oct 02 09:38:51 compute-0 blissful_germain[457228]:     },
Oct 02 09:38:51 compute-0 blissful_germain[457228]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "osd_id": 0,
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:38:51 compute-0 blissful_germain[457228]:         "type": "bluestore"
Oct 02 09:38:51 compute-0 blissful_germain[457228]:     }
Oct 02 09:38:51 compute-0 blissful_germain[457228]: }
Oct 02 09:38:51 compute-0 systemd[1]: libpod-990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a.scope: Deactivated successfully.
Oct 02 09:38:51 compute-0 podman[457211]: 2025-10-02 09:38:51.40206969 +0000 UTC m=+1.136510280 container died 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd-merged.mount: Deactivated successfully.
Oct 02 09:38:51 compute-0 podman[457211]: 2025-10-02 09:38:51.57302169 +0000 UTC m=+1.307462240 container remove 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:38:51 compute-0 systemd[1]: libpod-conmon-990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a.scope: Deactivated successfully.
Oct 02 09:38:51 compute-0 sudo[457105]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:38:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:38:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:38:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:38:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a2b8a4bc-0058-44a4-9483-b6976efb2b6f does not exist
Oct 02 09:38:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0edac61b-7317-4a3b-8463-2ee476d47451 does not exist
Oct 02 09:38:51 compute-0 sudo[457273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:38:51 compute-0 sudo[457273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:51 compute-0 sudo[457273]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:51 compute-0 sudo[457298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:38:51 compute-0 sudo[457298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:38:51 compute-0 sudo[457298]: pam_unix(sudo:session): session closed for user root
Oct 02 09:38:51 compute-0 ceph-mon[74477]: pgmap v3631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:38:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:38:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:54 compute-0 ceph-mon[74477]: pgmap v3632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:54 compute-0 nova_compute[260603]: 2025-10-02 09:38:54.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:55 compute-0 ceph-mon[74477]: pgmap v3633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:38:56 compute-0 nova_compute[260603]: 2025-10-02 09:38:56.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:57 compute-0 nova_compute[260603]: 2025-10-02 09:38:57.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:38:57 compute-0 nova_compute[260603]: 2025-10-02 09:38:57.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:38:57 compute-0 ceph-mon[74477]: pgmap v3634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:38:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:38:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:38:59 compute-0 nova_compute[260603]: 2025-10-02 09:38:59.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:38:59 compute-0 ceph-mon[74477]: pgmap v3635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:01 compute-0 nova_compute[260603]: 2025-10-02 09:39:01.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:02 compute-0 ceph-mon[74477]: pgmap v3636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:03 compute-0 ceph-mon[74477]: pgmap v3637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:04 compute-0 podman[457324]: 2025-10-02 09:39:04.02585946 +0000 UTC m=+0.079826185 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:39:04 compute-0 podman[457323]: 2025-10-02 09:39:04.114337604 +0000 UTC m=+0.168017729 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:39:04 compute-0 nova_compute[260603]: 2025-10-02 09:39:04.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:05 compute-0 ceph-mon[74477]: pgmap v3638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:06 compute-0 nova_compute[260603]: 2025-10-02 09:39:06.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:07 compute-0 ceph-mon[74477]: pgmap v3639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:08 compute-0 podman[457367]: 2025-10-02 09:39:08.997102754 +0000 UTC m=+0.057400434 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:39:09 compute-0 podman[457366]: 2025-10-02 09:39:09.005555679 +0000 UTC m=+0.068852332 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 09:39:09 compute-0 ceph-mon[74477]: pgmap v3640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:09 compute-0 nova_compute[260603]: 2025-10-02 09:39:09.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:11 compute-0 nova_compute[260603]: 2025-10-02 09:39:11.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:11 compute-0 ceph-mon[74477]: pgmap v3641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:13 compute-0 ceph-mon[74477]: pgmap v3642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:14 compute-0 nova_compute[260603]: 2025-10-02 09:39:14.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:15 compute-0 ceph-mon[74477]: pgmap v3643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:16 compute-0 nova_compute[260603]: 2025-10-02 09:39:16.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:17 compute-0 ceph-mon[74477]: pgmap v3644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 02 09:39:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 02 09:39:18 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 02 09:39:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 3 op/s
Oct 02 09:39:19 compute-0 ceph-mon[74477]: osdmap e301: 3 total, 3 up, 3 in
Oct 02 09:39:19 compute-0 ceph-mon[74477]: pgmap v3646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 3 op/s
Oct 02 09:39:19 compute-0 nova_compute[260603]: 2025-10-02 09:39:19.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 02 09:39:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 02 09:39:20 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 02 09:39:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 KiB/s rd, 3 op/s
Oct 02 09:39:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:21 compute-0 nova_compute[260603]: 2025-10-02 09:39:21.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:21 compute-0 ceph-mon[74477]: osdmap e302: 3 total, 3 up, 3 in
Oct 02 09:39:21 compute-0 ceph-mon[74477]: pgmap v3648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 KiB/s rd, 3 op/s
Oct 02 09:39:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:39:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1108771135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:39:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:39:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1108771135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:39:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1108771135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:39:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1108771135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:39:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 59 op/s
Oct 02 09:39:23 compute-0 ceph-mon[74477]: pgmap v3649: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 59 op/s
Oct 02 09:39:24 compute-0 nova_compute[260603]: 2025-10-02 09:39:24.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 02 09:39:25 compute-0 ceph-mon[74477]: pgmap v3650: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 02 09:39:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 02 09:39:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 02 09:39:26 compute-0 nova_compute[260603]: 2025-10-02 09:39:26.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:26 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 02 09:39:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.5 KiB/s wr, 59 op/s
Oct 02 09:39:27 compute-0 ceph-mon[74477]: osdmap e303: 3 total, 3 up, 3 in
Oct 02 09:39:27 compute-0 ceph-mon[74477]: pgmap v3652: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.5 KiB/s wr, 59 op/s
Oct 02 09:39:27 compute-0 nova_compute[260603]: 2025-10-02 09:39:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:39:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:39:28
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:39:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 02 09:39:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 02 09:39:28 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:39:28 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 02 09:39:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 16 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.0 MiB/s wr, 73 op/s
Oct 02 09:39:29 compute-0 ceph-mon[74477]: osdmap e304: 3 total, 3 up, 3 in
Oct 02 09:39:29 compute-0 ceph-mon[74477]: pgmap v3654: 305 pgs: 305 active+clean; 16 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.0 MiB/s wr, 73 op/s
Oct 02 09:39:29 compute-0 nova_compute[260603]: 2025-10-02 09:39:29.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:30 compute-0 nova_compute[260603]: 2025-10-02 09:39:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:30 compute-0 nova_compute[260603]: 2025-10-02 09:39:30.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:39:30 compute-0 nova_compute[260603]: 2025-10-02 09:39:30.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:39:30 compute-0 nova_compute[260603]: 2025-10-02 09:39:30.563 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:39:30 compute-0 nova_compute[260603]: 2025-10-02 09:39:30.564 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 16 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 17 op/s
Oct 02 09:39:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:31 compute-0 nova_compute[260603]: 2025-10-02 09:39:31.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:31 compute-0 nova_compute[260603]: 2025-10-02 09:39:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:31 compute-0 nova_compute[260603]: 2025-10-02 09:39:31.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:39:31 compute-0 nova_compute[260603]: 2025-10-02 09:39:31.579 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:39:31 compute-0 nova_compute[260603]: 2025-10-02 09:39:31.579 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:39:31 compute-0 nova_compute[260603]: 2025-10-02 09:39:31.579 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:39:31 compute-0 nova_compute[260603]: 2025-10-02 09:39:31.579 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:39:31 compute-0 ceph-mon[74477]: pgmap v3655: 305 pgs: 305 active+clean; 16 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 17 op/s
Oct 02 09:39:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:39:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453467086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.037 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.218 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.219 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3567MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.220 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.220 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.372 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.372 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.392 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:39:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 21 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.6 MiB/s wr, 74 op/s
Oct 02 09:39:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:39:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622228759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.871 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:39:32 compute-0 nova_compute[260603]: 2025-10-02 09:39:32.877 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:39:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2453467086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:39:32 compute-0 ceph-mon[74477]: pgmap v3656: 305 pgs: 305 active+clean; 21 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.6 MiB/s wr, 74 op/s
Oct 02 09:39:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1622228759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:39:33 compute-0 nova_compute[260603]: 2025-10-02 09:39:33.051 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:39:33 compute-0 nova_compute[260603]: 2025-10-02 09:39:33.053 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:39:33 compute-0 nova_compute[260603]: 2025-10-02 09:39:33.053 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:39:34 compute-0 nova_compute[260603]: 2025-10-02 09:39:34.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.3 MiB/s wr, 95 op/s
Oct 02 09:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:39:34.880 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:39:34.880 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:39:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:39:34.880 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:39:34 compute-0 podman[457453]: 2025-10-02 09:39:34.987899025 +0000 UTC m=+0.055763363 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 09:39:35 compute-0 podman[457452]: 2025-10-02 09:39:35.013126183 +0000 UTC m=+0.085734999 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:39:35 compute-0 nova_compute[260603]: 2025-10-02 09:39:35.053 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:35 compute-0 ceph-mon[74477]: pgmap v3657: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.3 MiB/s wr, 95 op/s
Oct 02 09:39:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:36 compute-0 nova_compute[260603]: 2025-10-02 09:39:36.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:36 compute-0 nova_compute[260603]: 2025-10-02 09:39:36.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.0 MiB/s wr, 83 op/s
Oct 02 09:39:37 compute-0 nova_compute[260603]: 2025-10-02 09:39:37.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:37 compute-0 ceph-mon[74477]: pgmap v3658: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.0 MiB/s wr, 83 op/s
Oct 02 09:39:38 compute-0 nova_compute[260603]: 2025-10-02 09:39:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:38 compute-0 nova_compute[260603]: 2025-10-02 09:39:38.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:39:38 compute-0 nova_compute[260603]: 2025-10-02 09:39:38.650 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:39:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:39:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:39:39 compute-0 nova_compute[260603]: 2025-10-02 09:39:39.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:39 compute-0 ceph-mon[74477]: pgmap v3659: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Oct 02 09:39:39 compute-0 podman[457498]: 2025-10-02 09:39:39.998889652 +0000 UTC m=+0.053554914 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:39:40 compute-0 podman[457497]: 2025-10-02 09:39:40.035728992 +0000 UTC m=+0.086118950 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 09:39:40 compute-0 nova_compute[260603]: 2025-10-02 09:39:40.649 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 379 KiB/s wr, 59 op/s
Oct 02 09:39:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:41 compute-0 nova_compute[260603]: 2025-10-02 09:39:41.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:41 compute-0 ceph-mon[74477]: pgmap v3660: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 379 KiB/s wr, 59 op/s
Oct 02 09:39:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 379 KiB/s wr, 59 op/s
Oct 02 09:39:43 compute-0 ceph-mon[74477]: pgmap v3661: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 379 KiB/s wr, 59 op/s
Oct 02 09:39:44 compute-0 nova_compute[260603]: 2025-10-02 09:39:44.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 85 B/s wr, 19 op/s
Oct 02 09:39:45 compute-0 ceph-mon[74477]: pgmap v3662: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 85 B/s wr, 19 op/s
Oct 02 09:39:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:46 compute-0 nova_compute[260603]: 2025-10-02 09:39:46.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:47 compute-0 ceph-mon[74477]: pgmap v3663: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:39:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:49 compute-0 ceph-mon[74477]: pgmap v3664: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:49 compute-0 nova_compute[260603]: 2025-10-02 09:39:49.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:50 compute-0 nova_compute[260603]: 2025-10-02 09:39:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:51 compute-0 nova_compute[260603]: 2025-10-02 09:39:51.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:51 compute-0 sudo[457536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:51 compute-0 sudo[457536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:51 compute-0 sudo[457536]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:51 compute-0 ceph-mon[74477]: pgmap v3665: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:51 compute-0 sudo[457561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:39:51 compute-0 sudo[457561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:51 compute-0 sudo[457561]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:52 compute-0 sudo[457586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:52 compute-0 sudo[457586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:52 compute-0 sudo[457586]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:52 compute-0 sudo[457611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:39:52 compute-0 sudo[457611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:52 compute-0 unix_chkpwd[457638]: password check failed for user (root)
Oct 02 09:39:52 compute-0 sshd-session[457634]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.71.248.239  user=root
Oct 02 09:39:52 compute-0 sudo[457611]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:39:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:39:52 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:39:52 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:39:52 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev c9e6f6f9-60b0-48c2-94fc-33648c5fb986 does not exist
Oct 02 09:39:52 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev a7a3bac3-33ee-498f-9a20-2a8707f01bf8 does not exist
Oct 02 09:39:52 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6b172b12-f6ea-4706-8f6a-b7fb3f4807e4 does not exist
Oct 02 09:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:39:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:39:52 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:39:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:39:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:39:52 compute-0 sudo[457672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:52 compute-0 sudo[457672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:52 compute-0 sudo[457672]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:52 compute-0 sudo[457697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:39:52 compute-0 sudo[457697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:52 compute-0 sudo[457697]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:52 compute-0 sudo[457722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:52 compute-0 sudo[457722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:52 compute-0 sudo[457722]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:52 compute-0 sudo[457747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:39:52 compute-0 sudo[457747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:52 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:39:52 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:39:52 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:39:52 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:39:52 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:39:52 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:39:53 compute-0 podman[457812]: 2025-10-02 09:39:53.203811863 +0000 UTC m=+0.047356770 container create 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:39:53 compute-0 systemd[1]: Started libpod-conmon-66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f.scope.
Oct 02 09:39:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:39:53 compute-0 podman[457812]: 2025-10-02 09:39:53.178591875 +0000 UTC m=+0.022136832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:39:53 compute-0 podman[457812]: 2025-10-02 09:39:53.284572875 +0000 UTC m=+0.128117792 container init 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 02 09:39:53 compute-0 podman[457812]: 2025-10-02 09:39:53.291058688 +0000 UTC m=+0.134603595 container start 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:39:53 compute-0 podman[457812]: 2025-10-02 09:39:53.294805525 +0000 UTC m=+0.138350482 container attach 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:39:53 compute-0 quizzical_shtern[457829]: 167 167
Oct 02 09:39:53 compute-0 systemd[1]: libpod-66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f.scope: Deactivated successfully.
Oct 02 09:39:53 compute-0 podman[457812]: 2025-10-02 09:39:53.297690255 +0000 UTC m=+0.141235162 container died 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 09:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddbb11813b32c562f7f4f0976fb3bd93e7708ec3ea25463bcd32ce9b10e5c226-merged.mount: Deactivated successfully.
Oct 02 09:39:53 compute-0 podman[457812]: 2025-10-02 09:39:53.34202417 +0000 UTC m=+0.185569077 container remove 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 09:39:53 compute-0 systemd[1]: libpod-conmon-66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f.scope: Deactivated successfully.
Oct 02 09:39:53 compute-0 podman[457854]: 2025-10-02 09:39:53.529159225 +0000 UTC m=+0.040626570 container create edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 09:39:53 compute-0 systemd[1]: Started libpod-conmon-edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625.scope.
Oct 02 09:39:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:39:53 compute-0 podman[457854]: 2025-10-02 09:39:53.512278678 +0000 UTC m=+0.023746043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:53 compute-0 podman[457854]: 2025-10-02 09:39:53.624552635 +0000 UTC m=+0.136020000 container init edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 09:39:53 compute-0 sshd-session[457634]: Failed password for root from 167.71.248.239 port 39574 ssh2
Oct 02 09:39:53 compute-0 podman[457854]: 2025-10-02 09:39:53.633448703 +0000 UTC m=+0.144916058 container start edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 02 09:39:53 compute-0 podman[457854]: 2025-10-02 09:39:53.637027664 +0000 UTC m=+0.148495019 container attach edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:39:53 compute-0 ceph-mon[74477]: pgmap v3666: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:54 compute-0 sshd-session[457634]: Connection closed by authenticating user root 167.71.248.239 port 39574 [preauth]
Oct 02 09:39:54 compute-0 competent_margulis[457871]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:39:54 compute-0 competent_margulis[457871]: --> relative data size: 1.0
Oct 02 09:39:54 compute-0 competent_margulis[457871]: --> All data devices are unavailable
Oct 02 09:39:54 compute-0 systemd[1]: libpod-edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625.scope: Deactivated successfully.
Oct 02 09:39:54 compute-0 podman[457854]: 2025-10-02 09:39:54.653682328 +0000 UTC m=+1.165149693 container died edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc-merged.mount: Deactivated successfully.
Oct 02 09:39:54 compute-0 podman[457854]: 2025-10-02 09:39:54.725609045 +0000 UTC m=+1.237076400 container remove edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:39:54 compute-0 systemd[1]: libpod-conmon-edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625.scope: Deactivated successfully.
Oct 02 09:39:54 compute-0 sudo[457747]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:54 compute-0 sudo[457913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:54 compute-0 sudo[457913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:54 compute-0 nova_compute[260603]: 2025-10-02 09:39:54.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:54 compute-0 sudo[457913]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:54 compute-0 sudo[457938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:39:54 compute-0 sudo[457938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:54 compute-0 sudo[457938]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:54 compute-0 sudo[457963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:54 compute-0 sudo[457963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:54 compute-0 sudo[457963]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:55 compute-0 sudo[457988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:39:55 compute-0 sudo[457988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:55 compute-0 podman[458054]: 2025-10-02 09:39:55.369073164 +0000 UTC m=+0.077029457 container create cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:39:55 compute-0 podman[458054]: 2025-10-02 09:39:55.31484199 +0000 UTC m=+0.022798343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:39:55 compute-0 systemd[1]: Started libpod-conmon-cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1.scope.
Oct 02 09:39:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:39:55 compute-0 podman[458054]: 2025-10-02 09:39:55.507145776 +0000 UTC m=+0.215102089 container init cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:39:55 compute-0 podman[458054]: 2025-10-02 09:39:55.515379803 +0000 UTC m=+0.223336096 container start cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:39:55 compute-0 podman[458054]: 2025-10-02 09:39:55.51848351 +0000 UTC m=+0.226439803 container attach cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:39:55 compute-0 admiring_tharp[458070]: 167 167
Oct 02 09:39:55 compute-0 systemd[1]: libpod-cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1.scope: Deactivated successfully.
Oct 02 09:39:55 compute-0 conmon[458070]: conmon cd50fa1e0ca5cab949d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1.scope/container/memory.events
Oct 02 09:39:55 compute-0 podman[458054]: 2025-10-02 09:39:55.521925418 +0000 UTC m=+0.229881751 container died cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 02 09:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6749435fea4f7bf247fc3193571c3ec8ae43b033aad5700cc09e6a5fe94ff87-merged.mount: Deactivated successfully.
Oct 02 09:39:55 compute-0 podman[458054]: 2025-10-02 09:39:55.573876091 +0000 UTC m=+0.281832394 container remove cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 09:39:55 compute-0 systemd[1]: libpod-conmon-cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1.scope: Deactivated successfully.
Oct 02 09:39:55 compute-0 podman[458094]: 2025-10-02 09:39:55.743228571 +0000 UTC m=+0.046735322 container create 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:39:55 compute-0 systemd[1]: Started libpod-conmon-7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb.scope.
Oct 02 09:39:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:55 compute-0 podman[458094]: 2025-10-02 09:39:55.817604333 +0000 UTC m=+0.121111184 container init 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 09:39:55 compute-0 podman[458094]: 2025-10-02 09:39:55.722317937 +0000 UTC m=+0.025824718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:39:55 compute-0 podman[458094]: 2025-10-02 09:39:55.829163885 +0000 UTC m=+0.132670636 container start 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:39:55 compute-0 podman[458094]: 2025-10-02 09:39:55.833959784 +0000 UTC m=+0.137466595 container attach 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:39:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:39:56 compute-0 ceph-mon[74477]: pgmap v3667: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:56 compute-0 nova_compute[260603]: 2025-10-02 09:39:56.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:56 compute-0 angry_noether[458110]: {
Oct 02 09:39:56 compute-0 angry_noether[458110]:     "0": [
Oct 02 09:39:56 compute-0 angry_noether[458110]:         {
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "devices": [
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "/dev/loop3"
Oct 02 09:39:56 compute-0 angry_noether[458110]:             ],
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_name": "ceph_lv0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_size": "21470642176",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "name": "ceph_lv0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "tags": {
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cluster_name": "ceph",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.crush_device_class": "",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.encrypted": "0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osd_id": "0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.type": "block",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.vdo": "0"
Oct 02 09:39:56 compute-0 angry_noether[458110]:             },
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "type": "block",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "vg_name": "ceph_vg0"
Oct 02 09:39:56 compute-0 angry_noether[458110]:         }
Oct 02 09:39:56 compute-0 angry_noether[458110]:     ],
Oct 02 09:39:56 compute-0 angry_noether[458110]:     "1": [
Oct 02 09:39:56 compute-0 angry_noether[458110]:         {
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "devices": [
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "/dev/loop4"
Oct 02 09:39:56 compute-0 angry_noether[458110]:             ],
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_name": "ceph_lv1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_size": "21470642176",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "name": "ceph_lv1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "tags": {
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cluster_name": "ceph",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.crush_device_class": "",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.encrypted": "0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osd_id": "1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.type": "block",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.vdo": "0"
Oct 02 09:39:56 compute-0 angry_noether[458110]:             },
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "type": "block",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "vg_name": "ceph_vg1"
Oct 02 09:39:56 compute-0 angry_noether[458110]:         }
Oct 02 09:39:56 compute-0 angry_noether[458110]:     ],
Oct 02 09:39:56 compute-0 angry_noether[458110]:     "2": [
Oct 02 09:39:56 compute-0 angry_noether[458110]:         {
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "devices": [
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "/dev/loop5"
Oct 02 09:39:56 compute-0 angry_noether[458110]:             ],
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_name": "ceph_lv2",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_size": "21470642176",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "name": "ceph_lv2",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "tags": {
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.cluster_name": "ceph",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.crush_device_class": "",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.encrypted": "0",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osd_id": "2",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.type": "block",
Oct 02 09:39:56 compute-0 angry_noether[458110]:                 "ceph.vdo": "0"
Oct 02 09:39:56 compute-0 angry_noether[458110]:             },
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "type": "block",
Oct 02 09:39:56 compute-0 angry_noether[458110]:             "vg_name": "ceph_vg2"
Oct 02 09:39:56 compute-0 angry_noether[458110]:         }
Oct 02 09:39:56 compute-0 angry_noether[458110]:     ]
Oct 02 09:39:56 compute-0 angry_noether[458110]: }
Oct 02 09:39:56 compute-0 systemd[1]: libpod-7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb.scope: Deactivated successfully.
Oct 02 09:39:56 compute-0 podman[458094]: 2025-10-02 09:39:56.602449227 +0000 UTC m=+0.905955978 container died 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:39:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0-merged.mount: Deactivated successfully.
Oct 02 09:39:56 compute-0 podman[458094]: 2025-10-02 09:39:56.68608207 +0000 UTC m=+0.989588821 container remove 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:39:56 compute-0 systemd[1]: libpod-conmon-7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb.scope: Deactivated successfully.
Oct 02 09:39:56 compute-0 sudo[457988]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:56 compute-0 sudo[458131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:56 compute-0 sudo[458131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:56 compute-0 sudo[458131]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:56 compute-0 sudo[458156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:39:56 compute-0 sudo[458156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:56 compute-0 sudo[458156]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:56 compute-0 sudo[458181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:56 compute-0 sudo[458181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:56 compute-0 sudo[458181]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:57 compute-0 sudo[458206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:39:57 compute-0 sudo[458206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:57 compute-0 ceph-mon[74477]: pgmap v3668: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:57 compute-0 podman[458271]: 2025-10-02 09:39:57.424082201 +0000 UTC m=+0.049047203 container create 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 09:39:57 compute-0 systemd[1]: Started libpod-conmon-69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980.scope.
Oct 02 09:39:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:39:57 compute-0 podman[458271]: 2025-10-02 09:39:57.489098312 +0000 UTC m=+0.114063334 container init 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:39:57 compute-0 podman[458271]: 2025-10-02 09:39:57.496649458 +0000 UTC m=+0.121614440 container start 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:39:57 compute-0 podman[458271]: 2025-10-02 09:39:57.500225849 +0000 UTC m=+0.125190881 container attach 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:39:57 compute-0 ecstatic_swartz[458286]: 167 167
Oct 02 09:39:57 compute-0 systemd[1]: libpod-69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980.scope: Deactivated successfully.
Oct 02 09:39:57 compute-0 podman[458271]: 2025-10-02 09:39:57.407949097 +0000 UTC m=+0.032914089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:39:57 compute-0 podman[458271]: 2025-10-02 09:39:57.504136231 +0000 UTC m=+0.129101223 container died 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:39:57 compute-0 nova_compute[260603]: 2025-10-02 09:39:57.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:39:57 compute-0 nova_compute[260603]: 2025-10-02 09:39:57.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-59c60ba4590a1c04566513d243c11741d2f28dd0f08750bd792c98eb88d72026-merged.mount: Deactivated successfully.
Oct 02 09:39:57 compute-0 podman[458271]: 2025-10-02 09:39:57.805318219 +0000 UTC m=+0.430283211 container remove 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:39:57 compute-0 systemd[1]: libpod-conmon-69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980.scope: Deactivated successfully.
Oct 02 09:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:39:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:39:58 compute-0 podman[458311]: 2025-10-02 09:39:58.00705196 +0000 UTC m=+0.047694911 container create 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:39:58 compute-0 systemd[1]: Started libpod-conmon-4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d.scope.
Oct 02 09:39:58 compute-0 podman[458311]: 2025-10-02 09:39:57.989220873 +0000 UTC m=+0.029863854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:39:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:39:58 compute-0 podman[458311]: 2025-10-02 09:39:58.124944602 +0000 UTC m=+0.165587573 container init 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 09:39:58 compute-0 podman[458311]: 2025-10-02 09:39:58.137274448 +0000 UTC m=+0.177917389 container start 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:39:58 compute-0 podman[458311]: 2025-10-02 09:39:58.14118947 +0000 UTC m=+0.181832511 container attach 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 09:39:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]: {
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "osd_id": 2,
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "type": "bluestore"
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:     },
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "osd_id": 1,
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "type": "bluestore"
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:     },
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "osd_id": 0,
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:         "type": "bluestore"
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]:     }
Oct 02 09:39:59 compute-0 xenodochial_boyd[458327]: }
Oct 02 09:39:59 compute-0 systemd[1]: libpod-4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d.scope: Deactivated successfully.
Oct 02 09:39:59 compute-0 podman[458311]: 2025-10-02 09:39:59.081868271 +0000 UTC m=+1.122511212 container died 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85-merged.mount: Deactivated successfully.
Oct 02 09:39:59 compute-0 podman[458311]: 2025-10-02 09:39:59.133467113 +0000 UTC m=+1.174110054 container remove 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:39:59 compute-0 systemd[1]: libpod-conmon-4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d.scope: Deactivated successfully.
Oct 02 09:39:59 compute-0 sudo[458206]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:39:59 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:39:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:39:59 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:39:59 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 92759472-e73a-48dd-b1a6-5407d4aeaa46 does not exist
Oct 02 09:39:59 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 99a78ea7-e179-46bb-a30b-c3302844269d does not exist
Oct 02 09:39:59 compute-0 sudo[458375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:39:59 compute-0 sudo[458375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:59 compute-0 sudo[458375]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:59 compute-0 sudo[458400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:39:59 compute-0 sudo[458400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:39:59 compute-0 sudo[458400]: pam_unix(sudo:session): session closed for user root
Oct 02 09:39:59 compute-0 nova_compute[260603]: 2025-10-02 09:39:59.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:39:59 compute-0 ceph-mon[74477]: pgmap v3669: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 09:39:59 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:39:59 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:40:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:01 compute-0 nova_compute[260603]: 2025-10-02 09:40:01.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:01 compute-0 ceph-mon[74477]: pgmap v3670: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:03 compute-0 ceph-mon[74477]: pgmap v3671: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:04 compute-0 nova_compute[260603]: 2025-10-02 09:40:04.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:05 compute-0 ceph-mon[74477]: pgmap v3672: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:05 compute-0 nova_compute[260603]: 2025-10-02 09:40:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:06 compute-0 podman[458426]: 2025-10-02 09:40:06.025623198 +0000 UTC m=+0.080337851 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 09:40:06 compute-0 podman[458425]: 2025-10-02 09:40:06.046107107 +0000 UTC m=+0.106794877 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 09:40:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:06 compute-0 nova_compute[260603]: 2025-10-02 09:40:06.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:07 compute-0 ceph-mon[74477]: pgmap v3673: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:09 compute-0 nova_compute[260603]: 2025-10-02 09:40:09.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:09 compute-0 ceph-mon[74477]: pgmap v3674: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:10 compute-0 ceph-mon[74477]: pgmap v3675: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:10 compute-0 podman[458472]: 2025-10-02 09:40:10.99811313 +0000 UTC m=+0.066483027 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:40:11 compute-0 podman[458471]: 2025-10-02 09:40:11.000884837 +0000 UTC m=+0.071091021 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:40:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:11 compute-0 nova_compute[260603]: 2025-10-02 09:40:11.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:12 compute-0 nova_compute[260603]: 2025-10-02 09:40:12.553 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:12 compute-0 nova_compute[260603]: 2025-10-02 09:40:12.554 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:40:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:13 compute-0 ceph-mon[74477]: pgmap v3676: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:14 compute-0 nova_compute[260603]: 2025-10-02 09:40:14.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:15 compute-0 ceph-mon[74477]: pgmap v3677: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:16 compute-0 nova_compute[260603]: 2025-10-02 09:40:16.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:17 compute-0 ceph-mon[74477]: pgmap v3678: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:19 compute-0 ceph-mon[74477]: pgmap v3679: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:20 compute-0 nova_compute[260603]: 2025-10-02 09:40:20.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:21 compute-0 nova_compute[260603]: 2025-10-02 09:40:21.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:21 compute-0 ceph-mon[74477]: pgmap v3680: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:40:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564754252' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:40:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:40:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564754252' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:40:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2564754252' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:40:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2564754252' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:40:23 compute-0 ceph-mon[74477]: pgmap v3681: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:24 compute-0 ceph-mon[74477]: pgmap v3682: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:25 compute-0 nova_compute[260603]: 2025-10-02 09:40:25.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:26 compute-0 nova_compute[260603]: 2025-10-02 09:40:26.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 02 09:40:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 02 09:40:26 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 02 09:40:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:27 compute-0 ceph-mon[74477]: osdmap e305: 3 total, 3 up, 3 in
Oct 02 09:40:27 compute-0 ceph-mon[74477]: pgmap v3684: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:40:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:40:28
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:40:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 457 KiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 09:40:29 compute-0 nova_compute[260603]: 2025-10-02 09:40:29.549 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:29 compute-0 ceph-mon[74477]: pgmap v3685: 305 pgs: 305 active+clean; 457 KiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 09:40:30 compute-0 nova_compute[260603]: 2025-10-02 09:40:30.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:30 compute-0 nova_compute[260603]: 2025-10-02 09:40:30.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 457 KiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 09:40:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 02 09:40:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 02 09:40:31 compute-0 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 02 09:40:31 compute-0 nova_compute[260603]: 2025-10-02 09:40:31.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:31 compute-0 nova_compute[260603]: 2025-10-02 09:40:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:31 compute-0 nova_compute[260603]: 2025-10-02 09:40:31.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:40:31 compute-0 nova_compute[260603]: 2025-10-02 09:40:31.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:40:31 compute-0 nova_compute[260603]: 2025-10-02 09:40:31.870 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:40:31 compute-0 nova_compute[260603]: 2025-10-02 09:40:31.871 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:31 compute-0 ceph-mon[74477]: pgmap v3686: 305 pgs: 305 active+clean; 457 KiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 09:40:31 compute-0 ceph-mon[74477]: osdmap e306: 3 total, 3 up, 3 in
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.120 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.120 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:40:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:40:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3912608870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.578 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.745 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.746 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.747 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:40:32 compute-0 nova_compute[260603]: 2025-10-02 09:40:32.747 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:40:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 02 09:40:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3912608870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:40:33 compute-0 nova_compute[260603]: 2025-10-02 09:40:33.748 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:40:33 compute-0 nova_compute[260603]: 2025-10-02 09:40:33.748 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:40:33 compute-0 nova_compute[260603]: 2025-10-02 09:40:33.762 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:40:33 compute-0 nova_compute[260603]: 2025-10-02 09:40:33.778 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:40:33 compute-0 nova_compute[260603]: 2025-10-02 09:40:33.778 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:40:33 compute-0 nova_compute[260603]: 2025-10-02 09:40:33.792 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:40:33 compute-0 nova_compute[260603]: 2025-10-02 09:40:33.820 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:40:33 compute-0 nova_compute[260603]: 2025-10-02 09:40:33.837 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:40:33 compute-0 ceph-mon[74477]: pgmap v3688: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 02 09:40:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:40:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984032977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:40:34 compute-0 nova_compute[260603]: 2025-10-02 09:40:34.309 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:40:34 compute-0 nova_compute[260603]: 2025-10-02 09:40:34.315 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:40:34 compute-0 nova_compute[260603]: 2025-10-02 09:40:34.517 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:40:34 compute-0 nova_compute[260603]: 2025-10-02 09:40:34.519 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:40:34 compute-0 nova_compute[260603]: 2025-10-02 09:40:34.520 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:40:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 02 09:40:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:40:34.881 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:40:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:40:34.881 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:40:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:40:34.881 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:40:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/984032977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:40:35 compute-0 nova_compute[260603]: 2025-10-02 09:40:35.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:35 compute-0 ceph-mon[74477]: pgmap v3689: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 02 09:40:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:36 compute-0 nova_compute[260603]: 2025-10-02 09:40:36.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 09:40:36 compute-0 ceph-mon[74477]: pgmap v3690: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 09:40:37 compute-0 podman[458555]: 2025-10-02 09:40:37.039572458 +0000 UTC m=+0.097345752 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:40:37 compute-0 podman[458554]: 2025-10-02 09:40:37.039601229 +0000 UTC m=+0.102796273 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Oct 02 09:40:37 compute-0 nova_compute[260603]: 2025-10-02 09:40:37.170 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:37 compute-0 nova_compute[260603]: 2025-10-02 09:40:37.170 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:40:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:40:39 compute-0 ceph-mon[74477]: pgmap v3691: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:40 compute-0 nova_compute[260603]: 2025-10-02 09:40:40.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:41 compute-0 nova_compute[260603]: 2025-10-02 09:40:41.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:41 compute-0 nova_compute[260603]: 2025-10-02 09:40:41.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:41 compute-0 ceph-mon[74477]: pgmap v3692: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:41 compute-0 podman[458599]: 2025-10-02 09:40:41.998989523 +0000 UTC m=+0.072881058 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 09:40:42 compute-0 podman[458600]: 2025-10-02 09:40:42.012391351 +0000 UTC m=+0.071846285 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:40:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:43 compute-0 ceph-mon[74477]: pgmap v3693: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:45 compute-0 nova_compute[260603]: 2025-10-02 09:40:45.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:45 compute-0 ceph-mon[74477]: pgmap v3694: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:46 compute-0 nova_compute[260603]: 2025-10-02 09:40:46.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:46 compute-0 nova_compute[260603]: 2025-10-02 09:40:46.972 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:47 compute-0 ceph-mon[74477]: pgmap v3695: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:49 compute-0 ceph-mon[74477]: pgmap v3696: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:50 compute-0 nova_compute[260603]: 2025-10-02 09:40:50.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:50 compute-0 nova_compute[260603]: 2025-10-02 09:40:50.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:51 compute-0 ceph-mon[74477]: pgmap v3697: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:51 compute-0 nova_compute[260603]: 2025-10-02 09:40:51.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:53 compute-0 ceph-mon[74477]: pgmap v3698: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:55 compute-0 nova_compute[260603]: 2025-10-02 09:40:55.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:55 compute-0 ceph-mon[74477]: pgmap v3699: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:40:56 compute-0 nova_compute[260603]: 2025-10-02 09:40:56.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:40:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:57 compute-0 ceph-mon[74477]: pgmap v3700: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:40:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:40:58 compute-0 nova_compute[260603]: 2025-10-02 09:40:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:40:58 compute-0 nova_compute[260603]: 2025-10-02 09:40:58.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:40:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:40:59 compute-0 sudo[458637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:40:59 compute-0 sudo[458637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:40:59 compute-0 sudo[458637]: pam_unix(sudo:session): session closed for user root
Oct 02 09:40:59 compute-0 sudo[458662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:40:59 compute-0 sudo[458662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:40:59 compute-0 sudo[458662]: pam_unix(sudo:session): session closed for user root
Oct 02 09:40:59 compute-0 sudo[458687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:40:59 compute-0 sudo[458687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:40:59 compute-0 sudo[458687]: pam_unix(sudo:session): session closed for user root
Oct 02 09:40:59 compute-0 sudo[458712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:40:59 compute-0 sudo[458712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:40:59 compute-0 ceph-mon[74477]: pgmap v3701: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:00 compute-0 nova_compute[260603]: 2025-10-02 09:41:00.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:00 compute-0 sudo[458712]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:41:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:41:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:41:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:41:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:41:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:41:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6a4c2b3b-7293-4e4b-950c-8269007cb4f5 does not exist
Oct 02 09:41:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6c4a6cc6-f305-4579-9d7d-4c7d42820818 does not exist
Oct 02 09:41:00 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 700ba972-b5db-47f6-a4b1-47e9d8dbcc9e does not exist
Oct 02 09:41:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:41:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:41:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:41:00 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:41:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:41:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:41:00 compute-0 sudo[458769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:41:00 compute-0 sudo[458769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:00 compute-0 sudo[458769]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:00 compute-0 sudo[458794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:41:00 compute-0 sudo[458794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:00 compute-0 sudo[458794]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:00 compute-0 sudo[458819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:41:00 compute-0 sudo[458819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:00 compute-0 sudo[458819]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:00 compute-0 sudo[458844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:41:00 compute-0 sudo[458844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:00 compute-0 podman[458909]: 2025-10-02 09:41:00.80025178 +0000 UTC m=+0.036183651 container create b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 09:41:00 compute-0 systemd[1]: Started libpod-conmon-b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c.scope.
Oct 02 09:41:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:41:00 compute-0 podman[458909]: 2025-10-02 09:41:00.854934608 +0000 UTC m=+0.090866479 container init b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 02 09:41:00 compute-0 podman[458909]: 2025-10-02 09:41:00.864123855 +0000 UTC m=+0.100055726 container start b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:41:00 compute-0 hungry_kirch[458925]: 167 167
Oct 02 09:41:00 compute-0 systemd[1]: libpod-b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c.scope: Deactivated successfully.
Oct 02 09:41:00 compute-0 podman[458909]: 2025-10-02 09:41:00.870140523 +0000 UTC m=+0.106072394 container attach b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:41:00 compute-0 podman[458909]: 2025-10-02 09:41:00.870459883 +0000 UTC m=+0.106391744 container died b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 09:41:00 compute-0 podman[458909]: 2025-10-02 09:41:00.784533829 +0000 UTC m=+0.020465700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:41:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8709e6475ea883e631ffa976460ef56923c8c086b7a7a87c4ea1a6a44b9ef345-merged.mount: Deactivated successfully.
Oct 02 09:41:00 compute-0 podman[458909]: 2025-10-02 09:41:00.908961625 +0000 UTC m=+0.144893496 container remove b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:41:00 compute-0 systemd[1]: libpod-conmon-b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c.scope: Deactivated successfully.
Oct 02 09:41:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:41:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:41:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:41:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:41:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:41:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:41:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:01 compute-0 podman[458947]: 2025-10-02 09:41:01.104664458 +0000 UTC m=+0.058358624 container create 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:41:01 compute-0 systemd[1]: Started libpod-conmon-6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a.scope.
Oct 02 09:41:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:01 compute-0 podman[458947]: 2025-10-02 09:41:01.086413649 +0000 UTC m=+0.040107805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:41:01 compute-0 podman[458947]: 2025-10-02 09:41:01.194420951 +0000 UTC m=+0.148115087 container init 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:41:01 compute-0 podman[458947]: 2025-10-02 09:41:01.201509954 +0000 UTC m=+0.155204090 container start 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:41:01 compute-0 podman[458947]: 2025-10-02 09:41:01.204234358 +0000 UTC m=+0.157928544 container attach 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 09:41:01 compute-0 nova_compute[260603]: 2025-10-02 09:41:01.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:01 compute-0 ceph-mon[74477]: pgmap v3702: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:02 compute-0 wonderful_rosalind[458963]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:41:02 compute-0 wonderful_rosalind[458963]: --> relative data size: 1.0
Oct 02 09:41:02 compute-0 wonderful_rosalind[458963]: --> All data devices are unavailable
Oct 02 09:41:02 compute-0 systemd[1]: libpod-6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a.scope: Deactivated successfully.
Oct 02 09:41:02 compute-0 systemd[1]: libpod-6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a.scope: Consumed 1.096s CPU time.
Oct 02 09:41:02 compute-0 podman[458947]: 2025-10-02 09:41:02.342509282 +0000 UTC m=+1.296203418 container died 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 09:41:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99-merged.mount: Deactivated successfully.
Oct 02 09:41:02 compute-0 podman[458947]: 2025-10-02 09:41:02.396490998 +0000 UTC m=+1.350185144 container remove 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:41:02 compute-0 systemd[1]: libpod-conmon-6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a.scope: Deactivated successfully.
Oct 02 09:41:02 compute-0 sudo[458844]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:02 compute-0 sudo[459004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:41:02 compute-0 sudo[459004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:02 compute-0 sudo[459004]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:02 compute-0 sudo[459029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:41:02 compute-0 sudo[459029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:02 compute-0 sudo[459029]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:02 compute-0 sudo[459054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:41:02 compute-0 sudo[459054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:02 compute-0 sudo[459054]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:02 compute-0 sudo[459079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:41:02 compute-0 sudo[459079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:02 compute-0 podman[459143]: 2025-10-02 09:41:02.985532137 +0000 UTC m=+0.035169721 container create 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 09:41:03 compute-0 systemd[1]: Started libpod-conmon-8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2.scope.
Oct 02 09:41:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:41:03 compute-0 podman[459143]: 2025-10-02 09:41:03.045900262 +0000 UTC m=+0.095537866 container init 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 09:41:03 compute-0 podman[459143]: 2025-10-02 09:41:03.053022234 +0000 UTC m=+0.102659818 container start 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 09:41:03 compute-0 podman[459143]: 2025-10-02 09:41:03.056682749 +0000 UTC m=+0.106320353 container attach 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 09:41:03 compute-0 wizardly_hermann[459159]: 167 167
Oct 02 09:41:03 compute-0 systemd[1]: libpod-8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2.scope: Deactivated successfully.
Oct 02 09:41:03 compute-0 podman[459143]: 2025-10-02 09:41:03.05799392 +0000 UTC m=+0.107631514 container died 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:41:03 compute-0 podman[459143]: 2025-10-02 09:41:02.971021323 +0000 UTC m=+0.020658927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:41:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c79659304bf1242f4ec1813c1a94576864c918c0ce5f286ca94443790ea37f7-merged.mount: Deactivated successfully.
Oct 02 09:41:03 compute-0 podman[459143]: 2025-10-02 09:41:03.092057184 +0000 UTC m=+0.141694768 container remove 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:41:03 compute-0 systemd[1]: libpod-conmon-8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2.scope: Deactivated successfully.
Oct 02 09:41:03 compute-0 podman[459182]: 2025-10-02 09:41:03.255990664 +0000 UTC m=+0.056980701 container create 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 02 09:41:03 compute-0 podman[459182]: 2025-10-02 09:41:03.226239825 +0000 UTC m=+0.027229942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:41:03 compute-0 systemd[1]: Started libpod-conmon-74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48.scope.
Oct 02 09:41:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:03 compute-0 podman[459182]: 2025-10-02 09:41:03.370250203 +0000 UTC m=+0.171240260 container init 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 09:41:03 compute-0 podman[459182]: 2025-10-02 09:41:03.382046051 +0000 UTC m=+0.183036078 container start 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:41:03 compute-0 podman[459182]: 2025-10-02 09:41:03.385856681 +0000 UTC m=+0.186846728 container attach 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 09:41:03 compute-0 ceph-mon[74477]: pgmap v3703: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:04 compute-0 infallible_darwin[459198]: {
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:     "0": [
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:         {
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "devices": [
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "/dev/loop3"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             ],
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_name": "ceph_lv0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_size": "21470642176",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "name": "ceph_lv0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "tags": {
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cluster_name": "ceph",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.crush_device_class": "",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.encrypted": "0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osd_id": "0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.type": "block",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.vdo": "0"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             },
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "type": "block",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "vg_name": "ceph_vg0"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:         }
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:     ],
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:     "1": [
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:         {
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "devices": [
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "/dev/loop4"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             ],
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_name": "ceph_lv1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_size": "21470642176",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "name": "ceph_lv1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "tags": {
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cluster_name": "ceph",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.crush_device_class": "",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.encrypted": "0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osd_id": "1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.type": "block",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.vdo": "0"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             },
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "type": "block",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "vg_name": "ceph_vg1"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:         }
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:     ],
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:     "2": [
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:         {
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "devices": [
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "/dev/loop5"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             ],
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_name": "ceph_lv2",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_size": "21470642176",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "name": "ceph_lv2",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "tags": {
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.cluster_name": "ceph",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.crush_device_class": "",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.encrypted": "0",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osd_id": "2",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.type": "block",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:                 "ceph.vdo": "0"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             },
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "type": "block",
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:             "vg_name": "ceph_vg2"
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:         }
Oct 02 09:41:04 compute-0 infallible_darwin[459198]:     ]
Oct 02 09:41:04 compute-0 infallible_darwin[459198]: }
Oct 02 09:41:04 compute-0 systemd[1]: libpod-74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48.scope: Deactivated successfully.
Oct 02 09:41:04 compute-0 podman[459182]: 2025-10-02 09:41:04.155684176 +0000 UTC m=+0.956674223 container died 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:41:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777-merged.mount: Deactivated successfully.
Oct 02 09:41:04 compute-0 podman[459182]: 2025-10-02 09:41:04.202191948 +0000 UTC m=+1.003181975 container remove 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 09:41:04 compute-0 systemd[1]: libpod-conmon-74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48.scope: Deactivated successfully.
Oct 02 09:41:04 compute-0 sudo[459079]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:04 compute-0 sudo[459222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:41:04 compute-0 sudo[459222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:04 compute-0 sudo[459222]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:04 compute-0 sudo[459247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:41:04 compute-0 sudo[459247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:04 compute-0 sudo[459247]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:04 compute-0 sudo[459272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:41:04 compute-0 sudo[459272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:04 compute-0 sudo[459272]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:04 compute-0 sudo[459297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:41:04 compute-0 sudo[459297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:04 compute-0 podman[459363]: 2025-10-02 09:41:04.908304223 +0000 UTC m=+0.044948925 container create 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 09:41:04 compute-0 systemd[1]: Started libpod-conmon-04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88.scope.
Oct 02 09:41:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:41:04 compute-0 podman[459363]: 2025-10-02 09:41:04.889976821 +0000 UTC m=+0.026621533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:41:04 compute-0 ceph-mon[74477]: pgmap v3704: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:04 compute-0 podman[459363]: 2025-10-02 09:41:04.996471487 +0000 UTC m=+0.133116179 container init 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:41:05 compute-0 podman[459363]: 2025-10-02 09:41:05.00842498 +0000 UTC m=+0.145069692 container start 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 09:41:05 compute-0 podman[459363]: 2025-10-02 09:41:05.012114576 +0000 UTC m=+0.148759288 container attach 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:41:05 compute-0 nervous_sutherland[459379]: 167 167
Oct 02 09:41:05 compute-0 systemd[1]: libpod-04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88.scope: Deactivated successfully.
Oct 02 09:41:05 compute-0 podman[459363]: 2025-10-02 09:41:05.014400367 +0000 UTC m=+0.151045069 container died 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 09:41:05 compute-0 nova_compute[260603]: 2025-10-02 09:41:05.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-35f3aad7bea8b1ec61897cb00f8360311f9eca2a6ed1df99f2feb808abead92f-merged.mount: Deactivated successfully.
Oct 02 09:41:05 compute-0 podman[459363]: 2025-10-02 09:41:05.094698135 +0000 UTC m=+0.231342847 container remove 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:41:05 compute-0 systemd[1]: libpod-conmon-04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88.scope: Deactivated successfully.
Oct 02 09:41:05 compute-0 podman[459404]: 2025-10-02 09:41:05.282668127 +0000 UTC m=+0.038833874 container create 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 09:41:05 compute-0 systemd[1]: Started libpod-conmon-36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d.scope.
Oct 02 09:41:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:41:05 compute-0 podman[459404]: 2025-10-02 09:41:05.267820643 +0000 UTC m=+0.023986420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:41:05 compute-0 podman[459404]: 2025-10-02 09:41:05.368643071 +0000 UTC m=+0.124808818 container init 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 09:41:05 compute-0 podman[459404]: 2025-10-02 09:41:05.376993702 +0000 UTC m=+0.133159449 container start 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:41:05 compute-0 podman[459404]: 2025-10-02 09:41:05.379508991 +0000 UTC m=+0.135674728 container attach 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 09:41:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:06 compute-0 nova_compute[260603]: 2025-10-02 09:41:06.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:06 compute-0 zealous_chaum[459421]: {
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "osd_id": 2,
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "type": "bluestore"
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:     },
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "osd_id": 1,
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "type": "bluestore"
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:     },
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "osd_id": 0,
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:         "type": "bluestore"
Oct 02 09:41:06 compute-0 zealous_chaum[459421]:     }
Oct 02 09:41:06 compute-0 zealous_chaum[459421]: }
Oct 02 09:41:06 compute-0 systemd[1]: libpod-36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d.scope: Deactivated successfully.
Oct 02 09:41:06 compute-0 systemd[1]: libpod-36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d.scope: Consumed 1.051s CPU time.
Oct 02 09:41:06 compute-0 podman[459404]: 2025-10-02 09:41:06.423706456 +0000 UTC m=+1.179872203 container died 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:41:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145-merged.mount: Deactivated successfully.
Oct 02 09:41:06 compute-0 podman[459404]: 2025-10-02 09:41:06.507017478 +0000 UTC m=+1.263183225 container remove 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:41:06 compute-0 systemd[1]: libpod-conmon-36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d.scope: Deactivated successfully.
Oct 02 09:41:06 compute-0 sudo[459297]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:41:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:41:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:41:06 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:41:06 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 7ba347a6-dff5-4711-8ec8-25fb6d9164d9 does not exist
Oct 02 09:41:06 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6d66d829-c9ef-4dc3-966a-d5889f0c0db7 does not exist
Oct 02 09:41:06 compute-0 sudo[459468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:41:06 compute-0 sudo[459468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:06 compute-0 sudo[459468]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:06 compute-0 sudo[459493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:41:06 compute-0 sudo[459493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:41:06 compute-0 sudo[459493]: pam_unix(sudo:session): session closed for user root
Oct 02 09:41:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:41:07 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:41:07 compute-0 ceph-mon[74477]: pgmap v3705: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:08 compute-0 podman[459519]: 2025-10-02 09:41:08.011718138 +0000 UTC m=+0.073753665 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:41:08 compute-0 podman[459518]: 2025-10-02 09:41:08.061617796 +0000 UTC m=+0.123421055 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:41:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:09 compute-0 ceph-mon[74477]: pgmap v3706: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:10 compute-0 nova_compute[260603]: 2025-10-02 09:41:10.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:11 compute-0 nova_compute[260603]: 2025-10-02 09:41:11.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:11 compute-0 ceph-mon[74477]: pgmap v3707: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:12 compute-0 podman[459564]: 2025-10-02 09:41:12.995378127 +0000 UTC m=+0.061954496 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:41:12 compute-0 ceph-mon[74477]: pgmap v3708: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:13 compute-0 podman[459565]: 2025-10-02 09:41:13.014835965 +0000 UTC m=+0.082671854 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 02 09:41:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:15 compute-0 nova_compute[260603]: 2025-10-02 09:41:15.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:15 compute-0 ceph-mon[74477]: pgmap v3709: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:16 compute-0 nova_compute[260603]: 2025-10-02 09:41:16.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:17 compute-0 ceph-mon[74477]: pgmap v3710: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:19 compute-0 ceph-mon[74477]: pgmap v3711: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:20 compute-0 nova_compute[260603]: 2025-10-02 09:41:20.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:21 compute-0 nova_compute[260603]: 2025-10-02 09:41:21.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:21 compute-0 ceph-mon[74477]: pgmap v3712: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:41:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2708946234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:41:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:41:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2708946234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:41:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.975189) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398082975252, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1725, "num_deletes": 255, "total_data_size": 2758381, "memory_usage": 2788640, "flush_reason": "Manual Compaction"}
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Oct 02 09:41:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2708946234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:41:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2708946234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398082990200, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2708960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75521, "largest_seqno": 77245, "table_properties": {"data_size": 2700883, "index_size": 4954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16489, "raw_average_key_size": 20, "raw_value_size": 2684736, "raw_average_value_size": 3298, "num_data_blocks": 220, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397907, "oldest_key_time": 1759397907, "file_creation_time": 1759398082, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 15085 microseconds, and 5983 cpu microseconds.
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.990276) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2708960 bytes OK
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.990308) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.992711) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.992741) EVENT_LOG_v1 {"time_micros": 1759398082992730, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.992822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2750947, prev total WAL file size 2750947, number of live WAL files 2.
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.994499) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2645KB)], [182(9705KB)]
Oct 02 09:41:22 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398082994572, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12647308, "oldest_snapshot_seqno": -1}
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9181 keys, 10904264 bytes, temperature: kUnknown
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398083058522, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10904264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10846201, "index_size": 34014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241867, "raw_average_key_size": 26, "raw_value_size": 10685607, "raw_average_value_size": 1163, "num_data_blocks": 1310, "num_entries": 9181, "num_filter_entries": 9181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759398082, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.058816) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10904264 bytes
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.060195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.5 rd, 170.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 9705, records dropped: 524 output_compression: NoCompression
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.060213) EVENT_LOG_v1 {"time_micros": 1759398083060205, "job": 114, "event": "compaction_finished", "compaction_time_micros": 64022, "compaction_time_cpu_micros": 28103, "output_level": 6, "num_output_files": 1, "total_output_size": 10904264, "num_input_records": 9705, "num_output_records": 9181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398083060887, "job": 114, "event": "table_file_deletion", "file_number": 184}
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398083062869, "job": 114, "event": "table_file_deletion", "file_number": 182}
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.994331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:41:23 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:41:23 compute-0 ceph-mon[74477]: pgmap v3713: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:25 compute-0 nova_compute[260603]: 2025-10-02 09:41:25.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:25 compute-0 ceph-mon[74477]: pgmap v3714: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:26 compute-0 nova_compute[260603]: 2025-10-02 09:41:26.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:27 compute-0 ceph-mon[74477]: pgmap v3715: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:41:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:41:28
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:41:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:29 compute-0 ceph-mon[74477]: pgmap v3716: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:29 compute-0 nova_compute[260603]: 2025-10-02 09:41:29.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:30 compute-0 nova_compute[260603]: 2025-10-02 09:41:30.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:31 compute-0 nova_compute[260603]: 2025-10-02 09:41:31.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:31 compute-0 sshd-session[459603]: Accepted publickey for zuul from 192.168.122.10 port 56220 ssh2: ECDSA SHA256:QEnwbgBR1jglQLPp4vwsTS2MMzDakrR2dLJ/eEaCKUI
Oct 02 09:41:31 compute-0 systemd-logind[787]: New session 57 of user zuul.
Oct 02 09:41:31 compute-0 systemd[1]: Started Session 57 of User zuul.
Oct 02 09:41:31 compute-0 sshd-session[459603]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 09:41:31 compute-0 sudo[459607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 09:41:31 compute-0 sudo[459607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:41:31 compute-0 nova_compute[260603]: 2025-10-02 09:41:31.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:31 compute-0 ceph-mon[74477]: pgmap v3717: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:32 compute-0 nova_compute[260603]: 2025-10-02 09:41:32.579 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.561 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.562 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.597 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.597 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.598 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.598 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:41:33 compute-0 nova_compute[260603]: 2025-10-02 09:41:33.598 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:41:33 compute-0 ceph-mon[74477]: pgmap v3718: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:41:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190454191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.008 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.179 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.181 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3540MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.181 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.181 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.282 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.283 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.324 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:41:34 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23115 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:41:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175116722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.751 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.756 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.785 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.787 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:41:34 compute-0 nova_compute[260603]: 2025-10-02 09:41:34.787 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:41:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:41:34.882 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:41:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:41:34.882 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:41:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:41:34.882 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:41:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/190454191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:41:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1175116722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:41:35 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23119 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:35 compute-0 nova_compute[260603]: 2025-10-02 09:41:35.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 09:41:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387588497' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 09:41:35 compute-0 nova_compute[260603]: 2025-10-02 09:41:35.744 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:35 compute-0 ceph-mon[74477]: from='client.23115 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:35 compute-0 ceph-mon[74477]: pgmap v3719: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3387588497' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 09:41:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:36 compute-0 nova_compute[260603]: 2025-10-02 09:41:36.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:36 compute-0 nova_compute[260603]: 2025-10-02 09:41:36.513 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:37 compute-0 ceph-mon[74477]: from='client.23119 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:37 compute-0 ceph-mon[74477]: pgmap v3720: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:38 compute-0 ovs-vsctl[459953]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 09:41:38 compute-0 podman[459934]: 2025-10-02 09:41:38.998566052 +0000 UTC m=+0.054147732 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 09:41:39 compute-0 podman[459930]: 2025-10-02 09:41:39.039586614 +0000 UTC m=+0.095402632 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:41:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:41:39 compute-0 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 09:41:39 compute-0 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 09:41:39 compute-0 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 09:41:39 compute-0 ceph-mon[74477]: pgmap v3721: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:40 compute-0 nova_compute[260603]: 2025-10-02 09:41:40.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:40 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: cache status {prefix=cache status} (starting...)
Oct 02 09:41:40 compute-0 nova_compute[260603]: 2025-10-02 09:41:40.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:40 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: client ls {prefix=client ls} (starting...)
Oct 02 09:41:40 compute-0 lvm[460325]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 09:41:40 compute-0 lvm[460325]: VG ceph_vg2 finished
Oct 02 09:41:40 compute-0 lvm[460330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 09:41:40 compute-0 lvm[460330]: VG ceph_vg0 finished
Oct 02 09:41:40 compute-0 lvm[460343]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 02 09:41:40 compute-0 lvm[460343]: VG ceph_vg1 finished
Oct 02 09:41:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:40 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23123 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:41 compute-0 kernel: block dm-0: the capability attribute has been deprecated.
Oct 02 09:41:41 compute-0 nova_compute[260603]: 2025-10-02 09:41:41.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:41 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 09:41:41 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23125 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:41 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 09:41:41 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 09:41:41 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 09:41:41 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 09:41:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 09:41:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670286013' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 09:41:41 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 09:41:41 compute-0 ceph-mon[74477]: pgmap v3722: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/670286013' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 09:41:42 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23131 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:42 compute-0 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:41:42.179+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 09:41:42 compute-0 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 09:41:42 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 09:41:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:41:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828878450' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:41:42 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 09:41:42 compute-0 nova_compute[260603]: 2025-10-02 09:41:42.447 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 09:41:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3240732150' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 09:41:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 09:41:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125070202' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 09:41:42 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: ops {prefix=ops} (starting...)
Oct 02 09:41:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:43 compute-0 ceph-mon[74477]: from='client.23123 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mon[74477]: from='client.23125 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2828878450' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3240732150' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4125070202' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 09:41:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135961721' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 09:41:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369396267' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: session ls {prefix=session ls} (starting...)
Oct 02 09:41:43 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: status {prefix=status} (starting...)
Oct 02 09:41:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 09:41:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923827585' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23145 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23147 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 09:41:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452661649' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 09:41:43 compute-0 podman[460777]: 2025-10-02 09:41:43.998114401 +0000 UTC m=+0.062641738 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:41:43 compute-0 podman[460780]: 2025-10-02 09:41:43.999357129 +0000 UTC m=+0.063636678 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:41:44 compute-0 ceph-mon[74477]: from='client.23131 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: pgmap v3723: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1135961721' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1369396267' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/923827585' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: from='client.23145 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: from='client.23147 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3452661649' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 09:41:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3996156377' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 09:41:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193128793' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 09:41:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752878372' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 09:41:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989067092' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 09:41:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3996156377' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 09:41:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/193128793' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 09:41:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/752878372' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 09:41:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1989067092' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 09:41:45 compute-0 ceph-mon[74477]: pgmap v3724: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:45 compute-0 nova_compute[260603]: 2025-10-02 09:41:45.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:45 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23159 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:45 compute-0 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:41:45.176+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 09:41:45 compute-0 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 09:41:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 09:41:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1768599386' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 09:41:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 09:41:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724206446' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 09:41:45 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23165 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 09:41:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574475291' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23169 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mon[74477]: from='client.23159 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1768599386' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2724206446' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mon[74477]: from='client.23165 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/574475291' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:46 compute-0 nova_compute[260603]: 2025-10-02 09:41:46.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:46 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23172 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 09:41:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925258120' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 09:41:46 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23175 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:54.524274+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.403163910s of 45.503894806s, submitted: 67
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266199040 unmapped: 48218112 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:55.524410+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2883895 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266199040 unmapped: 48218112 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:56.524653+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649f0000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649f0000 session 0x562c646f94a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c624745a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64991400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64991400 session 0x562c6467ab40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266199040 unmapped: 48218112 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65472400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65472400 session 0x562c61e4fc20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64992400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:57.524826+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,14])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266354688 unmapped: 48062464 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c627490e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661dc400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dc400 session 0x562c6274ba40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c6551be00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64991400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:58.524957+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64991400 session 0x562c62732780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64992400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c6551b0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266354688 unmapped: 48062464 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:59.525068+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d9000/0x0/0x4ffc00000, data 0x10b8faf/0x1245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266362880 unmapped: 48054272 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:00.525213+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910535 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266387456 unmapped: 48029696 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:01.525397+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:02.525584+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:03.525778+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:04.525965+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:05.526214+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910535 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d9000/0x0/0x4ffc00000, data 0x10b8faf/0x1245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:06.526465+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:07.526686+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266412032 unmapped: 48005120 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:08.526821+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64999c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999c00 session 0x562c6471a780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266412032 unmapped: 48005120 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62b25400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62b25400 session 0x562c6367cd20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:09.526973+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d9000/0x0/0x4ffc00000, data 0x10b8faf/0x1245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c61e4e1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266412032 unmapped: 48005120 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64991400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.368391991s of 15.228948593s, submitted: 103
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64991400 session 0x562c6274a960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:10.527111+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915699 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266420224 unmapped: 47996928 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64992400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64999c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:11.527247+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d7000/0x0/0x4ffc00000, data 0x10b8fe2/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266428416 unmapped: 47988736 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:12.527386+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:13.527632+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:14.527824+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:15.527972+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926231 data_alloc: 218103808 data_used: 1384448
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:16.528197+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d7000/0x0/0x4ffc00000, data 0x10b8fe2/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:17.528441+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d7000/0x0/0x4ffc00000, data 0x10b8fe2/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:18.528847+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:19.529015+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:20.529146+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926231 data_alloc: 218103808 data_used: 1384448
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:21.529279+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:22.529429+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d7000/0x0/0x4ffc00000, data 0x10b8fe2/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.722763062s of 12.753391266s, submitted: 11
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268009472 unmapped: 46407680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:23.529585+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 264888320 unmapped: 49528832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:24.529811+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 264806400 unmapped: 49610752 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:25.530019+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x1972fe2/0x1b01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,2])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006123 data_alloc: 218103808 data_used: 2637824
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:26.530262+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:27.530504+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:28.530675+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:29.530847+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef0a000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:30.531003+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3007883 data_alloc: 218103808 data_used: 2797568
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:31.531167+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:32.531313+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef0a000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:33.531455+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:34.531590+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:35.531742+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3007883 data_alloc: 218103808 data_used: 2797568
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:36.531920+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:37.532081+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef0a000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef0a000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54e800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.115167618s of 15.105305672s, submitted: 79
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266182656 unmapped: 48234496 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:38.532219+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e800 session 0x562c64eff680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c647e6c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c647e6c00 session 0x562c6279b680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f40400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f40400 session 0x562c64ef6000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f40400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f40400 session 0x562c6363e3c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c645694a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266182656 unmapped: 48234496 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:39.532384+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:40.532518+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee956000/0x0/0x4ffc00000, data 0x1f39fe2/0x20c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053655 data_alloc: 218103808 data_used: 2797568
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:41.532649+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:42.532812+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:43.532966+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee956000/0x0/0x4ffc00000, data 0x1f39fe2/0x20c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:44.533062+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:45.533182+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053655 data_alloc: 218103808 data_used: 2797568
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:46.533353+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c64569860
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266084352 unmapped: 48332800 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:47.533489+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67f41000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd5800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee955000/0x0/0x4ffc00000, data 0x1f3a005/0x20c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266084352 unmapped: 48332800 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:48.533598+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee955000/0x0/0x4ffc00000, data 0x1f3a005/0x20c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267239424 unmapped: 47177728 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:49.533697+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:50.533808+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee955000/0x0/0x4ffc00000, data 0x1f3a005/0x20c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096060 data_alloc: 218103808 data_used: 8437760
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:51.533917+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:52.534020+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:53.534164+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:54.534363+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee955000/0x0/0x4ffc00000, data 0x1f3a005/0x20c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:55.560495+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096060 data_alloc: 218103808 data_used: 8437760
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.969276428s of 18.160720825s, submitted: 17
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:56.560813+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:57.560948+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:58.561168+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271720448 unmapped: 42696704 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:59.561310+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273137664 unmapped: 41279488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:00.561453+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee1a7000/0x0/0x4ffc00000, data 0x26e0005/0x286f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3165242 data_alloc: 234881024 data_used: 9703424
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:01.561639+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:02.561819+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:03.561945+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:04.562064+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:05.562175+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3165730 data_alloc: 234881024 data_used: 9703424
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee110000/0x0/0x4ffc00000, data 0x277f005/0x290e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:06.562357+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:07.562552+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee0ef000/0x0/0x4ffc00000, data 0x27a0005/0x292f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:08.562682+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:09.562791+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:10.562901+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166530 data_alloc: 234881024 data_used: 9723904
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272695296 unmapped: 41721856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:11.563008+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.562002182s of 15.969758034s, submitted: 113
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67f41000 session 0x562c6475f2c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd5800 session 0x562c62748f00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee0dc000/0x0/0x4ffc00000, data 0x27b3005/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:12.563848+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272695296 unmapped: 41721856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6363e5a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee0dc000/0x0/0x4ffc00000, data 0x27b3005/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:13.567659+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:14.567860+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:15.568006+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef12000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010920 data_alloc: 218103808 data_used: 2801664
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:16.568193+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:17.568356+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c61e56000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999c00 session 0x562c627485a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:18.568481+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271138816 unmapped: 43278336 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64380400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:19.568616+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268460032 unmapped: 45957120 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64380400 session 0x562c6475f0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef37c000/0x0/0x4ffc00000, data 0xe53fe2/0xfe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:20.568825+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:21.568999+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:22.569213+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:23.569368+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:24.569507+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:25.569634+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:26.569848+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:27.569999+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:28.570175+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:29.570331+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:30.570456+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:31.570582+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:32.570716+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:33.570853+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:34.571021+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:35.571198+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:36.571377+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:37.571538+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:38.571715+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:39.571879+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:40.572019+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:41.572207+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:42.572347+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:43.572516+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:44.572656+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:45.572819+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:46.573011+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:47.573203+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:48.573324+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:49.573518+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:50.573698+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:51.573865+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:52.574042+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:53.574231+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:54.574399+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:55.574550+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:56.574801+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:57.574967+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:58.575139+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:59.575392+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:00.575575+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:01.575807+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:02.576004+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:03.576142+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:04.576383+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:05.576560+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:06.576824+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:07.576995+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 44867584 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547a000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547a000 session 0x562c62a0a960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c643a74a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64380400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64380400 session 0x562c6b3063c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64992400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c64ef6960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64999c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 55.805976868s of 56.174449921s, submitted: 53
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:08.577179+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268500992 unmapped: 45916160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:09.577328+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999c00 session 0x562c645692c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547a000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547a000 session 0x562c62733a40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c645674a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64380400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef723000/0x0/0x4ffc00000, data 0x116dfbf/0x12fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64380400 session 0x562c646d9680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64992400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c6274a780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:10.577539+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930959 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:11.577688+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:12.577860+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:13.578042+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef723000/0x0/0x4ffc00000, data 0x116dfbf/0x12fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:14.578267+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:15.578462+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930959 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:16.578683+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:17.578813+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:18.578965+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef723000/0x0/0x4ffc00000, data 0x116dfbf/0x12fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:19.579119+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef723000/0x0/0x4ffc00000, data 0x116dfbf/0x12fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:20.579264+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268599296 unmapped: 45817856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930959 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:21.579394+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268599296 unmapped: 45817856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:22.579578+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268599296 unmapped: 45817856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.797731400s of 14.340232849s, submitted: 7
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c62749860
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547d000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:23.579772+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54e800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268599296 unmapped: 45817856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:24.579904+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268607488 unmapped: 45809664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6ff000/0x0/0x4ffc00000, data 0x1191fbf/0x131f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:25.580066+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268607488 unmapped: 45809664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2933491 data_alloc: 218103808 data_used: 53248
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:26.580211+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268607488 unmapped: 45809664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:27.580348+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6ff000/0x0/0x4ffc00000, data 0x1191fbf/0x131f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:28.580470+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:29.580596+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:30.580719+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949011 data_alloc: 218103808 data_used: 2220032
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:31.580957+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6ff000/0x0/0x4ffc00000, data 0x1191fbf/0x131f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:32.581107+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:33.581234+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:34.581356+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:35.581495+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6ff000/0x0/0x4ffc00000, data 0x1191fbf/0x131f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949011 data_alloc: 218103808 data_used: 2220032
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:36.581664+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.914422989s of 13.919497490s, submitted: 1
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268918784 unmapped: 45498368 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:37.581792+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:38.581943+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:39.582480+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:40.583164+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990425 data_alloc: 218103808 data_used: 2490368
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:41.583312+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:42.583508+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:43.584594+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:44.584740+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:45.585028+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990425 data_alloc: 218103808 data_used: 2490368
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:46.585351+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:47.585639+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:48.585851+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:49.586068+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:50.586250+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990425 data_alloc: 218103808 data_used: 2490368
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:51.586459+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:52.586715+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:53.586919+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:54.587033+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62862c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.337739944s of 17.901878357s, submitted: 21
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c6274b860
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62862c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c61e570e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c643a63c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64380400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64380400 session 0x562c6551a780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:55.587192+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c6456bc20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019370 data_alloc: 218103808 data_used: 2490368
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:56.587418+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:57.587662+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:58.587850+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:59.588075+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:00.588215+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019370 data_alloc: 218103808 data_used: 2490368
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:01.588398+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:02.588643+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:03.588817+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:04.588956+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65477000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c64d20b40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:05.589110+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65477000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c646bb0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62862c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c64eff2c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:06.589317+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019370 data_alloc: 218103808 data_used: 2490368
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.536311150s of 11.944916725s, submitted: 13
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:07.589485+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6367c000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:08.589705+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64686000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64903400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:09.589819+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:10.589992+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:11.590158+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051281 data_alloc: 218103808 data_used: 6467584
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:12.590301+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:13.591175+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:14.591438+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:15.591625+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:16.592949+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051281 data_alloc: 218103808 data_used: 6467584
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:17.593128+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:18.593294+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:19.593486+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269344768 unmapped: 45072384 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.889685631s of 13.700659752s, submitted: 6
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:20.594362+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269344768 unmapped: 45072384 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:21.595028+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086317 data_alloc: 218103808 data_used: 6467584
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272089088 unmapped: 42328064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:22.595189+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272498688 unmapped: 41918464 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:23.595448+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272097280 unmapped: 42319872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:24.595579+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272097280 unmapped: 42319872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:25.595777+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91f000/0x0/0x4ffc00000, data 0x1f6fff2/0x20ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272097280 unmapped: 42319872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:26.595991+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272097280 unmapped: 42319872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:27.596205+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:28.596389+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:29.596561+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:30.596733+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:31.596892+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:32.597015+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:33.597152+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:34.597363+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:35.597618+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:36.597884+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:37.598199+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:38.598396+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:39.598663+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:40.598905+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:41.599221+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:42.599448+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:43.599679+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272121856 unmapped: 42295296 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:44.599816+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272121856 unmapped: 42295296 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:45.599986+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272121856 unmapped: 42295296 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:46.600229+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272121856 unmapped: 42295296 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:47.600380+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272130048 unmapped: 42287104 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:48.600527+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272130048 unmapped: 42287104 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:49.600815+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272130048 unmapped: 42287104 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:50.600965+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272130048 unmapped: 42287104 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:51.601046+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.167901993s of 31.580886841s, submitted: 39
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64686000 session 0x562c647034a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64903400 session 0x562c6456a960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273178624 unmapped: 41238528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:52.601171+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62862000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862000 session 0x562c628f12c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:53.601332+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:54.601448+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:55.601600+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:56.601861+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998448 data_alloc: 218103808 data_used: 2490368
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547d000 session 0x562c64d20d20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e800 session 0x562c62749e00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:57.602001+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547e400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547e400 session 0x562c62a0b860
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:58.602248+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:59.602413+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:00.602574+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:01.602687+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:02.602847+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:03.603114+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:04.603274+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:05.603433+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:06.603639+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271900672 unmapped: 42516480 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:07.603757+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271900672 unmapped: 42516480 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:08.603913+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271900672 unmapped: 42516480 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:09.604074+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271900672 unmapped: 42516480 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:10.604227+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:11.604355+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:12.604478+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:13.604640+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:14.604888+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:15.605035+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:16.605266+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:17.605503+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:18.605937+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:19.606376+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:20.606704+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:21.606987+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:22.607194+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:23.607365+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:24.607532+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:25.607668+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:26.607861+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:27.608060+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:28.608261+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:29.608472+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:30.608658+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:31.608836+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271941632 unmapped: 42475520 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:32.608992+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65472400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.886993408s of 41.488769531s, submitted: 38
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271941632 unmapped: 42475520 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:33.609174+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65472400 session 0x562c62474d20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62862000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862000 session 0x562c6456a1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67f40800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67f40800 session 0x562c64ef70e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65472400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65472400 session 0x562c62732f00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547d000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547d000 session 0x562c628ee780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:34.609453+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:35.609620+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:36.609822+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7cb000/0x0/0x4ffc00000, data 0x10c6faf/0x1253000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951343 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:37.609940+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:38.610085+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7cb000/0x0/0x4ffc00000, data 0x10c6faf/0x1253000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:39.610249+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7cb000/0x0/0x4ffc00000, data 0x10c6faf/0x1253000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273088512 unmapped: 41328640 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:40.610376+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273088512 unmapped: 41328640 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:41.612670+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951343 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273088512 unmapped: 41328640 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:42.612830+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f70000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.799205780s of 10.012652397s, submitted: 24
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273096704 unmapped: 41320448 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:43.612986+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7cb000/0x0/0x4ffc00000, data 0x10c6faf/0x1253000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f70000 session 0x562c646e2b40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62862000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65472400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273104896 unmapped: 41312256 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:44.613133+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273104896 unmapped: 41312256 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:45.613296+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273104896 unmapped: 41312256 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:46.613462+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952056 data_alloc: 218103808 data_used: 49152
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:47.613620+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7ca000/0x0/0x4ffc00000, data 0x10c6fd2/0x1254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:48.613778+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:49.613957+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:50.614138+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:51.614283+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2960536 data_alloc: 218103808 data_used: 1224704
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:52.614456+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:53.614591+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7ca000/0x0/0x4ffc00000, data 0x10c6fd2/0x1254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:54.614704+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273121280 unmapped: 41295872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:55.614864+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273121280 unmapped: 41295872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:56.615096+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.232060432s of 13.526559830s, submitted: 5
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962708 data_alloc: 218103808 data_used: 1277952
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274333696 unmapped: 40083456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:57.615338+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273162240 unmapped: 41254912 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:58.615464+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272490496 unmapped: 41926656 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:59.615574+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef039000/0x0/0x4ffc00000, data 0x1856fd2/0x19e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:00.615706+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:01.615969+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029688 data_alloc: 218103808 data_used: 1294336
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:02.616161+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:03.616356+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eefb3000/0x0/0x4ffc00000, data 0x18dcfd2/0x1a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:04.616481+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:05.616641+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:06.616790+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef93000/0x0/0x4ffc00000, data 0x18fdfd2/0x1a8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3027644 data_alloc: 218103808 data_used: 1294336
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:07.616949+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:08.617105+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:09.617196+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:10.617301+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:11.617449+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef93000/0x0/0x4ffc00000, data 0x18fdfd2/0x1a8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3027644 data_alloc: 218103808 data_used: 1294336
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.022167206s of 15.504188538s, submitted: 83
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef93000/0x0/0x4ffc00000, data 0x18fdfd2/0x1a8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:12.617653+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:13.617822+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:14.617987+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:15.618176+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:16.618364+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c648ff800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c648ff800 session 0x562c628e30e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64e73800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64e73800 session 0x562c64566b40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65473400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65473400 session 0x562c6551a3c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f70800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f70800 session 0x562c627330e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69211000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69211000 session 0x562c61e4e000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103486 data_alloc: 218103808 data_used: 1294336
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:17.618523+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272867328 unmapped: 43655168 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5ae000/0x0/0x4ffc00000, data 0x22e2fd2/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:18.618686+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272867328 unmapped: 43655168 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:19.618905+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272875520 unmapped: 43646976 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5ab000/0x0/0x4ffc00000, data 0x22e5fd2/0x2473000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:20.619079+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272875520 unmapped: 43646976 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:21.619346+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272883712 unmapped: 43638784 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103914 data_alloc: 218103808 data_used: 1294336
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:22.619520+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272883712 unmapped: 43638784 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:23.619854+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5ab000/0x0/0x4ffc00000, data 0x22e5fd2/0x2473000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272883712 unmapped: 43638784 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64990800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64990800 session 0x562c62a0a1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547f000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547f000 session 0x562c646d83c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:24.620060+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272883712 unmapped: 43638784 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649f1800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649f1800 session 0x562c628eeb40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649c4000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.639535904s of 12.789156914s, submitted: 15
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:25.620363+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272891904 unmapped: 43630592 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c4000 session 0x562c62a46000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd5800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:26.620535+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c668c4c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272891904 unmapped: 43630592 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3105752 data_alloc: 218103808 data_used: 1294336
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:27.620697+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272891904 unmapped: 43630592 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:28.620854+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272891904 unmapped: 43630592 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5aa000/0x0/0x4ffc00000, data 0x22e5fe2/0x2474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:29.620954+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272900096 unmapped: 43622400 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:30.621057+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5aa000/0x0/0x4ffc00000, data 0x22e5fe2/0x2474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:31.621241+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3179164 data_alloc: 234881024 data_used: 11558912
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:32.621471+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:33.621672+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:34.621878+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:35.622077+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:36.622290+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3179164 data_alloc: 234881024 data_used: 11558912
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:37.622435+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:38.622608+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.232625008s of 13.830818176s, submitted: 4
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:39.622736+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274964480 unmapped: 41558016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:40.622932+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edef8000/0x0/0x4ffc00000, data 0x2997fe2/0x2b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275292160 unmapped: 41230336 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:41.623167+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275300352 unmapped: 41222144 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236902 data_alloc: 234881024 data_used: 13082624
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:42.623348+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:43.623527+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:44.623680+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edeeb000/0x0/0x4ffc00000, data 0x29a4fe2/0x2b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:45.623841+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:46.623950+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3242978 data_alloc: 234881024 data_used: 13316096
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:47.624077+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edeeb000/0x0/0x4ffc00000, data 0x29a4fe2/0x2b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:48.624214+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.206348419s of 10.187524796s, submitted: 61
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:49.624428+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:50.624610+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:51.624806+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd5800 session 0x562c6467b680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c668c4c00 session 0x562c628ee3c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edeeb000/0x0/0x4ffc00000, data 0x29a4fe2/0x2b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd5800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036586 data_alloc: 218103808 data_used: 1302528
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd5800 session 0x562c628e3e00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:52.624967+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270229504 unmapped: 46292992 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:53.625161+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270229504 unmapped: 46292992 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef89000/0x0/0x4ffc00000, data 0x1907fd2/0x1a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:54.625303+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270229504 unmapped: 46292992 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef89000/0x0/0x4ffc00000, data 0x1907fd2/0x1a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862000 session 0x562c627323c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65472400 session 0x562c628dd4a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:55.625422+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270229504 unmapped: 46292992 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f40400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:56.625612+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2940567 data_alloc: 218103808 data_used: 49152
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:57.625901+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:58.626064+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.968927860s of 10.047794342s, submitted: 39
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49fd2/0xfd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:59.626558+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:00.626682+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:01.626839+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939752 data_alloc: 218103808 data_used: 49152
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:02.626980+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:03.627126+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49fd2/0xfd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:04.627278+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f40400 session 0x562c64ef6780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49fd2/0xfd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:05.627415+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:06.627587+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:07.627791+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:08.627942+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:09.628103+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:10.628223+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:11.628369+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:12.628491+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:13.628682+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:14.628879+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:15.629023+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:16.629189+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:17.629325+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:18.629495+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:19.629671+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:20.629872+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:21.630027+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:22.630232+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:23.630429+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:24.630598+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:25.630788+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:26.630982+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:27.631149+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:28.631330+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:29.631470+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:30.631600+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:31.631791+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:32.631941+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:33.632102+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:34.632350+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:35.632492+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:36.632674+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:37.632833+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:38.632991+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:39.633113+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:40.633288+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:41.633500+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:42.633869+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65477000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c628ee780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69856000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69856000 session 0x562c62732f00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64684800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c64ef70e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:43.634035+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547ec00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547ec00 session 0x562c6456a1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c63620800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:44.634190+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.979553223s of 45.496376038s, submitted: 7
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:45.639567+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa46000/0x0/0x4ffc00000, data 0xe49fd8/0xfd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270262272 unmapped: 46260224 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:46.639900+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273637376 unmapped: 42885120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977829 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:47.640135+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273637376 unmapped: 42885120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:48.640276+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270491648 unmapped: 46030848 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:49.640439+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270491648 unmapped: 46030848 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c63620800 session 0x562c62474d20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64684800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c647034a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65477000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c6367c000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547ec00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:50.640620+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270491648 unmapped: 46030848 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6f9000/0x0/0x4ffc00000, data 0x1197fd8/0x1325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:51.640733+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270426112 unmapped: 46096384 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547ec00 session 0x562c64eff2c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69856000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69856000 session 0x562c6456bc20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2968269 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:52.640866+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c63621000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c63621000 session 0x562c6551a780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270426112 unmapped: 46096384 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:53.641034+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64684800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c643a63c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270426112 unmapped: 46096384 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65477000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c646bb860
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547ec00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:54.641236+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.162470818s of 10.054964066s, submitted: 29
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270426112 unmapped: 46096384 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:55.641427+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270434304 unmapped: 46088192 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6f7000/0x0/0x4ffc00000, data 0x1198044/0x1327000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:56.641647+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974484 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:57.641843+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547ec00 session 0x562c6b3074a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:58.642025+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd4c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661dcc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:59.642152+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:00.642284+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:01.642397+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270868480 unmapped: 45654016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998324 data_alloc: 218103808 data_used: 3317760
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:02.642556+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270868480 unmapped: 45654016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:03.642662+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270868480 unmapped: 45654016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:04.642798+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270868480 unmapped: 45654016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:05.642998+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:06.643136+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998324 data_alloc: 218103808 data_used: 3317760
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:07.643261+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:08.643392+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:09.643517+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:10.643641+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.463221550s of 15.829542160s, submitted: 6
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271310848 unmapped: 45211648 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:11.643828+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef629000/0x0/0x4ffc00000, data 0x1266044/0x13f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,1,13,0,0,6])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272531456 unmapped: 43991040 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014386 data_alloc: 218103808 data_used: 3338240
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:12.643987+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272875520 unmapped: 43646976 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:13.644162+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274784256 unmapped: 41738240 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:14.644311+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274972672 unmapped: 41549824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:15.644452+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275210240 unmapped: 41312256 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:16.644635+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275218432 unmapped: 41304064 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3054432 data_alloc: 218103808 data_used: 3563520
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:17.644720+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef7c000/0x0/0x4ffc00000, data 0x190d044/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275218432 unmapped: 41304064 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:18.644950+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275152896 unmapped: 41369600 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:19.645123+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef65000/0x0/0x4ffc00000, data 0x192a044/0x1ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:20.645322+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:21.645482+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071318 data_alloc: 218103808 data_used: 3534848
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:22.645638+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:23.645879+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:24.646043+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeed3000/0x0/0x4ffc00000, data 0x19b4044/0x1b43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:25.646210+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.730766296s of 15.541202545s, submitted: 109
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:26.646432+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275619840 unmapped: 40902656 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066714 data_alloc: 218103808 data_used: 3538944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:27.646597+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:28.646723+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:29.646859+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:30.647091+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:31.647237+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066714 data_alloc: 218103808 data_used: 3538944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:32.647399+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:33.647563+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:34.647702+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275636224 unmapped: 40886272 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:35.647869+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275636224 unmapped: 40886272 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:36.648076+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275636224 unmapped: 40886272 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066714 data_alloc: 218103808 data_used: 3538944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:37.648223+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:38.648394+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:39.648604+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:40.648841+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:41.649056+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066714 data_alloc: 218103808 data_used: 3538944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:42.649233+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:43.649430+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:44.649585+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.826000214s of 18.922349930s, submitted: 5
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 40869888 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:45.649729+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb1000/0x0/0x4ffc00000, data 0x19de044/0x1b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 40869888 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f62400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:46.649941+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f62400 session 0x562c6551b0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f3f800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c61e4e1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f3f800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c6274a960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64684800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c645694a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64900400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64900400 session 0x562c61e56000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedcb000/0x0/0x4ffc00000, data 0x1ac4044/0x1c53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080850 data_alloc: 218103808 data_used: 3538944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:47.650073+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:48.650266+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:49.650426+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:50.650585+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:51.650792+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedcb000/0x0/0x4ffc00000, data 0x1ac4044/0x1c53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:52.650935+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080322 data_alloc: 218103808 data_used: 3538944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276889600 unmapped: 39632896 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c62a0a960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:53.651090+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62b25400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62b25400 session 0x562c643a74a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276889600 unmapped: 39632896 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:54.651249+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64684800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c64ef6960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64900400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.592034340s of 10.031236649s, submitted: 11
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276889600 unmapped: 39632896 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:55.651536+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64900400 session 0x562c62733a40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:56.651690+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f3f800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:57.651886+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081975 data_alloc: 218103808 data_used: 3538944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:58.652850+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:59.652950+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:00.653113+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:01.653231+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:02.653397+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087095 data_alloc: 218103808 data_used: 4190208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:03.653522+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:04.653665+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:05.653846+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:06.654062+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 39608320 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:07.654253+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087095 data_alloc: 218103808 data_used: 4190208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.632834435s of 12.789543152s, submitted: 4
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:08.654421+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:09.654583+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:10.654706+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:11.654893+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed507000/0x0/0x4ffc00000, data 0x21e7067/0x2377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277700608 unmapped: 38821888 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:12.655061+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3138289 data_alloc: 218103808 data_used: 4243456
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277708800 unmapped: 38813696 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:13.655215+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4cf000/0x0/0x4ffc00000, data 0x221f067/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:14.655389+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:15.655514+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:16.655646+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:17.655833+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3145465 data_alloc: 218103808 data_used: 4243456
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:18.655948+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4c7000/0x0/0x4ffc00000, data 0x2227067/0x23b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:19.656056+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.898307800s of 12.210510254s, submitted: 59
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:20.656202+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c628ede00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c61e56f00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 38780928 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f70800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:21.656346+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4c7000/0x0/0x4ffc00000, data 0x2227067/0x23b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 38780928 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:22.656478+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146537 data_alloc: 218103808 data_used: 4231168
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:23.656600+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4bf000/0x0/0x4ffc00000, data 0x222f067/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:24.656717+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:25.656878+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:26.657071+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f70800 session 0x562c646baf00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6067/0x1b76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:27.657192+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074399 data_alloc: 218103808 data_used: 3526656
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:28.657331+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:29.657528+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:30.657789+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:31.658031+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:32.658279+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074399 data_alloc: 218103808 data_used: 3526656
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:33.658458+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508622169s of 13.601557732s, submitted: 32
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c6475e780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dcc00 session 0x562c628f0000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:34.658598+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649c5000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:35.658700+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:36.658880+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:37.659101+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074223 data_alloc: 218103808 data_used: 3526656
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd09000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:38.659239+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:39.659405+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c6367cd20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:40.659525+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:41.659662+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:42.659831+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:43.660559+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:44.660818+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:45.660972+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:46.661184+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:47.661363+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:48.661514+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:49.661742+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:50.662006+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:51.662163+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:52.662310+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:53.662475+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:54.662632+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:55.662834+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:56.663073+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:57.663263+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:58.663505+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:59.663745+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:00.664001+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 140K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.70 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1788 writes, 7086 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 8.42 MB, 0.01 MB/s
                                           Interval WAL: 1788 writes, 697 syncs, 2.57 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:01.664180+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets getting new tickets!
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:02.665128+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _finish_auth 0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:02.666327+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:03.665285+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:04.665444+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:05.665673+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:06.665895+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: mgrc ms_handle_reset ms_handle_reset con 0x562c64994400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:41:46 compute-0 ceph-osd[90385]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: get_auth_request con 0x562c62b25400 auth_method 0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:07.666010+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:08.666266+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:09.667177+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:10.667796+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:11.668055+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:12.668397+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:13.668623+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:14.668835+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:15.668967+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:16.669181+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:17.669377+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:18.669567+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:19.669721+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:20.669893+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:21.670055+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:22.670232+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f3f800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.216686249s of 49.199211121s, submitted: 27
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958270 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c6b307a40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6963f000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6963f000 session 0x562c6363ef00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649c5000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c646d6780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661dcc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dcc00 session 0x562c628dc1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd4c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c646d7860
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:23.670383+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:24.670722+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:25.670981+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:26.671219+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:27.671429+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003011 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:28.671663+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:29.671825+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:30.671986+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6315f0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 46342144 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64901800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f41400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:31.672143+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274391040 unmapped: 46333952 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:32.672302+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274399232 unmapped: 46325760 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026672 data_alloc: 218103808 data_used: 3313664
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:33.672496+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:34.672673+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:35.672872+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:36.673106+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:37.673300+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045552 data_alloc: 218103808 data_used: 5947392
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:38.673509+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:39.673672+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:40.673891+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:41.674061+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:42.674233+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.794466019s of 20.170822144s, submitted: 32
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045624 data_alloc: 218103808 data_used: 5947392
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:43.674380+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276774912 unmapped: 43950080 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,20])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:44.674548+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275988480 unmapped: 44736512 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:45.674708+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276111360 unmapped: 44613632 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:46.674889+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:47.675026+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092030 data_alloc: 218103808 data_used: 6160384
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:48.675167+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd83000/0x0/0x4ffc00000, data 0x196e011/0x1afb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:49.675248+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:50.675414+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:51.675588+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:52.675834+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd62000/0x0/0x4ffc00000, data 0x198f011/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090850 data_alloc: 218103808 data_used: 6164480
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:53.675990+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:54.676157+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:55.676333+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.045855522s of 12.648342133s, submitted: 63
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:56.676489+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd62000/0x0/0x4ffc00000, data 0x198f011/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276332544 unmapped: 44392448 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54e400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:57.676621+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276340736 unmapped: 44384256 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092395 data_alloc: 218103808 data_used: 6164480
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:58.676744+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 40468480 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:59.676886+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e400 session 0x562c64ef6960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54e400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e400 session 0x562c643a61e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6551b2c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649c5000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:00.677040+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:01.677213+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:02.677380+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151015 data_alloc: 218103808 data_used: 6164480
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c6b3061e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f71400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f71400 session 0x562c6274ab40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:03.677576+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:04.677741+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:05.677917+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.307779789s of 10.068083763s, submitted: 53
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 09:41:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180055983' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:06.678073+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62874c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:07.678206+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152489 data_alloc: 218103808 data_used: 6164480
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:08.678402+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276692992 unmapped: 48234496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874c00 session 0x562c646d9680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:09.678533+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62874c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:10.678707+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:11.678831+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:12.678976+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279396352 unmapped: 45531136 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3210696 data_alloc: 234881024 data_used: 14290944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:13.679136+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279412736 unmapped: 45514752 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:14.679292+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:15.679446+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:16.679618+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.747380257s of 11.106559753s, submitted: 63
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:17.679795+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3211048 data_alloc: 234881024 data_used: 14290944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:18.679962+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:19.680162+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:20.680303+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279437312 unmapped: 45490176 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:21.680537+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279437312 unmapped: 45490176 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:22.680741+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282304512 unmapped: 42622976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228632 data_alloc: 234881024 data_used: 14290944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:23.680915+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282304512 unmapped: 42622976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:24.681061+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 284680192 unmapped: 40247296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:25.681189+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebf33000/0x0/0x4ffc00000, data 0x261e011/0x27ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:26.681394+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.735648155s of 10.060455322s, submitted: 31
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:27.681551+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247268 data_alloc: 234881024 data_used: 14290944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:28.681710+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebebb000/0x0/0x4ffc00000, data 0x2696011/0x2823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:29.681805+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:30.681978+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeb6000/0x0/0x4ffc00000, data 0x269b011/0x2828000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:31.682132+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:32.682276+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249180 data_alloc: 234881024 data_used: 14315520
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:33.682460+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:34.682615+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:35.682728+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:36.682953+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeb0000/0x0/0x4ffc00000, data 0x26a1011/0x282e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:37.683139+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249180 data_alloc: 234881024 data_used: 14315520
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.374366760s of 11.633337975s, submitted: 16
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:38.683307+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:39.683478+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:40.683657+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:41.684058+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:42.684172+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:43.684300+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:44.684415+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:45.684580+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:46.684785+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:47.685441+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:48.685561+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:49.685716+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:50.685885+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:51.686074+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:52.686289+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:53.686531+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.346201897s of 15.059863091s, submitted: 4
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874c00 session 0x562c64efe960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c628dda40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f3f800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:54.686858+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:55.687064+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebea5000/0x0/0x4ffc00000, data 0x26ac011/0x2839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:56.687283+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:57.687445+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253376 data_alloc: 234881024 data_used: 14413824
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:58.687643+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:59.687843+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282861568 unmapped: 42065920 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x199c011/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:00.688007+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282861568 unmapped: 42065920 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:01.688265+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c64ef7860
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:02.688463+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099936 data_alloc: 218103808 data_used: 6164480
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:03.688622+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x199c011/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:04.688862+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:05.689010+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.364779472s of 12.359013557s, submitted: 40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c6b307a40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41400 session 0x562c6551bc20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:06.689162+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:07.689296+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276013056 unmapped: 48914432 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2973676 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed307000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:08.689464+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:09.689629+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c64effc20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:10.689805+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:11.689945+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:12.690079+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:13.690237+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:14.690401+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:15.690715+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:16.691040+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:17.691254+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:18.691652+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:19.691863+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:20.692186+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:21.692384+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:22.692526+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:23.692968+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:24.693216+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:25.693455+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:26.693672+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:27.693863+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:28.694000+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:29.694310+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:30.694457+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:31.694662+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:32.694946+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:33.695136+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:34.695368+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:35.695534+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:36.695708+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:37.695897+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:38.696041+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:39.696194+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:40.696362+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:41.696509+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:42.696727+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:43.696926+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:44.697095+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:45.697229+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:46.698328+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:47.698486+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:48.698700+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:49.698852+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64998000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c62a46f00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661dd800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dd800 session 0x562c6274a780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6471a3c0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:50.699016+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64999000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999000 session 0x562c646d94a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.197753906s of 44.837074280s, submitted: 31
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6475f4a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6b3065a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64998000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c6467bc20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661dd800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dd800 session 0x562c6363f0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65477c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477c00 session 0x562c6315f860
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:51.699411+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:52.699558+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:53.699712+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:54.699947+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:55.700157+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:56.700406+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:57.700529+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:58.700683+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:59.700904+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:00.701129+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:01.701260+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661ddc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c646e2b40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:02.701407+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd4c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c645674a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:03.701550+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c64566780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c68fe9000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.026555061s of 13.081507683s, submitted: 4
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c68fe9000 session 0x562c628ed4a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276045824 unmapped: 48881664 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:04.701692+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62874400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64e73c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:05.701889+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:06.702066+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:07.702241+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:08.702472+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030199 data_alloc: 218103808 data_used: 3846144
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:09.703229+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:10.703374+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:11.703531+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:12.703790+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:13.703967+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030199 data_alloc: 218103808 data_used: 3846144
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:14.704160+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:15.704320+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.270869255s of 12.386258125s, submitted: 6
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 48275456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:16.704555+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278601728 unmapped: 46325760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:17.704719+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278700032 unmapped: 46227456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:18.704885+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067793 data_alloc: 218103808 data_used: 4562944
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:19.705058+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:20.705216+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:21.705441+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:22.705622+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:23.705812+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071175 data_alloc: 218103808 data_used: 4702208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:24.705969+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:25.706093+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:26.706313+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:27.706480+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:28.706655+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071495 data_alloc: 218103808 data_used: 4710400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:29.706811+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:30.706940+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:31.707080+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:32.707233+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62863400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c6279b0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62863400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c646ba000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661ddc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c61e57a40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd4c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c64effe00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c68fe9000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.001466751s of 16.817237854s, submitted: 48
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279355392 unmapped: 45572096 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:33.707344+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c68fe9000 session 0x562c628ec000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088659 data_alloc: 218103808 data_used: 4710400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c62748000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c6363e1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62863400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c64566d20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661ddc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c61e57a40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:34.707517+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:35.707685+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:36.707859+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:37.719423+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:38.719622+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088819 data_alloc: 218103808 data_used: 4714496
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:39.719797+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:40.719952+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62862c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:41.720144+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:42.720308+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.341195107s of 10.009945869s, submitted: 15
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c6279b0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:43.720514+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092880 data_alloc: 218103808 data_used: 4714496
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:44.720714+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f71800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:45.720925+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279543808 unmapped: 45383680 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:46.721111+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:47.721237+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:48.721421+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103412 data_alloc: 218103808 data_used: 6053888
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:49.721608+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:50.721803+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:51.721999+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:52.722183+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:53.722363+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103412 data_alloc: 218103808 data_used: 6053888
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:54.722698+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:55.722810+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:56.722960+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.366211891s of 13.564207077s, submitted: 1
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42450944 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:57.723087+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:58.723214+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x1da3fbf/0x1f31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172036 data_alloc: 218103808 data_used: 7139328
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:59.723386+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec790000/0x0/0x4ffc00000, data 0x1da7fbf/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:00.723534+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:01.723692+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:02.723818+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec790000/0x0/0x4ffc00000, data 0x1da7fbf/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:03.724001+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172664 data_alloc: 218103808 data_used: 7143424
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:04.724188+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:05.724315+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:06.724476+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec7a6000/0x0/0x4ffc00000, data 0x1daafbf/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:07.724602+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:08.724817+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166132 data_alloc: 218103808 data_used: 7143424
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:09.724963+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:10.725126+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:11.725269+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec7a6000/0x0/0x4ffc00000, data 0x1daafbf/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:12.725406+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:13.725534+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166772 data_alloc: 218103808 data_used: 7204864
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:14.725670+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902400 session 0x562c6363f0e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.399305344s of 18.292930603s, submitted: 109
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f71800 session 0x562c6551a780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:15.725800+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 42246144 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902400 session 0x562c645665a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:16.726065+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:17.726283+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:18.726438+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080618 data_alloc: 218103808 data_used: 4771840
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:19.726571+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:20.726716+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:21.726820+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:22.726944+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:23.727096+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874400 session 0x562c62a46000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64e73c00 session 0x562c64efe1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082950 data_alloc: 218103808 data_used: 4759552
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c643a4c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:24.727259+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280166400 unmapped: 44761088 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.822414398s of 10.087261200s, submitted: 43
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:25.727454+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280166400 unmapped: 44761088 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:26.727646+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:27.727838+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c643a4c00 session 0x562c64eff680
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:28.728055+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:29.728256+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:30.728418+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:31.728618+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:32.728834+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:33.728972+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:34.729101+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:35.729255+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:36.729624+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:37.729825+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:38.730089+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:39.730290+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:40.730435+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:41.730602+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:42.730846+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:43.731026+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:44.731188+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:45.731321+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:46.731470+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:47.731612+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:48.731851+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:49.731999+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:50.732120+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:51.732277+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:52.732436+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:53.732575+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:54.732712+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:55.732903+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:56.733135+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:57.733308+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:58.733487+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280182784 unmapped: 44744704 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:59.733652+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:00.733811+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:01.733985+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:02.734168+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:03.734358+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:04.734533+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:05.734718+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:06.734968+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:07.735164+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:08.735376+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:09.767737+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:10.767887+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:11.768135+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:12.768311+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:13.768606+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:14.768806+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:15.769060+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:16.769256+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:17.769452+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:18.769629+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:19.769808+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:20.769989+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:21.770182+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:22.770306+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:23.770511+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:24.770662+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:25.770783+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:26.770974+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:27.771111+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:28.771254+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:29.771380+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:30.771499+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:31.771681+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:32.771849+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:33.772020+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:34.772165+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:35.772270+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:36.772436+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:37.772648+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:38.772860+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:39.773035+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:40.773187+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:41.773364+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:42.773572+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:43.773675+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:44.773807+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:45.773927+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:46.774231+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:47.774385+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:48.774538+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:49.774729+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:50.774922+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:51.775056+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:52.775254+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:53.775428+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:54.775614+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:55.775817+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:56.775984+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:57.776171+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:58.776384+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:59.776553+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:00.776694+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:01.776821+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:02.776981+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:03.777181+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:04.777361+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:05.777503+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64901800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:06.778058+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.534408569s of 101.709854126s, submitted: 14
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c6551af00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c6456ad20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c6475e780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f41000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:07.778223+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41000 session 0x562c628e2d20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f63800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f63800 session 0x562c646f8f00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c64ef61e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:08.778422+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017460 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfaa000/0x0/0x4ffc00000, data 0x1197faf/0x1324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64901800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c646ba960
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:09.778559+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f41000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41000 session 0x562c61e4eb40
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c62732000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:10.778798+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 42319872 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64998000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f0000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:11.778945+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282615808 unmapped: 42311680 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:12.779102+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfa8000/0x0/0x4ffc00000, data 0x1197fe2/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:13.779257+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3043767 data_alloc: 218103808 data_used: 3309568
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:14.779423+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfa8000/0x0/0x4ffc00000, data 0x1197fe2/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:15.779543+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c627330e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f0000 session 0x562c646d94a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:16.779692+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.375305176s of 10.585134506s, submitted: 14
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:17.779833+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c62474d20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:18.779989+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:19.780148+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:20.780297+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:21.780426+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:22.780560+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:23.780850+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:24.780975+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:25.781086+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:26.781269+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:27.781406+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:28.781622+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:29.781865+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:30.782043+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:31.782176+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:32.782352+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:33.782548+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:34.782696+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:35.782847+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:36.783042+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:37.783257+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:38.783414+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:39.783563+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:40.784213+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:41.784384+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:42.784540+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:43.784702+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:44.784849+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:45.784984+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:46.785198+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c68fe8800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:47.785315+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.151807785s of 30.341138840s, submitted: 13
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:48.785428+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 42262528 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 291 ms_handle_reset con 0x562c68fe8800 session 0x562c62475e00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997150 data_alloc: 218103808 data_used: 53248
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:49.785552+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 42262528 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:50.785834+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:51.785992+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:52.786132+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:53.786263+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997150 data_alloc: 218103808 data_used: 53248
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:54.786396+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:55.786516+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:56.786702+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:57.786843+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:58.787027+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:59.787214+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:00.787328+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:01.787477+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:02.787600+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:03.787832+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:04.788033+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:05.788209+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:06.788469+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:07.788727+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:08.788956+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:09.789116+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:10.789315+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:11.789492+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:12.789691+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:13.789852+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:14.790025+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:15.790175+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:16.790386+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:17.790560+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:18.790733+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:19.790921+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:20.791086+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:21.791254+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:22.791379+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:23.791567+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:24.791717+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:25.791830+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:26.791964+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:27.792072+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:28.792286+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:29.792449+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:30.792565+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:31.792690+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:32.792832+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:33.792962+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:34.793123+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:35.793301+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:36.793492+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:37.793654+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:38.793806+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:39.793935+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:40.794088+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:41.794255+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:42.794398+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:43.794523+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:44.794649+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:45.794832+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:46.795028+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:47.795215+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:48.795358+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:49.795502+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:50.795639+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:51.795790+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:52.796025+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:53.796177+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:54.796413+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:55.796778+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282796032 unmapped: 42131456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:56.796937+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282796032 unmapped: 42131456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:57.797079+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:58.797208+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:59.797373+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:00.797507+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:01.797657+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:02.797828+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:03.798058+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:04.798247+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:05.798399+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:06.798630+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:07.798815+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:08.799002+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:09.799175+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:10.799318+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:11.799515+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:12.799673+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:13.799840+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:14.800092+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:15.800265+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:16.800458+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:17.800590+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:18.800814+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:19.800983+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:20.801127+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:21.801262+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:22.801435+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:23.801586+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:24.801818+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:25.802062+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:26.802315+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:27.802465+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:28.802628+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:29.802801+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:30.802951+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:31.803120+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:32.803274+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:33.803482+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:34.803633+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:35.803856+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:36.804097+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:37.804297+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:38.804471+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:39.804691+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:40.804852+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:41.805018+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:42.805144+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:43.805269+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:44.805422+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:45.805543+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:46.805823+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:47.806025+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:48.806343+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:49.806522+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:50.806667+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64687c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 290488320 unmapped: 34439168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:51.806794+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 ms_handle_reset con 0x562c64687c00 session 0x562c6315e1e0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:52.806970+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:53.807108+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:54.807237+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012284 data_alloc: 218103808 data_used: 6860800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:55.807380+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:56.807528+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:57.807842+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:58.808061+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:59.808226+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012284 data_alloc: 218103808 data_used: 6860800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:00.808345+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64900400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:01.808505+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 134.109100342s of 134.325363159s, submitted: 30
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 293 ms_handle_reset con 0x562c64900400 session 0x562c6315ed20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:02.808691+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 293 heartbeat osd_stat(store_statfs(0x4edaef000/0x0/0x4ffc00000, data 0x64f191/0x7de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:03.808886+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:04.809056+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941203 data_alloc: 218103808 data_used: 45056
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f0000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:05.809240+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 294 ms_handle_reset con 0x562c630f0000 session 0x562c64efe000
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 294 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:06.809477+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:07.809703+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:08.809867+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 294 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:09.810039+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902681 data_alloc: 218103808 data_used: 53248
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:10.810274+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:11.810528+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:12.810729+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64bd9c00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 294 ms_handle_reset con 0x562c64bd9c00 session 0x562c6315e780
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.945940018s of 11.156617165s, submitted: 55
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 295 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:13.810925+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:14.811086+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905655 data_alloc: 218103808 data_used: 53248
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:15.811151+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:16.811391+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:17.811491+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64687400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:18.811618+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf5a000/0x0/0x4ffc00000, data 0x1e27c5/0x374000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 ms_handle_reset con 0x562c64687400 session 0x562c6456af00
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:19.811856+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:20.811994+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:21.812323+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:22.812475+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:23.812606+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:24.812688+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:25.812816+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:26.813036+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:27.813234+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:28.813367+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:29.813548+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:30.813732+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:31.813961+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:32.814120+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:33.814272+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:34.814450+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:35.814565+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:36.814745+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:37.814967+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:38.815142+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:39.815325+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:40.815435+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:41.815559+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:42.815682+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:43.815833+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:44.816026+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:45.816196+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:46.816411+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:47.816548+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:48.816723+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:49.816905+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:50.817039+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:51.817156+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:52.817313+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:53.817448+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:54.817606+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:55.817797+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:56.817987+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:57.818190+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:58.818348+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:59.818498+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:00.818706+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 37K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1178 writes, 4161 keys, 1178 commit groups, 1.0 writes per commit group, ingest: 5.22 MB, 0.01 MB/s
                                           Interval WAL: 1178 writes, 469 syncs, 2.51 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:01.818829+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:02.818998+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:03.819140+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:04.819839+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:05.820012+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:06.820211+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:07.820338+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:08.820542+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:09.820688+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:10.820836+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:11.820996+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:12.821171+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:13.821367+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:14.821516+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:15.821627+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:16.821848+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:17.822010+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:18.822142+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:19.822344+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:20.822461+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:21.822601+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:22.822798+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:23.822953+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:24.823041+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:25.823146+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:26.823348+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:27.823492+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:28.823642+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:29.823784+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:30.823885+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:31.824017+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:32.824143+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:33.824280+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:34.824385+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:35.824501+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:36.824624+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:37.824762+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:38.824913+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:39.825059+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:40.825212+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:41.825348+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:42.825466+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:43.825639+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:44.825778+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:45.825909+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:46.826122+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:47.826284+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:48.826472+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:49.826655+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:50.826788+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:51.826965+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286146560 unmapped: 38780928 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:52.827151+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:53.827284+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:54.827412+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:55.827536+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 102.574424744s of 102.733085632s, submitted: 24
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:56.827683+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:57.827827+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286203904 unmapped: 38723584 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:58.827953+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:59.828104+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909566 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:00.828278+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:01.828433+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:02.828624+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:03.828822+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547c800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 38666240 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:04.828964+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 38674432 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2968052 data_alloc: 218103808 data_used: 61440
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:05.829118+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 47063040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.070027351s of 10.197218895s, submitted: 103
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 297 ms_handle_reset con 0x562c6547c800 session 0x562c61dd25a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:06.829296+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54e800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286277632 unmapped: 47046656 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:07.829430+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 47038464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74f000/0x0/0x4ffc00000, data 0x9e5f38/0xb7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 ms_handle_reset con 0x562c6c54e800 session 0x562c646f8d20
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:08.829557+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:09.829708+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:10.829875+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:11.830019+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:12.830173+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:13.830379+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:14.830526+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:15.830881+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:16.831053+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:17.831194+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:18.831316+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:19.831438+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:20.831605+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:21.831866+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:22.832051+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:23.832172+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:24.832295+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:25.832420+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:26.832610+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:27.832885+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:28.833093+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:29.833263+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:30.833415+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:31.833732+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:32.833950+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:33.834091+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:34.834235+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:35.834351+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:36.834516+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:37.834646+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:38.834828+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:39.835038+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286343168 unmapped: 46981120 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62874800
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.268466949s of 34.596256256s, submitted: 25
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:40.835232+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286351360 unmapped: 46972928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74d000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:41.835389+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286359552 unmapped: 46964736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:42.835551+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286384128 unmapped: 46940160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed749000/0x0/0x4ffc00000, data 0x9e9686/0xb84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:43.835726+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286384128 unmapped: 46940160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:44.835906+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287432704 unmapped: 45891584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982217 data_alloc: 218103808 data_used: 77824
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:45.836035+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 299 ms_handle_reset con 0x562c62874800 session 0x562c624745a0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:46.836209+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:47.836371+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed74a000/0x0/0x4ffc00000, data 0x9e9686/0xb84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:48.836499+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:49.836663+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981401 data_alloc: 218103808 data_used: 77824
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.584912777s of 10.009610176s, submitted: 37
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:50.836853+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:51.837025+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:52.837253+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:53.837920+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:54.838084+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985575 data_alloc: 218103808 data_used: 86016
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:55.838249+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:56.838474+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:57.838782+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:58.839001+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:59.839315+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:00.839480+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:01.839727+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:02.839965+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:03.840221+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:04.840498+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:05.840640+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:06.840858+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:07.841077+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:08.841289+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:09.841513+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:10.841699+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:11.841901+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:12.842102+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:13.842262+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:14.842410+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:15.842554+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:16.842799+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:17.842917+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:18.843054+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:19.843249+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 46866432 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:20.843633+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:21.843897+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:22.844092+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:23.844282+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:24.844437+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:25.844600+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:26.844797+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:27.844986+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:28.845128+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:29.845288+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:30.845425+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:31.845580+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:32.845810+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:33.845933+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:34.846022+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:35.846184+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:36.846426+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:37.846609+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:38.846842+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:39.847031+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:40.847193+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:41.847376+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:42.847741+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:43.847970+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:44.848215+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:45.848355+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:46.848641+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:47.848875+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:48.849088+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:49.849247+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:50.849415+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:51.849610+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:52.849781+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:53.850137+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:54.850290+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:55.850408+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:56.850607+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:57.850803+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:58.850905+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:59.851037+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:00.851206+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:01.851386+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:02.851528+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:03.851683+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:04.851820+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:05.851921+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:06.852098+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:07.852207+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:08.852373+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:09.852563+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:10.852673+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:11.852812+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:12.852953+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:13.853085+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:14.853219+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:15.853414+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:16.853600+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:17.853714+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:18.853870+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:19.854032+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:20.854279+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:21.854439+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:22.854568+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:23.854690+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:24.854802+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:25.854917+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:26.855058+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:27.855238+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:28.855375+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:29.855521+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:30.855727+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:31.855827+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:32.855970+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:33.856118+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:34.856246+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:35.856383+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:36.856535+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:37.856668+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:38.856858+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:39.857036+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:40.857158+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:41.857284+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:42.857425+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:43.876518+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:44.876632+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:45.876872+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:46.877087+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:47.877224+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:48.877384+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:49.877537+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:50.877669+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:51.877836+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:52.877958+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:53.878092+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:54.878218+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:55.878363+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:56.878528+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:57.878692+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:58.878841+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:59.879210+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:00.880073+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:01.880312+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:02.881210+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:03.881861+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:04.882189+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:05.882675+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:06.883164+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:07.883571+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:08.884092+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:09.884394+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:10.884797+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:11.885110+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:12.885393+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:13.885626+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:14.885881+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:15.886056+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:16.886299+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:17.886517+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:18.886811+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:19.887054+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:20.887272+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:21.887485+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:22.887726+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:23.887915+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:24.888110+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:25.888287+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:26.888537+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:27.888790+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:28.888961+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:29.889131+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:30.889311+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:31.889520+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:32.889703+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:33.889955+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:34.890093+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:35.890245+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:36.890449+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:37.890639+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:38.890788+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:39.890953+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:40.891085+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:41.891253+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:42.891448+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:43.891896+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:44.892288+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:45.892573+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:46.892845+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:47.893079+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:48.893274+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:49.893520+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:50.893848+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:51.894150+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:52.894417+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:53.894626+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:54.894910+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:55.895090+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:56.895320+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:57.895499+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:58.895702+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:59.895883+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:00.896043+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:01.896238+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:02.896446+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:03.896614+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:04.896840+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:05.897020+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:06.897219+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: mgrc ms_handle_reset ms_handle_reset con 0x562c62b25400
Oct 02 09:41:46 compute-0 ceph-osd[90385]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:41:46 compute-0 ceph-osd[90385]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: get_auth_request con 0x562c62874800 auth_method 0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:07.897369+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:08.897547+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:09.897806+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:10.897944+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:11.898103+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:12.898324+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:13.898499+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:14.898659+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:15.898793+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:16.899006+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:17.899153+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:18.899348+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:19.899546+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:20.899683+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:21.899873+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:22.900090+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:23.900263+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:24.900443+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:25.900565+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:26.900732+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:27.900903+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:28.901052+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:29.901181+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:30.901370+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:31.901655+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:32.901857+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:33.901989+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:34.902130+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:35.902250+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:36.902446+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:37.902606+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:38.902946+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:39.903202+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:40.903342+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:41.903469+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:42.903589+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:43.903715+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:44.903831+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:45.903999+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 46522368 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:46.904190+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 46514176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:47.904332+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 46514176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:48.904517+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:49.904636+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:50.904807+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:51.905057+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:52.905215+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:53.905406+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:54.905641+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:55.905786+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:56.905970+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:57.906108+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:58.906349+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:59.906521+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:00.907056+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:01.907258+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:02.907410+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:03.907572+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:04.907827+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:05.908046+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:06.908276+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:07.908403+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:08.908595+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:09.908804+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:10.908978+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286851072 unmapped: 46473216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:11.909115+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286851072 unmapped: 46473216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:12.909299+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:13.909458+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:14.909612+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:15.909775+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:16.910065+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:17.910201+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:18.910380+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:19.910558+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:20.910735+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:21.910925+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:22.911099+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:23.911266+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:24.911675+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:25.911807+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:26.911998+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:27.912174+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:28.912368+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:29.912517+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:30.912678+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:31.912807+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:32.913004+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:33.913130+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:34.913318+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286900224 unmapped: 46424064 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:35.913470+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286908416 unmapped: 46415872 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:36.913661+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286908416 unmapped: 46415872 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:37.913826+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:38.914013+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:39.914188+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:40.914452+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:46 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:46 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:41.914661+0000)
Oct 02 09:41:46 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:46 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:46 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:42.914795+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:43.914920+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:44.915075+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:45.915246+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:46.915459+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:47.915599+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:48.915768+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:49.915914+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:50.916054+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:51.916178+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:52.916326+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:53.916456+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:54.916566+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:55.916679+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:56.916822+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:57.917280+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286957568 unmapped: 46366720 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:58.917440+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:59.917674+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:00.917890+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:01.918070+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:02.918214+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:03.918432+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:04.918633+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:05.918849+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:06.919106+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:07.919283+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:08.919450+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:09.919653+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:10.919784+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:11.919918+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:12.920106+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:13.920281+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:14.920448+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:15.920615+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:16.920887+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:17.921037+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:18.921223+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:19.921389+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:20.921564+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:21.921904+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:22.922058+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:23.922238+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:24.922398+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:25.922534+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:26.922732+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:27.922966+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:28.923182+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:29.923355+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:30.923540+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:31.923710+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:32.923971+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:33.924151+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:34.924285+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:35.924429+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:36.924629+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:37.924805+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:38.924974+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:39.925160+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:40.925324+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:41.925478+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:42.925646+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:43.925803+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:44.925986+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:45.926134+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:46.926333+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:47.926472+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:48.926606+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:49.926730+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:50.926882+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:51.927127+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:52.927324+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:53.927536+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:54.927689+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:55.927854+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:56.928087+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:57.928263+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:58.928401+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:59.928533+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:00.928804+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:01.928942+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:02.929150+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:03.929320+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:04.929477+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:05.929626+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:06.929827+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:07.929981+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:08.930146+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:09.930296+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:10.930476+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 46211072 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:11.930636+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 46211072 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:12.930812+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:13.930998+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:14.931208+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:15.931331+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:16.931516+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:17.931672+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:18.931844+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:19.931997+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:20.932134+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:21.932311+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:22.932499+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:23.932665+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:24.932836+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:25.933004+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:26.933187+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:27.933564+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:28.933743+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:29.934024+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:30.934178+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:31.934315+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:32.934438+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:33.934569+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:34.934789+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:35.934955+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:36.935148+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:37.935270+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:38.935405+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:39.935556+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:40.935712+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:41.935873+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:42.936036+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:43.936179+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 46145536 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:44.936298+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 46145536 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:45.936432+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:46.936648+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:47.936806+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:48.936945+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:49.937145+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:50.937351+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 46129152 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:51.937503+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:52.937647+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:53.937835+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:54.938053+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:55.938237+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:56.938494+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:57.938673+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:58.938859+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:59.939091+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:00.939250+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:01.939418+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:02.939581+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:03.939829+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:04.939989+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:05.940178+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:06.940359+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:07.940512+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:08.940666+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:09.940834+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:10.940994+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:11.941124+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:12.941245+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:13.941375+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:14.941503+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:15.941632+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 46080000 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:16.941797+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 46080000 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:17.941932+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 46071808 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:18.942068+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:19.942190+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:20.942368+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:21.942562+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:22.942854+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:23.943021+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:24.943193+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:25.943433+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:26.943628+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:27.944050+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:28.944197+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:29.944340+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:30.944520+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:31.944737+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:32.944934+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:33.945073+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:34.945213+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:35.945407+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:36.945600+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:37.945832+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:38.945999+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:39.946179+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:40.946365+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:41.946513+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:42.946716+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:43.946950+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:44.947102+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:45.959559+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:46.959799+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:47.959967+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:48.960111+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:49.960241+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:50.960369+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:51.960500+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:52.960690+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:53.960851+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:54.960995+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:55.961127+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:56.961298+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:57.961528+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:58.961808+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:59.961967+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:00.962122+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 145K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 338 writes, 741 keys, 338 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 338 writes, 158 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:01.962290+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:02.962412+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:03.962595+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:04.962732+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:05.962901+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:06.963094+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:07.963243+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:08.963373+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:09.963537+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:10.963719+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:11.964098+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:12.964260+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:13.964466+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:14.964619+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:15.964761+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:16.965033+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:17.965193+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:18.965362+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:19.965504+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287375360 unmapped: 45948928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:20.965655+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287375360 unmapped: 45948928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:21.965803+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:22.965956+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:23.966104+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:24.966245+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:25.966392+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:26.966541+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:27.966680+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:28.966802+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:29.966935+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:30.967052+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:31.967179+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:32.967306+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:33.967464+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 45924352 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:34.967613+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:35.967775+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:36.967939+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:37.968050+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:38.968158+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:39.968282+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:40.968423+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:41.968562+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:42.968689+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:43.968846+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:44.968947+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:45.969096+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:46.969299+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64900800
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 536.308471680s of 536.377807617s, submitted: 15
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 45899776 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985147 data_alloc: 218103808 data_used: 94208
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:47.969518+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 301 ms_handle_reset con 0x562c64900800 session 0x562c64d21c20
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:48.969689+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:49.969819+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 301 heartbeat osd_stat(store_statfs(0x4edf44000/0x0/0x4ffc00000, data 0x1ecc74/0x388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c647e6800
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:50.969936+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 302 ms_handle_reset con 0x562c647e6800 session 0x562c6279be00
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:51.970036+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935656 data_alloc: 218103808 data_used: 110592
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:52.970168+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:53.970240+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 302 heartbeat osd_stat(store_statfs(0x4edf43000/0x0/0x4ffc00000, data 0x1ee812/0x389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:54.970363+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:55.970476+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649f1800
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 45817856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:56.970659+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.243688583s of 10.019772530s, submitted: 100
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 45809664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942373 data_alloc: 218103808 data_used: 114688
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:57.970788+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 45760512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 ms_handle_reset con 0x562c649f1800 session 0x562c628ec3c0
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:58.970956+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 45719552 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:59.971084+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:00.971249+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:01.971457+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:02.971593+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:03.971661+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:04.971792+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:05.971988+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:06.972122+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:07.972244+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:08.972348+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:09.972472+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:10.972597+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:11.972857+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:12.973063+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:13.973179+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:14.973318+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 45686784 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:15.973466+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 45686784 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:16.973662+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:17.973834+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:18.973982+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:19.974152+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:20.974352+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:21.974509+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:22.974678+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:23.974824+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:24.974991+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:25.975166+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:26.975343+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:27.975454+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:28.975592+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:29.975894+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:30.976024+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:31.976146+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:32.976271+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:33.976514+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:34.976669+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:35.976801+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:36.976972+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:37.977170+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:38.977317+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:39.977460+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:40.977578+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:41.977733+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:42.977922+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:43.978085+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:44.978214+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:45.978400+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:46.978563+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:47.978681+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:48.978865+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:49.979053+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:50.979248+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:51.979437+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:52.979570+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:53.979725+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:54.979900+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 45629440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:55.980004+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 45629440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6bb59400
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.478626251s of 59.437236786s, submitted: 96
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 304 handle_osd_map epochs [305,305], i have 305, src has [1,305]
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:56.980203+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 305 ms_handle_reset con 0x562c6bb59400 session 0x562c628e30e0
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949134 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:57.980327+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:58.980457+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf3b000/0x0/0x4ffc00000, data 0x1f39c3/0x392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:59.980619+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:00.980801+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf3b000/0x0/0x4ffc00000, data 0x1f39c3/0x392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:01.980948+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: get_auth_request con 0x562c61dbd400 auth_method 0
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:02.981114+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 45555712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:03.981287+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 45555712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:04.981437+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:05.981589+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:06.981777+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:07.981872+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:08.981969+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:09.982130+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:10.982335+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:11.982460+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:12.982583+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:13.982810+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:14.982950+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:15.983095+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:16.983253+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:17.983382+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:18.983681+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:19.983865+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:20.984062+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:21.984254+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:22.984373+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:23.984495+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:24.984675+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:25.984809+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:26.985038+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:27.985213+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:28.985368+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:29.985570+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:30.985686+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:31.985813+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:32.985906+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:33.986037+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:34.986180+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:35.986415+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:36.986664+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:37.986826+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:38.987030+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:39.987367+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:40.987691+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:41.987986+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:42.988127+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:43.988333+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:44.988644+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:45.988978+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:46.989212+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:47.989329+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:48.989595+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:49.989796+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:50.990024+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:51.990187+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:52.990353+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:53.990797+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:54.990959+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:55.991182+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:56.991350+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:57.991597+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:58.991829+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 45457408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:59.991982+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 45457408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:00.992119+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 45449216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:01.992253+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 45449216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:02.992442+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:03.992594+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:04.992781+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:05.992979+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:06.993150+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:07.993281+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:08.993404+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:09.993528+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:10.993641+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:11.993770+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:12.993895+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:47 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:47 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 45293568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:13.994022+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'config show' '{prefix=config show}'
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 45817856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:14.994159+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:41:47 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 45850624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:41:47 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:15.994297+0000)
Oct 02 09:41:47 compute-0 ceph-osd[90385]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:41:47 compute-0 ceph-mon[74477]: from='client.23169 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mon[74477]: from='client.23172 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3925258120' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mon[74477]: from='client.23175 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/180055983' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mon[74477]: pgmap v3725: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:47 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23179 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 09:41:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696113627' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23183 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 09:41:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751981710' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 09:41:47 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mon[74477]: from='client.23179 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2696113627' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mon[74477]: from='client.23183 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1751981710' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mon[74477]: from='client.23187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 09:41:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915543584' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23191 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 09:41:48 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3965773376' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23195 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/915543584' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 09:41:49 compute-0 ceph-mon[74477]: from='client.23191 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3965773376' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 09:41:49 compute-0 ceph-mon[74477]: from='client.23195 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:49 compute-0 ceph-mon[74477]: pgmap v3726: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:49 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23201 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:49 compute-0 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:41:49.343+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 09:41:49 compute-0 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 09:41:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 09:41:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719431498' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 09:41:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 09:41:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/474808673' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 09:41:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 09:41:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252180943' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 09:41:50 compute-0 nova_compute[260603]: 2025-10-02 09:41:50.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:50 compute-0 ceph-mon[74477]: from='client.23201 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2719431498' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 09:41:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/474808673' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 09:41:50 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4252180943' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 09:41:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 09:41:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130492162' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 09:41:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 09:41:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233334211' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 09:41:50 compute-0 crontab[461949]: (root) LIST (root)
Oct 02 09:41:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 09:41:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/400909382' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 09:41:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 09:41:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270087015' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 09:41:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 09:41:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1347777016' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 09:41:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976900227' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1130492162' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2233334211' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/400909382' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2270087015' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-mon[74477]: pgmap v3727: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1347777016' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2976900227' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:54.605807+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 54337536 heap: 383778816 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.584579468s of 45.537818909s, submitted: 71
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:55.605958+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 329449472 unmapped: 54329344 heap: 383778816 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb75f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3319974 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:56.606110+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 329449472 unmapped: 54329344 heap: 383778816 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da193b680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da060ab40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f911c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da2309400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da2309400 session 0x562d9f4af4a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f948800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:57.606330+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 48750592 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da17014a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1982f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da16f4b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:58.606526+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da202b860
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328ec00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328ec00 session 0x562da14b4f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327884800 unmapped: 59572224 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae3f000/0x0/0x4ffc00000, data 0x17d8816/0x196f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:08:59.606678+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327884800 unmapped: 59572224 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:00.606879+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327819264 unmapped: 59637760 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395744 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:01.607123+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:02.607268+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:03.607470+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae3f000/0x0/0x4ffc00000, data 0x17d884f/0x196f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:04.607680+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:05.607826+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395744 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:06.607949+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:07.608205+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:08.608381+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae3f000/0x0/0x4ffc00000, data 0x17d884f/0x196f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:09.608529+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f948800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.190498352s of 15.229944229s, submitted: 132
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da1e9a960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:10.608874+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327999488 unmapped: 59457536 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398952 data_alloc: 218103808 data_used: 8122368
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:11.609074+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327999488 unmapped: 59457536 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:12.609330+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327999488 unmapped: 59457536 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:13.609495+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:14.609818+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:15.610027+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465992 data_alloc: 218103808 data_used: 17485824
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:16.610784+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:17.610989+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:18.611244+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:19.611458+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:20.611605+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465992 data_alloc: 218103808 data_used: 17485824
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:21.611815+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:22.611969+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.618291855s of 12.628442764s, submitted: 2
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:23.612144+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98c3000/0x0/0x4ffc00000, data 0x1bb484f/0x1d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334258176 unmapped: 53198848 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:24.612351+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334258176 unmapped: 53198848 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:25.612514+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 52977664 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521134 data_alloc: 234881024 data_used: 18034688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:26.612691+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:27.612866+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:28.613066+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e96c6000/0x0/0x4ffc00000, data 0x1db184f/0x1f48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:29.613260+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:30.613450+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519938 data_alloc: 234881024 data_used: 18042880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:31.613633+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:32.613905+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:33.614065+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e96a5000/0x0/0x4ffc00000, data 0x1dd284f/0x1f69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:34.614167+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:35.614370+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519938 data_alloc: 234881024 data_used: 18042880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:36.614581+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:37.614745+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da193a5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949800 session 0x562d9f4b0d20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9facd000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562d9f8d83c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3171c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9f8d9680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3171c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.862783432s of 15.095202446s, submitted: 84
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d82000/0x0/0x4ffc00000, data 0x26f3888/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9f8d9c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:38.614932+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f948800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da1c8ad20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949800 session 0x562da068d2c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9facd000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562da1e9bc20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f8d8d20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:39.615122+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:40.615233+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3593837 data_alloc: 234881024 data_used: 18042880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:41.615443+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:42.615624+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:43.615781+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:44.615946+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da202bc20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:45.616114+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f948800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da2063e00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3593837 data_alloc: 234881024 data_used: 18042880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:46.616261+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949800 session 0x562da2882780
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9facd000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562d9f911860
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:47.616395+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3171c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 54468608 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:48.616557+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 54468608 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:49.616663+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:50.616762+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3654474 data_alloc: 234881024 data_used: 26480640
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:51.616877+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:52.617007+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:53.617141+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:54.617280+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:55.617435+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.848459244s of 18.252662659s, submitted: 42
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3654826 data_alloc: 234881024 data_used: 26480640
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:56.617572+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:57.617707+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:58.617894+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338001920 unmapped: 51077120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:59.618048+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8350000/0x0/0x4ffc00000, data 0x31258c1/0x32be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338354176 unmapped: 50724864 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e834b000/0x0/0x4ffc00000, data 0x31298c1/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:00.618195+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e834b000/0x0/0x4ffc00000, data 0x31298c1/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3750220 data_alloc: 234881024 data_used: 27578368
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:01.618352+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8341000/0x0/0x4ffc00000, data 0x31338c1/0x32cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:02.618502+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:03.618690+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:04.618832+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:05.619036+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8341000/0x0/0x4ffc00000, data 0x31338c1/0x32cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3750220 data_alloc: 234881024 data_used: 27578368
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:06.619187+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:07.619298+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:08.619430+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.315657616s of 12.759304047s, submitted: 82
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:09.619621+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:10.619836+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8341000/0x0/0x4ffc00000, data 0x31338c1/0x32cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 50970624 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3752140 data_alloc: 234881024 data_used: 27672576
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:11.620045+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 50970624 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9f4b1c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da060a000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:12.620513+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3171c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562da03d0960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:13.620660+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:14.620826+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:15.620980+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:16.621084+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533874 data_alloc: 234881024 data_used: 18051072
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e95c7000/0x0/0x4ffc00000, data 0x1de484f/0x1f7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:17.621186+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1f85860
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da16f41e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:18.621314+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f948800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 50905088 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.686920166s of 10.025838852s, submitted: 74
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:19.621424+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da16f54a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:20.621584+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:21.621803+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:22.621972+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:23.622111+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:24.622265+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:25.622383+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:26.622516+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:27.622655+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:28.622807+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:29.622943+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:30.623078+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:31.623258+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:32.623401+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:33.623586+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:34.623809+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:35.623976+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:36.624147+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:37.624287+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:38.624435+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:39.624603+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:40.624736+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:41.624933+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:42.625085+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:43.625229+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:44.625354+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:45.625514+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:46.625634+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:47.625814+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 54165504 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:48.625984+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 54165504 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:49.626151+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 54165504 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:50.626404+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 54165504 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:51.626591+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:52.626897+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:53.627220+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:54.627513+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:55.627693+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:56.627842+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:57.627984+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:58.628168+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:59.628375+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:00.628548+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:01.628735+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:02.628984+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:03.629138+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:04.629273+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:05.629438+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:06.629615+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334938112 unmapped: 54140928 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:07.629862+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334938112 unmapped: 54140928 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.946887970s of 49.407531738s, submitted: 22
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:08.630015+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343089152 unmapped: 49668096 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd3000/0x0/0x4ffc00000, data 0x16a5806/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4,0,5])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:09.630163+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da068d2c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da14b4f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da16f4b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3171c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9f4af4a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949800 session 0x562da1f19c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:10.630300+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:11.630525+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411752 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:12.631017+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:13.631328+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd3000/0x0/0x4ffc00000, data 0x16a583f/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:14.631587+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:15.631808+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:16.631966+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411752 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:17.632125+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:18.632344+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:19.632524+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd3000/0x0/0x4ffc00000, data 0x16a583f/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1709a40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 57778176 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:20.632656+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562d9f9885a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 57778176 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:21.632812+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd3000/0x0/0x4ffc00000, data 0x16a583f/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411752 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 57778176 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:22.632972+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da17081e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3171c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.625880241s of 14.252739906s, submitted: 39
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9edd54a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334995456 unmapped: 57761792 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9facd000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:23.633111+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:24.633230+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:25.633349+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:26.633458+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414638 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:27.633605+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:28.633736+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:29.633887+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:30.634026+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:31.634186+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472558 data_alloc: 218103808 data_used: 14942208
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:32.634318+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:33.634451+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:34.634610+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:35.634815+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:36.635014+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.789023399s of 13.874046326s, submitted: 11
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498930 data_alloc: 218103808 data_used: 14954496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336822272 unmapped: 55934976 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:37.635129+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0x1c6b862/0x1e02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336822272 unmapped: 55934976 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:38.635305+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97d9000/0x0/0x4ffc00000, data 0x1c96862/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:39.635452+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:40.635666+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:41.637402+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3530596 data_alloc: 218103808 data_used: 14950400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97d9000/0x0/0x4ffc00000, data 0x1c96862/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:42.638360+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:43.638837+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:44.639388+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 56598528 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:45.639648+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 56598528 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:46.639933+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525844 data_alloc: 218103808 data_used: 14954496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:47.640351+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97de000/0x0/0x4ffc00000, data 0x1c99862/0x1e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:48.640520+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:49.640813+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:50.641028+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:51.641202+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525844 data_alloc: 218103808 data_used: 14954496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:52.641409+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97de000/0x0/0x4ffc00000, data 0x1c99862/0x1e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:53.641548+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da03d0d20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1701e00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da18b4780
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da202ad20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:54.641716+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.834115982s of 18.167894363s, submitted: 75
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da17094a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:55.641869+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:56.642266+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645462 data_alloc: 218103808 data_used: 14954496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:57.642555+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8921000/0x0/0x4ffc00000, data 0x2b56862/0x2ced000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:58.642714+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:59.642971+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:00.643106+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:01.643320+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645462 data_alloc: 218103808 data_used: 14954496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:02.643492+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:03.643632+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8921000/0x0/0x4ffc00000, data 0x2b56862/0x2ced000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:04.643772+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3171c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562da19063c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:05.643897+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562d9f8d61e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da02fc000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:06.644002+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645462 data_alloc: 218103808 data_used: 14954496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:07.644130+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da1771680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337584128 unmapped: 63569920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:08.644302+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8921000/0x0/0x4ffc00000, data 0x2b56862/0x2ced000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.322172165s of 14.704947472s, submitted: 23
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:09.644448+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 63561728 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:10.644576+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 62341120 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:11.644827+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3745354 data_alloc: 234881024 data_used: 28442624
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:12.646326+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:13.646531+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e891f000/0x0/0x4ffc00000, data 0x2b57862/0x2cee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:14.647529+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:15.648263+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:16.648407+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3745354 data_alloc: 234881024 data_used: 28442624
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:17.648964+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:18.649442+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:19.649645+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e891f000/0x0/0x4ffc00000, data 0x2b57862/0x2cee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:20.649827+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.767458916s of 11.075243950s, submitted: 3
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 54853632 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:21.650217+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 52125696 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e81dc000/0x0/0x4ffc00000, data 0x329b862/0x3432000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,19])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3820578 data_alloc: 234881024 data_used: 28479488
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:22.650582+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 52232192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:23.650927+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 52232192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7ef2000/0x0/0x4ffc00000, data 0x3585862/0x371c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:24.651068+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 51331072 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:25.651279+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 51109888 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7ea3000/0x0/0x4ffc00000, data 0x35d3862/0x376a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,2])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:26.651460+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 50888704 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3842816 data_alloc: 234881024 data_used: 31006720
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:27.651625+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 50855936 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:28.651838+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 50855936 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e6d000/0x0/0x4ffc00000, data 0x3609862/0x37a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:29.652058+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 50855936 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e6d000/0x0/0x4ffc00000, data 0x3609862/0x37a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:30.652217+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:31.652679+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.121047020s of 11.344943047s, submitted: 137
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841648 data_alloc: 234881024 data_used: 31023104
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:32.652797+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4e000/0x0/0x4ffc00000, data 0x3629862/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:33.652992+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4e000/0x0/0x4ffc00000, data 0x3629862/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:34.653117+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:35.653257+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:36.653501+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841648 data_alloc: 234881024 data_used: 31023104
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:37.653734+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4e000/0x0/0x4ffc00000, data 0x3629862/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:38.653996+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:39.654168+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:40.654277+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:41.654476+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841648 data_alloc: 234881024 data_used: 31023104
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:42.654699+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4e000/0x0/0x4ffc00000, data 0x3629862/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.616966248s of 10.726793289s, submitted: 2
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:43.654918+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:44.655098+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:45.655255+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:46.655400+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3842188 data_alloc: 234881024 data_used: 31031296
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:47.655586+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4b000/0x0/0x4ffc00000, data 0x362c862/0x37c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:48.655812+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4b000/0x0/0x4ffc00000, data 0x362c862/0x37c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:49.655984+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:50.656109+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:51.656304+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 50798592 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da20630e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3844268 data_alloc: 234881024 data_used: 31129600
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4b000/0x0/0x4ffc00000, data 0x362c862/0x37c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:52.656514+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 50798592 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.174716949s of 10.465250015s, submitted: 3
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da17010e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:53.656701+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 54419456 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97db000/0x0/0x4ffc00000, data 0x1c9c862/0x1e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:54.656925+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 54419456 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:55.657078+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 54419456 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:56.657174+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 54419456 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3537040 data_alloc: 218103808 data_used: 14401536
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562da02c12c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da14b4b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:57.657334+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 54411264 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da19825a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba800/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:58.657482+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:59.657606+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:00.657835+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:01.658224+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:02.658366+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:03.658537+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:04.658660+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:05.658834+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:06.659002+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:07.659916+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:08.660119+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:09.660294+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:10.660436+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:11.660628+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:12.660814+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:13.660998+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:14.661175+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:15.661348+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:16.661541+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:17.661813+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:18.662009+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:19.662213+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:20.662383+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:21.662569+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:22.662780+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:23.662948+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:24.663126+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:25.663314+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:26.663452+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:27.663627+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:28.663918+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:29.664084+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:30.664319+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:31.664536+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:32.664678+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da14b4960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da1e5c5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1906f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9facd000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562d9f8d92c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.645301819s of 40.069389343s, submitted: 91
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:33.664883+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da1e9be00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 61734912 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da285cd20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da0340c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da16f52c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562d9f8d81e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9facd000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562da285cd20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea133000/0x0/0x4ffc00000, data 0x134484e/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:34.665109+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:35.665383+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:36.665557+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405689 data_alloc: 218103808 data_used: 8122368
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:37.665717+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:38.665911+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:39.666047+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea133000/0x0/0x4ffc00000, data 0x134484e/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:40.666174+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:41.666781+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f8d92c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405689 data_alloc: 218103808 data_used: 8122368
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1906f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:42.666968+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da14b4960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.455097198s of 10.046999931s, submitted: 23
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:43.667199+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da19825a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9facd000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:44.667353+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:45.667562+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:46.667808+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429607 data_alloc: 218103808 data_used: 11268096
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:47.667953+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:48.668094+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:49.668225+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:50.668408+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:51.668544+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440967 data_alloc: 218103808 data_used: 12881920
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:52.668667+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:53.669009+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:54.669126+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:55.669227+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:56.669359+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.729443550s of 13.749686241s, submitted: 1
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 nova_compute[260603]: 2025-10-02 09:41:51.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473237 data_alloc: 218103808 data_used: 12881920
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:57.669602+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343187456 unmapped: 57966592 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:58.669824+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343236608 unmapped: 57917440 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:59.669971+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:00.670204+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98fb000/0x0/0x4ffc00000, data 0x1b7b85e/0x1d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:01.670431+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98fb000/0x0/0x4ffc00000, data 0x1b7b85e/0x1d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520797 data_alloc: 218103808 data_used: 13578240
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:02.670599+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:03.670839+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:04.671022+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:05.671230+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:06.671371+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520013 data_alloc: 218103808 data_used: 13590528
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:07.671510+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98f8000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:08.671696+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98f8000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:09.671823+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:10.671936+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98f8000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:11.672094+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520013 data_alloc: 218103808 data_used: 13590528
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:12.672261+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:13.672410+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:14.672596+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98f8000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:15.672844+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1f85c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562d9f8d63c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1964000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da1ad4d20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562d9f9103c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:16.673019+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.446899414s of 19.630510330s, submitted: 91
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da18b4b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da170ba40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1964000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562d9f988f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1a1b2c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562d9f8d6780
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 55746560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577041 data_alloc: 218103808 data_used: 13590528
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:17.673168+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 55746560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:18.673329+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 55746560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e924b000/0x0/0x4ffc00000, data 0x222a86e/0x23c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:19.673472+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:20.673626+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:21.673940+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577041 data_alloc: 218103808 data_used: 13590528
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:22.674131+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e924b000/0x0/0x4ffc00000, data 0x222a86e/0x23c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:23.674285+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:24.674515+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:25.674643+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da27ce5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 55697408 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:26.674799+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1964000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 55697408 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9226000/0x0/0x4ffc00000, data 0x224e891/0x23e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580874 data_alloc: 218103808 data_used: 13598720
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:27.674931+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 55689216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:28.675064+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 55730176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:29.675187+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.688966751s of 13.939682961s, submitted: 17
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:30.675326+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:31.675514+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9226000/0x0/0x4ffc00000, data 0x224e891/0x23e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623130 data_alloc: 234881024 data_used: 19468288
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:32.675700+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:33.675880+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:34.676091+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9226000/0x0/0x4ffc00000, data 0x224e891/0x23e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:35.676259+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9226000/0x0/0x4ffc00000, data 0x224e891/0x23e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:36.676371+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623130 data_alloc: 234881024 data_used: 19468288
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:37.676555+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:38.676690+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 53026816 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:39.676823+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 52486144 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8066000/0x0/0x4ffc00000, data 0x2ffe891/0x3198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.595734596s of 10.033220291s, submitted: 70
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:40.677074+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 52412416 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:41.677284+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 52412416 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3736804 data_alloc: 234881024 data_used: 20041728
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:42.677518+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:43.677741+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e801c000/0x0/0x4ffc00000, data 0x3048891/0x31e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:44.677933+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:45.678073+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:46.678191+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:47.678444+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739248 data_alloc: 234881024 data_used: 20037632
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:48.678618+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e801c000/0x0/0x4ffc00000, data 0x3048891/0x31e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:49.678872+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:50.679030+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da202a5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:51.679183+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da1e9a5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.884395599s of 11.480986595s, submitted: 23
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55320576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da25825a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:52.679336+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534844 data_alloc: 218103808 data_used: 13668352
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55320576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:53.679480+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55320576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e94c3000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:54.679646+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55320576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562da02c12c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f4b14a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:55.680323+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 55304192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:56.680501+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54247424 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9895000/0x0/0x4ffc00000, data 0x114f84f/0x12e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:57.680686+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3393446 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54247424 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:58.680876+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:59.681024+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:00.681147+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:01.681354+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.958738804s of 10.038612366s, submitted: 58
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:02.681504+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:03.681647+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:04.681823+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1ae43c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:05.681941+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:06.682112+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:07.682293+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:08.682428+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:09.682566+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:10.682718+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:11.683068+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:12.683251+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:13.683440+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:14.683570+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:15.683739+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:16.683936+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:17.684072+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:18.684190+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 54280192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:19.684405+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 54280192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:20.684588+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 54280192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:21.684826+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 54280192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:22.685005+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:23.685212+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:24.685378+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:25.685530+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:26.685647+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:27.685808+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:28.685985+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:29.686173+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:30.686325+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:31.686520+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:32.686720+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:33.686876+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:34.687047+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:35.687190+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:36.687350+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:37.687561+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:38.687791+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:39.687923+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:40.688063+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:41.688281+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:42.688448+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:43.688597+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54247424 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:44.688824+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54247424 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:45.688950+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1964000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.313434601s of 44.072120667s, submitted: 4
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 53207040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,5])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:46.689154+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,6,5])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 48996352 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:47.689348+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451037 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 51093504 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:48.689543+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54239232 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:49.689874+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da03d1e00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54239232 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:50.690003+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54239232 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:51.690197+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54239232 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:52.690339+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437597 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54231040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:53.690463+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54231040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:54.690604+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54231040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:55.690794+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.686579227s of 10.136670113s, submitted: 14
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54231040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:56.690943+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 54222848 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:57.691108+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438038 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1708b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 54222848 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:58.691261+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 54222848 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:59.691355+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:00.691486+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:01.691599+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:02.691818+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467930 data_alloc: 218103808 data_used: 12324864
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:03.691947+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:04.692045+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:05.692175+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:06.692352+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:07.692561+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467930 data_alloc: 218103808 data_used: 12324864
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:08.692677+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:09.692829+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:10.692974+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.153639793s of 14.558044434s, submitted: 3
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347774976 unmapped: 53379072 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:11.693131+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:12.693316+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492032 data_alloc: 218103808 data_used: 12374016
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:13.693492+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:14.693666+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9758000/0x0/0x4ffc00000, data 0x19117dd/0x1aa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:15.693854+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9758000/0x0/0x4ffc00000, data 0x19117dd/0x1aa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:16.693984+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:17.694157+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509594 data_alloc: 218103808 data_used: 12570624
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:18.694329+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:19.694505+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:20.694721+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:21.694967+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:22.695146+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:23.695329+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:24.695561+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:25.695855+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:26.696214+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:27.696319+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:28.696398+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:29.696580+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:30.696840+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:31.696996+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:32.697242+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:33.697414+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:34.697729+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:35.697930+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 53682176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:36.698078+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 53682176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:37.698283+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 53682176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:38.698455+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 53682176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:39.698895+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:40.699083+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:41.699361+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:42.699536+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:43.699712+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:44.699899+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:45.700067+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1f185a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1d35680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da03d0000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da03d0d20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1964000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.753906250s of 35.480411530s, submitted: 49
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:46.700791+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 43393024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da202ad20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d806/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1907680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:47.700913+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3593951 data_alloc: 218103808 data_used: 12632064
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:48.701067+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 53665792 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:49.701232+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 53665792 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:50.701449+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 53665792 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:51.701625+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:52.701804+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3593951 data_alloc: 218103808 data_used: 12632064
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:53.701940+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:54.702119+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:55.702308+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:56.702462+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:57.702617+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3605151 data_alloc: 218103808 data_used: 14209024
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:58.702764+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:59.702879+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:00.703016+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:01.703203+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:02.703364+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659071 data_alloc: 234881024 data_used: 21536768
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:03.703477+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:04.703606+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:05.703721+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:06.703859+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:07.704087+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.032077789s of 21.478837967s, submitted: 37
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690449 data_alloc: 234881024 data_used: 21581824
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,18])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:08.704246+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 50298880 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85d6000/0x0/0x4ffc00000, data 0x2a8c83f/0x2c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:09.704380+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:10.704590+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:11.704831+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85b2000/0x0/0x4ffc00000, data 0x2aad83f/0x2c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:12.705000+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50233344 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733393 data_alloc: 234881024 data_used: 22216704
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:13.705216+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:14.705401+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:15.705548+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:16.705686+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:17.705880+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740617 data_alloc: 234881024 data_used: 22421504
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:18.706026+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:19.706182+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.158976555s of 12.613157272s, submitted: 94
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:20.706300+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da170be00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:21.706490+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:22.706610+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3738385 data_alloc: 234881024 data_used: 22425600
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:23.706823+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,3])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8a9b000/0x0/0x4ffc00000, data 0x192083f/0x1ab6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:24.706949+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:25.707101+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:26.707279+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da060d680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:27.707417+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525679 data_alloc: 218103808 data_used: 12595200
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:28.707594+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:29.707824+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:30.707994+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:31.708234+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:32.708479+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525679 data_alloc: 218103808 data_used: 12595200
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:33.708666+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562d9fa010e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.587030411s of 13.759822845s, submitted: 32
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562d9f4b05a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:34.708850+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:35.709015+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:36.709197+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:37.709369+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413519 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x11d77dd/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:38.709508+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 51707904 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:39.709623+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f910f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:40.709817+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:41.709985+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:42.710138+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:43.710337+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:44.710494+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:45.710774+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:46.710980+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:47.711161+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:48.711310+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:49.711475+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:50.711644+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:51.711826+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:52.712009+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:53.712190+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:54.712344+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:55.712588+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2280 writes, 9757 keys, 2280 commit groups, 1.0 writes per commit group, ingest: 10.53 MB, 0.02 MB/s
                                           Interval WAL: 2280 writes, 833 syncs, 2.74 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:56.712768+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets getting new tickets!
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:57.713031+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _finish_auth 0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:57.714480+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:58.713212+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:59.713451+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:00.713719+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1cb7800 session 0x562da1ad5680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:01.714046+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: mgrc ms_handle_reset ms_handle_reset con 0x562da328f400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:41:51 compute-0 ceph-osd[89321]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: get_auth_request con 0x562da3171c00 auth_method 0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da40e3400 session 0x562da1f84960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06d4400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da02c0d20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1965c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:02.714356+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:03.714524+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:04.714699+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:05.715136+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:06.716305+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:07.718483+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:08.718789+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:09.718993+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:10.719432+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:11.719951+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:12.720283+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:13.720519+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:14.720812+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:15.721264+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:16.721690+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:17.722046+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:18.722557+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:19.722717+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:20.723003+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:21.723288+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1a1a000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1700f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da02fc960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:22.723466+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562d9f8d9c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1964000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.360366821s of 48.620044708s, submitted: 32
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 43769856 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da1e9be00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da03d0000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1f185a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473234 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1ae43c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1e9a5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:23.723704+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:24.724090+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:25.724302+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8904000/0x0/0x4ffc00000, data 0x15c57dd/0x175a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:26.724646+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:27.725345+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473010 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:28.725490+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3298400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3298400 session 0x562d9f8d6780
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8904000/0x0/0x4ffc00000, data 0x15c57dd/0x175a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1a1b2c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:29.725681+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562d9f988f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:30.725888+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da18b4b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 50257920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:31.726089+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 50257920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:32.726243+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529512 data_alloc: 218103808 data_used: 15495168
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:33.726485+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:34.726694+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:35.726830+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:36.726991+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:37.727179+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529512 data_alloc: 218103808 data_used: 15495168
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:38.727337+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:39.727423+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:40.727599+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:41.727846+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:42.728001+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.683063507s of 20.067323685s, submitted: 24
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565370 data_alloc: 218103808 data_used: 15523840
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:43.728124+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 49455104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:44.728309+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350699520 unmapped: 50454528 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:45.728402+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:46.728547+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:47.728695+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600800 data_alloc: 218103808 data_used: 15843328
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:48.728848+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:49.728981+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:50.729373+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:51.729597+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:52.729793+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600800 data_alloc: 218103808 data_used: 15843328
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:53.729974+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:54.730190+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:55.730356+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.367254257s of 13.001447678s, submitted: 92
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 49397760 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:56.730579+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 49397760 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:57.730828+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da5c8fc00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 49389568 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677723 data_alloc: 218103808 data_used: 15843328
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:58.730977+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:59.731150+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562d9f8d7c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:00.731287+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x2514816/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:01.731515+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:02.731682+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da170b2c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3656683 data_alloc: 218103808 data_used: 15843328
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:03.731832+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:04.731985+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:05.732109+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.334874630s of 10.473359108s, submitted: 68
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:06.732220+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:07.733070+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658552 data_alloc: 218103808 data_used: 15843328
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:08.733218+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da2063860
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:09.733342+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:10.733494+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 52977664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:11.733689+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 52936704 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:12.733820+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 52928512 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:13.733990+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703584 data_alloc: 234881024 data_used: 22114304
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:14.734176+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:15.734357+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:16.734478+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:17.734629+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:18.734789+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703584 data_alloc: 234881024 data_used: 22114304
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:19.736138+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:20.736306+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:21.736534+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.022311211s of 15.682323456s, submitted: 60
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,2,2,2])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 52387840 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:22.736734+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 51822592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:23.736981+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744446 data_alloc: 234881024 data_used: 22147072
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74cd000/0x0/0x4ffc00000, data 0x29f9872/0x2b91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:24.737135+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:25.737270+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:26.758677+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 51077120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:27.758869+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 51740672 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:28.759085+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750872 data_alloc: 234881024 data_used: 22351872
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 51740672 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:29.759193+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e739e000/0x0/0x4ffc00000, data 0x2b28872/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e739e000/0x0/0x4ffc00000, data 0x2b28872/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:30.759355+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:31.759543+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.567046165s of 10.198050499s, submitted: 62
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:32.759683+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7359000/0x0/0x4ffc00000, data 0x2b6d872/0x2d05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:33.759913+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754768 data_alloc: 234881024 data_used: 22589440
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:34.760078+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:35.760222+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:36.760503+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7320000/0x0/0x4ffc00000, data 0x2ba6872/0x2d3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:37.760687+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:38.760862+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757992 data_alloc: 234881024 data_used: 22646784
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:39.761031+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e731b000/0x0/0x4ffc00000, data 0x2bab872/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:40.761155+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e731b000/0x0/0x4ffc00000, data 0x2bab872/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:41.761350+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:42.761485+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:43.761623+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762988 data_alloc: 234881024 data_used: 22687744
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:44.761885+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7317000/0x0/0x4ffc00000, data 0x2baf872/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7317000/0x0/0x4ffc00000, data 0x2baf872/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:45.762054+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:46.762231+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.851724625s of 15.161529541s, submitted: 25
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 50421760 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:47.762366+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:48.762541+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764976 data_alloc: 234881024 data_used: 22679552
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72fa000/0x0/0x4ffc00000, data 0x2bcc872/0x2d64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:49.762732+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:50.762975+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72fa000/0x0/0x4ffc00000, data 0x2bcc872/0x2d64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:51.763246+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:52.763431+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 50356224 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:53.763656+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762328 data_alloc: 234881024 data_used: 22679552
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da060ab40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 50356224 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:54.763820+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:55.764010+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72f2000/0x0/0x4ffc00000, data 0x2bd4872/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:56.764159+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.166407108s of 10.001169205s, submitted: 24
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72f2000/0x0/0x4ffc00000, data 0x2bd4872/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:57.764306+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:58.764493+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762224 data_alloc: 234881024 data_used: 22679552
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 50339840 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:59.764663+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8106000/0x0/0x4ffc00000, data 0x1dc0872/0x1f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 50331648 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:00.764822+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 50323456 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:01.765010+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da2582d20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 50323456 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:02.765176+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8106000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 50315264 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:03.765316+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606314 data_alloc: 218103808 data_used: 15831040
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:04.765524+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:05.765667+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1ad4d20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da068cb40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:06.765772+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.448636055s of 10.010467529s, submitted: 54
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8108000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 52862976 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:07.765945+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8fea000/0x0/0x4ffc00000, data 0xede7ed/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 52854784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:08.766106+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440388 data_alloc: 218103808 data_used: 8228864
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 52854784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:09.766250+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da03d1e00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:10.766382+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:11.766530+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:12.766678+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:13.766849+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:14.767018+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:15.767181+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:16.767567+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:17.767895+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:18.768352+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:19.768652+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:20.768909+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:21.769127+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:22.769334+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:23.769481+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:24.769854+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:25.770195+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:26.770544+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:27.770842+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:28.771023+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:29.771248+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:30.771417+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:31.771684+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:32.771914+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:33.772129+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:34.772299+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 52813824 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:35.772439+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:36.772577+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:37.772773+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:38.772951+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:39.773169+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:40.773370+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:41.773614+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:42.773868+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:43.774069+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:44.774227+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:45.774385+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:46.774530+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:47.774673+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:48.774880+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:49.775136+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:50.775332+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.401294708s of 43.381404877s, submitted: 17
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562d9f989a40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562d9f4b14a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562d9f989680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1e9b0e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da16f41e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:51.775498+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:52.775652+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:53.775845+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479417 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:54.776076+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6f000/0x0/0x4ffc00000, data 0x135983f/0x14ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:55.776285+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:56.776474+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:57.776677+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:58.776837+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479417 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:59.777049+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:00.777188+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6f000/0x0/0x4ffc00000, data 0x135983f/0x14ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:01.777376+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da202be00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:02.777552+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da16f52c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:03.777707+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da170be00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.937417030s of 13.075335503s, submitted: 42
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1f85c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481740 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:04.777858+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:05.778110+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:06.778226+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:07.778440+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:08.778683+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515152 data_alloc: 218103808 data_used: 12677120
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:09.778883+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:10.779068+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:11.779267+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:12.779424+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:13.779606+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515152 data_alloc: 218103808 data_used: 12677120
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:14.779824+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:15.779974+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 52584448 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.298130989s of 12.432414055s, submitted: 6
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:16.780195+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:17.780387+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8330000/0x0/0x4ffc00000, data 0x1b96872/0x1d2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:18.780521+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585360 data_alloc: 218103808 data_used: 13922304
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:19.780709+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:20.780886+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:21.781092+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82ea000/0x0/0x4ffc00000, data 0x1bdc872/0x1d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:22.781254+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:23.781438+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585680 data_alloc: 218103808 data_used: 13930496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:24.781590+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:25.781716+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:26.781891+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:27.782080+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:28.782251+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583492 data_alloc: 218103808 data_used: 13934592
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:29.782418+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:30.782571+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:31.782831+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:32.782989+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.958930016s of 17.081628799s, submitted: 105
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:33.783119+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da14b4f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642870 data_alloc: 218103808 data_used: 13934592
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:34.783268+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:35.783396+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:36.783527+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:37.783669+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:38.783855+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642870 data_alloc: 218103808 data_used: 13934592
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:39.784000+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:40.784340+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:41.784559+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:42.784715+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562d9f8d9a40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:43.784933+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643467 data_alloc: 218103808 data_used: 13934592
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:44.785118+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da5c8fc00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:45.785290+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:46.785398+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:47.785513+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:48.785650+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.500934601s of 15.522595406s, submitted: 17
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692123 data_alloc: 234881024 data_used: 20611072
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:49.785810+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:50.785970+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:51.786196+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:52.786392+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:53.786605+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692123 data_alloc: 234881024 data_used: 20611072
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:54.786842+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:55.787001+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:56.787157+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 45514752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:57.787231+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358465536 unmapped: 46366720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:58.787396+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:59.787538+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:00.787670+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:01.787841+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:02.788059+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:03.788214+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:04.788366+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:05.788496+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:06.788634+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:07.788818+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:08.788975+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:09.789132+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:10.789273+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:11.789446+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.707262039s of 23.078323364s, submitted: 83
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:12.789590+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:13.789731+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:14.789890+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562da16f4f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:15.790015+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562da14b4b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:16.790196+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c2000/0x0/0x4ffc00000, data 0x1c03872/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:17.790358+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:18.790497+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595381 data_alloc: 218103808 data_used: 13934592
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:19.790800+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:20.790983+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:21.791147+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:22.791272+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82bb000/0x0/0x4ffc00000, data 0x1c0a872/0x1da2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:23.791430+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.594208717s of 11.749836922s, submitted: 32
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da202a780
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da18b5680
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 51036160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594709 data_alloc: 218103808 data_used: 13930496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:24.791556+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:25.791700+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:26.791917+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da1f18b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:27.792200+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:28.792464+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:29.792659+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:30.792804+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:31.792994+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:32.793183+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:33.793313+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:34.793464+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:35.793598+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:36.793765+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:37.793925+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:38.794070+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:39.794267+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:40.794464+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:41.794687+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:42.794889+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:43.795123+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:44.795299+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:45.795466+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:46.795669+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:47.795849+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:48.796040+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:49.796223+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:50.796359+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:51.796563+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:52.796698+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:53.796863+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:54.797031+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:55.797211+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:56.797400+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:57.797574+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:58.797807+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:59.798004+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:00.798178+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:01.798388+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:02.798623+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:03.798842+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:04.799049+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:05.799267+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:06.799525+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:07.799711+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:08.800023+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:09.800171+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:10.800336+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:11.800559+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:12.800825+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:13.801008+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:14.801187+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:15.801390+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:16.801545+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:17.801739+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:18.801937+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:19.802110+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:20.802295+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:21.802522+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:22.802664+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:23.802943+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:24.803078+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:25.803271+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:26.803419+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:27.803596+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:28.803772+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:29.803943+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:30.804090+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:31.804288+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:32.804425+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:33.804584+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:34.805260+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:35.805432+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:36.805567+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:37.805834+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:38.805989+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:39.806137+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:40.806294+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:41.806505+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:42.806737+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:43.806914+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:44.807046+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:45.807208+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:46.807374+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:47.807539+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:48.807699+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:49.807828+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:50.807994+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:51.808186+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:52.808331+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:53.808510+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:54.808676+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:55.808818+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:56.808933+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:57.809117+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:58.809277+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:59.809440+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:00.809634+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:01.809847+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:02.809977+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 49831936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:03.810194+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 49823744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:04.810379+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 49823744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:05.810573+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1904f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562d9f8da5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da202a3c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da1a1bc20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 100.757339478s of 102.810211182s, submitted: 53
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 49815552 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:06.810719+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1708960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1770b40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562da25825a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da14b4780
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:07.810829+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da03d0f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:08.811016+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:09.811168+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495520 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8be5000/0x0/0x4ffc00000, data 0x12e37ed/0x1479000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da02fc000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:10.811318+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:11.811537+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:12.811703+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:13.811842+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:14.812014+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3522193 data_alloc: 218103808 data_used: 11350016
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:15.812187+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8bc1000/0x0/0x4ffc00000, data 0x13077ed/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da068c1e0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562d9f9885a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da5c8fc00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:16.812364+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.136798859s of 10.462966919s, submitted: 20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 52199424 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:17.812497+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562da2583860
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:18.812675+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:19.812891+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:20.813040+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:21.813249+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:22.813388+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:23.813554+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:24.813678+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:25.813801+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:26.813977+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:27.814130+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:28.814319+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:29.814508+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:30.814647+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:31.814864+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:32.815030+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:33.815182+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:34.815355+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 52166656 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:35.815513+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:36.815662+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:37.815869+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:38.816043+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:39.816230+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:40.816357+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:41.816528+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:42.816687+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:43.816862+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:44.816980+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:45.817126+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:46.817314+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.090339661s of 30.667470932s, submitted: 30
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:47.817429+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 52142080 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 290 handle_osd_map epochs [291,291], i have 291, src has [1,291]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:48.817591+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: get_auth_request con 0x562da0341000 auth_method 0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 291 ms_handle_reset con 0x562da18b8800 session 0x562d9f9112c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:49.817714+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462964 data_alloc: 218103808 data_used: 8126464
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:50.817892+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:51.818096+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:52.818222+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:53.818339+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:54.818466+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462964 data_alloc: 218103808 data_used: 8126464
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:55.818614+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:56.818738+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:57.818886+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:58.819254+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:59.819439+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:00.819571+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:01.819779+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:02.819900+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:03.820089+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:04.820287+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:05.820464+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:06.820708+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:07.820864+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:08.821017+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:09.821193+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:10.821373+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:11.821533+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:12.821670+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:13.821798+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:14.821972+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:15.822115+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:16.822276+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:17.822416+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:18.822578+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:19.822727+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:20.822877+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:21.823058+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 52068352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:22.823180+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 52068352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:23.823348+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:24.823495+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:25.823628+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:26.823770+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:27.823919+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:28.824062+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:29.824237+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:30.824354+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:31.824500+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:32.824646+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:33.824786+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:34.824915+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:35.825074+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:36.825226+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:37.825399+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:38.825551+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:39.825688+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 52035584 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:40.825807+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 52035584 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:41.825991+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:42.826129+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:43.826247+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:44.826363+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:45.826530+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 52019200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:46.826705+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 52019200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:47.826843+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:48.826960+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:49.827113+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:50.827246+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:51.827428+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:52.827726+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:53.827952+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:54.828118+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:55.828257+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:56.828409+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:57.828547+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:58.828670+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:59.828862+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:00.829008+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:01.829217+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:02.829380+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:03.829636+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 51978240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:04.829812+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 51978240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:05.830007+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:06.830203+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:07.830670+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:08.830919+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:09.831201+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:10.831431+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:11.831643+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 51961856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:12.831864+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:13.832032+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:14.832193+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:15.832358+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:16.832501+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:17.832672+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:18.832838+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 51945472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:19.832982+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:20.833098+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:21.833247+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:22.833406+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:23.833541+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:24.833694+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:25.833828+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:26.833996+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:27.834227+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:28.834377+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:29.834532+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:30.834656+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:31.834842+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:32.835015+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:33.835230+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:34.835521+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:35.835676+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:36.835863+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:37.836014+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:38.836211+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:39.836369+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:40.836501+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:41.836665+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:42.836932+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:43.837112+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:44.837275+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:45.837429+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:46.837610+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:47.837874+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:48.838040+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:49.838242+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:50.838394+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352960512 unmapped: 51871744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.357223511s of 123.813163757s, submitted: 62
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 ms_handle_reset con 0x562da192d800 session 0x562da068c960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:51.838613+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 ms_handle_reset con 0x562da1d55400 session 0x562da1ebbc20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 51855360 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:52.838780+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:53.838938+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:54.839097+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466082 data_alloc: 218103808 data_used: 9117696
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:55.839230+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:56.839394+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:57.839579+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:58.839765+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:59.840000+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466082 data_alloc: 218103808 data_used: 9117696
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:00.840194+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 51830784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:01.840446+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.466549873s of 10.502127647s, submitted: 11
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 293 ms_handle_reset con 0x562da230f800 session 0x562da060cb40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:02.840801+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:03.841024+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:04.841165+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9479000/0x0/0x4ffc00000, data 0xa4f97d/0xbe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429452 data_alloc: 218103808 data_used: 4472832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:05.841304+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 294 ms_handle_reset con 0x562da192d000 session 0x562d9f910f00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:06.841425+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:07.841609+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9c77000/0x0/0x4ffc00000, data 0x251508/0x3e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:08.841817+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:09.841983+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3358060 data_alloc: 218103808 data_used: 1130496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9c77000/0x0/0x4ffc00000, data 0x251508/0x3e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:10.842143+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:11.842300+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:12.842436+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:13.842575+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:14.842693+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360842 data_alloc: 218103808 data_used: 1130496
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:15.842843+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e9c75000/0x0/0x4ffc00000, data 0x252f6b/0x3e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:16.843327+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:17.843643+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.849654198s of 16.112354279s, submitted: 77
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:18.843859+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 ms_handle_reset con 0x562da18b8800 session 0x562da193a5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:19.844055+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:20.844160+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:21.844272+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:22.844381+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:23.844485+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:24.844599+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:25.844728+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:26.844906+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:27.845101+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:28.845273+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:29.845402+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:30.845542+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:31.845688+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:32.845801+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:33.845945+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:34.846055+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:35.846232+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:36.846354+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:37.846508+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:38.846659+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:39.846829+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:40.846931+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:41.847099+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:42.847250+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:43.847394+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:44.847568+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:45.847711+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:46.847913+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:47.848163+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:48.848355+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:49.848498+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:50.848651+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:51.848812+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:52.848966+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:53.849123+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:54.849243+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:55.849467+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 187K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1591 writes, 5946 keys, 1591 commit groups, 1.0 writes per commit group, ingest: 5.99 MB, 0.01 MB/s
                                           Interval WAL: 1591 writes, 616 syncs, 2.58 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:56.849600+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:57.849723+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:58.849868+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:59.849980+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:00.850096+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:01.850254+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:02.850449+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:03.850604+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:04.850853+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:05.851033+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:06.851237+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:07.851398+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:08.851573+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:09.851719+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:10.851895+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346030080 unmapped: 58802176 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:11.852107+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:12.852255+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:13.852423+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:14.852560+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:15.852714+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:16.852823+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:17.852950+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:18.853110+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:19.853312+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:20.853559+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:21.853810+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:22.853968+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:23.854092+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:24.854194+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:25.854330+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:26.854493+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:27.854675+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:28.854830+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:29.854939+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:30.855049+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:31.855204+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:32.855327+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:33.855474+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:34.855619+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 58744832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:35.917061+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:36.917235+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:37.917388+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:38.917566+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:39.917703+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:40.917859+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:41.918032+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:42.918179+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:43.918324+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:44.918452+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:45.918596+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:46.918731+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:47.918931+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:48.919179+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:49.919343+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:50.919539+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:51.919738+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:52.920018+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:53.920249+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 58703872 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:54.920457+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 58695680 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:55.920628+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.635398865s of 97.845252991s, submitted: 11
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 58662912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:56.920771+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 58638336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:57.920925+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:58.921064+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 09:41:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/382984171' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:59.921219+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:00.921410+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368383 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:01.921593+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:02.921777+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:03.921987+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:04.922146+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:05.922298+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370197 data_alloc: 218103808 data_used: 1138688
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c3c/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.818475723s of 10.272746086s, submitted: 91
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 297 ms_handle_reset con 0x562da192d800 session 0x562da202a960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 58564608 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:06.922424+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:07.922546+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 297 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 ms_handle_reset con 0x562da1d55400 session 0x562da2063860
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:08.922710+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:09.922864+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:10.922997+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:11.923147+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:12.923266+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:13.923396+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:14.923522+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:15.923727+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:16.923886+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:17.924037+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:18.924281+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:19.924491+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:20.924699+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:21.924964+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:22.925141+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:23.925328+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:24.925450+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:25.925605+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:26.925812+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:27.925965+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:28.926122+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:29.926289+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:30.926465+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:31.926657+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:32.926800+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:33.926963+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:34.927136+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:35.927318+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:36.927497+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:37.927634+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:38.927854+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:39.927988+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:40.928117+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464650 data_alloc: 218103808 data_used: 1150976
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.875930786s of 35.106796265s, submitted: 17
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:41.928355+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:42.928506+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:43.928647+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:44.928821+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xec9f07/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:45.928969+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464947 data_alloc: 218103808 data_used: 1155072
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 299 ms_handle_reset con 0x562da230f800 session 0x562d9f4b0780
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:46.929194+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:47.929361+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec9dc3/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:48.929507+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:49.929681+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:50.929796+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:51.929946+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:52.930113+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:53.930223+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:54.930359+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:55.930522+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:56.930709+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:57.930839+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:58.931072+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:59.931222+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:00.931355+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:01.931540+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:02.931804+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:03.932043+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:04.932336+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:05.932553+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:06.932718+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:07.932947+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:08.933164+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:09.933399+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:10.933629+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:11.933821+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:12.933969+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:13.934187+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:14.934447+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:15.934613+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:16.934780+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:17.934899+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:18.935097+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:19.935256+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:20.935446+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:21.935643+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:22.935796+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:23.935925+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:24.936078+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:25.936227+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:26.936410+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:27.936678+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:28.936902+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:29.937048+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:30.937281+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:31.937503+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:32.937638+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:33.937854+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:34.938036+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:35.938228+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:36.938406+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:37.938627+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:38.938824+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:39.939033+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:40.939247+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:41.939439+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:42.939599+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:43.939874+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:44.940094+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:45.940300+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:46.940573+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:47.940864+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:48.941069+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:49.941224+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:50.941426+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:51.941586+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:52.941721+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:53.941897+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:54.942057+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:55.942601+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:56.942744+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:57.942925+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:58.943070+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346578944 unmapped: 66650112 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:59.943190+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:00.943280+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:01.943429+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:02.943571+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:03.943703+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:04.943838+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:05.943926+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:06.944079+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:07.944248+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:08.944378+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:09.944587+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:10.944910+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:11.945171+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:12.945450+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:13.946032+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:14.946173+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:15.946323+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:16.946420+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:17.946563+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 66600960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:18.946684+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 66600960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:19.946848+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:20.947020+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:21.947193+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:22.947346+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:23.947508+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:24.947631+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:25.947799+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:26.947922+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:27.948042+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:28.948169+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:29.948310+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:30.948448+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:31.948616+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:32.948727+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:33.948891+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:34.949014+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:35.949138+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:36.949371+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:37.949554+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:38.949794+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:39.951716+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:40.952488+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:41.952705+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:42.953007+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:43.953189+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:44.953345+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:45.953509+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:46.953643+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346710016 unmapped: 66519040 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:47.953800+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:48.953962+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:49.954133+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:50.954271+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:51.954446+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:52.954593+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:53.954724+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:54.954903+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:55.955082+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:56.955208+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:57.955341+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:58.955507+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:59.955677+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:00.955992+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:01.956981+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:02.957455+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:03.958010+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:04.958368+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:05.958609+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:06.958879+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:07.959256+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:08.959730+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:09.960201+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:10.960577+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:11.960915+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:12.961166+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:13.961378+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:14.961616+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:15.961807+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:16.962020+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:17.962175+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:18.962363+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:19.962555+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:20.962724+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:21.963075+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:22.963300+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346800128 unmapped: 66428928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:23.963463+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346800128 unmapped: 66428928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:24.963801+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 66420736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:25.964026+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 66420736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:26.964395+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:27.964536+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:28.964707+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:29.964935+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:30.965161+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:31.965467+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:32.965648+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:33.965888+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:34.966027+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346841088 unmapped: 66387968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:35.966140+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:36.966269+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:37.966423+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:38.966614+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:39.966818+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:40.966982+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:41.967256+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:42.967588+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:43.967856+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:44.968085+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:45.968262+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:46.968442+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:47.968635+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:48.968805+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:49.969030+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:50.969194+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:51.969367+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:52.969577+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 66338816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:53.969843+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 66338816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:54.969969+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:55.970141+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 09:41:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870072186' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:56.970305+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:57.970473+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:58.970613+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:59.970793+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:00.970974+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562d9f949c00 session 0x562da1a1ad20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:01.971146+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: mgrc ms_handle_reset ms_handle_reset con 0x562da3171c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:41:51 compute-0 ceph-osd[89321]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: get_auth_request con 0x562da3298400 auth_method 0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562da06d4400 session 0x562da1907c20
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562da1965c00 session 0x562da060c5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da40e3400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:02.971371+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:03.971548+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:04.971807+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:05.971959+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:06.972126+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:07.972277+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:08.972852+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:09.972990+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:10.973181+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:11.973407+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:12.973804+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 66297856 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:13.974090+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 66297856 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:14.974325+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:15.974457+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:16.974590+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:17.974807+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:18.974929+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 66281472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:19.975135+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:20.975323+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:21.975560+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:22.975822+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:23.975965+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:24.976174+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:25.976339+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:26.976540+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:27.976715+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 66256896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:28.976895+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 66256896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:29.977348+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 66248704 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:30.977508+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 66232320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:31.977663+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 66232320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:32.977868+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347004928 unmapped: 66224128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:33.978063+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347004928 unmapped: 66224128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:34.978235+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:35.978449+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:36.978668+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:37.978872+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:38.979111+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:39.979251+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:40.979447+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:41.979678+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:42.979858+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:43.979999+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:44.980155+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:45.980315+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:46.980441+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:47.980564+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:48.980836+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:49.980970+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:50.981216+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:51.981551+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:52.981853+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:53.982102+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:54.982303+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:55.982447+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:56.982595+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:57.982794+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:58.983598+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:59.983765+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:00.983906+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:01.984118+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:02.984305+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:03.984450+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:04.984676+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:05.984865+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:06.985055+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:07.985261+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:08.985494+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:09.985675+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:10.985812+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:11.986044+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:12.986293+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:13.986490+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:14.986720+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:15.986990+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:16.987179+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:17.987363+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:18.987557+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:19.987782+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:20.987978+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:21.988212+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:22.988381+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:23.988600+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:24.988813+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:25.989027+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:26.989238+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:27.989401+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:28.989588+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:29.989738+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:30.989904+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:31.990069+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:32.990296+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:33.990534+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:34.990814+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:35.991038+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:36.991264+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:37.991466+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:38.991687+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:39.991929+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:40.992164+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:41.992383+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:42.992548+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:43.992711+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:44.992883+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:45.993038+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:46.993264+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:47.993448+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:48.993624+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:49.993798+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:50.993936+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 66035712 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:51.994123+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:52.994298+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:53.994439+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:54.994569+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:55.994709+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:56.994831+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:57.994967+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:58.995130+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:59.995296+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:00.995433+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:01.995587+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:02.995908+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:03.996038+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:04.996222+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:05.996386+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:06.996538+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:07.996697+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:08.996896+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:09.997043+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:10.997865+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:11.998032+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:12.998174+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:13.998407+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:14.998560+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:15.998861+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:16.999026+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:17.999168+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:18.999319+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:19.999488+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:20.999651+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:21.999917+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:23.000103+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:24.000272+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:25.000442+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:26.000568+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:27.000735+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:28.000968+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:29.005625+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:30.005828+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:31.006007+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 66977792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:32.006214+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:33.006500+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:34.006645+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:35.006824+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:36.006966+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:37.007106+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 66961408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:38.007274+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 66961408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:39.007409+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:40.007580+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:41.007737+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:42.007913+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:43.008089+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:44.008238+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:45.008361+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:46.008608+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:47.008847+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:48.009088+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:49.009228+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:50.009358+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:51.009527+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:52.009725+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:53.009917+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:54.010128+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:55.010317+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:56.010464+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:57.010622+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:58.010824+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:59.010945+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:00.011101+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:01.011275+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:02.011819+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:03.012060+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:04.012211+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:05.012374+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:06.012522+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:07.012687+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:08.012914+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:09.013148+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:10.013345+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:11.013485+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:12.013670+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:13.013882+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:14.014054+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:15.014217+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:16.014371+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:17.014525+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:18.014701+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:19.014813+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:20.014947+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:21.015081+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:22.015289+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:23.015512+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:24.015680+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:25.015843+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:26.015982+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:27.016158+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:28.016311+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:29.016459+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:30.016607+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:31.016803+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 66846720 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:32.016983+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:33.017116+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:34.017307+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:35.017472+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:36.017620+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:37.017878+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:38.018009+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:39.018360+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:40.018490+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:41.018728+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:42.019459+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:43.019643+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:44.019794+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:45.019927+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:46.020074+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:47.020246+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:48.020379+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:49.020497+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:50.020680+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:51.020851+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:52.021089+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:53.021247+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:54.021438+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:55.021578+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:56.021714+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:57.021886+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:58.022048+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:59.022225+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:00.022391+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:01.022516+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:02.022675+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:03.022811+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:04.022944+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:05.023093+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:06.023229+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:07.023353+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:08.023500+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:09.023696+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:10.023857+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:11.023988+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:12.024159+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:13.024276+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:14.024443+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:15.024653+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:16.024791+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:17.024927+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:18.025100+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:19.025247+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:20.025371+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:21.025532+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:22.025800+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:23.026093+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:24.026248+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:25.026435+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:26.026582+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:27.026880+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:28.027047+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:29.027203+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:30.027406+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:31.027532+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:32.027694+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:33.027881+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:34.028021+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:35.028154+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:36.028292+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:37.028428+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:38.028575+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:39.028699+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:40.028860+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:41.029026+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:42.029324+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:43.029497+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:44.029644+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:45.029826+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:46.029976+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:47.030262+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:48.030418+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:49.030562+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:50.030707+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:51.031036+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:52.031297+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:53.031587+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:54.031779+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:55.031921+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:56.032047+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.71 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 278 writes, 624 keys, 278 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 278 writes, 128 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:57.032236+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:58.032426+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:59.032635+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:00.032845+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:01.032976+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:02.033170+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:03.033284+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:04.033455+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:05.033614+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:06.033778+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:07.033919+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:08.034069+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:09.034270+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:10.034446+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:11.034565+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:12.034802+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:13.035012+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:14.035152+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:15.035333+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:16.035518+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:17.035661+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:18.035812+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:19.035951+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:20.036076+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:21.036236+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:22.036450+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:23.036602+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:24.036823+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:25.036945+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:26.037056+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:27.037173+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:28.037294+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:29.037422+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:30.037526+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:31.037637+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:32.037784+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:33.037900+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:34.038051+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:35.038217+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:36.038356+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:37.038504+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:38.038638+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:39.038789+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:40.038976+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:41.039112+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:42.039275+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:43.039494+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:44.039636+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:45.039809+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:46.039974+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:47.040101+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1965c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:48.040237+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 545.098144531s of 546.776916504s, submitted: 42
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472079 data_alloc: 218103808 data_used: 1167360
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 301 ms_handle_reset con 0x562da1965c00 session 0x562da18b4780
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:49.040359+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:50.040481+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 66527232 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:51.040604+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346710016 unmapped: 66519040 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 302 ms_handle_reset con 0x562da1d55400 session 0x562da202ab40
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:52.040799+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:53.040926+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390468 data_alloc: 218103808 data_used: 1171456
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:54.041065+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9c60000/0x0/0x4ffc00000, data 0x25efb8/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:55.041143+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:56.041228+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9c60000/0x0/0x4ffc00000, data 0x25efb8/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:57.041375+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346750976 unmapped: 66478080 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:58.041502+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.232305527s of 10.012306213s, submitted: 80
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e9c5d000/0x0/0x4ffc00000, data 0x260a1b/0x400000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 ms_handle_reset con 0x562da230f800 session 0x562da02fc5a0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:59.041632+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347840512 unmapped: 65388544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:00.041899+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347840512 unmapped: 65388544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:01.042094+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:02.042323+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:03.042447+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:04.042580+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:05.042663+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:06.042813+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:07.042926+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:08.043046+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:09.043173+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:10.043296+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:11.043475+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:12.043630+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:13.043834+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:14.043973+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:15.044105+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:16.044281+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:17.044544+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:18.044707+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:19.044884+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:20.044976+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:21.045132+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:22.045301+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:23.045435+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:24.045591+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:25.045729+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:26.045869+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:27.046030+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:28.046189+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:29.046320+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:30.046507+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:31.046662+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:32.046815+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:33.046946+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:34.047093+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:35.047251+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:36.047459+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:37.047599+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:38.047792+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:39.047933+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 65314816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:40.048096+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 65314816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:41.048252+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:42.048523+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:43.048679+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:44.048863+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:45.049018+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:46.049227+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:47.049360+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:48.049491+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:49.049650+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:50.049842+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:51.049974+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:52.050145+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:53.050275+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:54.050418+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:55.050586+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 65290240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:56.050790+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 65282048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1a59c00
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.771060944s of 58.697120667s, submitted: 80
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:57.050914+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 305 ms_handle_reset con 0x562da1a59c00 session 0x562da17092c0
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9847000/0x0/0x4ffc00000, data 0x264169/0x406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:58.051054+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399710 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:59.051205+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:00.051326+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 65249280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:01.051447+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 65249280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:02.051629+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1eaf000
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 305 ms_handle_reset con 0x562da1eaf000 session 0x562da18b4960
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 65241088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:03.051875+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 65241088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:04.052025+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:05.052144+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:06.052307+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:07.052421+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:08.052553+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:09.052684+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:10.052850+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:11.053044+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:12.053247+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:13.053399+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:14.053542+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:15.053665+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:16.053826+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:17.053975+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:18.054103+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:19.054303+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:20.054474+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:21.054698+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:22.054923+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:23.055043+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:24.055177+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:25.055326+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:26.055529+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:27.055717+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:28.055843+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:29.056035+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:30.056329+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:31.056469+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:32.056672+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:33.056810+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:34.056944+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:35.057409+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:36.058228+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:37.058595+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:38.058776+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:39.058907+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:40.059289+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:41.059643+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:42.059871+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:43.060011+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:44.060304+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:45.060809+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:46.061208+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:47.061457+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:48.061637+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:49.061787+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:50.062113+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:51.062625+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:52.062908+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:53.063290+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:54.063944+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 65118208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:55.064306+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 65118208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:56.064587+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:57.064871+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:58.065029+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:59.065292+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:00.065734+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:01.066060+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:02.066234+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:03.066392+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:04.066549+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:05.066788+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:06.066970+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:07.067237+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:08.067377+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:09.067498+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:10.067637+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:11.067778+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 65077248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:12.067914+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:13.068028+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:14.068140+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:15.068281+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:16.068411+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:17.068551+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:18.068682+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:51 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:51 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'config show' '{prefix=config show}'
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:19.068810+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:41:51 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 65445888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:20.068951+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 65609728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:41:51 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:21.069083+0000)
Oct 02 09:41:51 compute-0 ceph-osd[89321]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 09:41:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236580648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 09:41:51 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:41:51 compute-0 rsyslogd[1004]: imjournal from <np0005465604:ceph-osd>: begin to drop messages due to rate-limiting
Oct 02 09:41:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 09:41:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014685490' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 09:41:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932642482' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/382984171' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1870072186' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4236580648' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3014685490' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2932642482' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 09:41:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333747920' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 09:41:52 compute-0 nova_compute[260603]: 2025-10-02 09:41:52.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 09:41:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2316335049' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23235 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:53 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23237 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:53 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23239 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/333747920' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 09:41:53 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2316335049' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 09:41:53 compute-0 ceph-mon[74477]: from='client.23235 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:53 compute-0 ceph-mon[74477]: pgmap v3728: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:53 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23241 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:53 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23243 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:53 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23247 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:54 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23249 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:54 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 09:41:54 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067170726' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 09:41:54 compute-0 ceph-mon[74477]: from='client.23237 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:54 compute-0 ceph-mon[74477]: from='client.23239 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:54 compute-0 ceph-mon[74477]: from='client.23241 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:54 compute-0 ceph-mon[74477]: from='client.23243 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:54 compute-0 ceph-mon[74477]: from='client.23247 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:54 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23253 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:55 compute-0 nova_compute[260603]: 2025-10-02 09:41:55.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 09:41:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2912881838' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 09:41:55 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23257 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 09:41:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218874357' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865400 session 0x555e8079d2c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 325836800 unmapped: 65904640 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:11.370844+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9e61000/0x0/0x4ffc00000, data 0x1b8b1d8/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 325836800 unmapped: 65904640 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3391161 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:12.370980+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 325836800 unmapped: 65904640 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:13.371093+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:14.371224+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:15.371394+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9e61000/0x0/0x4ffc00000, data 0x1b8b1d8/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:16.371608+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9e61000/0x0/0x4ffc00000, data 0x1b8b1d8/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462201 data_alloc: 234881024 data_used: 17805312
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9e61000/0x0/0x4ffc00000, data 0x1b8b1d8/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:17.371901+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:18.372106+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9e61000/0x0/0x4ffc00000, data 0x1b8b1d8/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:19.372341+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:20.372481+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9e61000/0x0/0x4ffc00000, data 0x1b8b1d8/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9e61000/0x0/0x4ffc00000, data 0x1b8b1d8/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:21.372676+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462201 data_alloc: 234881024 data_used: 17805312
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:22.372881+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 65421312 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.803715706s of 12.807387352s, submitted: 1
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:23.373024+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 328777728 unmapped: 62963712 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:24.373252+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 329834496 unmapped: 61906944 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:25.373469+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 60850176 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:26.373608+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330252288 unmapped: 61489152 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e93c6000/0x0/0x4ffc00000, data 0x26261d8/0x27b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551575 data_alloc: 234881024 data_used: 18472960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:27.373852+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330252288 unmapped: 61489152 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:28.374090+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e93c6000/0x0/0x4ffc00000, data 0x26261d8/0x27b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330252288 unmapped: 61489152 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:29.374229+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330252288 unmapped: 61489152 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:30.374372+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330252288 unmapped: 61489152 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:31.374515+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e93c3000/0x0/0x4ffc00000, data 0x26291d8/0x27bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 61480960 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551223 data_alloc: 234881024 data_used: 18472960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:32.374646+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 61480960 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:33.374820+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 61480960 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:34.374999+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 61480960 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e93c3000/0x0/0x4ffc00000, data 0x26291d8/0x27bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:35.375188+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 61480960 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:36.375402+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 61480960 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551223 data_alloc: 234881024 data_used: 18472960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:37.375813+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e93c3000/0x0/0x4ffc00000, data 0x26291d8/0x27bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 61480960 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e7e866000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e822212c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e807ebe00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e7e867c20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.060773849s of 14.958057404s, submitted: 78
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:38.375976+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e7f6c6960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865400 session 0x555e7e8bc780
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e7f9b23c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330285056 unmapped: 61456384 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7ebd94a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e7f8ed0e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:39.376133+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330309632 unmapped: 61431808 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:40.376283+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330309632 unmapped: 61431808 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:41.376421+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d63000/0x0/0x4ffc00000, data 0x2c891d8/0x2e1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 61423616 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d63000/0x0/0x4ffc00000, data 0x2c891d8/0x2e1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602640 data_alloc: 234881024 data_used: 18472960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:42.376576+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 61423616 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:43.376816+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 61423616 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:44.376994+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e807eb2c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 61423616 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:45.377172+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865400 session 0x555e8061cf00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 61423616 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:46.377376+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d63000/0x0/0x4ffc00000, data 0x2c891d8/0x2e1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 61423616 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e8c879e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3603950 data_alloc: 234881024 data_used: 18472960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e8064b4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:47.381894+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330326016 unmapped: 61415424 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:48.382029+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d3e000/0x0/0x4ffc00000, data 0x2cad1e8/0x2e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 330326016 unmapped: 61415424 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:49.382179+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:50.382282+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d3e000/0x0/0x4ffc00000, data 0x2cad1e8/0x2e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:51.382401+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3656038 data_alloc: 234881024 data_used: 25161728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:52.382617+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:53.382813+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:54.382973+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:55.383111+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d3e000/0x0/0x4ffc00000, data 0x2cad1e8/0x2e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:56.383254+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.888341904s of 18.258932114s, submitted: 22
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d3e000/0x0/0x4ffc00000, data 0x2cad1e8/0x2e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3656346 data_alloc: 234881024 data_used: 25161728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:57.383422+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d3c000/0x0/0x4ffc00000, data 0x2cae1e8/0x2e41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:58.383580+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332857344 unmapped: 58884096 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8d3c000/0x0/0x4ffc00000, data 0x2cae1e8/0x2e41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:09:59.383697+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 332955648 unmapped: 58785792 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:00.383828+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e86b8000/0x0/0x4ffc00000, data 0x332d1e8/0x34c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e869a000/0x0/0x4ffc00000, data 0x33431e8/0x34d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:01.383968+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721410 data_alloc: 234881024 data_used: 25501696
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:02.384056+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:03.384162+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:04.384315+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:05.384413+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e869a000/0x0/0x4ffc00000, data 0x33431e8/0x34d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:06.384545+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e869a000/0x0/0x4ffc00000, data 0x33431e8/0x34d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721410 data_alloc: 234881024 data_used: 25501696
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:07.384801+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:08.384980+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:09.385134+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333160448 unmapped: 58580992 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e869a000/0x0/0x4ffc00000, data 0x33431e8/0x34d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:10.385320+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333168640 unmapped: 58572800 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:11.385507+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333168640 unmapped: 58572800 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e869a000/0x0/0x4ffc00000, data 0x33431e8/0x34d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721410 data_alloc: 234881024 data_used: 25501696
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.748953819s of 15.986430168s, submitted: 52
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:12.385654+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e807b1e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e807eb2c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333201408 unmapped: 58540032 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865400 session 0x555e80495e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:13.385817+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333201408 unmapped: 58540032 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:14.385953+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333201408 unmapped: 58540032 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:15.386103+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333209600 unmapped: 58531840 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e93c0000/0x0/0x4ffc00000, data 0x262c1d8/0x27be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:16.386279+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e93c0000/0x0/0x4ffc00000, data 0x262c1d8/0x27be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333209600 unmapped: 58531840 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563653 data_alloc: 234881024 data_used: 18472960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:17.386455+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333209600 unmapped: 58531840 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:18.386601+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e80495680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333209600 unmapped: 58531840 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:19.386738+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e7fa7dc20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333225984 unmapped: 58515456 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:20.386886+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333225984 unmapped: 58515456 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:21.387149+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333225984 unmapped: 58515456 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:22.387264+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333225984 unmapped: 58515456 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:23.387454+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333225984 unmapped: 58515456 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:24.387586+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333225984 unmapped: 58515456 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:25.387725+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333234176 unmapped: 58507264 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:26.387811+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333234176 unmapped: 58507264 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:27.388065+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333234176 unmapped: 58507264 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:28.388230+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333234176 unmapped: 58507264 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:29.388405+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333234176 unmapped: 58507264 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:30.388523+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333242368 unmapped: 58499072 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:31.388703+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333242368 unmapped: 58499072 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:32.388882+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333242368 unmapped: 58499072 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:33.389000+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333242368 unmapped: 58499072 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:34.389131+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333242368 unmapped: 58499072 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:35.389295+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333250560 unmapped: 58490880 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:36.389448+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333250560 unmapped: 58490880 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:37.389598+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333250560 unmapped: 58490880 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:38.389780+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333250560 unmapped: 58490880 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:39.389907+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333250560 unmapped: 58490880 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:40.390044+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333250560 unmapped: 58490880 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:41.390182+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333250560 unmapped: 58490880 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:42.390360+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333250560 unmapped: 58490880 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:43.390542+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333258752 unmapped: 58482688 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:44.390672+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333266944 unmapped: 58474496 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:45.390813+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333266944 unmapped: 58474496 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:46.390951+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333266944 unmapped: 58474496 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:47.391134+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333266944 unmapped: 58474496 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:48.391274+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333266944 unmapped: 58474496 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:49.391410+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333266944 unmapped: 58474496 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:50.391629+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333266944 unmapped: 58474496 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:51.391785+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 58466304 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:52.391972+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 58466304 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:53.392170+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 58466304 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:54.392339+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 58466304 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:55.392502+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 58466304 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:56.392669+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 58466304 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:57.392879+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333275136 unmapped: 58466304 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:58.393054+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333283328 unmapped: 58458112 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:10:59.393213+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333283328 unmapped: 58458112 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:00.393343+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333283328 unmapped: 58458112 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:01.393499+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333283328 unmapped: 58458112 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:02.393674+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333291520 unmapped: 58449920 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:03.393858+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333291520 unmapped: 58449920 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:04.393994+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333291520 unmapped: 58449920 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:05.394175+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333291520 unmapped: 58449920 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:06.394303+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333291520 unmapped: 58449920 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:07.394485+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339773 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333299712 unmapped: 58441728 heap: 391741440 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:08.394621+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 55.828273773s of 56.132221222s, submitted: 42
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343564288 unmapped: 49799168 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:09.394794+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7e8663c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 59744256 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de3000/0x0/0x4ffc00000, data 0x1c091d8/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:10.394999+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 59744256 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:11.395269+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 59744256 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:12.395478+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424103 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 59744256 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:13.395656+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de3000/0x0/0x4ffc00000, data 0x1c091d8/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 59744256 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:14.395850+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333619200 unmapped: 59744256 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:15.396066+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de3000/0x0/0x4ffc00000, data 0x1c091d8/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 59736064 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:16.396284+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 59736064 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:17.396475+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424103 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de3000/0x0/0x4ffc00000, data 0x1c091d8/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 59736064 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:18.396686+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 59736064 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:19.396847+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 59736064 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e7ebd9e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:20.396975+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e807dc1e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 59736064 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:21.397119+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 59736064 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:22.397246+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424103 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e7ebdb4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.644862175s of 14.316731453s, submitted: 19
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e7e8f4f00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de3000/0x0/0x4ffc00000, data 0x1c091d8/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333627392 unmapped: 59736064 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:23.397363+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de2000/0x0/0x4ffc00000, data 0x1c091e8/0x1d9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333635584 unmapped: 59727872 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:24.397489+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333635584 unmapped: 59727872 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:25.397625+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333635584 unmapped: 59727872 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:26.397808+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 333643776 unmapped: 59719680 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:27.397944+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3499765 data_alloc: 234881024 data_used: 17612800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:28.398083+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:29.398207+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de2000/0x0/0x4ffc00000, data 0x1c091e8/0x1d9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:30.398351+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de2000/0x0/0x4ffc00000, data 0x1c091e8/0x1d9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:31.398469+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:32.398632+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3499765 data_alloc: 234881024 data_used: 17612800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:33.398801+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:34.398935+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:35.399083+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 58564608 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:36.399239+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9de2000/0x0/0x4ffc00000, data 0x1c091e8/0x1d9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.888571739s of 13.896160126s, submitted: 2
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338870272 unmapped: 54493184 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:37.399416+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617343 data_alloc: 234881024 data_used: 19578880
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339148800 unmapped: 54214656 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:38.399675+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:39.399871+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8ed6000/0x0/0x4ffc00000, data 0x2b0d1e8/0x2ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:40.401740+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:41.403444+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:42.404886+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636025 data_alloc: 234881024 data_used: 19800064
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:43.406074+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8ed6000/0x0/0x4ffc00000, data 0x2b0d1e8/0x2ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:44.407107+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:45.407308+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:46.407450+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:47.408080+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628621 data_alloc: 234881024 data_used: 19800064
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8ebd000/0x0/0x4ffc00000, data 0x2b2e1e8/0x2cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:48.408233+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:49.408556+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:50.408694+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8ebd000/0x0/0x4ffc00000, data 0x2b2e1e8/0x2cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:51.408830+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:52.408996+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628941 data_alloc: 234881024 data_used: 19808256
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:53.409152+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.834208488s of 17.500257492s, submitted: 144
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e8003cd20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 54059008 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e8077f0e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e66800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e66800 session 0x555e7e8bc3c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:54.409285+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e8077f860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e7e8674a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 54009856 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e7f6c63c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e807d1860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c2400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:55.409407+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c2400 session 0x555e7e7db0e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e8076e3c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x2deb25a/0x2f80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 54009856 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:56.409538+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 54009856 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:57.410521+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x2deb25a/0x2f80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658938 data_alloc: 234881024 data_used: 19812352
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x2deb25a/0x2f80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 54009856 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:58.410785+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 54009856 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:11:59.410941+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x2deb25a/0x2f80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339353600 unmapped: 54009856 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:00.411072+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 54001664 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:01.411275+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 54001664 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:02.411439+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658938 data_alloc: 234881024 data_used: 19812352
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 54001664 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:03.411593+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 54001664 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:04.411842+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 54001664 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:05.412188+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x2deb25a/0x2f80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x2deb25a/0x2f80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339361792 unmapped: 54001664 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:06.412377+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.783671379s of 12.373286247s, submitted: 31
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 53993472 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:07.412558+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3663879 data_alloc: 234881024 data_used: 19812352
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 53993472 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:08.412724+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e82221e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bda000/0x0/0x4ffc00000, data 0x2e0f25a/0x2fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 53993472 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:09.412836+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 53641216 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:10.412974+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:11.413185+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 53321728 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:12.415544+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 53321728 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bda000/0x0/0x4ffc00000, data 0x2e0f25a/0x2fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683371 data_alloc: 234881024 data_used: 22556672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:13.415958+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 53321728 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:14.416584+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 53321728 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:15.416937+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 53321728 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bda000/0x0/0x4ffc00000, data 0x2e0f25a/0x2fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:16.418208+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 53321728 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bda000/0x0/0x4ffc00000, data 0x2e0f25a/0x2fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:17.418427+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 53321728 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683371 data_alloc: 234881024 data_used: 22556672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:18.418594+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 53321728 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:19.418774+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340049920 unmapped: 53313536 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bda000/0x0/0x4ffc00000, data 0x2e0f25a/0x2fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:20.419043+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.262046814s of 13.839698792s, submitted: 12
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8bda000/0x0/0x4ffc00000, data 0x2e0f25a/0x2fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340049920 unmapped: 53313536 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:21.419260+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 51740672 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:22.419455+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 51265536 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8891000/0x0/0x4ffc00000, data 0x315825a/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,6])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718827 data_alloc: 234881024 data_used: 22581248
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:23.419724+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 49364992 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:24.420032+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49831936 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:25.420496+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 49807360 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e839e000/0x0/0x4ffc00000, data 0x363d25a/0x37d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:26.420736+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 50470912 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:27.420944+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50454528 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770699 data_alloc: 234881024 data_used: 22937600
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:28.421096+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50454528 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:29.421261+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50454528 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:30.421442+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50454528 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8369000/0x0/0x4ffc00000, data 0x367825a/0x380d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:31.421593+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50454528 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.633143425s of 11.422609329s, submitted: 109
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:32.421797+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 49397760 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764699 data_alloc: 234881024 data_used: 22941696
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:33.421921+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 49397760 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:34.422072+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 49397760 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:35.422275+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 49397760 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:36.422440+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 49397760 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-mon[74477]: from='client.23249 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:37.422731+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 49397760 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764699 data_alloc: 234881024 data_used: 22941696
Oct 02 09:41:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3067170726' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-mon[74477]: pgmap v3729: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:38.422938+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 49397760 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-mon[74477]: from='client.23253 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2912881838' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:39.423137+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 49397760 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4218874357' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:40.423307+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 49389568 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:41.423479+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 49389568 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:42.423662+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343973888 unmapped: 49389568 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3766635 data_alloc: 234881024 data_used: 22929408
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:43.423845+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 49381376 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:44.424035+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 49381376 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:45.424223+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 49381376 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:46.424375+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 49381376 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:47.424599+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 49381376 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3766635 data_alloc: 234881024 data_used: 22929408
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:48.424734+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 49381376 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:49.424885+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 49381376 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:50.425012+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343982080 unmapped: 49381376 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:51.425166+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.514595032s of 19.727893829s, submitted: 16
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 49364992 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e81552780
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e8061d0e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:52.426129+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343998464 unmapped: 49364992 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85863800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3766527 data_alloc: 234881024 data_used: 23134208
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e836e000/0x0/0x4ffc00000, data 0x367b25a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:53.426249+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85863800 session 0x555e8c878000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 344047616 unmapped: 49315840 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:54.426378+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8eaa000/0x0/0x4ffc00000, data 0x2b401e8/0x2cd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 344047616 unmapped: 49315840 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:55.426491+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 344047616 unmapped: 49315840 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:56.426672+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 344047616 unmapped: 49315840 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:57.426823+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e7f6c6000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e84777680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 344047616 unmapped: 49315840 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8eaa000/0x0/0x4ffc00000, data 0x2b401e8/0x2cd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371185 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e8079d2c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:58.426982+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 52682752 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:12:59.427149+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 52682752 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:00.427337+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 52682752 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:01.427534+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 52682752 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:02.427846+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 52682752 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367750 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:03.428025+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 52682752 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:04.428173+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 52674560 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:05.428326+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 52666368 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:06.428482+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 52666368 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:07.428666+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 52666368 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367750 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:08.428806+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 52666368 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:09.428970+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 52666368 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:10.429142+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340697088 unmapped: 52666368 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:11.429354+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 52658176 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:12.429538+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 52658176 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367750 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:13.429664+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 52658176 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:14.429818+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 52658176 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:15.429967+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 52649984 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:16.430096+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 52649984 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:17.430327+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 52649984 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367750 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:18.430482+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 52649984 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:19.430892+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 52649984 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:20.431042+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 52649984 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:21.431199+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 52649984 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:22.431381+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 52649984 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367750 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:23.431510+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 52641792 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:24.431694+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 52641792 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:25.431914+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 52641792 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:26.432098+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 52641792 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:27.432316+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 52641792 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367750 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:28.432480+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 52641792 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:29.432661+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 52641792 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:30.432823+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 52641792 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:31.432976+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340729856 unmapped: 52633600 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:32.433092+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340729856 unmapped: 52633600 heap: 393363456 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e82220780
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e7ec59e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e817d63c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e7f668780
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.779174805s of 41.623676300s, submitted: 58
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367750 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:33.433250+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4ea812000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,15])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 49504256 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7e8661e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:34.433451+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:35.433643+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:36.433809+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:37.433978+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477972 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:38.434217+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99db000/0x0/0x4ffc00000, data 0x20111d8/0x21a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99db000/0x0/0x4ffc00000, data 0x20111d8/0x21a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:39.434412+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99db000/0x0/0x4ffc00000, data 0x20111d8/0x21a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:40.434544+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:41.434677+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:42.434809+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.548509598s of 10.030391693s, submitted: 18
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477972 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:43.434969+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e7ebd85a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:44.436893+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99b7000/0x0/0x4ffc00000, data 0x20351d8/0x21c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 63938560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:45.437104+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99b7000/0x0/0x4ffc00000, data 0x20351d8/0x21c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337305600 unmapped: 63930368 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:46.437266+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 337305600 unmapped: 63930368 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:47.437398+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3588213 data_alloc: 234881024 data_used: 21250048
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:48.437538+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:49.437640+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:50.437820+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99b7000/0x0/0x4ffc00000, data 0x20351d8/0x21c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99b7000/0x0/0x4ffc00000, data 0x20351d8/0x21c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:51.437968+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99b7000/0x0/0x4ffc00000, data 0x20351d8/0x21c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:52.438148+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3588213 data_alloc: 234881024 data_used: 21250048
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:53.438267+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:54.438407+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e99b7000/0x0/0x4ffc00000, data 0x20351d8/0x21c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:55.438621+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:56.438806+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 64610304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.563847542s of 13.976526260s, submitted: 8
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:57.439045+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 59228160 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3657527 data_alloc: 234881024 data_used: 21245952
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:58.439210+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 60956672 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:13:59.439381+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:00.439541+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7f48000/0x0/0x4ffc00000, data 0x28fc1d8/0x2a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:01.439675+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:02.439825+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669573 data_alloc: 234881024 data_used: 22044672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:03.439999+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:04.440157+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7f48000/0x0/0x4ffc00000, data 0x28fc1d8/0x2a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:05.440307+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:06.440435+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:07.440599+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7f48000/0x0/0x4ffc00000, data 0x28fc1d8/0x2a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669573 data_alloc: 234881024 data_used: 22044672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:08.440859+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:09.440994+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7f48000/0x0/0x4ffc00000, data 0x28fc1d8/0x2a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:10.441142+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:11.441272+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:12.441393+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7f48000/0x0/0x4ffc00000, data 0x28fc1d8/0x2a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669573 data_alloc: 234881024 data_used: 22044672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:13.441523+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:14.441647+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:15.441774+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 60948480 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e818f3c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e818f3c00 session 0x555e807ea3c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:16.442109+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e7f9b30e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7e7da780
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e8c878f00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.578821182s of 19.470146179s, submitted: 113
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e7ebd81e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63000 session 0x555e81552960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341417984 unmapped: 59817984 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e817d7680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7f6c8960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e822203c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:17.442881+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 59809792 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706911 data_alloc: 234881024 data_used: 22044672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:18.443017+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 59809792 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:19.443164+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 59809792 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:20.443384+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 59809792 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:21.443525+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 59809792 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:22.443733+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 59809792 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706911 data_alloc: 234881024 data_used: 22044672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:23.443949+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e8077fa40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 59809792 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:24.444064+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e82b9b400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e82b9b400 session 0x555e807d1680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 59809792 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e7e8bdc20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:25.444278+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7f666000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:26.444417+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.281478882s of 10.016018867s, submitted: 49
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 59793408 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:27.444638+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 59793408 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3708496 data_alloc: 234881024 data_used: 22044672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:28.444859+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 59793408 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:29.444976+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 59793408 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:30.445144+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:31.445268+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:32.445662+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3741648 data_alloc: 234881024 data_used: 26664960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:33.445803+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:34.445972+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:35.446202+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:36.446372+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:37.446536+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:38.446662+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3742128 data_alloc: 234881024 data_used: 26677248
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.788257599s of 12.290170670s, submitted: 3
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341450752 unmapped: 59785216 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:39.446791+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7a9c000/0x0/0x4ffc00000, data 0x2daf23a/0x2f42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 57556992 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:40.446964+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 57548800 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:41.447256+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 57548800 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:42.447443+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76dd000/0x0/0x4ffc00000, data 0x316623a/0x32f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343171072 unmapped: 58064896 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:43.447601+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3789168 data_alloc: 234881024 data_used: 26685440
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343187456 unmapped: 58048512 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:44.447770+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343187456 unmapped: 58048512 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:45.447981+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e763f000/0x0/0x4ffc00000, data 0x320c23a/0x339f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343187456 unmapped: 58048512 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:46.470321+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e763f000/0x0/0x4ffc00000, data 0x320c23a/0x339f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343187456 unmapped: 58048512 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:47.470494+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343187456 unmapped: 58048512 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e763f000/0x0/0x4ffc00000, data 0x320c23a/0x339f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:48.470682+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3789328 data_alloc: 234881024 data_used: 26689536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343130112 unmapped: 58105856 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:49.470826+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e761a000/0x0/0x4ffc00000, data 0x323123a/0x33c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343130112 unmapped: 58105856 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:50.470951+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e761a000/0x0/0x4ffc00000, data 0x323123a/0x33c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343138304 unmapped: 58097664 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:51.471145+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.773070335s of 12.658089638s, submitted: 98
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e7ebdd4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e82221c20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4c000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 343162880 unmapped: 58073088 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e761a000/0x0/0x4ffc00000, data 0x323123a/0x33c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:52.471242+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4c000 session 0x555e8077f860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 58474496 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:53.471449+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674252 data_alloc: 234881024 data_used: 22032384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 58474496 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:54.471602+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7f4f000/0x0/0x4ffc00000, data 0x28fc1d8/0x2a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7f4f000/0x0/0x4ffc00000, data 0x28fc1d8/0x2a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 58474496 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:55.471770+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e7ed805a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e82220d20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 58474496 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:56.471947+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 62832640 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:57.472127+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e964e000/0x0/0x4ffc00000, data 0x11fe1d8/0x1390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 62832640 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:58.472293+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398096 data_alloc: 218103808 data_used: 7856128
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 62832640 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:14:59.472461+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:00.472625+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e964e000/0x0/0x4ffc00000, data 0x11fe1d8/0x1390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:01.472793+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.792087078s of 10.196167946s, submitted: 72
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:02.472938+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:03.473120+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398096 data_alloc: 218103808 data_used: 7856128
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:04.473292+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e8063cb40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:05.473423+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:06.473597+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:07.473830+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:08.473987+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390964 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:09.474296+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:10.474463+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:11.474611+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:12.474775+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:13.474956+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390964 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:14.475132+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:15.475300+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:16.475442+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:17.475630+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:18.475823+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390964 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:19.475952+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:20.476095+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:21.476233+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:22.476427+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:23.476601+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390964 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:24.476829+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:25.477020+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:26.477177+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:27.477352+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:28.477525+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390964 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:29.477684+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:30.477875+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:31.478098+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:32.478306+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:33.478448+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390964 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:34.478650+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:35.478799+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:36.479065+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:37.479279+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:38.479440+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390964 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:39.479617+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:40.479849+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:41.480160+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:42.480491+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9672000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 62824448 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:43.480684+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e807d10e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e6ac00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e6ac00 session 0x555e807b0d20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e7f66de00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7f9b2000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.338344574s of 41.877983093s, submitted: 8
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390964 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339468288 unmapped: 61767680 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:44.480986+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e84d1000/0x0/0x4ffc00000, data 0x11da1e8/0x136d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 61759488 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:45.481223+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 61759488 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:46.481456+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 48586752 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:47.481684+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 48578560 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:48.481856+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526859 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 52879360 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:49.481990+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e7ebd94a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e7ebd9c20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 60424192 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cfc00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cfc00 session 0x555e807d1860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:50.482125+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 60424192 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:51.482385+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,2])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e8c878000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e8003c960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 60424192 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:52.482512+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e8c8792c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 60424192 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:53.482667+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e7f667a40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494179 data_alloc: 218103808 data_used: 7749632
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e7ed9c3c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 60424192 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:54.482792+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.648233891s of 10.751329422s, submitted: 25
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 60424192 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:55.482995+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 60416000 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:56.483131+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 60416000 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:57.483310+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e7f667a40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 60416000 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:58.483413+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494179 data_alloc: 218103808 data_used: 7749632
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:15:59.483541+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 60416000 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:00.483661+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 60416000 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:01.483789+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:02.483962+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:03.484112+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589351 data_alloc: 234881024 data_used: 19677184
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:04.484250+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:05.484353+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:06.484478+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:07.484665+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e77fa000/0x0/0x4ffc00000, data 0x1eb11e8/0x2044000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:08.484800+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589351 data_alloc: 234881024 data_used: 19677184
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:09.484929+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:10.485046+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 59801600 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.189483643s of 16.175752640s, submitted: 2
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:11.485191+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 51421184 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e739a000/0x0/0x4ffc00000, data 0x23111e8/0x24a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1,2,18,17])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:12.485351+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 52322304 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:13.485533+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 54837248 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672535 data_alloc: 234881024 data_used: 19673088
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:14.485906+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 54837248 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6cad000/0x0/0x4ffc00000, data 0x29f61e8/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:15.486105+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 54771712 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:16.486299+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 55435264 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:17.486558+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 55435264 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:18.486729+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 55435264 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681423 data_alloc: 234881024 data_used: 20602880
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:19.486926+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 55435264 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c37000/0x0/0x4ffc00000, data 0x2a741e8/0x2c07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:20.487062+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:21.487220+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:22.487468+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:23.487729+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693749 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:24.488027+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:25.488205+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c27000/0x0/0x4ffc00000, data 0x2a841e8/0x2c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.394841194s of 15.558679581s, submitted: 108
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:26.488337+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c27000/0x0/0x4ffc00000, data 0x2a841e8/0x2c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:27.488501+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:28.488661+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693765 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c27000/0x0/0x4ffc00000, data 0x2a841e8/0x2c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:29.488803+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:30.488946+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:31.489069+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:32.489238+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:33.489392+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c27000/0x0/0x4ffc00000, data 0x2a841e8/0x2c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693765 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:34.489604+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:35.489856+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55427072 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:36.490014+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 55418880 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:37.490169+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 55418880 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:38.490321+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 55418880 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693765 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c27000/0x0/0x4ffc00000, data 0x2a841e8/0x2c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:39.490486+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 55418880 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:40.490694+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 55418880 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:41.490966+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345817088 unmapped: 55418880 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:42.491112+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 55410688 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c27000/0x0/0x4ffc00000, data 0x2a841e8/0x2c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:43.491275+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 55410688 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693765 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:44.491400+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55402496 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.923933029s of 18.943651199s, submitted: 1
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:45.491559+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55402496 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7ed80d20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e7ed80b40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e7ed801e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cf000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cf000 session 0x555e7ed80f00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:46.491727+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55402496 heap: 401235968 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7ed80960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e8c879680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e8079d4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e817d7c20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e87eca000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e87eca000 session 0x555e807eb4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:47.491992+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:48.492134+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774848 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e61bf000/0x0/0x4ffc00000, data 0x34eb1f8/0x367f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:49.492312+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e61bf000/0x0/0x4ffc00000, data 0x34eb1f8/0x367f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:50.492506+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:51.492668+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:52.492828+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e61bd000/0x0/0x4ffc00000, data 0x34ec1f8/0x3680000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:53.493017+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e822203c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3775156 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e7e8661e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:54.493175+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e7f666000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e61bd000/0x0/0x4ffc00000, data 0x34ec1f8/0x3680000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.157384872s of 10.047403336s, submitted: 29
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:55.493330+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e8064a5a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:56.493466+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:57.503560+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 65568768 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:58.503797+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3858903 data_alloc: 234881024 data_used: 31604736
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:59.503916+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:00.504025+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6199000/0x0/0x4ffc00000, data 0x3510208/0x36a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:01.504147+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:02.504298+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:03.504419+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3858903 data_alloc: 234881024 data_used: 31604736
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6199000/0x0/0x4ffc00000, data 0x3510208/0x36a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:04.504559+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:05.504706+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:06.504883+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:07.505015+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 60776448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.988707542s of 12.701445580s, submitted: 7
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5b3d000/0x0/0x4ffc00000, data 0x3b6c208/0x3d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:08.505252+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 57335808 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3941273 data_alloc: 234881024 data_used: 32915456
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:09.505454+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 57262080 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:10.505633+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 57262080 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:11.505828+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 57262080 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e57a9000/0x0/0x4ffc00000, data 0x3f00208/0x4095000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:12.505996+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355532800 unmapped: 57253888 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:13.506188+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 57237504 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3943335 data_alloc: 234881024 data_used: 32980992
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:14.506318+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:15.506437+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:16.506552+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e538d000/0x0/0x4ffc00000, data 0x3f0c208/0x40a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:17.506717+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e538d000/0x0/0x4ffc00000, data 0x3f0c208/0x40a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:18.506847+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3944135 data_alloc: 234881024 data_used: 33067008
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:19.506994+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e538d000/0x0/0x4ffc00000, data 0x3f0c208/0x40a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:20.507128+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.364212990s of 12.667222977s, submitted: 85
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 57163776 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e807dde00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e8079d2c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:21.507278+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 57163776 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:22.507404+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 57163776 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:23.507530+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 57147392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714295 data_alloc: 234881024 data_used: 20889600
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:24.507686+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e67f1000/0x0/0x4ffc00000, data 0x2aa91f8/0x2c3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:25.507817+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:26.507941+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e807dd0e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:27.508133+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:28.508278+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704758 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:29.508437+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:30.508639+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:31.508786+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:32.508959+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:33.509123+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.611377716s of 13.367643356s, submitted: 45
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704666 data_alloc: 234881024 data_used: 20774912
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e8c878000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:34.510903+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:35.511048+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:36.511217+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 60129280 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:37.511389+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 60129280 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6f4e000/0x0/0x4ffc00000, data 0x15011d8/0x1693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:38.511538+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6f4e000/0x0/0x4ffc00000, data 0x15011d8/0x1693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:39.511694+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e7f6c6960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:40.511841+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:41.511966+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:42.512142+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:43.512318+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:44.512542+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:45.512869+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:46.513145+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:47.556002+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:48.556241+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:49.556453+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:50.556608+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:51.556882+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 174K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 43K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2386 writes, 10K keys, 2386 commit groups, 1.0 writes per commit group, ingest: 12.81 MB, 0.02 MB/s
                                           Interval WAL: 2386 writes, 857 syncs, 2.78 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:52.557113+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets getting new tickets!
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:53.557394+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _finish_auth 0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:53.558250+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:54.557651+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:55.558065+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:56.558299+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:57.558546+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: mgrc ms_handle_reset ms_handle_reset con 0x555e816bc400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:41:55 compute-0 ceph-osd[88314]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: get_auth_request con 0x555e816cfc00 auth_method 0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:58.558870+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:59.559103+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:00.559332+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:01.559528+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85862c00 session 0x555e8061d2c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:02.559869+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:03.560111+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:04.560344+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:05.560576+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:06.560893+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:07.561362+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:08.561543+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:09.562041+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:10.562539+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:11.562969+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:12.563339+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:13.563616+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:14.563854+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:15.564084+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:16.564255+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:17.564545+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:18.565205+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:19.565622+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:20.566110+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 70443008 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:21.566244+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 70443008 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e8063d0e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e817d63c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:22.566374+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e807b10e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e8077fa40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.933547974s of 48.876960754s, submitted: 33
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 70443008 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e80650000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7eb1000/0x0/0x4ffc00000, data 0x13ea1e8/0x157d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e8063cd20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e807d0000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:23.566648+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e817b8b40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e8061c5a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492502 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:24.566846+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:25.567037+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:26.567322+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7860000/0x0/0x4ffc00000, data 0x1a3b1e8/0x1bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:27.567693+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7860000/0x0/0x4ffc00000, data 0x1a3b1e8/0x1bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:28.568089+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 70017024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492502 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:29.568564+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 70017024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:30.568722+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e7e8663c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7860000/0x0/0x4ffc00000, data 0x1a3b1e8/0x1bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e785f000/0x0/0x4ffc00000, data 0x1a3b20b/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:31.568805+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:32.568953+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:33.569113+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547691 data_alloc: 218103808 data_used: 15233024
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:34.569272+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e785f000/0x0/0x4ffc00000, data 0x1a3b20b/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:35.569514+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:36.569674+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:37.569847+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:38.570002+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e785f000/0x0/0x4ffc00000, data 0x1a3b20b/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547691 data_alloc: 218103808 data_used: 15233024
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:39.570151+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:40.570322+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:41.570414+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e785f000/0x0/0x4ffc00000, data 0x1a3b20b/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:42.570585+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.164596558s of 20.521892548s, submitted: 23
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:43.570814+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 64430080 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615039 data_alloc: 218103808 data_used: 15269888
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:44.574113+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 64012288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:45.574328+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:46.574523+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:47.574695+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa11800 session 0x555e7e867680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c6b000/0x0/0x4ffc00000, data 0x262620b/0x27ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:48.574955+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3661025 data_alloc: 218103808 data_used: 16412672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:49.575220+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:50.575459+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:51.575600+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 64651264 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:52.575882+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c71000/0x0/0x4ffc00000, data 0x262920b/0x27bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 64651264 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:53.576087+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c71000/0x0/0x4ffc00000, data 0x262920b/0x27bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 64651264 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652529 data_alloc: 218103808 data_used: 16412672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:54.576319+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 64651264 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:55.576498+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.104067802s of 12.720571518s, submitted: 138
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 64643072 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:56.576640+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 64643072 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e7f66c960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e7e8bd860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7f9b32c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e807ebc20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:57.576843+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa43400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 64634880 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:58.577264+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 64618496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e665b000/0x0/0x4ffc00000, data 0x2c3e21b/0x2dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3708561 data_alloc: 218103808 data_used: 16412672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:59.577401+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa43400 session 0x555e7fa7da40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7e8672c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e7e8670e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 64610304 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:00.577558+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 64602112 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:01.577721+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 64602112 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:02.577841+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e665b000/0x0/0x4ffc00000, data 0x2c3e21b/0x2dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 64602112 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e8079da40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e7f66da40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:03.577995+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e873f7000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e873f7000 session 0x555e7ed80000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 64577536 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707505 data_alloc: 218103808 data_used: 16412672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:04.578149+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e807eab40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 64577536 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:05.578283+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.773425579s of 10.016960144s, submitted: 49
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 64569344 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:06.578434+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e817d7860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:07.578614+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e769b000/0x0/0x4ffc00000, data 0x2c3e21b/0x2dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:08.578792+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e817d63c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712259 data_alloc: 218103808 data_used: 16412672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:09.578941+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7676000/0x0/0x4ffc00000, data 0x2c6222b/0x2df8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cc000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:10.579078+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7676000/0x0/0x4ffc00000, data 0x2c6222b/0x2df8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:11.579216+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 64528384 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:12.579416+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:13.579595+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:14.579789+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757843 data_alloc: 234881024 data_used: 22790144
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:15.579937+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7676000/0x0/0x4ffc00000, data 0x2c6222b/0x2df8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:16.580076+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.123291016s of 10.906085968s, submitted: 74
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:17.580268+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:18.580425+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:19.580622+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758679 data_alloc: 234881024 data_used: 22790144
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:20.580811+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7674000/0x0/0x4ffc00000, data 0x2c6322b/0x2df9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:21.580987+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 61669376 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:22.581200+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7340000/0x0/0x4ffc00000, data 0x2f9722b/0x312d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,37,11])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 59514880 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:23.581426+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 59170816 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:24.581658+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826649 data_alloc: 234881024 data_used: 22818816
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 57294848 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:25.581802+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6b95000/0x0/0x4ffc00000, data 0x374322b/0x38d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,12])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 57999360 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:26.582056+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.919890881s of 10.145005226s, submitted: 99
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 57769984 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:27.582296+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 57769984 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:28.582452+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6963000/0x0/0x4ffc00000, data 0x396f22b/0x3b05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:29.582636+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862983 data_alloc: 234881024 data_used: 23425024
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6969000/0x0/0x4ffc00000, data 0x396f22b/0x3b05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:30.582817+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6969000/0x0/0x4ffc00000, data 0x396f22b/0x3b05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:31.583052+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:32.583241+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:33.583382+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6944000/0x0/0x4ffc00000, data 0x399222b/0x3b28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:34.583537+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3867965 data_alloc: 234881024 data_used: 23457792
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6944000/0x0/0x4ffc00000, data 0x399222b/0x3b28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:35.583710+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:36.583883+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.491674423s of 10.077248573s, submitted: 14
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:37.584108+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:38.584258+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6943000/0x0/0x4ffc00000, data 0x399522b/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:39.584434+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3867095 data_alloc: 234881024 data_used: 23453696
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 57745408 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:40.584585+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 57745408 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:41.584778+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:42.585007+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6933000/0x0/0x4ffc00000, data 0x39a522b/0x3b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:43.585152+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:44.585314+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3875927 data_alloc: 234881024 data_used: 23781376
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:45.585547+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6933000/0x0/0x4ffc00000, data 0x39a522b/0x3b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:46.585733+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:47.585958+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.368193626s of 10.428758621s, submitted: 12
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6933000/0x0/0x4ffc00000, data 0x39a522b/0x3b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:48.586141+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:49.586357+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874275 data_alloc: 234881024 data_used: 23781376
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:50.586542+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6931000/0x0/0x4ffc00000, data 0x39a722b/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:51.586732+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:52.586993+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6931000/0x0/0x4ffc00000, data 0x39a722b/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:53.587170+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e817b8b40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cc000 session 0x555e7f669e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:54.587342+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874747 data_alloc: 234881024 data_used: 23781376
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:55.587478+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 57720832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:56.587606+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 57720832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e692f000/0x0/0x4ffc00000, data 0x39a922b/0x3b3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:57.587798+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.071995735s of 10.076697350s, submitted: 6
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 57720832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:58.587941+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:59.588112+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674879 data_alloc: 218103808 data_used: 16527360
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6930000/0x0/0x4ffc00000, data 0x334b21b/0x34e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:00.588247+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:01.588398+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7fa7dc20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:02.588533+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:03.588675+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65863680 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:04.588867+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666796 data_alloc: 218103808 data_used: 16412672
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7caf000/0x0/0x4ffc00000, data 0x262b20b/0x27bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65863680 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:05.589027+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e8c878d20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65863680 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:06.590787+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:07.590988+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 74260480 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.774965286s of 10.106699944s, submitted: 31
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:08.591169+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:09.591328+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450222 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1fb/0x136d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e7f9b30e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:10.591510+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:11.591701+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:12.591919+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:13.592083+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:14.592328+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:15.592740+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:16.593000+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:17.593321+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:18.593498+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:19.593806+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:20.594174+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:21.594325+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:22.594538+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:23.594733+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:24.594905+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:25.595146+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:26.595443+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:27.595698+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:28.595856+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:29.596025+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:30.596171+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:31.596503+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:32.596734+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:33.597029+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:34.597267+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:35.597533+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:36.597738+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:37.598214+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:38.598379+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:39.598588+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:40.598818+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:41.599082+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:42.599391+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:43.599671+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:44.600005+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:45.600179+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:46.600417+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:47.600675+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:48.600876+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:49.601071+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e8064a5a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e81553c20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:50.601242+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7f8ed860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e807685a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.617778778s of 43.052780151s, submitted: 10
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,2,0,4])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e7ebdcd20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e8061d860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e817d6960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e7fa7cf00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7f9b3680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:51.601380+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:52.601584+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:53.601845+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:54.602025+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553286 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:55.602208+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8402000/0x0/0x4ffc00000, data 0x1ed91e8/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:56.602404+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:57.602578+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:58.602805+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:59.602997+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553286 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:00.603195+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 74219520 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8402000/0x0/0x4ffc00000, data 0x1ed91e8/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:01.603388+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 74219520 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:02.603576+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 74219520 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8402000/0x0/0x4ffc00000, data 0x1ed91e8/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:03.603848+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 74219520 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.918078423s of 13.026695251s, submitted: 19
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7ebdcb40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:04.603972+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 74063872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e83de000/0x0/0x4ffc00000, data 0x1efd1e8/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3555290 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:05.604117+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 74063872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:06.604231+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:07.604404+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:08.604656+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:09.604814+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e83de000/0x0/0x4ffc00000, data 0x1efd1e8/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643290 data_alloc: 234881024 data_used: 20152320
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:10.605045+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:11.605182+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:12.605361+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:13.605521+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e83de000/0x0/0x4ffc00000, data 0x1efd1e8/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:14.605685+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643290 data_alloc: 234881024 data_used: 20152320
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:15.605836+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.514795303s of 12.579584122s, submitted: 2
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:16.606046+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 67002368 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:17.606285+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 66961408 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76ae000/0x0/0x4ffc00000, data 0x2c241e8/0x2db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:18.606457+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 66961408 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:19.606613+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756342 data_alloc: 234881024 data_used: 20328448
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:20.606783+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:21.606967+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76ad000/0x0/0x4ffc00000, data 0x2c2e1e8/0x2dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:22.607150+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76ad000/0x0/0x4ffc00000, data 0x2c2e1e8/0x2dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:23.607298+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:24.607384+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76ad000/0x0/0x4ffc00000, data 0x2c2e1e8/0x2dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756358 data_alloc: 234881024 data_used: 20328448
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:25.607564+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:26.607811+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:27.608018+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:28.608156+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76aa000/0x0/0x4ffc00000, data 0x2c311e8/0x2dc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:29.608349+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756358 data_alloc: 234881024 data_used: 20328448
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:30.608599+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:31.608793+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:32.608937+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e8061d2c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81069000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81069000 session 0x555e8c8792c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e817d74a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e80494f00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.623231888s of 16.666929245s, submitted: 92
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76aa000/0x0/0x4ffc00000, data 0x2c311e8/0x2dc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,1,4,8])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:33.609099+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e817d6f00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e7f9b3680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 71647232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f47400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f47400 session 0x555e817d6960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7ebdcd20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7f9b30e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:34.609262+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a36000/0x0/0x4ffc00000, data 0x38a4211/0x3a38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 71639040 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855272 data_alloc: 234881024 data_used: 20328448
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:35.609398+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 71639040 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:36.609533+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 71639040 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:37.609718+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 71639040 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:38.609868+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e7f669e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a36000/0x0/0x4ffc00000, data 0x38a424a/0x3a38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:39.610077+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e817d63c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855272 data_alloc: 234881024 data_used: 20328448
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:40.610274+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f47000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f47000 session 0x555e817d7860
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:41.610447+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a35000/0x0/0x4ffc00000, data 0x38a425a/0x3a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:42.610624+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e807eab40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:43.610852+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 71622656 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:44.610993+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.369200706s of 11.610096931s, submitted: 47
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 71622656 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857066 data_alloc: 234881024 data_used: 20328448
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:45.611098+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 71622656 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:46.611239+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a35000/0x0/0x4ffc00000, data 0x38a425a/0x3a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 64831488 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:47.611432+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353525760 unmapped: 63463424 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:48.611590+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a35000/0x0/0x4ffc00000, data 0x38a425a/0x3a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:49.611722+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3947278 data_alloc: 234881024 data_used: 33079296
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:50.611982+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:51.612230+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:52.612400+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:53.612562+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a33000/0x0/0x4ffc00000, data 0x38a525a/0x3a3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:54.612880+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3947454 data_alloc: 234881024 data_used: 33079296
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:55.613046+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.134132385s of 11.804743767s, submitted: 3
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:56.613167+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 61513728 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:57.613305+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6521000/0x0/0x4ffc00000, data 0x3daf25a/0x3f44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 59662336 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:58.613409+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:59.613562+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4005096 data_alloc: 234881024 data_used: 33775616
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:00.613701+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:01.613830+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:02.613972+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e64b0000/0x0/0x4ffc00000, data 0x3e2925a/0x3fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:03.614163+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:04.614326+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004220 data_alloc: 234881024 data_used: 33779712
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:05.614481+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e648f000/0x0/0x4ffc00000, data 0x3e4a25a/0x3fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:06.614592+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:07.614807+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:08.614973+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:09.615115+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004220 data_alloc: 234881024 data_used: 33779712
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:10.615248+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e648f000/0x0/0x4ffc00000, data 0x3e4a25a/0x3fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:11.615381+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357203968 unmapped: 59785216 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:12.615557+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357203968 unmapped: 59785216 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:13.615800+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357203968 unmapped: 59785216 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:14.615927+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e648f000/0x0/0x4ffc00000, data 0x3e4a25a/0x3fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.914283752s of 18.503477097s, submitted: 87
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7f66da40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e7f6c94a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357203968 unmapped: 59785216 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004532 data_alloc: 234881024 data_used: 33849344
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:15.616214+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e8064b4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:16.616359+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e72a8000/0x0/0x4ffc00000, data 0x2c321e8/0x2dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:17.617482+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:18.617646+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:19.617791+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e72a7000/0x0/0x4ffc00000, data 0x2c331e8/0x2dc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768085 data_alloc: 234881024 data_used: 20328448
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:20.621845+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e72a7000/0x0/0x4ffc00000, data 0x2c331e8/0x2dc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:21.621977+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:22.622106+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:23.622291+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e807dc5a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e7f668b40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:24.622422+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483899 data_alloc: 218103808 data_used: 7856128
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90de000/0x0/0x4ffc00000, data 0x11fe1d8/0x1390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:25.622582+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:26.622717+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.506937981s of 11.834042549s, submitted: 56
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:27.622914+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7ed9cd20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:28.623101+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:29.623318+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:30.623466+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:31.623677+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:32.623874+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:33.624020+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:34.624197+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:35.624327+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:36.624464+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:37.624635+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:38.624851+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:39.624994+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:40.625174+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:41.625398+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:42.625575+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:43.625717+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:44.625861+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:45.626045+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:46.626275+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:47.626468+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:48.626641+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:49.626821+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:50.626957+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:51.627153+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:52.627321+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:53.627480+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:54.627667+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:55.627860+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:56.628046+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:57.628274+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:58.628456+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:59.628666+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:00.628873+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:01.629049+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:02.629256+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:03.629459+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:04.629628+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:05.629853+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:06.630037+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:07.630237+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:08.630425+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:09.630617+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:10.630817+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:11.631017+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:12.631205+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:13.631360+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:14.631577+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:15.631737+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66691072 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:16.631906+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66691072 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:17.632082+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66691072 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:18.632220+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:19.632401+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:20.632719+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:21.632881+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:22.633031+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:23.633166+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:24.633370+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:25.633552+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:26.633797+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:27.634000+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:28.634163+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:29.634311+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:30.634501+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:31.634699+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:32.634967+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:33.635209+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:34.635370+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:35.635626+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:36.635825+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:37.636031+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:38.636250+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:39.636467+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 66658304 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:40.636622+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:41.636824+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:42.637010+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:43.637139+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:44.637481+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:45.637612+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:46.637817+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:47.637995+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:48.638146+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:49.638309+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:50.638464+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:51.638714+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:52.638937+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:53.639213+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:54.639405+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:55.639551+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66633728 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:56.639804+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66633728 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:57.640041+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66633728 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:58.640163+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:59.640342+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:00.640510+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:01.640692+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:02.640841+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:03.640976+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 66609152 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:04.641112+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 66609152 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:05.641308+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 66609152 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.549827576s of 99.685768127s, submitted: 2
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:06.641464+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7ebd9680
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e7f667e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e7ed9c960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:07.641629+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e7f9b32c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e807ebc20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:08.641837+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e833a000/0x0/0x4ffc00000, data 0x1fa123a/0x2134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:09.642016+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:10.642145+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e807eb4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3588099 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:11.642304+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e833a000/0x0/0x4ffc00000, data 0x1fa123a/0x2134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:12.642430+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 67952640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:13.642546+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 67952640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:14.642682+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e833a000/0x0/0x4ffc00000, data 0x1fa123a/0x2134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 67952640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:15.642810+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689991 data_alloc: 234881024 data_used: 22016000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e8c8785a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.975601196s of 10.013640404s, submitted: 48
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 67952640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:16.642945+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:17.643104+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e7e8bcb40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:18.643312+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:19.643455+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:20.643590+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:21.643703+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:22.643853+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:23.644029+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:24.644151+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:25.644334+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:26.644502+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:27.644692+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:28.644880+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:29.645477+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:30.645674+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:31.645886+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:32.646071+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:33.646205+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:34.646357+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:35.646570+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:36.646774+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:37.647026+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:38.647163+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:39.647316+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:40.701221+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:41.701367+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:42.701517+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:43.701709+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:44.701873+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:45.702082+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:46.702214+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:47.702411+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:48.702550+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e822210e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.237102509s of 32.277904510s, submitted: 31
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 291 ms_handle_reset con 0x555e80e03000 session 0x555e817b9c20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:49.702730+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:50.703004+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490903 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e90ff000/0x0/0x4ffc00000, data 0x11dbc98/0x136e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:51.703306+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:52.703464+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:53.703653+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e90ff000/0x0/0x4ffc00000, data 0x11dbc98/0x136e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:54.703793+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e90ff000/0x0/0x4ffc00000, data 0x11dbc98/0x136e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:55.703947+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:56.704085+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 70549504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:57.704262+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 70549504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:58.704458+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 70549504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:59.704647+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 70549504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:00.704843+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:01.705060+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:02.705204+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:03.705429+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:04.705633+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:05.705880+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:06.706062+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:07.706276+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:08.706489+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:09.706629+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:10.706845+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:11.706996+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:12.707138+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:13.707318+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:14.707508+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:15.707664+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:16.707849+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 70516736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:17.708050+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 70516736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:18.708229+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 70516736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:19.708421+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:20.708629+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:21.708817+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:22.709124+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:23.709310+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:24.709445+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:25.709598+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:26.709854+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:27.710020+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:28.710219+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:29.710404+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:30.710588+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:31.710796+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:32.710963+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:33.711091+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:34.711229+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:35.711445+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:36.711564+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:37.711810+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:38.711977+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:39.712118+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:40.712260+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 70475776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:41.712425+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 70475776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:42.712582+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 70475776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:43.712739+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:44.712980+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:45.713155+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:46.713336+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:47.713515+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:48.713665+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 70451200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:49.713820+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 70451200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:50.713947+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 70451200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:51.714078+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 70443008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:52.714261+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 70443008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:53.714426+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 70434816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:54.714648+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 70434816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:55.714869+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 70434816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:56.715074+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:57.715298+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:58.715439+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:59.715628+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:00.715816+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:01.715970+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:02.716165+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:03.716402+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:04.716606+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:05.716846+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:06.717064+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:07.717395+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:08.717570+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:09.717816+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:10.718002+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:11.718194+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:12.718419+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:13.718585+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:14.718816+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:15.718963+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:16.719129+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:17.719314+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 70393856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:18.719476+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 70393856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:19.719646+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 70393856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:20.719823+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:21.720015+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:22.720200+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:23.720403+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:24.720580+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:25.720780+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:26.720919+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:27.721098+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:28.721256+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:29.721430+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:30.721558+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:31.721739+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:32.721961+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:33.722164+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:34.722310+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:35.722462+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 70361088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:36.722713+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:37.723018+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:38.723207+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:39.723400+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:40.723568+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:41.723804+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:42.724032+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:43.724356+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:44.724512+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 70336512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:45.724647+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 70336512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:46.724828+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 70336512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:47.725007+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 70328320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:48.725198+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 70328320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:49.725386+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 70328320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:50.725576+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 70328320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:51.725775+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 ms_handle_reset con 0x555e8100f000 session 0x555e807eb0e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 70631424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:52.725941+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 70623232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:53.726084+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 70623232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:54.726244+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 70623232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:55.726402+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7819264
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:56.726558+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:57.726834+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:58.727058+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:59.727267+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:00.727465+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 70606848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e87eca400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 132.208267212s of 132.352478027s, submitted: 24
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493245 data_alloc: 218103808 data_used: 7819264
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:01.727591+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 70598656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 293 ms_handle_reset con 0x555e87eca400 session 0x555e7f6692c0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:02.727781+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e98fa000/0x0/0x4ffc00000, data 0x9df2a9/0xb73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:03.727982+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:04.728200+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:05.728351+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 294 ms_handle_reset con 0x555e81011400 session 0x555e8063c780
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371250 data_alloc: 218103808 data_used: 1015808
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:06.728505+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 294 heartbeat osd_stat(store_statfs(0x4ea0f8000/0x0/0x4ffc00000, data 0x1e0e6a/0x375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:07.728718+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:08.728934+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:09.729129+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 294 heartbeat osd_stat(store_statfs(0x4ea0f8000/0x0/0x4ffc00000, data 0x1e0e6a/0x375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:10.729318+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371250 data_alloc: 218103808 data_used: 1015808
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 294 heartbeat osd_stat(store_statfs(0x4ea0f8000/0x0/0x4ffc00000, data 0x1e0e6a/0x375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.180046082s of 10.362761497s, submitted: 42
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:11.729486+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:12.729617+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 295 heartbeat osd_stat(store_statfs(0x4ea0f5000/0x0/0x4ffc00000, data 0x1e28cd/0x378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: get_auth_request con 0x555e7ec18c00 auth_method 0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:13.729828+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:14.729974+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:15.730115+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374224 data_alloc: 218103808 data_used: 1015808
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:16.730408+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:17.730678+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 295 heartbeat osd_stat(store_statfs(0x4ea0f5000/0x0/0x4ffc00000, data 0x1e28cd/0x378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:18.730801+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100e400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 ms_handle_reset con 0x555e8100e400 session 0x555e8061c5a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:19.731027+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:20.731159+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379014 data_alloc: 218103808 data_used: 1015808
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:21.731315+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:22.731484+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:23.731656+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:24.731835+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 74170368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:25.732017+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 74170368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:26.732198+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 74170368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:27.732465+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:28.732636+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:29.732814+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:30.732958+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:31.733068+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:32.733189+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:33.733347+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:34.733506+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:35.733633+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:36.733824+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:37.734024+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:38.734197+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:39.734371+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:40.734538+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 74137600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:41.734669+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 74137600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:42.734784+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:43.734931+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:44.735032+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:45.735179+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:46.735385+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:47.735567+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 74121216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:48.735725+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:49.735837+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:50.735992+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:51.736113+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1442 writes, 6120 keys, 1442 commit groups, 1.0 writes per commit group, ingest: 6.56 MB, 0.01 MB/s
                                           Interval WAL: 1442 writes, 522 syncs, 2.76 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:52.736209+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:53.736354+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:54.736492+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:55.736717+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:56.736861+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 74104832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:57.737009+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 74096640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:58.737129+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 74096640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:59.737308+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 74096640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:00.737489+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 74096640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:01.737626+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 74088448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:02.737858+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 74088448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:03.738029+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 74088448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:04.738173+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 74072064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:05.738347+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:06.738494+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:07.738674+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:08.738811+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:09.738950+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:10.739143+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:11.739315+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 74055680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:12.739499+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 74055680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:13.739668+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 74055680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:14.739840+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 74055680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:15.739974+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 74047488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:16.740121+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 74047488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:17.740284+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 74047488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:18.740452+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 74047488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:19.740563+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 74039296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:20.740693+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 74031104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:21.740819+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 74031104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:22.741030+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:23.741228+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:24.741397+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:25.741567+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:26.741710+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:27.741916+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:28.742057+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 74014720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:29.742212+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 74014720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:30.742330+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 74006528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:31.742471+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 74006528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:32.742611+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 74006528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:33.742812+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 73998336 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:34.742983+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 73998336 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:35.743109+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:36.743232+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:37.743371+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:38.743521+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:39.743687+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:40.743860+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:41.744037+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:42.744202+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:43.744346+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 73981952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:44.744489+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 73981952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:45.744627+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 73973760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:46.744796+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:47.745021+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:48.745213+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:49.745410+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:50.745558+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:51.745712+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:52.745908+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:53.746066+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:54.746222+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:55.746379+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 104.105834961s of 104.461563110s, submitted: 11
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 73957376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378118 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:56.746548+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 73957376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:57.746823+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 73916416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:58.747038+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f2000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346750976 unmapped: 73908224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:59.747187+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f2000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346750976 unmapped: 73908224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:00.747323+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378118 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:01.747468+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:02.747661+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:03.747824+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f2000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:04.747964+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:05.748147+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.567169189s of 10.098338127s, submitted: 107
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f2000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 ms_handle_reset con 0x555e80e03000 session 0x555e7e7db0e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432966 data_alloc: 218103808 data_used: 1019904
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:06.748258+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 69214208 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:07.748380+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x9e446d/0xb7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e8c7e000/0x0/0x4ffc00000, data 0x1655fea/0x17ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 72810496 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:08.748555+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 ms_handle_reset con 0x555e8100f000 session 0x555e7ed80960
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 72802304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:09.748708+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 72802304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:10.748863+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:11.748997+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:12.749123+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:13.749246+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:14.749329+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:15.749437+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:16.749560+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:17.749796+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:18.749980+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:19.750150+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:20.750373+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:21.750565+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:22.750679+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:23.750815+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 72777728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:24.750978+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 72777728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:25.751153+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:26.751568+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:27.751805+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:28.751996+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:29.752195+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:30.752333+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:31.752473+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 72753152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:32.752634+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 72753152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:33.752897+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 72753152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:34.753034+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:35.753197+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:36.753338+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:37.753513+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:38.753697+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:39.753827+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 72736768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:40.754047+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 72736768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:41.754169+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.684211731s of 35.786945343s, submitted: 6
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 72736768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:42.754335+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:43.754507+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:44.754626+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:45.754783+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 299 ms_handle_reset con 0x555e81011400 session 0x555e8079d4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:46.754961+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528958 data_alloc: 218103808 data_used: 1028096
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:47.755191+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:48.755342+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 72720384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:49.755564+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 72720384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:50.755737+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 72720384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:51.755922+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533132 data_alloc: 218103808 data_used: 1036288
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:52.756070+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:53.756262+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:54.756467+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:55.756642+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:56.757206+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:57.757473+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:58.757590+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:59.757790+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:00.757966+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:01.758114+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:02.758448+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:03.758620+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:04.758952+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:05.759198+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:06.759330+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:07.759575+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:08.759810+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:09.760142+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:10.760395+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:11.760816+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:12.761105+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:13.761384+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:14.761609+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:15.761802+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:16.762012+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:17.762173+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:18.762319+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:19.762532+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:20.762854+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 72654848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:21.763077+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 72654848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:22.763315+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 72654848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:23.763511+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:24.763696+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:25.763923+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:26.764138+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:27.764393+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:28.764630+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:29.764827+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:30.765171+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:31.765391+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:32.765629+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:33.765799+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:34.765953+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:35.766166+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:36.766332+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 72622080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:37.766527+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 72622080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:38.766822+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 72622080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:39.767007+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:40.767215+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:41.767376+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:42.767529+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:43.767672+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:44.767820+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 72605696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:45.768002+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 72605696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:46.768211+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 72605696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:47.768416+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:48.768606+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:49.768889+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:50.769097+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:51.769234+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:52.769360+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 72589312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:53.769486+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 72589312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:54.769610+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 72589312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:55.769744+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:56.769953+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:57.770179+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:58.770289+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:59.770374+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:00.770512+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:01.770640+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:02.770771+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:03.770906+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:04.771028+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:05.771148+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:06.771273+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 72564736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:07.771438+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 72556544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:08.771577+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 72548352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:09.771701+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:10.771867+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:11.772115+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:12.772302+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:13.772439+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:14.772606+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:15.772791+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:16.772940+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 72531968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:17.773131+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 72531968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:18.773269+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 72531968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:19.773402+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:20.773565+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:21.773729+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:22.773896+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:23.774047+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:24.774214+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 72507392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:25.774340+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 72507392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:26.774547+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:27.774741+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:28.774885+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:29.775032+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:30.775127+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:31.775249+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:32.775409+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 72482816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:33.775957+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 72482816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:34.776069+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 72482816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:35.776212+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 72482816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:36.776370+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 72474624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:37.776584+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:38.776796+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:39.776943+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:40.777112+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:41.777289+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:42.777459+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:43.777588+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:44.777709+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:45.777877+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:46.778015+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348200960 unmapped: 72458240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:47.778174+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348200960 unmapped: 72458240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:48.778339+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 72441856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:49.778519+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 72441856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:50.778674+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 72441856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:51.778821+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:52.778967+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:53.779164+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:54.779416+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:55.779585+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:56.779926+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 72425472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:57.780111+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 72425472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:58.780314+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 72425472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:59.780509+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:00.780994+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:01.781454+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:02.781651+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:03.781961+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:04.782490+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 72409088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:05.782707+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:06.782882+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:07.783144+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:08.783519+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:09.783910+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:10.784216+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:11.784936+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 72384512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:12.785376+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 72384512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:13.785529+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 72384512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:14.785855+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:15.786081+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:16.786395+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:17.786725+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:18.786983+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:19.787221+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 72368128 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:20.787434+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 72368128 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:21.787695+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 72368128 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:22.787961+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:23.788124+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:24.788315+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:25.788537+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:26.788696+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:27.788932+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:28.789071+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:29.789226+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:30.789390+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:31.789530+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:32.789690+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:33.790017+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:34.790195+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:35.790480+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:36.790651+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:37.790932+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:38.791057+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:39.791213+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:40.791442+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:41.791587+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:42.791721+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:43.791935+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 72318976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:44.792163+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 72318976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:45.792331+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 72318976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:46.792509+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:47.792732+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:48.792912+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:49.793092+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:50.793309+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:51.793565+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:52.793738+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 72302592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:53.793950+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 72302592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:54.794099+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:55.794248+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:56.794389+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:57.794551+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:58.794682+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:59.794915+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348381184 unmapped: 72278016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:00.795073+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348381184 unmapped: 72278016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:01.795257+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 ms_handle_reset con 0x555e7fa10c00 session 0x555e7ebd81e0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e87eca400
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348381184 unmapped: 72278016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:02.795459+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348381184 unmapped: 72278016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:03.795706+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 72269824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:04.795903+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 72269824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:05.796045+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 72269824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:06.796199+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 72269824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:07.796362+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348397568 unmapped: 72261632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:08.796500+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348397568 unmapped: 72261632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:09.796668+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 72253440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:10.796857+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 72253440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:11.796992+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 72253440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:12.797238+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 72245248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:13.797416+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 72245248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:14.797551+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 72245248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:15.797697+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348430336 unmapped: 72228864 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:16.797886+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348430336 unmapped: 72228864 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:17.798150+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 72220672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:18.798277+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 72220672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:19.798413+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 72220672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:20.798540+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 72212480 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:21.798656+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 72212480 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:22.798806+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 72204288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:23.798970+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 72196096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:24.799155+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 72196096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:25.799291+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 72196096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:26.799475+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:27.799688+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:28.799971+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:29.800161+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:30.800302+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:31.800488+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:32.800611+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:33.800806+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:34.800948+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:35.801069+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:36.801286+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:37.801481+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:38.801679+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:39.801903+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348495872 unmapped: 72163328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:40.802160+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 72155136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:41.802309+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 72155136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:42.802442+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:43.802581+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:44.802740+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:45.802935+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:46.803054+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 ms_handle_reset con 0x555e80f4d000 session 0x555e807dd4a0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:47.803601+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348520448 unmapped: 72138752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:48.803736+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348520448 unmapped: 72138752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:49.803866+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348520448 unmapped: 72138752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:50.804067+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348520448 unmapped: 72138752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:51.804244+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 72130560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:52.804457+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 72130560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:53.804640+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 72130560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:54.804809+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 72130560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:55.804950+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 72122368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:56.805077+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 72122368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:57.805302+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 72122368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:58.805484+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 72122368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:59.805642+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348545024 unmapped: 72114176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:00.805771+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348545024 unmapped: 72114176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:01.805881+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348545024 unmapped: 72114176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:02.806018+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348545024 unmapped: 72114176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:03.806141+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:04.806264+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:05.806418+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:06.806581+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:07.806862+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:08.807242+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:09.807402+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:10.807536+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:11.807650+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:12.807806+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348569600 unmapped: 72089600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:13.807944+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348569600 unmapped: 72089600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:14.808080+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348577792 unmapped: 72081408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:15.808259+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348577792 unmapped: 72081408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:16.808465+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348577792 unmapped: 72081408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:17.808725+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348577792 unmapped: 72081408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:18.808963+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 72073216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:19.809140+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 72073216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:20.809329+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:21.809721+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:22.809912+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:23.810057+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:24.810291+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:25.810466+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:26.810678+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:27.810896+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:28.811080+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 72048640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:29.811247+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:30.811409+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:31.811574+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:32.811730+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:33.811928+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:34.812134+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:35.812293+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:36.812430+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:37.812626+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:38.812855+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:39.812997+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:40.813122+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:41.813255+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:42.813383+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:43.813516+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:44.813706+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 72024064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:45.813849+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 72024064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:46.814005+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 72015872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:47.814162+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 72015872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:48.814284+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 72015872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:49.814398+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 72007680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:50.814542+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 72007680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:51.814667+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 72007680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:52.814792+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 71999488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:53.814926+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 71999488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:54.815067+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 71999488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:55.815198+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:56.815326+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:57.815482+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:58.815616+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:59.815803+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:00.815937+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:01.816080+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:02.816302+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:03.816470+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:04.816646+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:05.816839+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348684288 unmapped: 71974912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:06.817090+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348684288 unmapped: 71974912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:07.817371+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348684288 unmapped: 71974912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:08.817557+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348692480 unmapped: 71966720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:09.817697+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348692480 unmapped: 71966720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:10.817815+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:11.817949+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:12.818107+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:13.818257+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:14.818381+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:15.818585+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348708864 unmapped: 71950336 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:16.818727+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:17.818961+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:18.819087+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:19.819233+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:20.819345+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:21.819526+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348725248 unmapped: 71933952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:22.819672+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348725248 unmapped: 71933952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:23.819818+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348725248 unmapped: 71933952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:24.820013+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 71925760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:25.820264+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 71925760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:26.820475+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:27.820705+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:28.820847+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:29.821126+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:30.821335+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:31.821588+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:32.821772+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348758016 unmapped: 71901184 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:33.822008+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348758016 unmapped: 71901184 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:34.822203+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:35.822347+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:36.822469+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:37.822706+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:38.822843+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:39.823008+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:40.823156+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:41.823284+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:42.823446+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:43.823856+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:44.824029+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:45.824180+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:46.824316+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:47.824491+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:48.824684+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348798976 unmapped: 71860224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:49.824833+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348798976 unmapped: 71860224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:50.824980+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348798976 unmapped: 71860224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:51.825110+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:52.825241+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:53.825386+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:54.825576+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:55.825736+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:56.825915+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:57.826129+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:58.826295+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:59.826470+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:00.826786+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:01.826919+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:02.827070+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:03.827191+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:04.827329+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:05.827476+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:06.827671+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:07.827839+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:08.828132+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:09.828300+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 71819264 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:10.828430+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 71819264 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:11.828597+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 71819264 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:12.828741+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348848128 unmapped: 71811072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:13.828939+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348848128 unmapped: 71811072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:14.829194+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348848128 unmapped: 71811072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:15.829331+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:16.829528+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:17.829799+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:18.829952+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:19.830126+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:20.830301+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 71786496 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:21.830485+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 71786496 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:22.830696+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:23.830901+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:24.831064+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:25.831235+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:26.831396+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:27.831605+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:28.831826+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 71770112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:29.832035+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 71770112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:30.832210+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 71761920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:31.832468+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 71761920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:32.832669+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 71761920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:33.832814+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 71761920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:34.832962+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 71753728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:35.833138+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 71745536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:36.833332+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 71745536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:37.833541+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 71745536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:38.833680+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:39.833832+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:40.833981+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:41.834172+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:42.834361+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:43.834508+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:44.834694+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:45.834889+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:46.835113+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:47.835378+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:48.835523+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:49.835663+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:50.835816+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:51.836059+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 71720960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:52.836294+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 71720960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:53.836516+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 71720960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:54.836727+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:55.836949+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:56.837131+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:57.837320+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:58.837510+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:59.837728+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 71704576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:00.837916+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 71704576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:01.838082+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 71704576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:02.838239+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:03.838397+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:04.838542+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:05.838687+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:06.838829+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:07.839010+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 71688192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:08.839138+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 71688192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:09.839254+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 71688192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:10.839395+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:11.839537+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:12.839680+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:13.839824+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:14.839963+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:15.840098+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:16.840271+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:17.840473+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:18.840622+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:19.841016+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:20.841215+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:21.841367+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:22.841526+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:23.841806+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:24.841990+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:25.842233+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:26.842391+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:27.842530+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:28.842678+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:29.842812+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:30.842969+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 71647232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:31.843146+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:32.843288+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:33.843429+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:34.843551+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:35.843704+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:36.843926+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 71622656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:37.844129+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 71622656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:38.844279+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 71622656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:39.844460+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:40.844611+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:41.844777+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:42.844952+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:43.845093+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:44.845232+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:45.845349+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:46.845496+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:47.845695+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:48.845825+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:49.845974+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:50.846109+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:51.846380+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 597 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 125 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:52.846559+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:53.846829+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:54.847033+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 71589888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:55.847173+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 71606272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:56.847399+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 71606272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:57.847589+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 71606272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:58.847719+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 71606272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:59.847850+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:00.848031+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:01.848185+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:02.848320+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:03.848478+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 71581696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:04.848626+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 71581696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:05.848770+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 71581696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:06.848958+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:07.849193+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:08.849349+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:09.849517+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:10.849701+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:11.849840+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:12.849965+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:13.850079+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:14.850191+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:15.850325+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:16.850465+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:17.850675+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:18.850838+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:19.850985+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:20.851151+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 71548928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:21.851350+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:22.851517+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:23.851743+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:24.852017+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:25.852176+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:26.852324+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:27.852564+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:28.852740+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349134848 unmapped: 71524352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:29.852999+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349134848 unmapped: 71524352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:30.853147+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:31.853303+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:32.853429+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:33.853586+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:34.853727+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:35.853874+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:36.854025+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 71507968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:37.854156+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 71507968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:38.854322+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 71507968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:39.854560+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 71507968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:40.854846+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 71499776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:41.855083+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 71499776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:42.855265+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 71499776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:43.855402+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 71499776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:44.855568+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349175808 unmapped: 71483392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:45.855731+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349175808 unmapped: 71483392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:46.855915+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349175808 unmapped: 71483392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f49000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:47.856138+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349175808 unmapped: 71483392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 545.605590820s of 546.408325195s, submitted: 26
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 301 ms_handle_reset con 0x555e80f49000 session 0x555e82221e00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:48.856318+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482690 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349200384 unmapped: 71458816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:49.856452+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349200384 unmapped: 71458816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e886ea800
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e9473000/0x0/0x4ffc00000, data 0xe5cd49/0xffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:50.856627+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 71434240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e9473000/0x0/0x4ffc00000, data 0xe5cd49/0xffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 302 ms_handle_reset con 0x555e886ea800 session 0x555e7ed9cf00
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:51.856835+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 71434240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:52.857019+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 71426048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:53.857218+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401510 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 71426048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:54.857368+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 71426048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 302 heartbeat osd_stat(store_statfs(0x4ea0e0000/0x0/0x4ffc00000, data 0x1ee91a/0x38d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:55.857484+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 71426048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:56.857612+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 71393280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:57.857906+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 71393280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 303 handle_osd_map epochs [304,304], i have 304, src has [1,304]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.549840927s of 10.031791687s, submitted: 62
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 ms_handle_reset con 0x555e80e03000 session 0x555e7ebdcb40
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:58.858048+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519807 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 69287936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:59.858188+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 69263360 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:00.858297+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:01.858426+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:02.858546+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:03.858680+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:04.858861+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:05.858971+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:06.859087+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:07.859289+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:08.859401+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:09.859720+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:10.859851+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:11.859973+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:12.860093+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:13.860229+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:14.860371+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:15.860516+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 69214208 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:16.860647+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:17.860811+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:18.860971+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:19.861085+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:20.861200+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:21.861317+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:22.861454+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:23.861662+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:24.861948+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:25.862067+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:26.862207+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:27.862356+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:28.862483+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:29.862600+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:30.862795+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:31.862937+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:32.863052+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:33.863172+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:34.863440+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:35.863705+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:36.863824+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:37.863992+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:38.864120+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:39.864398+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:40.864572+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:41.864770+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:42.865404+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:43.865531+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:44.865893+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:45.866039+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:46.866230+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:47.866406+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:48.866546+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:49.866802+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:50.866959+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:51.867147+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:52.867339+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:53.867487+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:54.867663+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:55.867877+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f49000
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.483512878s of 58.355438232s, submitted: 97
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:56.867985+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 305 ms_handle_reset con 0x555e80f49000 session 0x555e8003cd20
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f36000/0x0/0x4ffc00000, data 0x1f3acb/0x396000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:57.868124+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:58.868297+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414397 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:59.868418+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f36000/0x0/0x4ffc00000, data 0x1f3acb/0x396000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:00.868551+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:01.868696+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:02.868890+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:03.869015+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:04.869209+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:05.869336+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:06.869494+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:07.869649+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:08.869806+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:09.869940+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:10.870058+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:11.870174+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:12.870294+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:13.870397+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:14.870541+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:15.870677+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:16.870795+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:17.870956+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:18.871073+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:19.871214+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:20.871370+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:21.871520+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:22.871648+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:23.871785+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:24.871930+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:25.872091+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:26.872235+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:27.872475+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:28.872694+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:29.872839+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:30.872961+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:31.873073+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:32.873180+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:33.873298+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:34.873423+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:35.873597+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:36.874221+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 69042176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:37.874980+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:38.875712+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:39.875998+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:40.876569+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:41.877046+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:42.877443+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:43.877899+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:44.878172+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:45.878364+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:46.878528+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:47.878833+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:48.879086+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 69017600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:49.879213+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 69017600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:50.879567+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 69017600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:51.879714+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 69017600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:52.879880+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 69001216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:53.880038+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 69001216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:54.880186+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 68993024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:55.880342+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:56.880512+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:57.880680+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:58.880866+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:59.880994+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:00.881125+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:01.881245+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:02.881402+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:03.881568+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:04.881675+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:05.881877+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:06.882020+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:07.882195+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 68968448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:08.882515+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 68968448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:09.882647+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 68968448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:10.882774+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:11.882898+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351707136 unmapped: 68952064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:12.883093+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351707136 unmapped: 68952064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:13.883218+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351707136 unmapped: 68952064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:14.883348+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351707136 unmapped: 68952064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:15.883507+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:16.883929+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:17.884082+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:18.884223+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:19.884350+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:20.884468+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:21.884595+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:22.884782+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'config show' '{prefix=config show}'
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:23.886662+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:41:55 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:41:55 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:41:55 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:24.888717+0000)
Oct 02 09:41:55 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 68886528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:41:55 compute-0 ceph-osd[88314]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:41:55 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 09:41:55 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964923443' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 09:41:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:41:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 09:41:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 09:41:56 compute-0 nova_compute[260603]: 2025-10-02 09:41:56.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:41:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 09:41:56 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1290160590' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 09:41:56 compute-0 ceph-mon[74477]: from='client.23257 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:41:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3964923443' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 09:41:56 compute-0 ceph-mon[74477]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 09:41:56 compute-0 ceph-mon[74477]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 09:41:56 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1290160590' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 09:41:56 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23269 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 09:41:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325233682' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 09:41:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 09:41:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1640317546' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 09:41:57 compute-0 ceph-mon[74477]: from='client.23269 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:57 compute-0 ceph-mon[74477]: pgmap v3730: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1325233682' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 09:41:57 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1640317546' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 09:41:57 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 09:41:57 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3916675252' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 09:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:41:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:41:58 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 09:41:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 09:41:58 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226133847' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 09:41:58 compute-0 systemd[1]: Started Hostname Service.
Oct 02 09:41:58 compute-0 nova_compute[260603]: 2025-10-02 09:41:58.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:41:58 compute-0 nova_compute[260603]: 2025-10-02 09:41:58.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:41:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3916675252' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 09:41:58 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1226133847' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 09:41:58 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23279 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 09:41:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526654532' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 09:41:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 09:41:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860684791' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 09:41:59 compute-0 ceph-mon[74477]: from='client.23279 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:41:59 compute-0 ceph-mon[74477]: pgmap v3731: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:41:59 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/526654532' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 09:42:00 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23285 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:00 compute-0 nova_compute[260603]: 2025-10-02 09:42:00.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:00 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 09:42:00 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54654168' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 09:42:00 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2860684791' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 09:42:00 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/54654168' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 09:42:00 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23289 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:01 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23291 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:01 compute-0 nova_compute[260603]: 2025-10-02 09:42:01.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 09:42:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326160965' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 09:42:01 compute-0 ceph-mon[74477]: from='client.23285 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:01 compute-0 ceph-mon[74477]: from='client.23289 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:01 compute-0 ceph-mon[74477]: pgmap v3732: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:01 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1326160965' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 09:42:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 09:42:01 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50172308' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23297 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23299 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:42:02 compute-0 ceph-mon[74477]: from='client.23291 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:02 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/50172308' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 09:42:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 09:42:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943482219' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 09:42:03 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 02 09:42:03 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140101625' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 09:42:03 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23305 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:03 compute-0 ceph-mon[74477]: from='client.23297 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:03 compute-0 ceph-mon[74477]: from='client.23299 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:03 compute-0 ceph-mon[74477]: pgmap v3733: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/943482219' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 09:42:03 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2140101625' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 09:42:04 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23307 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 09:42:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220318636' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 09:42:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:04 compute-0 ceph-mon[74477]: from='client.23305 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:04 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3220318636' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 09:42:04 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 02 09:42:04 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842801231' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 09:42:05 compute-0 nova_compute[260603]: 2025-10-02 09:42:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:05 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 02 09:42:05 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239959145' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 09:42:05 compute-0 ovs-appctl[464209]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 09:42:05 compute-0 ovs-appctl[464213]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 09:42:05 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23315 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:05 compute-0 ovs-appctl[464219]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 09:42:05 compute-0 ceph-mon[74477]: from='client.23307 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:42:05 compute-0 ceph-mon[74477]: pgmap v3734: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1842801231' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 09:42:05 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1239959145' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 09:42:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 09:42:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470939801' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:42:06 compute-0 nova_compute[260603]: 2025-10-02 09:42:06.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 02 09:42:06 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152379171' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 09:42:06 compute-0 sudo[464435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:06 compute-0 sudo[464435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:06 compute-0 sudo[464435]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:06 compute-0 sudo[464477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:42:06 compute-0 sudo[464477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:06 compute-0 sudo[464477]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:06 compute-0 sudo[464534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:06 compute-0 sudo[464534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:06 compute-0 sudo[464534]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:06 compute-0 sudo[464569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 09:42:06 compute-0 sudo[464569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:06 compute-0 ceph-mon[74477]: from='client.23315 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:06 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/470939801' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:42:06 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3152379171' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 09:42:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 02 09:42:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945654822' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 09:42:07 compute-0 sudo[464569]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:42:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 02 09:42:07 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476701747' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 09:42:08 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:08 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:42:08 compute-0 ceph-mon[74477]: pgmap v3735: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1945654822' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 09:42:08 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2476701747' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 09:42:08 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23325 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:09 compute-0 sudo[464980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:09 compute-0 sudo[464980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:09 compute-0 sudo[464980]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:09 compute-0 sudo[465020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:42:09 compute-0 sudo[465020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:09 compute-0 sudo[465020]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:09 compute-0 podman[465008]: 2025-10-02 09:42:09.69335985 +0000 UTC m=+0.098163307 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 09:42:09 compute-0 podman[465007]: 2025-10-02 09:42:09.697668965 +0000 UTC m=+0.104148064 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:42:09 compute-0 sudo[465074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:09 compute-0 sudo[465074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:09 compute-0 sudo[465074]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:09 compute-0 sudo[465104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:42:09 compute-0 sudo[465104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:10 compute-0 nova_compute[260603]: 2025-10-02 09:42:10.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 02 09:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412654617' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 09:42:10 compute-0 sudo[465104]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:42:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:10 compute-0 ceph-mon[74477]: from='client.23325 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:10 compute-0 ceph-mon[74477]: pgmap v3736: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:10 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b8f4a3b4-5e80-420f-9a8d-9d7bc5ff5c18 does not exist
Oct 02 09:42:10 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3b29c2ae-33c5-4c49-8318-41989b5f358f does not exist
Oct 02 09:42:10 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 6eb47939-0112-41b0-beb1-abf53011cbd9 does not exist
Oct 02 09:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:42:10 compute-0 sudo[465236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:10 compute-0 sudo[465236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:10 compute-0 sudo[465236]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:10 compute-0 sudo[465272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:42:10 compute-0 sudo[465272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:10 compute-0 sudo[465272]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:10 compute-0 sudo[465308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:10 compute-0 sudo[465308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:10 compute-0 sudo[465308]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:10 compute-0 sudo[465353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:42:10 compute-0 sudo[465353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:10 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 02 09:42:10 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478216595' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 09:42:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:11 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23331 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:11 compute-0 podman[465507]: 2025-10-02 09:42:11.202424785 +0000 UTC m=+0.023230157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:42:11 compute-0 podman[465507]: 2025-10-02 09:42:11.355365935 +0000 UTC m=+0.176171277 container create f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:42:11 compute-0 nova_compute[260603]: 2025-10-02 09:42:11.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:11 compute-0 systemd[1]: Started libpod-conmon-f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620.scope.
Oct 02 09:42:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:42:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/412654617' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 09:42:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:42:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:42:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:42:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:42:11 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:42:11 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3478216595' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 09:42:11 compute-0 ceph-mon[74477]: pgmap v3737: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:11 compute-0 podman[465507]: 2025-10-02 09:42:11.589690961 +0000 UTC m=+0.410496323 container init f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:42:11 compute-0 podman[465507]: 2025-10-02 09:42:11.611356438 +0000 UTC m=+0.432161780 container start f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:42:11 compute-0 determined_meitner[465571]: 167 167
Oct 02 09:42:11 compute-0 systemd[1]: libpod-f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620.scope: Deactivated successfully.
Oct 02 09:42:11 compute-0 podman[465507]: 2025-10-02 09:42:11.701331348 +0000 UTC m=+0.522136720 container attach f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:42:11 compute-0 podman[465507]: 2025-10-02 09:42:11.701820773 +0000 UTC m=+0.522626125 container died f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:42:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 02 09:42:11 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757959851' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 09:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2956b8f4807e8570da448916cb6075a218608febc634cd476e07c07c667fe466-merged.mount: Deactivated successfully.
Oct 02 09:42:12 compute-0 podman[465507]: 2025-10-02 09:42:12.084173276 +0000 UTC m=+0.904978618 container remove f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:42:12 compute-0 systemd[1]: libpod-conmon-f023a3fd93f8bcfcf7a8908f56f1558287ab856d85dadad9c562761019ad3620.scope: Deactivated successfully.
Oct 02 09:42:12 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23335 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:12 compute-0 podman[465694]: 2025-10-02 09:42:12.312392384 +0000 UTC m=+0.090781736 container create a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:42:12 compute-0 podman[465694]: 2025-10-02 09:42:12.251069209 +0000 UTC m=+0.029458601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:42:12 compute-0 systemd[1]: Started libpod-conmon-a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940.scope.
Oct 02 09:42:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bce268e7455a45137a67734028a1231fef979dd314995568f405aa8e3df4a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bce268e7455a45137a67734028a1231fef979dd314995568f405aa8e3df4a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bce268e7455a45137a67734028a1231fef979dd314995568f405aa8e3df4a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bce268e7455a45137a67734028a1231fef979dd314995568f405aa8e3df4a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bce268e7455a45137a67734028a1231fef979dd314995568f405aa8e3df4a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:12 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23337 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:12 compute-0 podman[465694]: 2025-10-02 09:42:12.52485547 +0000 UTC m=+0.303244842 container init a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 02 09:42:12 compute-0 podman[465694]: 2025-10-02 09:42:12.530338292 +0000 UTC m=+0.308727644 container start a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 09:42:12 compute-0 podman[465694]: 2025-10-02 09:42:12.627747435 +0000 UTC m=+0.406136787 container attach a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:42:12 compute-0 ceph-mon[74477]: from='client.23331 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:12 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3757959851' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 09:42:12 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 02 09:42:12 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958912962' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 09:42:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:13 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 02 09:42:13 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951131785' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 09:42:13 compute-0 pensive_carson[465750]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:42:13 compute-0 pensive_carson[465750]: --> relative data size: 1.0
Oct 02 09:42:13 compute-0 pensive_carson[465750]: --> All data devices are unavailable
Oct 02 09:42:13 compute-0 systemd[1]: libpod-a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940.scope: Deactivated successfully.
Oct 02 09:42:13 compute-0 podman[465694]: 2025-10-02 09:42:13.571852253 +0000 UTC m=+1.350241605 container died a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:42:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-10bce268e7455a45137a67734028a1231fef979dd314995568f405aa8e3df4a4-merged.mount: Deactivated successfully.
Oct 02 09:42:13 compute-0 podman[465694]: 2025-10-02 09:42:13.644361667 +0000 UTC m=+1.422751019 container remove a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 09:42:13 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23343 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:13 compute-0 systemd[1]: libpod-conmon-a226ffd2f3bb897dfdbe864e423063e72089d610e63a46defc4905d622039940.scope: Deactivated successfully.
Oct 02 09:42:13 compute-0 sudo[465353]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:13 compute-0 sudo[466100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:13 compute-0 sudo[466100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:13 compute-0 sudo[466100]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:13 compute-0 ceph-mon[74477]: from='client.23335 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:13 compute-0 ceph-mon[74477]: from='client.23337 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2958912962' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 09:42:13 compute-0 ceph-mon[74477]: pgmap v3738: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:13 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1951131785' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 09:42:13 compute-0 sudo[466147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:42:13 compute-0 sudo[466147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:13 compute-0 sudo[466147]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:13 compute-0 sudo[466200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:13 compute-0 sudo[466200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:13 compute-0 sudo[466200]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:13 compute-0 sudo[466235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:42:13 compute-0 sudo[466235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23345 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:42:14 compute-0 podman[466343]: 2025-10-02 09:42:14.284242234 +0000 UTC m=+0.047598678 container create 24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:42:14 compute-0 systemd[1]: Started libpod-conmon-24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568.scope.
Oct 02 09:42:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:42:14 compute-0 podman[466343]: 2025-10-02 09:42:14.263753424 +0000 UTC m=+0.027109888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:42:14 compute-0 podman[466343]: 2025-10-02 09:42:14.365054299 +0000 UTC m=+0.128410753 container init 24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:42:14 compute-0 podman[466343]: 2025-10-02 09:42:14.379451778 +0000 UTC m=+0.142808232 container start 24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 09:42:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 09:42:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704785021' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:42:14 compute-0 podman[466343]: 2025-10-02 09:42:14.383913228 +0000 UTC m=+0.147269672 container attach 24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:42:14 compute-0 dreamy_wilbur[466358]: 167 167
Oct 02 09:42:14 compute-0 podman[466343]: 2025-10-02 09:42:14.387711986 +0000 UTC m=+0.151068430 container died 24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:42:14 compute-0 systemd[1]: libpod-24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568.scope: Deactivated successfully.
Oct 02 09:42:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3352ea46b3c7c5a58e9e14e1130e13883db59a216de95188dea54bdb928a1069-merged.mount: Deactivated successfully.
Oct 02 09:42:14 compute-0 podman[466343]: 2025-10-02 09:42:14.429708278 +0000 UTC m=+0.193064732 container remove 24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:42:14 compute-0 systemd[1]: libpod-conmon-24e75e3f53ca051f27609e22d8d569990b77b22a0532fcac5995d1ef4793d568.scope: Deactivated successfully.
Oct 02 09:42:14 compute-0 podman[466354]: 2025-10-02 09:42:14.439888796 +0000 UTC m=+0.106380775 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 09:42:14 compute-0 podman[466357]: 2025-10-02 09:42:14.446584455 +0000 UTC m=+0.113223748 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 09:42:14 compute-0 podman[466445]: 2025-10-02 09:42:14.586149144 +0000 UTC m=+0.044015076 container create 941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 02 09:42:14 compute-0 systemd[1]: Started libpod-conmon-941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae.scope.
Oct 02 09:42:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e3d6d36d7fe14fa999e2a1b182584f0f499878a96393353a440b2ff2664d91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e3d6d36d7fe14fa999e2a1b182584f0f499878a96393353a440b2ff2664d91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e3d6d36d7fe14fa999e2a1b182584f0f499878a96393353a440b2ff2664d91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e3d6d36d7fe14fa999e2a1b182584f0f499878a96393353a440b2ff2664d91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:14 compute-0 podman[466445]: 2025-10-02 09:42:14.566030005 +0000 UTC m=+0.023895937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:42:14 compute-0 podman[466445]: 2025-10-02 09:42:14.673636726 +0000 UTC m=+0.131502668 container init 941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:42:14 compute-0 podman[466445]: 2025-10-02 09:42:14.68398034 +0000 UTC m=+0.141846252 container start 941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:42:14 compute-0 podman[466445]: 2025-10-02 09:42:14.690901976 +0000 UTC m=+0.148767948 container attach 941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 09:42:14 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 02 09:42:14 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495170003' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 09:42:14 compute-0 ceph-mon[74477]: from='client.23343 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/704785021' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:42:14 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1495170003' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 09:42:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:15 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23351 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:15 compute-0 nova_compute[260603]: 2025-10-02 09:42:15.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:15 compute-0 focused_villani[466462]: {
Oct 02 09:42:15 compute-0 focused_villani[466462]:     "0": [
Oct 02 09:42:15 compute-0 focused_villani[466462]:         {
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "devices": [
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "/dev/loop3"
Oct 02 09:42:15 compute-0 focused_villani[466462]:             ],
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_name": "ceph_lv0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_size": "21470642176",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "name": "ceph_lv0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "tags": {
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cluster_name": "ceph",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.crush_device_class": "",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.encrypted": "0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osd_id": "0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.type": "block",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.vdo": "0"
Oct 02 09:42:15 compute-0 focused_villani[466462]:             },
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "type": "block",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "vg_name": "ceph_vg0"
Oct 02 09:42:15 compute-0 focused_villani[466462]:         }
Oct 02 09:42:15 compute-0 focused_villani[466462]:     ],
Oct 02 09:42:15 compute-0 focused_villani[466462]:     "1": [
Oct 02 09:42:15 compute-0 focused_villani[466462]:         {
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "devices": [
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "/dev/loop4"
Oct 02 09:42:15 compute-0 focused_villani[466462]:             ],
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_name": "ceph_lv1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_size": "21470642176",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "name": "ceph_lv1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "tags": {
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cluster_name": "ceph",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.crush_device_class": "",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.encrypted": "0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osd_id": "1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.type": "block",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.vdo": "0"
Oct 02 09:42:15 compute-0 focused_villani[466462]:             },
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "type": "block",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "vg_name": "ceph_vg1"
Oct 02 09:42:15 compute-0 focused_villani[466462]:         }
Oct 02 09:42:15 compute-0 focused_villani[466462]:     ],
Oct 02 09:42:15 compute-0 focused_villani[466462]:     "2": [
Oct 02 09:42:15 compute-0 focused_villani[466462]:         {
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "devices": [
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "/dev/loop5"
Oct 02 09:42:15 compute-0 focused_villani[466462]:             ],
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_name": "ceph_lv2",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_size": "21470642176",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "name": "ceph_lv2",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "tags": {
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.cluster_name": "ceph",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.crush_device_class": "",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.encrypted": "0",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osd_id": "2",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.type": "block",
Oct 02 09:42:15 compute-0 focused_villani[466462]:                 "ceph.vdo": "0"
Oct 02 09:42:15 compute-0 focused_villani[466462]:             },
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "type": "block",
Oct 02 09:42:15 compute-0 focused_villani[466462]:             "vg_name": "ceph_vg2"
Oct 02 09:42:15 compute-0 focused_villani[466462]:         }
Oct 02 09:42:15 compute-0 focused_villani[466462]:     ]
Oct 02 09:42:15 compute-0 focused_villani[466462]: }
Oct 02 09:42:15 compute-0 systemd[1]: libpod-941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae.scope: Deactivated successfully.
Oct 02 09:42:15 compute-0 podman[466445]: 2025-10-02 09:42:15.437053821 +0000 UTC m=+0.894919733 container died 941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:42:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6e3d6d36d7fe14fa999e2a1b182584f0f499878a96393353a440b2ff2664d91-merged.mount: Deactivated successfully.
Oct 02 09:42:15 compute-0 podman[466445]: 2025-10-02 09:42:15.490605794 +0000 UTC m=+0.948471706 container remove 941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:42:15 compute-0 systemd[1]: libpod-conmon-941160cc649b93ff7e6fbe8a12af1aba3d0c5cac424d829331c24cfeac9666ae.scope: Deactivated successfully.
Oct 02 09:42:15 compute-0 sudo[466235]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:15 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23353 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:15 compute-0 sudo[466530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:15 compute-0 sudo[466530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:15 compute-0 sudo[466530]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:15 compute-0 sudo[466557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:42:15 compute-0 sudo[466557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:15 compute-0 sudo[466557]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:15 compute-0 sudo[466592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:15 compute-0 sudo[466592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:15 compute-0 sudo[466592]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:15 compute-0 sudo[466630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:42:15 compute-0 sudo[466630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:15 compute-0 ceph-mon[74477]: from='client.23345 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:15 compute-0 ceph-mon[74477]: pgmap v3739: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:15 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 09:42:15 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033559267' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 09:42:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:16 compute-0 podman[466711]: 2025-10-02 09:42:16.131936116 +0000 UTC m=+0.040214528 container create 2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:42:16 compute-0 systemd[1]: Started libpod-conmon-2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c.scope.
Oct 02 09:42:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:42:16 compute-0 podman[466711]: 2025-10-02 09:42:16.203241623 +0000 UTC m=+0.111520055 container init 2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:42:16 compute-0 podman[466711]: 2025-10-02 09:42:16.112336463 +0000 UTC m=+0.020614875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:42:16 compute-0 podman[466711]: 2025-10-02 09:42:16.209712075 +0000 UTC m=+0.117990487 container start 2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:42:16 compute-0 podman[466711]: 2025-10-02 09:42:16.212475531 +0000 UTC m=+0.120753983 container attach 2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:42:16 compute-0 sad_ardinghelli[466748]: 167 167
Oct 02 09:42:16 compute-0 systemd[1]: libpod-2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c.scope: Deactivated successfully.
Oct 02 09:42:16 compute-0 podman[466711]: 2025-10-02 09:42:16.215775255 +0000 UTC m=+0.124053687 container died 2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:42:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5a67b88bab1dd27d2604a9aefd25a9e2a4f16d81d7ded463737e87fc5e5a3d3-merged.mount: Deactivated successfully.
Oct 02 09:42:16 compute-0 podman[466711]: 2025-10-02 09:42:16.256455475 +0000 UTC m=+0.164733897 container remove 2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 09:42:16 compute-0 systemd[1]: libpod-conmon-2efbd40a6ce998029609d6efc08771bcec8c3e10e7849e65488a07e40f12de2c.scope: Deactivated successfully.
Oct 02 09:42:16 compute-0 nova_compute[260603]: 2025-10-02 09:42:16.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 02 09:42:16 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412774349' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 09:42:16 compute-0 podman[466774]: 2025-10-02 09:42:16.488553094 +0000 UTC m=+0.056186435 container create aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_robinson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:42:16 compute-0 systemd[1]: Started libpod-conmon-aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0.scope.
Oct 02 09:42:16 compute-0 podman[466774]: 2025-10-02 09:42:16.467085574 +0000 UTC m=+0.034718935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:42:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36300cd7c9fc6daf674fa5b0df8555fb34fcd7054ab26de4371ddf10bc43402c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36300cd7c9fc6daf674fa5b0df8555fb34fcd7054ab26de4371ddf10bc43402c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36300cd7c9fc6daf674fa5b0df8555fb34fcd7054ab26de4371ddf10bc43402c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36300cd7c9fc6daf674fa5b0df8555fb34fcd7054ab26de4371ddf10bc43402c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:42:16 compute-0 podman[466774]: 2025-10-02 09:42:16.597551809 +0000 UTC m=+0.165185150 container init aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_robinson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:42:16 compute-0 podman[466774]: 2025-10-02 09:42:16.606809029 +0000 UTC m=+0.174442350 container start aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_robinson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:42:16 compute-0 podman[466774]: 2025-10-02 09:42:16.614201609 +0000 UTC m=+0.181834930 container attach aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_robinson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:42:16 compute-0 ceph-mon[74477]: from='client.23351 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:16 compute-0 ceph-mon[74477]: from='client.23353 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:42:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3033559267' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 09:42:16 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1412774349' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 09:42:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:17 compute-0 objective_robinson[466806]: {
Oct 02 09:42:17 compute-0 objective_robinson[466806]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "osd_id": 2,
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "type": "bluestore"
Oct 02 09:42:17 compute-0 objective_robinson[466806]:     },
Oct 02 09:42:17 compute-0 objective_robinson[466806]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "osd_id": 1,
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "type": "bluestore"
Oct 02 09:42:17 compute-0 objective_robinson[466806]:     },
Oct 02 09:42:17 compute-0 objective_robinson[466806]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "osd_id": 0,
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:42:17 compute-0 objective_robinson[466806]:         "type": "bluestore"
Oct 02 09:42:17 compute-0 objective_robinson[466806]:     }
Oct 02 09:42:17 compute-0 objective_robinson[466806]: }
Oct 02 09:42:17 compute-0 podman[467078]: 2025-10-02 09:42:17.604217322 +0000 UTC m=+0.033187088 container died aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 09:42:17 compute-0 ceph-mon[74477]: pgmap v3740: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:17 compute-0 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 09:42:18 compute-0 systemd[1]: libpod-aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0.scope: Deactivated successfully.
Oct 02 09:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-36300cd7c9fc6daf674fa5b0df8555fb34fcd7054ab26de4371ddf10bc43402c-merged.mount: Deactivated successfully.
Oct 02 09:42:18 compute-0 podman[467078]: 2025-10-02 09:42:18.276237892 +0000 UTC m=+0.705207668 container remove aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_robinson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:42:18 compute-0 systemd[1]: libpod-conmon-aac8ea7fa9ad9541bcc5d3a90694f1f27d03f51e26692722f435a46d107f62d0.scope: Deactivated successfully.
Oct 02 09:42:18 compute-0 sudo[466630]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:42:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:18 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:42:18 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5a16b8f3-d214-4728-b601-9cd9f70e076f does not exist
Oct 02 09:42:18 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ddc77e15-0d9b-4602-ac8d-9e2182a5462a does not exist
Oct 02 09:42:18 compute-0 sudo[467171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:42:18 compute-0 sudo[467171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:18 compute-0 sudo[467171]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:18 compute-0 sudo[467201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:42:18 compute-0 sudo[467201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:42:18 compute-0 sudo[467201]: pam_unix(sudo:session): session closed for user root
Oct 02 09:42:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:19 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:42:19 compute-0 ceph-mon[74477]: pgmap v3741: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:20 compute-0 nova_compute[260603]: 2025-10-02 09:42:20.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:21 compute-0 ceph-mon[74477]: pgmap v3742: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:21 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 09:42:21 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 09:42:21 compute-0 nova_compute[260603]: 2025-10-02 09:42:21.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:42:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/503596140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:42:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:42:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/503596140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:42:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/503596140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:42:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/503596140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:42:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:23 compute-0 ceph-mon[74477]: pgmap v3743: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:25 compute-0 nova_compute[260603]: 2025-10-02 09:42:25.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:25 compute-0 ceph-mon[74477]: pgmap v3744: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:26 compute-0 nova_compute[260603]: 2025-10-02 09:42:26.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:42:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:42:27 compute-0 ceph-mon[74477]: pgmap v3745: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:42:28
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:42:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:29 compute-0 ceph-mon[74477]: pgmap v3746: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:30 compute-0 nova_compute[260603]: 2025-10-02 09:42:30.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:30 compute-0 nova_compute[260603]: 2025-10-02 09:42:30.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:42:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:31 compute-0 ceph-mon[74477]: pgmap v3747: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:31 compute-0 nova_compute[260603]: 2025-10-02 09:42:31.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:33 compute-0 nova_compute[260603]: 2025-10-02 09:42:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:42:33 compute-0 nova_compute[260603]: 2025-10-02 09:42:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:42:33 compute-0 nova_compute[260603]: 2025-10-02 09:42:33.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:42:33 compute-0 nova_compute[260603]: 2025-10-02 09:42:33.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:42:33 compute-0 nova_compute[260603]: 2025-10-02 09:42:33.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:42:33 compute-0 nova_compute[260603]: 2025-10-02 09:42:33.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:42:33 compute-0 nova_compute[260603]: 2025-10-02 09:42:33.558 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:42:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:42:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3818170485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:42:33 compute-0 nova_compute[260603]: 2025-10-02 09:42:33.992 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:42:34 compute-0 ceph-mon[74477]: pgmap v3748: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3818170485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:42:34 compute-0 nova_compute[260603]: 2025-10-02 09:42:34.123 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:42:34 compute-0 nova_compute[260603]: 2025-10-02 09:42:34.124 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3385MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:42:34 compute-0 nova_compute[260603]: 2025-10-02 09:42:34.124 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:42:34 compute-0 nova_compute[260603]: 2025-10-02 09:42:34.124 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:42:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:42:34.882 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:42:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:42:34.883 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:42:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:42:34.883 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:42:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:35 compute-0 ceph-mon[74477]: pgmap v3749: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.346 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.347 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.368 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:42:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:42:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690449585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.825 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.833 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.872 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.874 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:42:35 compute-0 nova_compute[260603]: 2025-10-02 09:42:35.875 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:42:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2690449585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:42:36 compute-0 nova_compute[260603]: 2025-10-02 09:42:36.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:37 compute-0 ceph-mon[74477]: pgmap v3750: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:37 compute-0 nova_compute[260603]: 2025-10-02 09:42:37.875 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:42:37 compute-0 nova_compute[260603]: 2025-10-02 09:42:37.875 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:42:37 compute-0 nova_compute[260603]: 2025-10-02 09:42:37.876 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:42:37 compute-0 nova_compute[260603]: 2025-10-02 09:42:37.876 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:42:37 compute-0 nova_compute[260603]: 2025-10-02 09:42:37.928 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:42:37 compute-0 nova_compute[260603]: 2025-10-02 09:42:37.928 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:42:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:42:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:42:39 compute-0 podman[467412]: 2025-10-02 09:42:39.980940309 +0000 UTC m=+0.050464158 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 09:42:40 compute-0 podman[467411]: 2025-10-02 09:42:40.028573446 +0000 UTC m=+0.097833556 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:42:40 compute-0 nova_compute[260603]: 2025-10-02 09:42:40.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:40 compute-0 ceph-mon[74477]: pgmap v3751: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:41 compute-0 nova_compute[260603]: 2025-10-02 09:42:41.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:41 compute-0 ceph-mon[74477]: pgmap v3752: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:42 compute-0 nova_compute[260603]: 2025-10-02 09:42:42.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:42:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:44 compute-0 ceph-mon[74477]: pgmap v3753: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:44 compute-0 podman[467451]: 2025-10-02 09:42:44.619524422 +0000 UTC m=+0.056578598 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 09:42:44 compute-0 podman[467450]: 2025-10-02 09:42:44.619798911 +0000 UTC m=+0.060236613 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 09:42:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:45 compute-0 ceph-mon[74477]: pgmap v3754: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:45 compute-0 nova_compute[260603]: 2025-10-02 09:42:45.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:46 compute-0 nova_compute[260603]: 2025-10-02 09:42:46.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:48 compute-0 ceph-mon[74477]: pgmap v3755: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:49 compute-0 ceph-mon[74477]: pgmap v3756: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:50 compute-0 nova_compute[260603]: 2025-10-02 09:42:50.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:51 compute-0 nova_compute[260603]: 2025-10-02 09:42:51.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:51 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 09:42:51 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 09:42:52 compute-0 ceph-mon[74477]: pgmap v3757: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:52 compute-0 nova_compute[260603]: 2025-10-02 09:42:52.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:42:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:53 compute-0 ceph-mon[74477]: pgmap v3758: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:55 compute-0 ceph-mon[74477]: pgmap v3759: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:55 compute-0 nova_compute[260603]: 2025-10-02 09:42:55.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:42:56 compute-0 nova_compute[260603]: 2025-10-02 09:42:56.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:42:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:57 compute-0 ceph-mon[74477]: pgmap v3760: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:42:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:42:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:42:59 compute-0 ceph-mon[74477]: pgmap v3761: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:00 compute-0 nova_compute[260603]: 2025-10-02 09:43:00.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:00 compute-0 nova_compute[260603]: 2025-10-02 09:43:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:00 compute-0 nova_compute[260603]: 2025-10-02 09:43:00.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:43:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:01 compute-0 ceph-mon[74477]: pgmap v3762: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:01 compute-0 nova_compute[260603]: 2025-10-02 09:43:01.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:02 compute-0 sudo[459607]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:02 compute-0 sshd-session[459606]: Received disconnect from 192.168.122.10 port 56220:11: disconnected by user
Oct 02 09:43:02 compute-0 sshd-session[459606]: Disconnected from user zuul 192.168.122.10 port 56220
Oct 02 09:43:02 compute-0 sshd-session[459603]: pam_unix(sshd:session): session closed for user zuul
Oct 02 09:43:02 compute-0 systemd-logind[787]: Session 57 logged out. Waiting for processes to exit.
Oct 02 09:43:02 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Oct 02 09:43:02 compute-0 systemd[1]: session-57.scope: Consumed 2min 51.957s CPU time, 1.1G memory peak, read 509.0M from disk, written 404.6M to disk.
Oct 02 09:43:02 compute-0 systemd-logind[787]: Removed session 57.
Oct 02 09:43:02 compute-0 sshd-session[467494]: Accepted publickey for zuul from 192.168.122.10 port 56964 ssh2: ECDSA SHA256:QEnwbgBR1jglQLPp4vwsTS2MMzDakrR2dLJ/eEaCKUI
Oct 02 09:43:02 compute-0 systemd-logind[787]: New session 58 of user zuul.
Oct 02 09:43:02 compute-0 systemd[1]: Started Session 58 of User zuul.
Oct 02 09:43:02 compute-0 sshd-session[467494]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 09:43:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:02 compute-0 sudo[467498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-02-vaofoql.tar.xz
Oct 02 09:43:02 compute-0 sudo[467498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:43:03 compute-0 sudo[467498]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:03 compute-0 sshd-session[467497]: Received disconnect from 192.168.122.10 port 56964:11: disconnected by user
Oct 02 09:43:03 compute-0 sshd-session[467497]: Disconnected from user zuul 192.168.122.10 port 56964
Oct 02 09:43:03 compute-0 sshd-session[467494]: pam_unix(sshd:session): session closed for user zuul
Oct 02 09:43:03 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Oct 02 09:43:03 compute-0 systemd-logind[787]: Session 58 logged out. Waiting for processes to exit.
Oct 02 09:43:03 compute-0 systemd-logind[787]: Removed session 58.
Oct 02 09:43:03 compute-0 sshd-session[467523]: Accepted publickey for zuul from 192.168.122.10 port 56968 ssh2: ECDSA SHA256:QEnwbgBR1jglQLPp4vwsTS2MMzDakrR2dLJ/eEaCKUI
Oct 02 09:43:03 compute-0 systemd-logind[787]: New session 59 of user zuul.
Oct 02 09:43:03 compute-0 systemd[1]: Started Session 59 of User zuul.
Oct 02 09:43:03 compute-0 ceph-mon[74477]: pgmap v3763: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:03 compute-0 sshd-session[467523]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 09:43:03 compute-0 sudo[467527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 02 09:43:03 compute-0 sudo[467527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:43:03 compute-0 sudo[467527]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:03 compute-0 sshd-session[467526]: Received disconnect from 192.168.122.10 port 56968:11: disconnected by user
Oct 02 09:43:03 compute-0 sshd-session[467526]: Disconnected from user zuul 192.168.122.10 port 56968
Oct 02 09:43:03 compute-0 sshd-session[467523]: pam_unix(sshd:session): session closed for user zuul
Oct 02 09:43:03 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct 02 09:43:03 compute-0 systemd-logind[787]: Session 59 logged out. Waiting for processes to exit.
Oct 02 09:43:03 compute-0 systemd-logind[787]: Removed session 59.
Oct 02 09:43:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:05 compute-0 nova_compute[260603]: 2025-10-02 09:43:05.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:05 compute-0 ceph-mon[74477]: pgmap v3764: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:06 compute-0 nova_compute[260603]: 2025-10-02 09:43:06.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:07 compute-0 ceph-mon[74477]: pgmap v3765: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:10 compute-0 ceph-mon[74477]: pgmap v3766: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:10 compute-0 nova_compute[260603]: 2025-10-02 09:43:10.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:11 compute-0 podman[467553]: 2025-10-02 09:43:11.000549931 +0000 UTC m=+0.066121616 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 09:43:11 compute-0 podman[467552]: 2025-10-02 09:43:11.031694134 +0000 UTC m=+0.100266682 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:43:11 compute-0 ceph-mon[74477]: pgmap v3767: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:11 compute-0 nova_compute[260603]: 2025-10-02 09:43:11.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:13 compute-0 ceph-mon[74477]: pgmap v3768: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:15 compute-0 podman[467592]: 2025-10-02 09:43:15.006012791 +0000 UTC m=+0.078026548 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 09:43:15 compute-0 podman[467593]: 2025-10-02 09:43:15.041664565 +0000 UTC m=+0.100415568 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 09:43:15 compute-0 nova_compute[260603]: 2025-10-02 09:43:15.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:16 compute-0 ceph-mon[74477]: pgmap v3769: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:16 compute-0 nova_compute[260603]: 2025-10-02 09:43:16.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:17 compute-0 ceph-mon[74477]: pgmap v3770: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:18 compute-0 sudo[467632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:18 compute-0 sudo[467632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:18 compute-0 sudo[467632]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:18 compute-0 sudo[467657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:43:18 compute-0 sudo[467657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:18 compute-0 sudo[467657]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:18 compute-0 sudo[467682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:18 compute-0 sudo[467682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:18 compute-0 sudo[467682]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:18 compute-0 sudo[467707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 09:43:18 compute-0 sudo[467707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:19 compute-0 ceph-mon[74477]: pgmap v3771: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:19 compute-0 podman[467804]: 2025-10-02 09:43:19.198019547 +0000 UTC m=+0.080187515 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:43:19 compute-0 podman[467804]: 2025-10-02 09:43:19.31014973 +0000 UTC m=+0.192317668 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:43:20 compute-0 sudo[467707]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:43:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:20 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:43:20 compute-0 nova_compute[260603]: 2025-10-02 09:43:20.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:20 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:20 compute-0 sudo[467962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:20 compute-0 sudo[467962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:20 compute-0 sudo[467962]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:20 compute-0 sudo[467987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:43:20 compute-0 sudo[467987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:20 compute-0 sudo[467987]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:20 compute-0 sudo[468012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:20 compute-0 sudo[468012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:20 compute-0 sudo[468012]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:20 compute-0 sudo[468037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:43:20 compute-0 sudo[468037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:20 compute-0 sudo[468037]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 09:43:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:43:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:43:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:43:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8b148435-a074-40d9-93ef-2875bd0d525b does not exist
Oct 02 09:43:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5bbb946b-f6b8-407e-aa5e-320ddc6717c4 does not exist
Oct 02 09:43:21 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev f41cf005-177b-4bd0-83fd-1501a791c193 does not exist
Oct 02 09:43:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:43:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:43:21 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:43:21 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:43:21 compute-0 sudo[468093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:21 compute-0 sudo[468093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:21 compute-0 sudo[468093]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:21 compute-0 ceph-mon[74477]: pgmap v3772: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:43:21 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:43:21 compute-0 sudo[468118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:43:21 compute-0 sudo[468118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:21 compute-0 sudo[468118]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:21 compute-0 sudo[468143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:21 compute-0 sudo[468143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:21 compute-0 sudo[468143]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:21 compute-0 sudo[468168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:43:21 compute-0 sudo[468168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:21 compute-0 nova_compute[260603]: 2025-10-02 09:43:21.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:21 compute-0 podman[468234]: 2025-10-02 09:43:21.585427808 +0000 UTC m=+0.040557768 container create 9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cray, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:43:21 compute-0 systemd[1]: Started libpod-conmon-9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158.scope.
Oct 02 09:43:21 compute-0 podman[468234]: 2025-10-02 09:43:21.566815027 +0000 UTC m=+0.021945007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:43:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:43:21 compute-0 podman[468234]: 2025-10-02 09:43:21.696444845 +0000 UTC m=+0.151574835 container init 9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cray, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:43:21 compute-0 podman[468234]: 2025-10-02 09:43:21.704179787 +0000 UTC m=+0.159309747 container start 9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:43:21 compute-0 heuristic_cray[468251]: 167 167
Oct 02 09:43:21 compute-0 systemd[1]: libpod-9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158.scope: Deactivated successfully.
Oct 02 09:43:21 compute-0 podman[468234]: 2025-10-02 09:43:21.720313281 +0000 UTC m=+0.175443271 container attach 9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:43:21 compute-0 podman[468234]: 2025-10-02 09:43:21.720903759 +0000 UTC m=+0.176033719 container died 9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad30f5f9c4eb8917705a8c1a6cf99fe49ebcd9a45f22a468e813a34733e5b80-merged.mount: Deactivated successfully.
Oct 02 09:43:21 compute-0 podman[468234]: 2025-10-02 09:43:21.802510859 +0000 UTC m=+0.257640819 container remove 9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cray, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:43:21 compute-0 systemd[1]: libpod-conmon-9536af21e3b7f238e7d9f58f21d2a711db19855cf38519f2ce2d9cc336c67158.scope: Deactivated successfully.
Oct 02 09:43:21 compute-0 podman[468277]: 2025-10-02 09:43:21.950778179 +0000 UTC m=+0.041551039 container create c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 09:43:21 compute-0 systemd[1]: Started libpod-conmon-c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4.scope.
Oct 02 09:43:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebf6cbf10568fb0aff85a41881fe7d5b05af6f396c0b7875fc01095d148af81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebf6cbf10568fb0aff85a41881fe7d5b05af6f396c0b7875fc01095d148af81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebf6cbf10568fb0aff85a41881fe7d5b05af6f396c0b7875fc01095d148af81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebf6cbf10568fb0aff85a41881fe7d5b05af6f396c0b7875fc01095d148af81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebf6cbf10568fb0aff85a41881fe7d5b05af6f396c0b7875fc01095d148af81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:22 compute-0 podman[468277]: 2025-10-02 09:43:21.933574692 +0000 UTC m=+0.024347572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:43:22 compute-0 podman[468277]: 2025-10-02 09:43:22.149498666 +0000 UTC m=+0.240271556 container init c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:43:22 compute-0 podman[468277]: 2025-10-02 09:43:22.155585047 +0000 UTC m=+0.246357907 container start c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:43:22 compute-0 podman[468277]: 2025-10-02 09:43:22.212577087 +0000 UTC m=+0.303349947 container attach c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:43:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:43:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3934323234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:43:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:43:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3934323234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:43:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3934323234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:43:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/3934323234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:43:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:23 compute-0 stoic_blackwell[468293]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:43:23 compute-0 stoic_blackwell[468293]: --> relative data size: 1.0
Oct 02 09:43:23 compute-0 stoic_blackwell[468293]: --> All data devices are unavailable
Oct 02 09:43:23 compute-0 systemd[1]: libpod-c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4.scope: Deactivated successfully.
Oct 02 09:43:23 compute-0 podman[468277]: 2025-10-02 09:43:23.194021351 +0000 UTC m=+1.284794211 container died c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 09:43:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bebf6cbf10568fb0aff85a41881fe7d5b05af6f396c0b7875fc01095d148af81-merged.mount: Deactivated successfully.
Oct 02 09:43:23 compute-0 podman[468277]: 2025-10-02 09:43:23.351143179 +0000 UTC m=+1.441916039 container remove c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:43:23 compute-0 ceph-mon[74477]: pgmap v3773: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:23 compute-0 systemd[1]: libpod-conmon-c75d9c1254cf2e35cab65338d4bdc461b6a36ad34213cc0820006b7ae15bd0c4.scope: Deactivated successfully.
Oct 02 09:43:23 compute-0 sudo[468168]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:23 compute-0 sudo[468336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:23 compute-0 sudo[468336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:23 compute-0 sudo[468336]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:23 compute-0 sudo[468361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:43:23 compute-0 sudo[468361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:23 compute-0 sudo[468361]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:23 compute-0 sudo[468386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:23 compute-0 sudo[468386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:23 compute-0 sudo[468386]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:23 compute-0 sudo[468411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:43:23 compute-0 sudo[468411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:23 compute-0 podman[468477]: 2025-10-02 09:43:23.953629668 +0000 UTC m=+0.040989001 container create 6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:43:23 compute-0 systemd[1]: Started libpod-conmon-6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07.scope.
Oct 02 09:43:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:43:24 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:43:24 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:43:24 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:43:24 compute-0 podman[468477]: 2025-10-02 09:43:24.026492043 +0000 UTC m=+0.113851366 container init 6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:43:24 compute-0 podman[468477]: 2025-10-02 09:43:23.938063292 +0000 UTC m=+0.025422635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:43:24 compute-0 podman[468477]: 2025-10-02 09:43:24.034893685 +0000 UTC m=+0.122252998 container start 6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:43:24 compute-0 podman[468477]: 2025-10-02 09:43:24.038073705 +0000 UTC m=+0.125433028 container attach 6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:43:24 compute-0 systemd[1]: libpod-6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07.scope: Deactivated successfully.
Oct 02 09:43:24 compute-0 objective_brahmagupta[468494]: 167 167
Oct 02 09:43:24 compute-0 conmon[468494]: conmon 6d7b82ce846541bfc6c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07.scope/container/memory.events
Oct 02 09:43:24 compute-0 podman[468477]: 2025-10-02 09:43:24.040031567 +0000 UTC m=+0.127390890 container died 6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-dba64822fee0e240d09b7f425428f7dc4b34d0dd6f7dba5b467553a5af1c9e1e-merged.mount: Deactivated successfully.
Oct 02 09:43:24 compute-0 podman[468477]: 2025-10-02 09:43:24.074519834 +0000 UTC m=+0.161879147 container remove 6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:43:24 compute-0 systemd[1]: libpod-conmon-6d7b82ce846541bfc6c610d2d830bf572c167c91901d431a0e56c8f1748d1a07.scope: Deactivated successfully.
Oct 02 09:43:24 compute-0 podman[468518]: 2025-10-02 09:43:24.286862436 +0000 UTC m=+0.077208413 container create b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 09:43:24 compute-0 podman[468518]: 2025-10-02 09:43:24.235262984 +0000 UTC m=+0.025609021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:43:24 compute-0 systemd[1]: Started libpod-conmon-b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8.scope.
Oct 02 09:43:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffeddca99928bfd08fefb8de292be5b3059c67fae0695bad2bba347f0b4a2cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffeddca99928bfd08fefb8de292be5b3059c67fae0695bad2bba347f0b4a2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffeddca99928bfd08fefb8de292be5b3059c67fae0695bad2bba347f0b4a2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffeddca99928bfd08fefb8de292be5b3059c67fae0695bad2bba347f0b4a2cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:24 compute-0 podman[468518]: 2025-10-02 09:43:24.3948872 +0000 UTC m=+0.185233187 container init b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 09:43:24 compute-0 podman[468518]: 2025-10-02 09:43:24.405315626 +0000 UTC m=+0.195661613 container start b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:43:24 compute-0 podman[468518]: 2025-10-02 09:43:24.432057391 +0000 UTC m=+0.222403398 container attach b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 09:43:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]: {
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:     "0": [
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:         {
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "devices": [
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "/dev/loop3"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             ],
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_name": "ceph_lv0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_size": "21470642176",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "name": "ceph_lv0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "tags": {
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cluster_name": "ceph",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.crush_device_class": "",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.encrypted": "0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osd_id": "0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.type": "block",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.vdo": "0"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             },
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "type": "block",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "vg_name": "ceph_vg0"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:         }
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:     ],
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:     "1": [
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:         {
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "devices": [
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "/dev/loop4"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             ],
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_name": "ceph_lv1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_size": "21470642176",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "name": "ceph_lv1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "tags": {
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cluster_name": "ceph",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.crush_device_class": "",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.encrypted": "0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osd_id": "1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.type": "block",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.vdo": "0"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             },
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "type": "block",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "vg_name": "ceph_vg1"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:         }
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:     ],
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:     "2": [
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:         {
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "devices": [
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "/dev/loop5"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             ],
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_name": "ceph_lv2",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_size": "21470642176",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "name": "ceph_lv2",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "tags": {
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.cluster_name": "ceph",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.crush_device_class": "",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.encrypted": "0",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osd_id": "2",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.type": "block",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:                 "ceph.vdo": "0"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             },
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "type": "block",
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:             "vg_name": "ceph_vg2"
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:         }
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]:     ]
Oct 02 09:43:25 compute-0 gallant_matsumoto[468535]: }
Oct 02 09:43:25 compute-0 systemd[1]: libpod-b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8.scope: Deactivated successfully.
Oct 02 09:43:25 compute-0 podman[468518]: 2025-10-02 09:43:25.195455775 +0000 UTC m=+0.985801762 container died b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:43:25 compute-0 nova_compute[260603]: 2025-10-02 09:43:25.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-cffeddca99928bfd08fefb8de292be5b3059c67fae0695bad2bba347f0b4a2cd-merged.mount: Deactivated successfully.
Oct 02 09:43:25 compute-0 podman[468518]: 2025-10-02 09:43:25.247907523 +0000 UTC m=+1.038253510 container remove b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 02 09:43:25 compute-0 systemd[1]: libpod-conmon-b8af8c79de3305b2c00fcae927afd6e25c6fefdc0ca736c60dd4abc062aad9e8.scope: Deactivated successfully.
Oct 02 09:43:25 compute-0 sudo[468411]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:25 compute-0 sudo[468555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:25 compute-0 sudo[468555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:25 compute-0 sudo[468555]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:25 compute-0 sudo[468580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:43:25 compute-0 sudo[468580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:25 compute-0 sudo[468580]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:25 compute-0 sudo[468605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:25 compute-0 sudo[468605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:25 compute-0 sudo[468605]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:25 compute-0 sudo[468630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:43:25 compute-0 sudo[468630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:25 compute-0 podman[468697]: 2025-10-02 09:43:25.841806583 +0000 UTC m=+0.052889032 container create 3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:43:25 compute-0 systemd[1]: Started libpod-conmon-3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42.scope.
Oct 02 09:43:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:43:25 compute-0 podman[468697]: 2025-10-02 09:43:25.818715282 +0000 UTC m=+0.029797761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:43:25 compute-0 podman[468697]: 2025-10-02 09:43:25.931434293 +0000 UTC m=+0.142516752 container init 3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 09:43:25 compute-0 podman[468697]: 2025-10-02 09:43:25.937317057 +0000 UTC m=+0.148399516 container start 3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:43:25 compute-0 awesome_golick[468713]: 167 167
Oct 02 09:43:25 compute-0 systemd[1]: libpod-3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42.scope: Deactivated successfully.
Oct 02 09:43:25 compute-0 podman[468697]: 2025-10-02 09:43:25.951513811 +0000 UTC m=+0.162596280 container attach 3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:43:25 compute-0 podman[468697]: 2025-10-02 09:43:25.951898632 +0000 UTC m=+0.162981091 container died 3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-eab8399f3529b467f4ef6ec5d892c2f3f4287169f69c2893a3e3ca5f0ead75e9-merged.mount: Deactivated successfully.
Oct 02 09:43:26 compute-0 ceph-mon[74477]: pgmap v3774: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:26 compute-0 podman[468697]: 2025-10-02 09:43:26.058812422 +0000 UTC m=+0.269894881 container remove 3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:43:26 compute-0 systemd[1]: libpod-conmon-3b0b8f16abe14047b1aec14d60de6b49d78b32a6a138d35ad34273a30c167a42.scope: Deactivated successfully.
Oct 02 09:43:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:26 compute-0 podman[468739]: 2025-10-02 09:43:26.248991272 +0000 UTC m=+0.070697699 container create b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ramanujan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 09:43:26 compute-0 systemd[1]: Started libpod-conmon-b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6.scope.
Oct 02 09:43:26 compute-0 podman[468739]: 2025-10-02 09:43:26.197476363 +0000 UTC m=+0.019182800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:43:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54cc9a14667edc2ed10a27860553dfe9facc1484b830edbba127883871bf3f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54cc9a14667edc2ed10a27860553dfe9facc1484b830edbba127883871bf3f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54cc9a14667edc2ed10a27860553dfe9facc1484b830edbba127883871bf3f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54cc9a14667edc2ed10a27860553dfe9facc1484b830edbba127883871bf3f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:43:26 compute-0 podman[468739]: 2025-10-02 09:43:26.308583433 +0000 UTC m=+0.130289860 container init b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:43:26 compute-0 podman[468739]: 2025-10-02 09:43:26.318875275 +0000 UTC m=+0.140581692 container start b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:43:26 compute-0 podman[468739]: 2025-10-02 09:43:26.361632 +0000 UTC m=+0.183338437 container attach b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 09:43:26 compute-0 nova_compute[260603]: 2025-10-02 09:43:26.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:27 compute-0 ceph-mon[74477]: pgmap v3775: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]: {
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "osd_id": 2,
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "type": "bluestore"
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:     },
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "osd_id": 1,
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "type": "bluestore"
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:     },
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "osd_id": 0,
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:         "type": "bluestore"
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]:     }
Oct 02 09:43:27 compute-0 distracted_ramanujan[468755]: }
Oct 02 09:43:27 compute-0 systemd[1]: libpod-b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6.scope: Deactivated successfully.
Oct 02 09:43:27 compute-0 podman[468739]: 2025-10-02 09:43:27.247236382 +0000 UTC m=+1.068942809 container died b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d54cc9a14667edc2ed10a27860553dfe9facc1484b830edbba127883871bf3f9-merged.mount: Deactivated successfully.
Oct 02 09:43:27 compute-0 podman[468739]: 2025-10-02 09:43:27.682307612 +0000 UTC m=+1.504014029 container remove b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ramanujan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 09:43:27 compute-0 sudo[468630]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:43:27 compute-0 systemd[1]: libpod-conmon-b0a86b6e3a0ad0a900d9a969f8f644998b625e0291d1aab374fc7cd966ede8a6.scope: Deactivated successfully.
Oct 02 09:43:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:43:27 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 2aba85a5-7c12-4412-8a7c-71b9986787fc does not exist
Oct 02 09:43:27 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 80ded6e2-6d62-48c3-95d2-3388a2585c43 does not exist
Oct 02 09:43:27 compute-0 sudo[468800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:43:27 compute-0 sudo[468800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:27 compute-0 sudo[468800]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:27 compute-0 sudo[468825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:43:27 compute-0 sudo[468825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:43:27 compute-0 sudo[468825]: pam_unix(sudo:session): session closed for user root
Oct 02 09:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:43:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:43:28
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.mgr']
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:43:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:43:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:29 compute-0 ceph-mon[74477]: pgmap v3776: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:30 compute-0 nova_compute[260603]: 2025-10-02 09:43:30.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:31 compute-0 ceph-mon[74477]: pgmap v3777: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:31 compute-0 nova_compute[260603]: 2025-10-02 09:43:31.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:31 compute-0 nova_compute[260603]: 2025-10-02 09:43:31.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:33 compute-0 nova_compute[260603]: 2025-10-02 09:43:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:33 compute-0 nova_compute[260603]: 2025-10-02 09:43:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:33 compute-0 nova_compute[260603]: 2025-10-02 09:43:33.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:43:33 compute-0 nova_compute[260603]: 2025-10-02 09:43:33.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:43:33 compute-0 nova_compute[260603]: 2025-10-02 09:43:33.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:43:33 compute-0 nova_compute[260603]: 2025-10-02 09:43:33.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:43:33 compute-0 nova_compute[260603]: 2025-10-02 09:43:33.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:43:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:43:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4112269394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:43:33 compute-0 nova_compute[260603]: 2025-10-02 09:43:33.988 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:43:34 compute-0 ceph-mon[74477]: pgmap v3778: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:34 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4112269394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.133 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.134 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3437MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.135 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.135 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.353 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.354 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.471 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:43:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:43:34.883 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:43:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:43:34.884 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:43:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:43:34.884 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:43:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:43:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627066930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.917 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.921 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.936 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.937 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:43:34 compute-0 nova_compute[260603]: 2025-10-02 09:43:34.938 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:43:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2627066930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:43:35 compute-0 nova_compute[260603]: 2025-10-02 09:43:35.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:36 compute-0 ceph-mon[74477]: pgmap v3779: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:36 compute-0 nova_compute[260603]: 2025-10-02 09:43:36.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:37 compute-0 ceph-mon[74477]: pgmap v3780: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:37 compute-0 nova_compute[260603]: 2025-10-02 09:43:37.938 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:37 compute-0 nova_compute[260603]: 2025-10-02 09:43:37.939 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:43:37 compute-0 nova_compute[260603]: 2025-10-02 09:43:37.939 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:43:37 compute-0 nova_compute[260603]: 2025-10-02 09:43:37.962 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:43:37 compute-0 nova_compute[260603]: 2025-10-02 09:43:37.962 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:38 compute-0 nova_compute[260603]: 2025-10-02 09:43:38.538 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:39 compute-0 ceph-mon[74477]: pgmap v3781: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:43:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:43:40 compute-0 nova_compute[260603]: 2025-10-02 09:43:40.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:41 compute-0 nova_compute[260603]: 2025-10-02 09:43:41.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:42 compute-0 podman[468896]: 2025-10-02 09:43:42.002413625 +0000 UTC m=+0.062335759 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 09:43:42 compute-0 ceph-mon[74477]: pgmap v3782: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:42 compute-0 podman[468895]: 2025-10-02 09:43:42.033524566 +0000 UTC m=+0.093448050 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:43:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:43 compute-0 ceph-mon[74477]: pgmap v3783: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:43 compute-0 nova_compute[260603]: 2025-10-02 09:43:43.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:44 compute-0 nova_compute[260603]: 2025-10-02 09:43:44.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:45 compute-0 ceph-mon[74477]: pgmap v3784: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:45 compute-0 nova_compute[260603]: 2025-10-02 09:43:45.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:45 compute-0 podman[468941]: 2025-10-02 09:43:45.995954021 +0000 UTC m=+0.065875059 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 09:43:46 compute-0 podman[468942]: 2025-10-02 09:43:46.010624329 +0000 UTC m=+0.075616043 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:43:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:46 compute-0 nova_compute[260603]: 2025-10-02 09:43:46.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:48 compute-0 ceph-mon[74477]: pgmap v3785: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:49 compute-0 ceph-mon[74477]: pgmap v3786: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:50 compute-0 nova_compute[260603]: 2025-10-02 09:43:50.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:51 compute-0 nova_compute[260603]: 2025-10-02 09:43:51.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:52 compute-0 ceph-mon[74477]: pgmap v3787: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:53 compute-0 ceph-mon[74477]: pgmap v3788: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:54 compute-0 nova_compute[260603]: 2025-10-02 09:43:54.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:43:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:55 compute-0 nova_compute[260603]: 2025-10-02 09:43:55.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:55 compute-0 ceph-mon[74477]: pgmap v3789: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:43:56 compute-0 nova_compute[260603]: 2025-10-02 09:43:56.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:43:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:57 compute-0 ceph-mon[74477]: pgmap v3790: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:43:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:43:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:43:59 compute-0 ceph-mon[74477]: pgmap v3791: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:00 compute-0 nova_compute[260603]: 2025-10-02 09:44:00.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:00 compute-0 nova_compute[260603]: 2025-10-02 09:44:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:00 compute-0 nova_compute[260603]: 2025-10-02 09:44:00.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:44:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:01 compute-0 ceph-mon[74477]: pgmap v3792: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:01 compute-0 nova_compute[260603]: 2025-10-02 09:44:01.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:03 compute-0 ceph-mon[74477]: pgmap v3793: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:05 compute-0 nova_compute[260603]: 2025-10-02 09:44:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:06 compute-0 ceph-mon[74477]: pgmap v3794: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:06 compute-0 nova_compute[260603]: 2025-10-02 09:44:06.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:07 compute-0 ceph-mon[74477]: pgmap v3795: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:09 compute-0 ceph-mon[74477]: pgmap v3796: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:10 compute-0 nova_compute[260603]: 2025-10-02 09:44:10.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:11 compute-0 ceph-mon[74477]: pgmap v3797: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:11 compute-0 nova_compute[260603]: 2025-10-02 09:44:11.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:13 compute-0 podman[468980]: 2025-10-02 09:44:13.00062517 +0000 UTC m=+0.059708266 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 09:44:13 compute-0 podman[468979]: 2025-10-02 09:44:13.036652836 +0000 UTC m=+0.097080524 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:44:13 compute-0 ceph-mon[74477]: pgmap v3798: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:15 compute-0 ceph-mon[74477]: pgmap v3799: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:15 compute-0 nova_compute[260603]: 2025-10-02 09:44:15.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:16 compute-0 nova_compute[260603]: 2025-10-02 09:44:16.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:16 compute-0 podman[469023]: 2025-10-02 09:44:16.989727279 +0000 UTC m=+0.058269892 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 09:44:17 compute-0 podman[469024]: 2025-10-02 09:44:17.011452577 +0000 UTC m=+0.068599664 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:44:17 compute-0 ceph-mon[74477]: pgmap v3800: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:19 compute-0 ceph-mon[74477]: pgmap v3801: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:20 compute-0 nova_compute[260603]: 2025-10-02 09:44:20.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:21 compute-0 ceph-mon[74477]: pgmap v3802: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:21 compute-0 nova_compute[260603]: 2025-10-02 09:44:21.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:44:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/382307733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:44:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:44:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/382307733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:44:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/382307733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:44:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/382307733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:44:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:23 compute-0 ceph-mon[74477]: pgmap v3803: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:25 compute-0 ceph-mon[74477]: pgmap v3804: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:25 compute-0 nova_compute[260603]: 2025-10-02 09:44:25.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:26 compute-0 nova_compute[260603]: 2025-10-02 09:44:26.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:27 compute-0 ceph-mon[74477]: pgmap v3805: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:27 compute-0 sudo[469061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:27 compute-0 sudo[469061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:27 compute-0 sudo[469061]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:44:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:44:28 compute-0 sudo[469086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:44:28 compute-0 sudo[469086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:28 compute-0 sudo[469086]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:28 compute-0 sudo[469111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:28 compute-0 sudo[469111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:28 compute-0 sudo[469111]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:28 compute-0 sudo[469136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:44:28 compute-0 sudo[469136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:44:28
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'volumes', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:44:28 compute-0 sudo[469136]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:44:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:44:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:44:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:44:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:44:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev aabeae2c-c91c-429d-a9c4-9bac8a218031 does not exist
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5db0beb3-b29b-46c9-bc13-4515c21a0d31 does not exist
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3168f644-acfc-4212-b050-b8eaf1c93102 does not exist
Oct 02 09:44:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:44:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:44:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:44:28 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:44:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:44:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:44:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:44:28 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:44:28 compute-0 sudo[469192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:28 compute-0 sudo[469192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:28 compute-0 sudo[469192]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:28 compute-0 sudo[469217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:44:28 compute-0 sudo[469217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:28 compute-0 sudo[469217]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:29 compute-0 sudo[469242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:29 compute-0 sudo[469242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:29 compute-0 sudo[469242]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:29 compute-0 sudo[469267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:44:29 compute-0 sudo[469267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:29 compute-0 podman[469332]: 2025-10-02 09:44:29.425991951 +0000 UTC m=+0.027027556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:44:29 compute-0 podman[469332]: 2025-10-02 09:44:29.982006988 +0000 UTC m=+0.583042583 container create 1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:44:30 compute-0 nova_compute[260603]: 2025-10-02 09:44:30.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:30 compute-0 systemd[1]: Started libpod-conmon-1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09.scope.
Oct 02 09:44:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:44:30 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:44:30 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:44:30 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:44:30 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:44:30 compute-0 ceph-mon[74477]: pgmap v3806: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:30 compute-0 podman[469332]: 2025-10-02 09:44:30.517352279 +0000 UTC m=+1.118387924 container init 1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:44:30 compute-0 podman[469332]: 2025-10-02 09:44:30.524281386 +0000 UTC m=+1.125316951 container start 1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 09:44:30 compute-0 distracted_khorana[469349]: 167 167
Oct 02 09:44:30 compute-0 systemd[1]: libpod-1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09.scope: Deactivated successfully.
Oct 02 09:44:30 compute-0 podman[469332]: 2025-10-02 09:44:30.742733999 +0000 UTC m=+1.343769554 container attach 1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:44:30 compute-0 podman[469332]: 2025-10-02 09:44:30.744316889 +0000 UTC m=+1.345352484 container died 1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fedc154bdecee09dfe62a3152ae30a37c2adca39f6513d0cea1cf0b85f6795e6-merged.mount: Deactivated successfully.
Oct 02 09:44:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:31 compute-0 podman[469332]: 2025-10-02 09:44:31.217625843 +0000 UTC m=+1.818661398 container remove 1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 09:44:31 compute-0 systemd[1]: libpod-conmon-1fa3c32d3ae2d761eb8d044f420d6f60f70c1025eb361074761c5f87aae9fe09.scope: Deactivated successfully.
Oct 02 09:44:31 compute-0 ceph-mon[74477]: pgmap v3807: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:31 compute-0 podman[469373]: 2025-10-02 09:44:31.361828707 +0000 UTC m=+0.022664979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:44:31 compute-0 podman[469373]: 2025-10-02 09:44:31.467636932 +0000 UTC m=+0.128473184 container create 0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:44:31 compute-0 nova_compute[260603]: 2025-10-02 09:44:31.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:31 compute-0 systemd[1]: Started libpod-conmon-0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8.scope.
Oct 02 09:44:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1386de154a68c0191cf6213d118aa30dce155512f864c04789e5a6dd8a11644f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1386de154a68c0191cf6213d118aa30dce155512f864c04789e5a6dd8a11644f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1386de154a68c0191cf6213d118aa30dce155512f864c04789e5a6dd8a11644f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1386de154a68c0191cf6213d118aa30dce155512f864c04789e5a6dd8a11644f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1386de154a68c0191cf6213d118aa30dce155512f864c04789e5a6dd8a11644f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:31 compute-0 podman[469373]: 2025-10-02 09:44:31.725088593 +0000 UTC m=+0.385924935 container init 0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 09:44:31 compute-0 podman[469373]: 2025-10-02 09:44:31.733376873 +0000 UTC m=+0.394213155 container start 0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 09:44:31 compute-0 podman[469373]: 2025-10-02 09:44:31.782656881 +0000 UTC m=+0.443493153 container attach 0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 09:44:32 compute-0 nova_compute[260603]: 2025-10-02 09:44:32.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:32 compute-0 unruffled_bell[469389]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:44:32 compute-0 unruffled_bell[469389]: --> relative data size: 1.0
Oct 02 09:44:32 compute-0 unruffled_bell[469389]: --> All data devices are unavailable
Oct 02 09:44:32 compute-0 systemd[1]: libpod-0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8.scope: Deactivated successfully.
Oct 02 09:44:32 compute-0 systemd[1]: libpod-0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8.scope: Consumed 1.163s CPU time.
Oct 02 09:44:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:32 compute-0 podman[469418]: 2025-10-02 09:44:32.986853234 +0000 UTC m=+0.027234962 container died 0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:44:33 compute-0 ceph-mon[74477]: pgmap v3808: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1386de154a68c0191cf6213d118aa30dce155512f864c04789e5a6dd8a11644f-merged.mount: Deactivated successfully.
Oct 02 09:44:33 compute-0 podman[469418]: 2025-10-02 09:44:33.70679423 +0000 UTC m=+0.747175958 container remove 0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:44:33 compute-0 systemd[1]: libpod-conmon-0833317cee182e22cb83ca0f95dc4c2a64363696443050fe03cfc31bb2b41db8.scope: Deactivated successfully.
Oct 02 09:44:33 compute-0 sudo[469267]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:33 compute-0 sudo[469433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:33 compute-0 sudo[469433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:33 compute-0 sudo[469433]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:33 compute-0 sudo[469458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:44:33 compute-0 sudo[469458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:33 compute-0 sudo[469458]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:34 compute-0 sudo[469483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:34 compute-0 sudo[469483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:34 compute-0 sudo[469483]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:34 compute-0 sudo[469508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:44:34 compute-0 sudo[469508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:34 compute-0 nova_compute[260603]: 2025-10-02 09:44:34.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:34 compute-0 podman[469570]: 2025-10-02 09:44:34.540491971 +0000 UTC m=+0.099143488 container create 110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:44:34 compute-0 podman[469570]: 2025-10-02 09:44:34.464496547 +0000 UTC m=+0.023148114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:44:34 compute-0 systemd[1]: Started libpod-conmon-110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8.scope.
Oct 02 09:44:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:44:34 compute-0 podman[469570]: 2025-10-02 09:44:34.638283196 +0000 UTC m=+0.196934733 container init 110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:44:34 compute-0 podman[469570]: 2025-10-02 09:44:34.644852421 +0000 UTC m=+0.203503938 container start 110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:44:34 compute-0 hopeful_vaughan[469586]: 167 167
Oct 02 09:44:34 compute-0 systemd[1]: libpod-110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8.scope: Deactivated successfully.
Oct 02 09:44:34 compute-0 conmon[469586]: conmon 110100c31968e493cda0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8.scope/container/memory.events
Oct 02 09:44:34 compute-0 podman[469570]: 2025-10-02 09:44:34.692505159 +0000 UTC m=+0.251156676 container attach 110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:44:34 compute-0 podman[469570]: 2025-10-02 09:44:34.694516632 +0000 UTC m=+0.253168179 container died 110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 09:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f0e264042e45634eb532d3d6f80fb2e133c32112124d90b599bca0bf311326b-merged.mount: Deactivated successfully.
Oct 02 09:44:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:44:34.884 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:44:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:44:34.886 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:44:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:44:34.886 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:44:34 compute-0 podman[469570]: 2025-10-02 09:44:34.894265851 +0000 UTC m=+0.452917368 container remove 110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 09:44:34 compute-0 systemd[1]: libpod-conmon-110100c31968e493cda02f4cb2262d997aeeb8ea61c1da6975d5c8facefe90e8.scope: Deactivated successfully.
Oct 02 09:44:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:35 compute-0 ceph-mon[74477]: pgmap v3809: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:35 compute-0 podman[469611]: 2025-10-02 09:44:35.181265856 +0000 UTC m=+0.105917540 container create df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:44:35 compute-0 podman[469611]: 2025-10-02 09:44:35.116563295 +0000 UTC m=+0.041215009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:44:35 compute-0 systemd[1]: Started libpod-conmon-df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935.scope.
Oct 02 09:44:35 compute-0 nova_compute[260603]: 2025-10-02 09:44:35.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392d4bc080fa58658d835fc759d710d25eb365e8ffa900ec8e48e0d731c45937/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392d4bc080fa58658d835fc759d710d25eb365e8ffa900ec8e48e0d731c45937/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392d4bc080fa58658d835fc759d710d25eb365e8ffa900ec8e48e0d731c45937/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392d4bc080fa58658d835fc759d710d25eb365e8ffa900ec8e48e0d731c45937/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:35 compute-0 podman[469611]: 2025-10-02 09:44:35.34242196 +0000 UTC m=+0.267073694 container init df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:44:35 compute-0 podman[469611]: 2025-10-02 09:44:35.349937864 +0000 UTC m=+0.274589538 container start df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 02 09:44:35 compute-0 podman[469611]: 2025-10-02 09:44:35.399163012 +0000 UTC m=+0.323814716 container attach df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]: {
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:     "0": [
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:         {
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "devices": [
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "/dev/loop3"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             ],
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_name": "ceph_lv0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_size": "21470642176",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "name": "ceph_lv0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "tags": {
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cluster_name": "ceph",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.crush_device_class": "",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.encrypted": "0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osd_id": "0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.type": "block",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.vdo": "0"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             },
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "type": "block",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "vg_name": "ceph_vg0"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:         }
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:     ],
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:     "1": [
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:         {
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "devices": [
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "/dev/loop4"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             ],
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_name": "ceph_lv1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_size": "21470642176",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "name": "ceph_lv1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "tags": {
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cluster_name": "ceph",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.crush_device_class": "",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.encrypted": "0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osd_id": "1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.type": "block",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.vdo": "0"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             },
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "type": "block",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "vg_name": "ceph_vg1"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:         }
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:     ],
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:     "2": [
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:         {
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "devices": [
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "/dev/loop5"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             ],
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_name": "ceph_lv2",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_size": "21470642176",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "name": "ceph_lv2",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "tags": {
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.cluster_name": "ceph",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.crush_device_class": "",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.encrypted": "0",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osd_id": "2",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.type": "block",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:                 "ceph.vdo": "0"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             },
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "type": "block",
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:             "vg_name": "ceph_vg2"
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:         }
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]:     ]
Oct 02 09:44:36 compute-0 adoring_heyrovsky[469628]: }
Oct 02 09:44:36 compute-0 systemd[1]: libpod-df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935.scope: Deactivated successfully.
Oct 02 09:44:36 compute-0 podman[469611]: 2025-10-02 09:44:36.150629304 +0000 UTC m=+1.075280988 container died df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:44:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:36 compute-0 nova_compute[260603]: 2025-10-02 09:44:36.343 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:44:36 compute-0 nova_compute[260603]: 2025-10-02 09:44:36.344 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:44:36 compute-0 nova_compute[260603]: 2025-10-02 09:44:36.344 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:44:36 compute-0 nova_compute[260603]: 2025-10-02 09:44:36.344 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:44:36 compute-0 nova_compute[260603]: 2025-10-02 09:44:36.344 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-392d4bc080fa58658d835fc759d710d25eb365e8ffa900ec8e48e0d731c45937-merged.mount: Deactivated successfully.
Oct 02 09:44:36 compute-0 nova_compute[260603]: 2025-10-02 09:44:36.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:44:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668175489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:44:36 compute-0 nova_compute[260603]: 2025-10-02 09:44:36.856 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:44:36 compute-0 podman[469611]: 2025-10-02 09:44:36.869996842 +0000 UTC m=+1.794648526 container remove df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:44:36 compute-0 sudo[469508]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:36 compute-0 systemd[1]: libpod-conmon-df0cb887fc94c05111b1f8d99474a733b013e667e2a6fd2bfd3e00a8e8345935.scope: Deactivated successfully.
Oct 02 09:44:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/668175489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:44:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:36 compute-0 sudo[469673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:36 compute-0 sudo[469673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:37 compute-0 sudo[469673]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:37 compute-0 nova_compute[260603]: 2025-10-02 09:44:37.036 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:44:37 compute-0 nova_compute[260603]: 2025-10-02 09:44:37.038 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3430MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:44:37 compute-0 nova_compute[260603]: 2025-10-02 09:44:37.038 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:44:37 compute-0 nova_compute[260603]: 2025-10-02 09:44:37.039 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:44:37 compute-0 sudo[469698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:44:37 compute-0 sudo[469698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:37 compute-0 sudo[469698]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:37 compute-0 sudo[469723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:37 compute-0 sudo[469723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:37 compute-0 sudo[469723]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:37 compute-0 sudo[469748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:44:37 compute-0 sudo[469748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:37 compute-0 podman[469811]: 2025-10-02 09:44:37.54128883 +0000 UTC m=+0.079123952 container create e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jones, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:44:37 compute-0 podman[469811]: 2025-10-02 09:44:37.482825293 +0000 UTC m=+0.020660445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:44:37 compute-0 systemd[1]: Started libpod-conmon-e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354.scope.
Oct 02 09:44:37 compute-0 nova_compute[260603]: 2025-10-02 09:44:37.628 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:44:37 compute-0 nova_compute[260603]: 2025-10-02 09:44:37.629 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:44:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:44:37 compute-0 nova_compute[260603]: 2025-10-02 09:44:37.665 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:44:37 compute-0 podman[469811]: 2025-10-02 09:44:37.814553135 +0000 UTC m=+0.352388257 container init e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jones, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:44:37 compute-0 podman[469811]: 2025-10-02 09:44:37.823243287 +0000 UTC m=+0.361078389 container start e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:44:37 compute-0 objective_jones[469828]: 167 167
Oct 02 09:44:37 compute-0 systemd[1]: libpod-e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354.scope: Deactivated successfully.
Oct 02 09:44:37 compute-0 podman[469811]: 2025-10-02 09:44:37.937559767 +0000 UTC m=+0.475394959 container attach e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:44:37 compute-0 podman[469811]: 2025-10-02 09:44:37.938207497 +0000 UTC m=+0.476042639 container died e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 09:44:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:44:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665939848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:44:38 compute-0 nova_compute[260603]: 2025-10-02 09:44:38.123 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:44:38 compute-0 nova_compute[260603]: 2025-10-02 09:44:38.129 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:44:38 compute-0 nova_compute[260603]: 2025-10-02 09:44:38.281 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:44:38 compute-0 nova_compute[260603]: 2025-10-02 09:44:38.283 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:44:38 compute-0 nova_compute[260603]: 2025-10-02 09:44:38.283 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:44:38 compute-0 ceph-mon[74477]: pgmap v3810: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5526e7585137683b54c36a1ce480c53d4c3183f9297b3cec07e16adc9771974d-merged.mount: Deactivated successfully.
Oct 02 09:44:38 compute-0 podman[469811]: 2025-10-02 09:44:38.514195919 +0000 UTC m=+1.052031021 container remove e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jones, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 09:44:38 compute-0 systemd[1]: libpod-conmon-e2ca50b21c9a1506d957ec2bc06e87c97fa254cbcbaa9337e480a92c816b5354.scope: Deactivated successfully.
Oct 02 09:44:38 compute-0 podman[469873]: 2025-10-02 09:44:38.684714744 +0000 UTC m=+0.054503692 container create 2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:44:38 compute-0 systemd[1]: Started libpod-conmon-2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1.scope.
Oct 02 09:44:38 compute-0 podman[469873]: 2025-10-02 09:44:38.654597174 +0000 UTC m=+0.024386142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:44:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b36c38a37c9909651d7039275787eea4f5b96534f93256feafff253bfdb98c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b36c38a37c9909651d7039275787eea4f5b96534f93256feafff253bfdb98c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b36c38a37c9909651d7039275787eea4f5b96534f93256feafff253bfdb98c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b36c38a37c9909651d7039275787eea4f5b96534f93256feafff253bfdb98c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:44:38 compute-0 podman[469873]: 2025-10-02 09:44:38.828395903 +0000 UTC m=+0.198184871 container init 2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 02 09:44:38 compute-0 podman[469873]: 2025-10-02 09:44:38.835839595 +0000 UTC m=+0.205628553 container start 2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:44:38 compute-0 podman[469873]: 2025-10-02 09:44:38.863076606 +0000 UTC m=+0.232865584 container attach 2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:44:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:39 compute-0 nova_compute[260603]: 2025-10-02 09:44:39.283 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:39 compute-0 nova_compute[260603]: 2025-10-02 09:44:39.285 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:44:39 compute-0 nova_compute[260603]: 2025-10-02 09:44:39.285 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:44:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/665939848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:44:39 compute-0 ceph-mon[74477]: pgmap v3811: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:39 compute-0 nova_compute[260603]: 2025-10-02 09:44:39.429 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:44:39 compute-0 nova_compute[260603]: 2025-10-02 09:44:39.430 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:39 compute-0 nova_compute[260603]: 2025-10-02 09:44:39.430 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:44:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:44:39 compute-0 nova_compute[260603]: 2025-10-02 09:44:39.660 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:39 compute-0 keen_carson[469890]: {
Oct 02 09:44:39 compute-0 keen_carson[469890]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "osd_id": 2,
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "type": "bluestore"
Oct 02 09:44:39 compute-0 keen_carson[469890]:     },
Oct 02 09:44:39 compute-0 keen_carson[469890]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "osd_id": 1,
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "type": "bluestore"
Oct 02 09:44:39 compute-0 keen_carson[469890]:     },
Oct 02 09:44:39 compute-0 keen_carson[469890]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "osd_id": 0,
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:44:39 compute-0 keen_carson[469890]:         "type": "bluestore"
Oct 02 09:44:39 compute-0 keen_carson[469890]:     }
Oct 02 09:44:39 compute-0 keen_carson[469890]: }
Oct 02 09:44:39 compute-0 podman[469873]: 2025-10-02 09:44:39.797399539 +0000 UTC m=+1.167188487 container died 2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:44:39 compute-0 systemd[1]: libpod-2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1.scope: Deactivated successfully.
Oct 02 09:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-14b36c38a37c9909651d7039275787eea4f5b96534f93256feafff253bfdb98c-merged.mount: Deactivated successfully.
Oct 02 09:44:40 compute-0 nova_compute[260603]: 2025-10-02 09:44:40.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:40 compute-0 podman[469873]: 2025-10-02 09:44:40.573450699 +0000 UTC m=+1.943239647 container remove 2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_carson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:44:40 compute-0 systemd[1]: libpod-conmon-2d29aa14b5377f7c3fd0abc3d535d0c19be4067b1e2cdae9424051acff1e0bb1.scope: Deactivated successfully.
Oct 02 09:44:40 compute-0 sudo[469748]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:44:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:44:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:44:40 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:44:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0b507549-5979-4d94-811d-7f4b6834a277 does not exist
Oct 02 09:44:40 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ab5f8ef8-533d-46c0-bac7-d00a137d9229 does not exist
Oct 02 09:44:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:40 compute-0 sudo[469936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:44:40 compute-0 sudo[469936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:41 compute-0 sudo[469936]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:41 compute-0 sudo[469961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:44:41 compute-0 sudo[469961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:44:41 compute-0 sudo[469961]: pam_unix(sudo:session): session closed for user root
Oct 02 09:44:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:41 compute-0 nova_compute[260603]: 2025-10-02 09:44:41.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:44:41 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:44:41 compute-0 ceph-mon[74477]: pgmap v3812: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:43 compute-0 ceph-mon[74477]: pgmap v3813: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:43 compute-0 nova_compute[260603]: 2025-10-02 09:44:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:44 compute-0 podman[469987]: 2025-10-02 09:44:44.017600254 +0000 UTC m=+0.087748252 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 09:44:44 compute-0 podman[469986]: 2025-10-02 09:44:44.032774857 +0000 UTC m=+0.103247696 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:44:44 compute-0 nova_compute[260603]: 2025-10-02 09:44:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:44 compute-0 nova_compute[260603]: 2025-10-02 09:44:44.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:44:44 compute-0 nova_compute[260603]: 2025-10-02 09:44:44.762 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:44:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:45 compute-0 ceph-mon[74477]: pgmap v3814: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:45 compute-0 nova_compute[260603]: 2025-10-02 09:44:45.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.244163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398286244343, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 2073, "num_deletes": 250, "total_data_size": 3224765, "memory_usage": 3275096, "flush_reason": "Manual Compaction"}
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398286299379, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1900732, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77246, "largest_seqno": 79318, "table_properties": {"data_size": 1893741, "index_size": 3616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19501, "raw_average_key_size": 21, "raw_value_size": 1877685, "raw_average_value_size": 2058, "num_data_blocks": 165, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759398083, "oldest_key_time": 1759398083, "file_creation_time": 1759398286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 55243 microseconds, and 5924 cpu microseconds.
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.299430) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1900732 bytes OK
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.299456) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.360355) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.360401) EVENT_LOG_v1 {"time_micros": 1759398286360390, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.360429) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3215764, prev total WAL file size 3215764, number of live WAL files 2.
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.361682) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323533' seq:72057594037927935, type:22 .. '6D6772737461740033353034' seq:0, type:0; will stop at (end)
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1856KB)], [185(10MB)]
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398286361719, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 12804996, "oldest_snapshot_seqno": -1}
Oct 02 09:44:46 compute-0 nova_compute[260603]: 2025-10-02 09:44:46.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 9676 keys, 10753509 bytes, temperature: kUnknown
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398286611609, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 10753509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10694187, "index_size": 34057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24197, "raw_key_size": 252875, "raw_average_key_size": 26, "raw_value_size": 10526887, "raw_average_value_size": 1087, "num_data_blocks": 1319, "num_entries": 9676, "num_filter_entries": 9676, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759398286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.611907) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 10753509 bytes
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.616589) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 51.2 rd, 43.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.4 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.4) write-amplify(5.7) OK, records in: 10093, records dropped: 417 output_compression: NoCompression
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.616613) EVENT_LOG_v1 {"time_micros": 1759398286616601, "job": 116, "event": "compaction_finished", "compaction_time_micros": 249958, "compaction_time_cpu_micros": 33986, "output_level": 6, "num_output_files": 1, "total_output_size": 10753509, "num_input_records": 10093, "num_output_records": 9676, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398286617021, "job": 116, "event": "table_file_deletion", "file_number": 187}
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398286618952, "job": 116, "event": "table_file_deletion", "file_number": 185}
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.361609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.619017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.619023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.619024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.619026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:46 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:46.619027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:46 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:47 compute-0 ceph-mon[74477]: pgmap v3815: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:47 compute-0 podman[470030]: 2025-10-02 09:44:47.985460637 +0000 UTC m=+0.053477891 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:44:47 compute-0 podman[470031]: 2025-10-02 09:44:47.985494988 +0000 UTC m=+0.050190958 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid)
Oct 02 09:44:48 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:49 compute-0 ceph-mon[74477]: pgmap v3816: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:50 compute-0 nova_compute[260603]: 2025-10-02 09:44:50.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:50 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:51 compute-0 ceph-mon[74477]: pgmap v3817: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:51 compute-0 nova_compute[260603]: 2025-10-02 09:44:51.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:52 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:53 compute-0 ceph-mon[74477]: pgmap v3818: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:54 compute-0 nova_compute[260603]: 2025-10-02 09:44:54.763 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:44:54 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:55 compute-0 ceph-mon[74477]: pgmap v3819: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:55 compute-0 nova_compute[260603]: 2025-10-02 09:44:55.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.323327) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398295323443, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 326, "num_deletes": 251, "total_data_size": 151845, "memory_usage": 158168, "flush_reason": "Manual Compaction"}
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398295346833, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 150715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79319, "largest_seqno": 79644, "table_properties": {"data_size": 148590, "index_size": 288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5229, "raw_average_key_size": 18, "raw_value_size": 144528, "raw_average_value_size": 508, "num_data_blocks": 13, "num_entries": 284, "num_filter_entries": 284, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759398287, "oldest_key_time": 1759398287, "file_creation_time": 1759398295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 23554 microseconds, and 1995 cpu microseconds.
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.346900) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 150715 bytes OK
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.346932) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.365532) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.365603) EVENT_LOG_v1 {"time_micros": 1759398295365587, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.365642) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 149576, prev total WAL file size 149576, number of live WAL files 2.
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.366391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(147KB)], [188(10MB)]
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398295366446, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 10904224, "oldest_snapshot_seqno": -1}
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 9451 keys, 9185948 bytes, temperature: kUnknown
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398295495895, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 9185948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9129602, "index_size": 31593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23685, "raw_key_size": 248885, "raw_average_key_size": 26, "raw_value_size": 8967715, "raw_average_value_size": 948, "num_data_blocks": 1206, "num_entries": 9451, "num_filter_entries": 9451, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759398295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.496356) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9185948 bytes
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.525503) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.1 rd, 70.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(133.3) write-amplify(60.9) OK, records in: 9960, records dropped: 509 output_compression: NoCompression
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.525577) EVENT_LOG_v1 {"time_micros": 1759398295525550, "job": 118, "event": "compaction_finished", "compaction_time_micros": 129592, "compaction_time_cpu_micros": 23733, "output_level": 6, "num_output_files": 1, "total_output_size": 9185948, "num_input_records": 9960, "num_output_records": 9451, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398295526137, "job": 118, "event": "table_file_deletion", "file_number": 190}
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398295530952, "job": 118, "event": "table_file_deletion", "file_number": 188}
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.366264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.531165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.531176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.531178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.531181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:55 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:44:55.531183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:44:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:44:56 compute-0 nova_compute[260603]: 2025-10-02 09:44:56.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:44:56 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:57 compute-0 ceph-mon[74477]: pgmap v3820: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:44:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:44:58 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:44:59 compute-0 ceph-mon[74477]: pgmap v3821: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:00 compute-0 nova_compute[260603]: 2025-10-02 09:45:00.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:00 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:01 compute-0 ceph-mon[74477]: pgmap v3822: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:01 compute-0 nova_compute[260603]: 2025-10-02 09:45:01.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:01 compute-0 nova_compute[260603]: 2025-10-02 09:45:01.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:01 compute-0 nova_compute[260603]: 2025-10-02 09:45:01.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:45:02 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:03 compute-0 ceph-mon[74477]: pgmap v3823: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:04 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:05 compute-0 nova_compute[260603]: 2025-10-02 09:45:05.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:05 compute-0 ceph-mon[74477]: pgmap v3824: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:05 compute-0 nova_compute[260603]: 2025-10-02 09:45:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:06 compute-0 nova_compute[260603]: 2025-10-02 09:45:06.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:06 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:07 compute-0 ceph-mon[74477]: pgmap v3825: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:08 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:09 compute-0 ceph-mon[74477]: pgmap v3826: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:10 compute-0 nova_compute[260603]: 2025-10-02 09:45:10.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:10 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:11 compute-0 ceph-mon[74477]: pgmap v3827: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:11 compute-0 nova_compute[260603]: 2025-10-02 09:45:11.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:12 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:13 compute-0 ceph-mon[74477]: pgmap v3828: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:14 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:15 compute-0 podman[470069]: 2025-10-02 09:45:15.013703747 +0000 UTC m=+0.079755262 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 09:45:15 compute-0 podman[470068]: 2025-10-02 09:45:15.063821192 +0000 UTC m=+0.129797425 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 09:45:15 compute-0 ceph-mon[74477]: pgmap v3829: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:15 compute-0 nova_compute[260603]: 2025-10-02 09:45:15.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:16 compute-0 nova_compute[260603]: 2025-10-02 09:45:16.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:16 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:17 compute-0 ceph-mon[74477]: pgmap v3830: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:18 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:19 compute-0 podman[470114]: 2025-10-02 09:45:19.024321936 +0000 UTC m=+0.082301971 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 09:45:19 compute-0 podman[470113]: 2025-10-02 09:45:19.04043566 +0000 UTC m=+0.095578877 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:45:19 compute-0 ceph-mon[74477]: pgmap v3831: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:20 compute-0 nova_compute[260603]: 2025-10-02 09:45:20.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:20 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:21 compute-0 ceph-mon[74477]: pgmap v3832: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:21 compute-0 nova_compute[260603]: 2025-10-02 09:45:21.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:45:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/894545114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:45:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:45:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/894545114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:45:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/894545114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:45:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/894545114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:45:22 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:23 compute-0 ceph-mon[74477]: pgmap v3833: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:23 compute-0 nova_compute[260603]: 2025-10-02 09:45:23.568 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:23 compute-0 nova_compute[260603]: 2025-10-02 09:45:23.569 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:45:24 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:25 compute-0 nova_compute[260603]: 2025-10-02 09:45:25.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:25 compute-0 ceph-mon[74477]: pgmap v3834: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:26 compute-0 nova_compute[260603]: 2025-10-02 09:45:26.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:26 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:27 compute-0 ceph-mon[74477]: pgmap v3835: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:45:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:45:28
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:45:28 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:29 compute-0 ceph-mon[74477]: pgmap v3836: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:30 compute-0 nova_compute[260603]: 2025-10-02 09:45:30.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:30 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:31 compute-0 ceph-mon[74477]: pgmap v3837: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:31 compute-0 nova_compute[260603]: 2025-10-02 09:45:31.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:32 compute-0 nova_compute[260603]: 2025-10-02 09:45:32.582 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:32 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:33 compute-0 ceph-mon[74477]: pgmap v3838: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:34 compute-0 nova_compute[260603]: 2025-10-02 09:45:34.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:34 compute-0 nova_compute[260603]: 2025-10-02 09:45:34.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:45:34 compute-0 nova_compute[260603]: 2025-10-02 09:45:34.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:45:34 compute-0 nova_compute[260603]: 2025-10-02 09:45:34.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:45:34 compute-0 nova_compute[260603]: 2025-10-02 09:45:34.599 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:45:34 compute-0 nova_compute[260603]: 2025-10-02 09:45:34.600 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:45:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:45:34.885 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:45:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:45:34.886 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:45:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:45:34.886 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:45:34 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:45:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466566876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.039 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:45:35 compute-0 ceph-mon[74477]: pgmap v3839: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/466566876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.208 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.209 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3518MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.209 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.210 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.298 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.298 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.315 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.419 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.419 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.444 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.462 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.482 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:45:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:45:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452863600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.926 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.931 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.973 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.975 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:45:35 compute-0 nova_compute[260603]: 2025-10-02 09:45:35.975 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:45:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/452863600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:45:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.350413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398336350451, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 556, "num_deletes": 255, "total_data_size": 572322, "memory_usage": 584176, "flush_reason": "Manual Compaction"}
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398336412864, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 567077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79645, "largest_seqno": 80200, "table_properties": {"data_size": 564057, "index_size": 991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6778, "raw_average_key_size": 18, "raw_value_size": 558100, "raw_average_value_size": 1504, "num_data_blocks": 45, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759398297, "oldest_key_time": 1759398297, "file_creation_time": 1759398336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 62502 microseconds, and 2438 cpu microseconds.
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.412912) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 567077 bytes OK
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.412936) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.415258) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.415272) EVENT_LOG_v1 {"time_micros": 1759398336415267, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.415288) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 569207, prev total WAL file size 569207, number of live WAL files 2.
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.415822) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353132' seq:72057594037927935, type:22 .. '6C6F676D0033373633' seq:0, type:0; will stop at (end)
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(553KB)], [191(8970KB)]
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398336415876, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 9753025, "oldest_snapshot_seqno": -1}
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 9305 keys, 9662050 bytes, temperature: kUnknown
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398336467943, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 9662050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9605555, "index_size": 32126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23301, "raw_key_size": 246761, "raw_average_key_size": 26, "raw_value_size": 9445127, "raw_average_value_size": 1015, "num_data_blocks": 1228, "num_entries": 9305, "num_filter_entries": 9305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759398336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.468159) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9662050 bytes
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.469271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.1 rd, 185.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 8.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(34.2) write-amplify(17.0) OK, records in: 9822, records dropped: 517 output_compression: NoCompression
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.469287) EVENT_LOG_v1 {"time_micros": 1759398336469280, "job": 120, "event": "compaction_finished", "compaction_time_micros": 52129, "compaction_time_cpu_micros": 29381, "output_level": 6, "num_output_files": 1, "total_output_size": 9662050, "num_input_records": 9822, "num_output_records": 9305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398336469464, "job": 120, "event": "table_file_deletion", "file_number": 193}
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398336470871, "job": 120, "event": "table_file_deletion", "file_number": 191}
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.415695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.470913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.470918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.470920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.470922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:45:36 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:45:36.470923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:45:36 compute-0 nova_compute[260603]: 2025-10-02 09:45:36.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:36 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:37 compute-0 ceph-mon[74477]: pgmap v3840: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:38 compute-0 nova_compute[260603]: 2025-10-02 09:45:38.976 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:38 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:39 compute-0 ceph-mon[74477]: pgmap v3841: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:39 compute-0 nova_compute[260603]: 2025-10-02 09:45:39.513 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:39 compute-0 nova_compute[260603]: 2025-10-02 09:45:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:45:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:45:40 compute-0 nova_compute[260603]: 2025-10-02 09:45:40.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:40 compute-0 nova_compute[260603]: 2025-10-02 09:45:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:40 compute-0 nova_compute[260603]: 2025-10-02 09:45:40.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:45:40 compute-0 nova_compute[260603]: 2025-10-02 09:45:40.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:45:40 compute-0 nova_compute[260603]: 2025-10-02 09:45:40.741 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:45:40 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:41 compute-0 ceph-mon[74477]: pgmap v3842: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:41 compute-0 sudo[470196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:41 compute-0 sudo[470196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:41 compute-0 sudo[470196]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:41 compute-0 sudo[470221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:45:41 compute-0 sudo[470221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:41 compute-0 sudo[470221]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:41 compute-0 sudo[470246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:41 compute-0 sudo[470246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:41 compute-0 sudo[470246]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:41 compute-0 sudo[470271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:45:41 compute-0 sudo[470271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:41 compute-0 nova_compute[260603]: 2025-10-02 09:45:41.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:41 compute-0 sudo[470271]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:45:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:45:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:45:41 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:45:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:45:42 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:45:42 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 7c20eede-9cee-4d98-a756-79d4da7e38f5 does not exist
Oct 02 09:45:42 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 1e82b8dd-cab2-4000-8297-a6b5e2e2a583 does not exist
Oct 02 09:45:42 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 0cb5b790-61e4-4ef7-b846-a1c1b0af5f05 does not exist
Oct 02 09:45:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:45:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:45:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:45:42 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:45:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:45:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:45:42 compute-0 sudo[470328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:42 compute-0 sudo[470328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:42 compute-0 sudo[470328]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:42 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:45:42 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:45:42 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:45:42 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:45:42 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:45:42 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:45:42 compute-0 sudo[470353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:45:42 compute-0 sudo[470353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:42 compute-0 sudo[470353]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:42 compute-0 sudo[470378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:42 compute-0 sudo[470378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:42 compute-0 sudo[470378]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:42 compute-0 sudo[470403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:45:42 compute-0 sudo[470403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:42 compute-0 podman[470468]: 2025-10-02 09:45:42.747810267 +0000 UTC m=+0.024675272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:45:42 compute-0 podman[470468]: 2025-10-02 09:45:42.85935328 +0000 UTC m=+0.136218265 container create b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:45:42 compute-0 systemd[1]: Started libpod-conmon-b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d.scope.
Oct 02 09:45:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:45:42 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:43 compute-0 podman[470468]: 2025-10-02 09:45:43.018608464 +0000 UTC m=+0.295473469 container init b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:45:43 compute-0 podman[470468]: 2025-10-02 09:45:43.028135902 +0000 UTC m=+0.305000887 container start b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:45:43 compute-0 pedantic_jennings[470485]: 167 167
Oct 02 09:45:43 compute-0 systemd[1]: libpod-b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d.scope: Deactivated successfully.
Oct 02 09:45:43 compute-0 podman[470468]: 2025-10-02 09:45:43.077413871 +0000 UTC m=+0.354278856 container attach b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 09:45:43 compute-0 podman[470468]: 2025-10-02 09:45:43.08090542 +0000 UTC m=+0.357770415 container died b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 09:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9bf5336a7b3b3286eb8e999865463715b57574488801f89586c51700114e1fd-merged.mount: Deactivated successfully.
Oct 02 09:45:43 compute-0 ceph-mon[74477]: pgmap v3843: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:43 compute-0 podman[470468]: 2025-10-02 09:45:43.561568344 +0000 UTC m=+0.838433369 container remove b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:45:43 compute-0 systemd[1]: libpod-conmon-b2181c7d696c68a3160022154ef826ea7d01a80d60c93cbef705e29a1b2ad95d.scope: Deactivated successfully.
Oct 02 09:45:43 compute-0 podman[470512]: 2025-10-02 09:45:43.829859754 +0000 UTC m=+0.088253478 container create 28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:45:43 compute-0 podman[470512]: 2025-10-02 09:45:43.775981891 +0000 UTC m=+0.034375675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:45:43 compute-0 systemd[1]: Started libpod-conmon-28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf.scope.
Oct 02 09:45:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcd393a2bc368ca41de4415062968b53484289d682d0cdb09ce25df4ebb301f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcd393a2bc368ca41de4415062968b53484289d682d0cdb09ce25df4ebb301f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcd393a2bc368ca41de4415062968b53484289d682d0cdb09ce25df4ebb301f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcd393a2bc368ca41de4415062968b53484289d682d0cdb09ce25df4ebb301f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcd393a2bc368ca41de4415062968b53484289d682d0cdb09ce25df4ebb301f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:44 compute-0 podman[470512]: 2025-10-02 09:45:44.000615307 +0000 UTC m=+0.259009111 container init 28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:45:44 compute-0 podman[470512]: 2025-10-02 09:45:44.009114812 +0000 UTC m=+0.267508556 container start 28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:45:44 compute-0 podman[470512]: 2025-10-02 09:45:44.096666647 +0000 UTC m=+0.355060411 container attach 28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:45:44 compute-0 nova_compute[260603]: 2025-10-02 09:45:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:44 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:45 compute-0 clever_dewdney[470529]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:45:45 compute-0 clever_dewdney[470529]: --> relative data size: 1.0
Oct 02 09:45:45 compute-0 clever_dewdney[470529]: --> All data devices are unavailable
Oct 02 09:45:45 compute-0 systemd[1]: libpod-28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf.scope: Deactivated successfully.
Oct 02 09:45:45 compute-0 systemd[1]: libpod-28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf.scope: Consumed 1.071s CPU time.
Oct 02 09:45:45 compute-0 podman[470512]: 2025-10-02 09:45:45.140389007 +0000 UTC m=+1.398782791 container died 28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct 02 09:45:45 compute-0 ceph-mon[74477]: pgmap v3844: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:45 compute-0 nova_compute[260603]: 2025-10-02 09:45:45.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bcd393a2bc368ca41de4415062968b53484289d682d0cdb09ce25df4ebb301f-merged.mount: Deactivated successfully.
Oct 02 09:45:45 compute-0 podman[470565]: 2025-10-02 09:45:45.428140665 +0000 UTC m=+0.245890901 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 09:45:45 compute-0 podman[470558]: 2025-10-02 09:45:45.452279199 +0000 UTC m=+0.272109350 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:45:45 compute-0 podman[470512]: 2025-10-02 09:45:45.460989211 +0000 UTC m=+1.719382965 container remove 28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:45:45 compute-0 systemd[1]: libpod-conmon-28fd07347d6d3ea0d7f92fc7344c9cead5e2ca3158d61f25a5a0b20920fddbaf.scope: Deactivated successfully.
Oct 02 09:45:45 compute-0 sudo[470403]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:45 compute-0 sudo[470612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:45 compute-0 sudo[470612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:45 compute-0 sudo[470612]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:45 compute-0 sudo[470637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:45:45 compute-0 sudo[470637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:45 compute-0 sudo[470637]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:45 compute-0 sudo[470662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:45 compute-0 sudo[470662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:45 compute-0 sudo[470662]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:45 compute-0 sudo[470687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:45:45 compute-0 sudo[470687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:46 compute-0 podman[470754]: 2025-10-02 09:45:46.202043448 +0000 UTC m=+0.059579593 container create 70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:45:46 compute-0 podman[470754]: 2025-10-02 09:45:46.166989853 +0000 UTC m=+0.024526018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:45:46 compute-0 systemd[1]: Started libpod-conmon-70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04.scope.
Oct 02 09:45:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:45:46 compute-0 podman[470754]: 2025-10-02 09:45:46.32411994 +0000 UTC m=+0.181656105 container init 70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:45:46 compute-0 podman[470754]: 2025-10-02 09:45:46.332980427 +0000 UTC m=+0.190516602 container start 70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:45:46 compute-0 podman[470754]: 2025-10-02 09:45:46.338526421 +0000 UTC m=+0.196062616 container attach 70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:45:46 compute-0 bold_hopper[470770]: 167 167
Oct 02 09:45:46 compute-0 systemd[1]: libpod-70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04.scope: Deactivated successfully.
Oct 02 09:45:46 compute-0 podman[470754]: 2025-10-02 09:45:46.340849843 +0000 UTC m=+0.198386018 container died 70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:45:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-69a756242ef30e3a17d7146c2d007bb70080dfa76a252e6b74db3c4d32d581c8-merged.mount: Deactivated successfully.
Oct 02 09:45:46 compute-0 podman[470754]: 2025-10-02 09:45:46.38559164 +0000 UTC m=+0.243127815 container remove 70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 09:45:46 compute-0 systemd[1]: libpod-conmon-70208f9614029e04c7f6f802590344df21262b15534c2387383b93129c6d3c04.scope: Deactivated successfully.
Oct 02 09:45:46 compute-0 nova_compute[260603]: 2025-10-02 09:45:46.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:46 compute-0 podman[470794]: 2025-10-02 09:45:46.577314249 +0000 UTC m=+0.060837302 container create 53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banzai, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:45:46 compute-0 systemd[1]: Started libpod-conmon-53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a.scope.
Oct 02 09:45:46 compute-0 podman[470794]: 2025-10-02 09:45:46.556469847 +0000 UTC m=+0.039992920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:45:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aeb23071c6f948f94fa00e718fd696b2b99184643964d568c833c5448e5354/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aeb23071c6f948f94fa00e718fd696b2b99184643964d568c833c5448e5354/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aeb23071c6f948f94fa00e718fd696b2b99184643964d568c833c5448e5354/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aeb23071c6f948f94fa00e718fd696b2b99184643964d568c833c5448e5354/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:46 compute-0 podman[470794]: 2025-10-02 09:45:46.702392035 +0000 UTC m=+0.185915118 container init 53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:45:46 compute-0 podman[470794]: 2025-10-02 09:45:46.717442166 +0000 UTC m=+0.200965229 container start 53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banzai, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:45:46 compute-0 podman[470794]: 2025-10-02 09:45:46.721254625 +0000 UTC m=+0.204777698 container attach 53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:45:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:47 compute-0 ceph-mon[74477]: pgmap v3845: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]: {
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:     "0": [
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:         {
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "devices": [
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "/dev/loop3"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             ],
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_name": "ceph_lv0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_size": "21470642176",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "name": "ceph_lv0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "tags": {
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cluster_name": "ceph",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.crush_device_class": "",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.encrypted": "0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osd_id": "0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.type": "block",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.vdo": "0"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             },
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "type": "block",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "vg_name": "ceph_vg0"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:         }
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:     ],
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:     "1": [
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:         {
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "devices": [
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "/dev/loop4"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             ],
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_name": "ceph_lv1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_size": "21470642176",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "name": "ceph_lv1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "tags": {
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cluster_name": "ceph",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.crush_device_class": "",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.encrypted": "0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osd_id": "1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.type": "block",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.vdo": "0"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             },
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "type": "block",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "vg_name": "ceph_vg1"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:         }
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:     ],
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:     "2": [
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:         {
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "devices": [
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "/dev/loop5"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             ],
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_name": "ceph_lv2",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_size": "21470642176",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "name": "ceph_lv2",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "tags": {
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.cluster_name": "ceph",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.crush_device_class": "",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.encrypted": "0",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osd_id": "2",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.type": "block",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:                 "ceph.vdo": "0"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             },
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "type": "block",
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:             "vg_name": "ceph_vg2"
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:         }
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]:     ]
Oct 02 09:45:47 compute-0 relaxed_banzai[470810]: }
Oct 02 09:45:47 compute-0 systemd[1]: libpod-53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a.scope: Deactivated successfully.
Oct 02 09:45:47 compute-0 podman[470794]: 2025-10-02 09:45:47.483620527 +0000 UTC m=+0.967143610 container died 53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banzai, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:45:47 compute-0 nova_compute[260603]: 2025-10-02 09:45:47.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-48aeb23071c6f948f94fa00e718fd696b2b99184643964d568c833c5448e5354-merged.mount: Deactivated successfully.
Oct 02 09:45:47 compute-0 podman[470794]: 2025-10-02 09:45:47.546414149 +0000 UTC m=+1.029937202 container remove 53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banzai, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 09:45:47 compute-0 systemd[1]: libpod-conmon-53e07ce1b54e3a23068be1361983aed75c0c645b9538a59950ecb0094c2f712a.scope: Deactivated successfully.
Oct 02 09:45:47 compute-0 sudo[470687]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:47 compute-0 sudo[470831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:47 compute-0 sudo[470831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:47 compute-0 sudo[470831]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:47 compute-0 sudo[470856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:45:47 compute-0 sudo[470856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:47 compute-0 sudo[470856]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:47 compute-0 sudo[470881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:47 compute-0 sudo[470881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:47 compute-0 sudo[470881]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:47 compute-0 sudo[470906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:45:47 compute-0 sudo[470906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:48 compute-0 podman[470972]: 2025-10-02 09:45:48.235084799 +0000 UTC m=+0.052431269 container create 1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 09:45:48 compute-0 systemd[1]: Started libpod-conmon-1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1.scope.
Oct 02 09:45:48 compute-0 podman[470972]: 2025-10-02 09:45:48.207395494 +0000 UTC m=+0.024742014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:45:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:45:48 compute-0 podman[470972]: 2025-10-02 09:45:48.338815279 +0000 UTC m=+0.156161779 container init 1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 09:45:48 compute-0 podman[470972]: 2025-10-02 09:45:48.348729488 +0000 UTC m=+0.166075958 container start 1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:45:48 compute-0 podman[470972]: 2025-10-02 09:45:48.353735105 +0000 UTC m=+0.171081605 container attach 1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:45:48 compute-0 stupefied_snyder[470989]: 167 167
Oct 02 09:45:48 compute-0 systemd[1]: libpod-1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1.scope: Deactivated successfully.
Oct 02 09:45:48 compute-0 podman[470972]: 2025-10-02 09:45:48.356322866 +0000 UTC m=+0.173669376 container died 1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c742a3e33c9ee99c94044ad7afbb31dfb690dac1a320c07aad601b368e9decf-merged.mount: Deactivated successfully.
Oct 02 09:45:48 compute-0 podman[470972]: 2025-10-02 09:45:48.420624794 +0000 UTC m=+0.237971304 container remove 1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_snyder, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 02 09:45:48 compute-0 systemd[1]: libpod-conmon-1fb1d15f3bbbb8cba79b280cbb6bc64251cd573264286ce0c7970941d25967b1.scope: Deactivated successfully.
Oct 02 09:45:48 compute-0 podman[471013]: 2025-10-02 09:45:48.639029246 +0000 UTC m=+0.073460046 container create d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:45:48 compute-0 podman[471013]: 2025-10-02 09:45:48.608493172 +0000 UTC m=+0.042924052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:45:48 compute-0 systemd[1]: Started libpod-conmon-d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78.scope.
Oct 02 09:45:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2807dee043af28522bc9a31f26c09a7dd7d8e4f112058ae3d44c6053917251/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2807dee043af28522bc9a31f26c09a7dd7d8e4f112058ae3d44c6053917251/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2807dee043af28522bc9a31f26c09a7dd7d8e4f112058ae3d44c6053917251/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2807dee043af28522bc9a31f26c09a7dd7d8e4f112058ae3d44c6053917251/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:45:48 compute-0 podman[471013]: 2025-10-02 09:45:48.809679006 +0000 UTC m=+0.244109876 container init d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wright, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:45:48 compute-0 podman[471013]: 2025-10-02 09:45:48.821849576 +0000 UTC m=+0.256280366 container start d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 02 09:45:48 compute-0 podman[471013]: 2025-10-02 09:45:48.838189976 +0000 UTC m=+0.272620876 container attach d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 09:45:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:49 compute-0 ceph-mon[74477]: pgmap v3846: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:49 compute-0 affectionate_wright[471029]: {
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "osd_id": 2,
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "type": "bluestore"
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:     },
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "osd_id": 1,
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "type": "bluestore"
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:     },
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "osd_id": 0,
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:         "type": "bluestore"
Oct 02 09:45:49 compute-0 affectionate_wright[471029]:     }
Oct 02 09:45:49 compute-0 affectionate_wright[471029]: }
Oct 02 09:45:49 compute-0 systemd[1]: libpod-d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78.scope: Deactivated successfully.
Oct 02 09:45:49 compute-0 podman[471013]: 2025-10-02 09:45:49.805741697 +0000 UTC m=+1.240172497 container died d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 09:45:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf2807dee043af28522bc9a31f26c09a7dd7d8e4f112058ae3d44c6053917251-merged.mount: Deactivated successfully.
Oct 02 09:45:50 compute-0 podman[471013]: 2025-10-02 09:45:50.055371344 +0000 UTC m=+1.489802174 container remove d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wright, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:45:50 compute-0 podman[471063]: 2025-10-02 09:45:50.062951401 +0000 UTC m=+0.223808571 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 09:45:50 compute-0 systemd[1]: libpod-conmon-d16509ecdc194a27949dcda02f59270114388808b8713d27ba7cdef5b4b13f78.scope: Deactivated successfully.
Oct 02 09:45:50 compute-0 podman[471074]: 2025-10-02 09:45:50.080405917 +0000 UTC m=+0.240523974 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:45:50 compute-0 sudo[470906]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:45:50 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:45:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:45:50 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:45:50 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e5ceea49-db01-4423-9073-535299143476 does not exist
Oct 02 09:45:50 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev afb9e4ba-a6cb-46f9-9f42-9dc76d054fbe does not exist
Oct 02 09:45:50 compute-0 sudo[471114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:45:50 compute-0 sudo[471114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:50 compute-0 sudo[471114]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:50 compute-0 nova_compute[260603]: 2025-10-02 09:45:50.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:50 compute-0 sudo[471139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:45:50 compute-0 sudo[471139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:45:50 compute-0 sudo[471139]: pam_unix(sudo:session): session closed for user root
Oct 02 09:45:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:45:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:45:51 compute-0 ceph-mon[74477]: pgmap v3847: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:51 compute-0 nova_compute[260603]: 2025-10-02 09:45:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:53 compute-0 ceph-mon[74477]: pgmap v3848: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:54 compute-0 nova_compute[260603]: 2025-10-02 09:45:54.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:45:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:55 compute-0 ceph-mon[74477]: pgmap v3849: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:55 compute-0 nova_compute[260603]: 2025-10-02 09:45:55.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:45:56 compute-0 nova_compute[260603]: 2025-10-02 09:45:56.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:45:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:57 compute-0 ceph-mon[74477]: pgmap v3850: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:45:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:45:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:45:59 compute-0 ceph-mon[74477]: pgmap v3851: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:00 compute-0 nova_compute[260603]: 2025-10-02 09:46:00.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:01 compute-0 ceph-mon[74477]: pgmap v3852: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:01 compute-0 nova_compute[260603]: 2025-10-02 09:46:01.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:03 compute-0 ceph-mon[74477]: pgmap v3853: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:03 compute-0 nova_compute[260603]: 2025-10-02 09:46:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:03 compute-0 nova_compute[260603]: 2025-10-02 09:46:03.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:46:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:05 compute-0 ceph-mon[74477]: pgmap v3854: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:05 compute-0 nova_compute[260603]: 2025-10-02 09:46:05.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:06 compute-0 nova_compute[260603]: 2025-10-02 09:46:06.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:07 compute-0 ceph-mon[74477]: pgmap v3855: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:09 compute-0 ceph-mon[74477]: pgmap v3856: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:10 compute-0 nova_compute[260603]: 2025-10-02 09:46:10.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:11 compute-0 ceph-mon[74477]: pgmap v3857: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:11 compute-0 nova_compute[260603]: 2025-10-02 09:46:11.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:13 compute-0 ceph-mon[74477]: pgmap v3858: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:15 compute-0 ceph-mon[74477]: pgmap v3859: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:15 compute-0 nova_compute[260603]: 2025-10-02 09:46:15.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:16 compute-0 podman[471165]: 2025-10-02 09:46:16.038207127 +0000 UTC m=+0.104802355 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:46:16 compute-0 podman[471164]: 2025-10-02 09:46:16.0818546 +0000 UTC m=+0.149760219 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:46:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:16 compute-0 nova_compute[260603]: 2025-10-02 09:46:16.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:17 compute-0 ceph-mon[74477]: pgmap v3860: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:19 compute-0 ceph-mon[74477]: pgmap v3861: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:20 compute-0 nova_compute[260603]: 2025-10-02 09:46:20.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:21 compute-0 podman[471212]: 2025-10-02 09:46:20.999919255 +0000 UTC m=+0.065403264 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:46:21 compute-0 podman[471213]: 2025-10-02 09:46:21.000318818 +0000 UTC m=+0.062115282 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 09:46:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:21 compute-0 ceph-mon[74477]: pgmap v3862: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:21 compute-0 nova_compute[260603]: 2025-10-02 09:46:21.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:46:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2221414947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:46:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:46:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2221414947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:46:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2221414947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:46:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/2221414947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:46:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:23 compute-0 ceph-mon[74477]: pgmap v3863: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:25 compute-0 ceph-mon[74477]: pgmap v3864: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:25 compute-0 nova_compute[260603]: 2025-10-02 09:46:25.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:26 compute-0 nova_compute[260603]: 2025-10-02 09:46:26.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:27 compute-0 ceph-mon[74477]: pgmap v3865: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:46:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:46:28
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.control']
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:46:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:46:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:29 compute-0 ceph-mon[74477]: pgmap v3866: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:30 compute-0 nova_compute[260603]: 2025-10-02 09:46:30.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:31 compute-0 ceph-mon[74477]: pgmap v3867: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:31 compute-0 nova_compute[260603]: 2025-10-02 09:46:31.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:33 compute-0 ceph-mon[74477]: pgmap v3868: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:33 compute-0 nova_compute[260603]: 2025-10-02 09:46:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:46:34.886 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:46:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:46:34.886 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:46:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:46:34.886 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:46:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:35 compute-0 ceph-mon[74477]: pgmap v3869: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:35 compute-0 nova_compute[260603]: 2025-10-02 09:46:35.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:36 compute-0 nova_compute[260603]: 2025-10-02 09:46:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:36 compute-0 nova_compute[260603]: 2025-10-02 09:46:36.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:36 compute-0 nova_compute[260603]: 2025-10-02 09:46:36.639 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:46:36 compute-0 nova_compute[260603]: 2025-10-02 09:46:36.639 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:46:36 compute-0 nova_compute[260603]: 2025-10-02 09:46:36.639 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:46:36 compute-0 nova_compute[260603]: 2025-10-02 09:46:36.639 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:46:36 compute-0 nova_compute[260603]: 2025-10-02 09:46:36.640 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:46:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:46:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431678478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.076 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:46:37 compute-0 ceph-mon[74477]: pgmap v3870: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1431678478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.252 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.255 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3513MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.255 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.256 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.479 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.480 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.502 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:46:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:46:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142485477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.942 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:46:37 compute-0 nova_compute[260603]: 2025-10-02 09:46:37.950 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:46:38 compute-0 nova_compute[260603]: 2025-10-02 09:46:38.007 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:46:38 compute-0 nova_compute[260603]: 2025-10-02 09:46:38.010 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:46:38 compute-0 nova_compute[260603]: 2025-10-02 09:46:38.010 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:46:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1142485477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:46:39 compute-0 nova_compute[260603]: 2025-10-02 09:46:39.010 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:39 compute-0 ceph-mon[74477]: pgmap v3871: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:46:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:46:40 compute-0 rsyslogd[1004]: imjournal: 15066 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 02 09:46:40 compute-0 nova_compute[260603]: 2025-10-02 09:46:40.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:40 compute-0 nova_compute[260603]: 2025-10-02 09:46:40.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:40 compute-0 nova_compute[260603]: 2025-10-02 09:46:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:41 compute-0 ceph-mon[74477]: pgmap v3872: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:46:41 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 80K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1370 writes, 6517 keys, 1370 commit groups, 1.0 writes per commit group, ingest: 8.76 MB, 0.01 MB/s
                                           Interval WAL: 1370 writes, 1370 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     67.3      1.47              0.38        60    0.024       0      0       0.0       0.0
                                             L6      1/0    9.21 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2    123.3    105.0      4.90              1.84        59    0.083    423K    31K       0.0       0.0
                                            Sum      1/0    9.21 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     94.9     96.4      6.37              2.22       119    0.054    423K    31K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.4     73.1     72.8      0.88              0.20        12    0.073     58K   3005       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    123.3    105.0      4.90              1.84        59    0.083    423K    31K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     67.5      1.46              0.38        59    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.096, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.60 GB write, 0.09 MB/s write, 0.59 GB read, 0.08 MB/s read, 6.4 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 69.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000524 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4753,66.15 MB,21.76%) FilterBlock(120,1.15 MB,0.376646%) IndexBlock(120,1.83 MB,0.602928%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 09:46:41 compute-0 nova_compute[260603]: 2025-10-02 09:46:41.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:42 compute-0 nova_compute[260603]: 2025-10-02 09:46:42.284 2 DEBUG oslo_concurrency.processutils [None req-a6a311c5-f6fe-4054-b703-3b4765cc45ab bc7f869da0c24323b627eb876eb78945 db4e6e87e4854aa8b884db24f9410884 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:46:42 compute-0 nova_compute[260603]: 2025-10-02 09:46:42.323 2 DEBUG oslo_concurrency.processutils [None req-a6a311c5-f6fe-4054-b703-3b4765cc45ab bc7f869da0c24323b627eb876eb78945 db4e6e87e4854aa8b884db24f9410884 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:46:42 compute-0 nova_compute[260603]: 2025-10-02 09:46:42.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:42 compute-0 nova_compute[260603]: 2025-10-02 09:46:42.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:46:42 compute-0 nova_compute[260603]: 2025-10-02 09:46:42.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:46:42 compute-0 nova_compute[260603]: 2025-10-02 09:46:42.648 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:46:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:43 compute-0 ceph-mon[74477]: pgmap v3873: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:45 compute-0 ceph-mon[74477]: pgmap v3874: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:45 compute-0 nova_compute[260603]: 2025-10-02 09:46:45.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:45 compute-0 nova_compute[260603]: 2025-10-02 09:46:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:46 compute-0 nova_compute[260603]: 2025-10-02 09:46:46.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:46 compute-0 podman[471297]: 2025-10-02 09:46:46.993086149 +0000 UTC m=+0.053846743 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:46:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:47 compute-0 podman[471296]: 2025-10-02 09:46:47.033490011 +0000 UTC m=+0.095509524 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 09:46:47 compute-0 ceph-mon[74477]: pgmap v3875: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:49 compute-0 ceph-mon[74477]: pgmap v3876: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:46:50.374 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 09:46:50 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:46:50.374 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 09:46:50 compute-0 nova_compute[260603]: 2025-10-02 09:46:50.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:50 compute-0 sudo[471343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:50 compute-0 sudo[471343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:50 compute-0 sudo[471343]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:50 compute-0 nova_compute[260603]: 2025-10-02 09:46:50.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:50 compute-0 sudo[471368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:46:50 compute-0 sudo[471368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:50 compute-0 sudo[471368]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:50 compute-0 sudo[471393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:50 compute-0 sudo[471393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:50 compute-0 sudo[471393]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:50 compute-0 sudo[471418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:46:50 compute-0 sudo[471418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:51 compute-0 sudo[471418]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:46:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:46:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:46:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:46:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 627f986b-76e2-41d2-8256-c4069866b0c0 does not exist
Oct 02 09:46:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 98aee177-b757-49b3-9496-03bbef2db6b5 does not exist
Oct 02 09:46:51 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 90c80edb-b1c4-4633-bcd2-786e7959b46e does not exist
Oct 02 09:46:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:46:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:46:51 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:46:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: pgmap v3877: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:46:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:46:51 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:46:51 compute-0 sudo[471474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:51 compute-0 sudo[471474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:51 compute-0 sudo[471474]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:51 compute-0 sudo[471511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:46:51 compute-0 sudo[471511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:51 compute-0 sudo[471511]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:51 compute-0 podman[471499]: 2025-10-02 09:46:51.244155879 +0000 UTC m=+0.065639102 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid)
Oct 02 09:46:51 compute-0 podman[471498]: 2025-10-02 09:46:51.266646921 +0000 UTC m=+0.088277998 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 09:46:51 compute-0 sudo[471564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:51 compute-0 sudo[471564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:51 compute-0 sudo[471564]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:51 compute-0 sudo[471589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:46:51 compute-0 sudo[471589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:51 compute-0 nova_compute[260603]: 2025-10-02 09:46:51.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:51 compute-0 podman[471653]: 2025-10-02 09:46:51.782141103 +0000 UTC m=+0.049287161 container create 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 09:46:51 compute-0 systemd[1]: Started libpod-conmon-77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb.scope.
Oct 02 09:46:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:46:51 compute-0 podman[471653]: 2025-10-02 09:46:51.759304469 +0000 UTC m=+0.026450567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:46:51 compute-0 podman[471653]: 2025-10-02 09:46:51.869930825 +0000 UTC m=+0.137076913 container init 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 09:46:51 compute-0 podman[471653]: 2025-10-02 09:46:51.878919776 +0000 UTC m=+0.146065824 container start 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 09:46:51 compute-0 podman[471653]: 2025-10-02 09:46:51.881815606 +0000 UTC m=+0.148961684 container attach 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:46:51 compute-0 optimistic_gauss[471669]: 167 167
Oct 02 09:46:51 compute-0 systemd[1]: libpod-77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb.scope: Deactivated successfully.
Oct 02 09:46:51 compute-0 conmon[471669]: conmon 77a8a5415d4ca880c4bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb.scope/container/memory.events
Oct 02 09:46:51 compute-0 podman[471653]: 2025-10-02 09:46:51.887280187 +0000 UTC m=+0.154426245 container died 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 09:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-046cb8a83cd8dedf312c70306076d70717da068836e54d2c071984e3b5da90c9-merged.mount: Deactivated successfully.
Oct 02 09:46:51 compute-0 podman[471653]: 2025-10-02 09:46:51.922729934 +0000 UTC m=+0.189875992 container remove 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:46:51 compute-0 systemd[1]: libpod-conmon-77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb.scope: Deactivated successfully.
Oct 02 09:46:52 compute-0 podman[471694]: 2025-10-02 09:46:52.072467731 +0000 UTC m=+0.039514045 container create 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 09:46:52 compute-0 systemd[1]: Started libpod-conmon-30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455.scope.
Oct 02 09:46:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:46:52 compute-0 podman[471694]: 2025-10-02 09:46:52.055955365 +0000 UTC m=+0.023001699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:52 compute-0 podman[471694]: 2025-10-02 09:46:52.164505166 +0000 UTC m=+0.131551500 container init 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 02 09:46:52 compute-0 podman[471694]: 2025-10-02 09:46:52.172840416 +0000 UTC m=+0.139886730 container start 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:46:52 compute-0 podman[471694]: 2025-10-02 09:46:52.176048026 +0000 UTC m=+0.143094340 container attach 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:46:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:53 compute-0 ceph-mon[74477]: pgmap v3878: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:53 compute-0 epic_gould[471710]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:46:53 compute-0 epic_gould[471710]: --> relative data size: 1.0
Oct 02 09:46:53 compute-0 epic_gould[471710]: --> All data devices are unavailable
Oct 02 09:46:53 compute-0 systemd[1]: libpod-30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455.scope: Deactivated successfully.
Oct 02 09:46:53 compute-0 systemd[1]: libpod-30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455.scope: Consumed 1.048s CPU time.
Oct 02 09:46:53 compute-0 podman[471694]: 2025-10-02 09:46:53.273742432 +0000 UTC m=+1.240788796 container died 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 09:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377-merged.mount: Deactivated successfully.
Oct 02 09:46:53 compute-0 podman[471694]: 2025-10-02 09:46:53.341087406 +0000 UTC m=+1.308133740 container remove 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:46:53 compute-0 systemd[1]: libpod-conmon-30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455.scope: Deactivated successfully.
Oct 02 09:46:53 compute-0 sudo[471589]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:53 compute-0 sudo[471753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:53 compute-0 sudo[471753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:53 compute-0 sudo[471753]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:53 compute-0 sudo[471778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:46:53 compute-0 sudo[471778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:53 compute-0 sudo[471778]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:53 compute-0 sudo[471803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:53 compute-0 sudo[471803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:53 compute-0 sudo[471803]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:53 compute-0 sudo[471828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:46:53 compute-0 sudo[471828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:54 compute-0 podman[471896]: 2025-10-02 09:46:54.100544117 +0000 UTC m=+0.037973947 container create 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:46:54 compute-0 systemd[1]: Started libpod-conmon-9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27.scope.
Oct 02 09:46:54 compute-0 podman[471896]: 2025-10-02 09:46:54.083836755 +0000 UTC m=+0.021266585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:46:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:46:54 compute-0 podman[471896]: 2025-10-02 09:46:54.206202907 +0000 UTC m=+0.143632747 container init 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 09:46:54 compute-0 podman[471896]: 2025-10-02 09:46:54.218647016 +0000 UTC m=+0.156076826 container start 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 09:46:54 compute-0 podman[471896]: 2025-10-02 09:46:54.222152015 +0000 UTC m=+0.159581825 container attach 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 09:46:54 compute-0 reverent_darwin[471912]: 167 167
Oct 02 09:46:54 compute-0 systemd[1]: libpod-9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27.scope: Deactivated successfully.
Oct 02 09:46:54 compute-0 podman[471896]: 2025-10-02 09:46:54.225003844 +0000 UTC m=+0.162433654 container died 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b17dfa2ce81cd694a479585a569ba7c62f074412efc0100bef3821039b5b8208-merged.mount: Deactivated successfully.
Oct 02 09:46:54 compute-0 podman[471896]: 2025-10-02 09:46:54.259558354 +0000 UTC m=+0.196988174 container remove 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 09:46:54 compute-0 systemd[1]: libpod-conmon-9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27.scope: Deactivated successfully.
Oct 02 09:46:54 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:46:54.377 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 09:46:54 compute-0 podman[471936]: 2025-10-02 09:46:54.44480584 +0000 UTC m=+0.053189633 container create 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:46:54 compute-0 systemd[1]: Started libpod-conmon-22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8.scope.
Oct 02 09:46:54 compute-0 podman[471936]: 2025-10-02 09:46:54.423590508 +0000 UTC m=+0.031974331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:46:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:54 compute-0 podman[471936]: 2025-10-02 09:46:54.558648775 +0000 UTC m=+0.167032668 container init 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:46:54 compute-0 podman[471936]: 2025-10-02 09:46:54.568781452 +0000 UTC m=+0.177165235 container start 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:46:54 compute-0 podman[471936]: 2025-10-02 09:46:54.572296142 +0000 UTC m=+0.180680025 container attach 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:46:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:55 compute-0 ceph-mon[74477]: pgmap v3879: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:55 compute-0 cool_leakey[471953]: {
Oct 02 09:46:55 compute-0 cool_leakey[471953]:     "0": [
Oct 02 09:46:55 compute-0 cool_leakey[471953]:         {
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "devices": [
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "/dev/loop3"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             ],
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_name": "ceph_lv0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_size": "21470642176",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "name": "ceph_lv0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "tags": {
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cluster_name": "ceph",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.crush_device_class": "",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.encrypted": "0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osd_id": "0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.type": "block",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.vdo": "0"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             },
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "type": "block",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "vg_name": "ceph_vg0"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:         }
Oct 02 09:46:55 compute-0 cool_leakey[471953]:     ],
Oct 02 09:46:55 compute-0 cool_leakey[471953]:     "1": [
Oct 02 09:46:55 compute-0 cool_leakey[471953]:         {
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "devices": [
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "/dev/loop4"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             ],
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_name": "ceph_lv1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_size": "21470642176",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "name": "ceph_lv1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "tags": {
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cluster_name": "ceph",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.crush_device_class": "",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.encrypted": "0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osd_id": "1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.type": "block",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.vdo": "0"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             },
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "type": "block",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "vg_name": "ceph_vg1"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:         }
Oct 02 09:46:55 compute-0 cool_leakey[471953]:     ],
Oct 02 09:46:55 compute-0 cool_leakey[471953]:     "2": [
Oct 02 09:46:55 compute-0 cool_leakey[471953]:         {
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "devices": [
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "/dev/loop5"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             ],
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_name": "ceph_lv2",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_size": "21470642176",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "name": "ceph_lv2",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "tags": {
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.cluster_name": "ceph",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.crush_device_class": "",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.encrypted": "0",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osd_id": "2",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.type": "block",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:                 "ceph.vdo": "0"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             },
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "type": "block",
Oct 02 09:46:55 compute-0 cool_leakey[471953]:             "vg_name": "ceph_vg2"
Oct 02 09:46:55 compute-0 cool_leakey[471953]:         }
Oct 02 09:46:55 compute-0 cool_leakey[471953]:     ]
Oct 02 09:46:55 compute-0 cool_leakey[471953]: }
Oct 02 09:46:55 compute-0 systemd[1]: libpod-22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8.scope: Deactivated successfully.
Oct 02 09:46:55 compute-0 podman[471936]: 2025-10-02 09:46:55.342602602 +0000 UTC m=+0.950986395 container died 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c-merged.mount: Deactivated successfully.
Oct 02 09:46:55 compute-0 podman[471936]: 2025-10-02 09:46:55.403635358 +0000 UTC m=+1.012019151 container remove 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:46:55 compute-0 systemd[1]: libpod-conmon-22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8.scope: Deactivated successfully.
Oct 02 09:46:55 compute-0 nova_compute[260603]: 2025-10-02 09:46:55.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:55 compute-0 sudo[471828]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:55 compute-0 sudo[471975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:55 compute-0 sudo[471975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:55 compute-0 sudo[471975]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:55 compute-0 sudo[472000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:46:55 compute-0 sudo[472000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:55 compute-0 sudo[472000]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:55 compute-0 sudo[472025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:55 compute-0 sudo[472025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:55 compute-0 sudo[472025]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:55 compute-0 sudo[472050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:46:55 compute-0 sudo[472050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:55 compute-0 podman[472115]: 2025-10-02 09:46:55.982992485 +0000 UTC m=+0.037253945 container create 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 09:46:56 compute-0 systemd[1]: Started libpod-conmon-33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b.scope.
Oct 02 09:46:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:46:56 compute-0 podman[472115]: 2025-10-02 09:46:55.968112989 +0000 UTC m=+0.022374469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:46:56 compute-0 podman[472115]: 2025-10-02 09:46:56.117691322 +0000 UTC m=+0.171952792 container init 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:46:56 compute-0 podman[472115]: 2025-10-02 09:46:56.125455494 +0000 UTC m=+0.179716954 container start 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:46:56 compute-0 modest_pasteur[472131]: 167 167
Oct 02 09:46:56 compute-0 systemd[1]: libpod-33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b.scope: Deactivated successfully.
Oct 02 09:46:56 compute-0 conmon[472131]: conmon 33571b9efd492843dfae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b.scope/container/memory.events
Oct 02 09:46:56 compute-0 podman[472115]: 2025-10-02 09:46:56.171204523 +0000 UTC m=+0.225465983 container attach 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:46:56 compute-0 podman[472115]: 2025-10-02 09:46:56.171827503 +0000 UTC m=+0.226088983 container died 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fceb72004a65632ab61849edbaa4ae04ca26642dbb71e5079b694d05f6849c6-merged.mount: Deactivated successfully.
Oct 02 09:46:56 compute-0 podman[472115]: 2025-10-02 09:46:56.251605454 +0000 UTC m=+0.305866914 container remove 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:46:56 compute-0 systemd[1]: libpod-conmon-33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b.scope: Deactivated successfully.
Oct 02 09:46:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:46:56 compute-0 podman[472157]: 2025-10-02 09:46:56.400173605 +0000 UTC m=+0.027980325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:46:56 compute-0 nova_compute[260603]: 2025-10-02 09:46:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:46:56 compute-0 nova_compute[260603]: 2025-10-02 09:46:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:46:56 compute-0 podman[472157]: 2025-10-02 09:46:56.58637104 +0000 UTC m=+0.214177770 container create 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:46:56 compute-0 systemd[1]: Started libpod-conmon-142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c.scope.
Oct 02 09:46:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:46:56 compute-0 podman[472157]: 2025-10-02 09:46:56.742717383 +0000 UTC m=+0.370524103 container init 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 09:46:56 compute-0 podman[472157]: 2025-10-02 09:46:56.754044137 +0000 UTC m=+0.381850837 container start 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:46:56 compute-0 podman[472157]: 2025-10-02 09:46:56.844212673 +0000 UTC m=+0.472019403 container attach 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:46:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:57 compute-0 ceph-mon[74477]: pgmap v3880: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:57 compute-0 fervent_khorana[472174]: {
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "osd_id": 2,
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "type": "bluestore"
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:     },
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "osd_id": 1,
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "type": "bluestore"
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:     },
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "osd_id": 0,
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:         "type": "bluestore"
Oct 02 09:46:57 compute-0 fervent_khorana[472174]:     }
Oct 02 09:46:57 compute-0 fervent_khorana[472174]: }
Oct 02 09:46:57 compute-0 systemd[1]: libpod-142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c.scope: Deactivated successfully.
Oct 02 09:46:57 compute-0 conmon[472174]: conmon 142787f91b4ddf2cbda7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c.scope/container/memory.events
Oct 02 09:46:57 compute-0 podman[472157]: 2025-10-02 09:46:57.717111718 +0000 UTC m=+1.344918418 container died 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146-merged.mount: Deactivated successfully.
Oct 02 09:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:46:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:46:58 compute-0 podman[472157]: 2025-10-02 09:46:58.005843966 +0000 UTC m=+1.633650676 container remove 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:46:58 compute-0 systemd[1]: libpod-conmon-142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c.scope: Deactivated successfully.
Oct 02 09:46:58 compute-0 sudo[472050]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:46:58 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:46:58 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:46:58 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:46:58 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 972db762-3493-4804-b9bf-996eeb01c2be does not exist
Oct 02 09:46:58 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev efcb4192-6ccd-4183-afc8-fff0e1c2ff99 does not exist
Oct 02 09:46:58 compute-0 sudo[472219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:46:58 compute-0 sudo[472219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:58 compute-0 sudo[472219]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:58 compute-0 sudo[472244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:46:58 compute-0 sudo[472244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:46:58 compute-0 sudo[472244]: pam_unix(sudo:session): session closed for user root
Oct 02 09:46:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:46:59 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:46:59 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:47:00 compute-0 ceph-mon[74477]: pgmap v3881: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:00 compute-0 nova_compute[260603]: 2025-10-02 09:47:00.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:01 compute-0 ceph-mon[74477]: pgmap v3882: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:01 compute-0 nova_compute[260603]: 2025-10-02 09:47:01.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:03 compute-0 ceph-mon[74477]: pgmap v3883: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:03 compute-0 nova_compute[260603]: 2025-10-02 09:47:03.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:03 compute-0 nova_compute[260603]: 2025-10-02 09:47:03.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:47:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:05 compute-0 ceph-mon[74477]: pgmap v3884: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:05 compute-0 nova_compute[260603]: 2025-10-02 09:47:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:06 compute-0 nova_compute[260603]: 2025-10-02 09:47:06.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:07 compute-0 ceph-mon[74477]: pgmap v3885: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:09 compute-0 ceph-mon[74477]: pgmap v3886: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:10 compute-0 nova_compute[260603]: 2025-10-02 09:47:10.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:11 compute-0 ceph-mon[74477]: pgmap v3887: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:11 compute-0 nova_compute[260603]: 2025-10-02 09:47:11.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:13 compute-0 ceph-mon[74477]: pgmap v3888: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:15 compute-0 ceph-mon[74477]: pgmap v3889: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:15 compute-0 nova_compute[260603]: 2025-10-02 09:47:15.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:16 compute-0 nova_compute[260603]: 2025-10-02 09:47:16.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:17 compute-0 ceph-mon[74477]: pgmap v3890: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:17 compute-0 podman[472271]: 2025-10-02 09:47:17.99052003 +0000 UTC m=+0.055143974 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:47:18 compute-0 podman[472270]: 2025-10-02 09:47:18.0235115 +0000 UTC m=+0.088177676 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 09:47:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:19 compute-0 ceph-mon[74477]: pgmap v3891: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:20 compute-0 nova_compute[260603]: 2025-10-02 09:47:20.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:21 compute-0 ceph-mon[74477]: pgmap v3892: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:21 compute-0 nova_compute[260603]: 2025-10-02 09:47:21.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:22 compute-0 podman[472313]: 2025-10-02 09:47:22.02547856 +0000 UTC m=+0.097314240 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 09:47:22 compute-0 podman[472314]: 2025-10-02 09:47:22.025771199 +0000 UTC m=+0.090069334 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:47:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:47:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4011591992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:47:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:47:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4011591992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:47:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4011591992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:47:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/4011591992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:47:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:23 compute-0 unix_chkpwd[472357]: password check failed for user (root)
Oct 02 09:47:23 compute-0 sshd-session[472355]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.71.248.239  user=root
Oct 02 09:47:23 compute-0 sshd-session[472353]: Connection closed by 77.110.113.94 port 56526 [preauth]
Oct 02 09:47:23 compute-0 ceph-mon[74477]: pgmap v3893: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:25 compute-0 ceph-mon[74477]: pgmap v3894: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:25 compute-0 sshd-session[472355]: Failed password for root from 167.71.248.239 port 56198 ssh2
Oct 02 09:47:25 compute-0 nova_compute[260603]: 2025-10-02 09:47:25.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:25 compute-0 sshd-session[472355]: Connection closed by authenticating user root 167.71.248.239 port 56198 [preauth]
Oct 02 09:47:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:26 compute-0 nova_compute[260603]: 2025-10-02 09:47:26.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:27 compute-0 ceph-mon[74477]: pgmap v3895: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:47:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:47:28
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', '.mgr', 'backups', 'vms', 'cephfs.cephfs.meta']
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:47:28 compute-0 ceph-mgr[74774]: client.0 ms_handle_reset on v2:192.168.122.100:6800/860957497
Oct 02 09:47:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:29 compute-0 ceph-mon[74477]: pgmap v3896: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:30 compute-0 nova_compute[260603]: 2025-10-02 09:47:30.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:31 compute-0 ceph-mon[74477]: pgmap v3897: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:31 compute-0 nova_compute[260603]: 2025-10-02 09:47:31.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:33 compute-0 ceph-mon[74477]: pgmap v3898: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:34 compute-0 nova_compute[260603]: 2025-10-02 09:47:34.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:47:34.887 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:47:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:47:34.887 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:47:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:47:34.887 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:47:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:35 compute-0 ceph-mon[74477]: pgmap v3899: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:35 compute-0 nova_compute[260603]: 2025-10-02 09:47:35.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:36 compute-0 nova_compute[260603]: 2025-10-02 09:47:36.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:37 compute-0 ceph-mon[74477]: pgmap v3900: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:37 compute-0 nova_compute[260603]: 2025-10-02 09:47:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:37 compute-0 nova_compute[260603]: 2025-10-02 09:47:37.569 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:47:37 compute-0 nova_compute[260603]: 2025-10-02 09:47:37.569 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:47:37 compute-0 nova_compute[260603]: 2025-10-02 09:47:37.570 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:47:37 compute-0 nova_compute[260603]: 2025-10-02 09:47:37.570 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:47:37 compute-0 nova_compute[260603]: 2025-10-02 09:47:37.570 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:47:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:47:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129779092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:47:37 compute-0 nova_compute[260603]: 2025-10-02 09:47:37.991 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.140 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.142 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3508MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:47:38 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2129779092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.261 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:47:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:47:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817562172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.699 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.704 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.729 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.731 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:47:38 compute-0 nova_compute[260603]: 2025-10-02 09:47:38.731 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3817562172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:47:39 compute-0 ceph-mon[74477]: pgmap v3901: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:47:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:47:40 compute-0 nova_compute[260603]: 2025-10-02 09:47:40.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:40 compute-0 nova_compute[260603]: 2025-10-02 09:47:40.731 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:41 compute-0 ceph-mon[74477]: pgmap v3902: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:41 compute-0 nova_compute[260603]: 2025-10-02 09:47:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:41 compute-0 nova_compute[260603]: 2025-10-02 09:47:41.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:42 compute-0 nova_compute[260603]: 2025-10-02 09:47:42.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:43 compute-0 ceph-mon[74477]: pgmap v3903: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:44 compute-0 nova_compute[260603]: 2025-10-02 09:47:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:44 compute-0 nova_compute[260603]: 2025-10-02 09:47:44.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:47:44 compute-0 nova_compute[260603]: 2025-10-02 09:47:44.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:47:44 compute-0 nova_compute[260603]: 2025-10-02 09:47:44.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:47:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:45 compute-0 ceph-mon[74477]: pgmap v3904: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:45 compute-0 nova_compute[260603]: 2025-10-02 09:47:45.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:45 compute-0 nova_compute[260603]: 2025-10-02 09:47:45.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:46 compute-0 nova_compute[260603]: 2025-10-02 09:47:46.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:47 compute-0 ceph-mon[74477]: pgmap v3905: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:49 compute-0 podman[472403]: 2025-10-02 09:47:49.052779558 +0000 UTC m=+0.102345837 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 09:47:49 compute-0 podman[472402]: 2025-10-02 09:47:49.062565594 +0000 UTC m=+0.112022910 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:47:49 compute-0 ceph-mon[74477]: pgmap v3906: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:49 compute-0 nova_compute[260603]: 2025-10-02 09:47:49.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:50 compute-0 nova_compute[260603]: 2025-10-02 09:47:50.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:51 compute-0 ceph-mon[74477]: pgmap v3907: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:51 compute-0 nova_compute[260603]: 2025-10-02 09:47:51.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:53 compute-0 podman[472442]: 2025-10-02 09:47:53.013799819 +0000 UTC m=+0.072636599 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 09:47:53 compute-0 podman[472443]: 2025-10-02 09:47:53.013922733 +0000 UTC m=+0.059657375 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:47:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:53 compute-0 ceph-mon[74477]: pgmap v3908: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:55 compute-0 ceph-mon[74477]: pgmap v3909: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:55 compute-0 nova_compute[260603]: 2025-10-02 09:47:55.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:47:56 compute-0 nova_compute[260603]: 2025-10-02 09:47:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:47:56 compute-0 nova_compute[260603]: 2025-10-02 09:47:56.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:47:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:57 compute-0 ceph-mon[74477]: pgmap v3910: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:47:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:47:58 compute-0 sudo[472482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:47:58 compute-0 sudo[472482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:47:58 compute-0 sudo[472482]: pam_unix(sudo:session): session closed for user root
Oct 02 09:47:58 compute-0 sudo[472507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:47:58 compute-0 sudo[472507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:47:58 compute-0 sudo[472507]: pam_unix(sudo:session): session closed for user root
Oct 02 09:47:58 compute-0 sudo[472532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:47:58 compute-0 sudo[472532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:47:58 compute-0 sudo[472532]: pam_unix(sudo:session): session closed for user root
Oct 02 09:47:58 compute-0 sudo[472557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:47:58 compute-0 sudo[472557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:47:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:59 compute-0 ceph-mon[74477]: pgmap v3911: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:47:59 compute-0 sudo[472557]: pam_unix(sudo:session): session closed for user root
Oct 02 09:47:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:47:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:47:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:47:59 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:47:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:47:59 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:47:59 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev bef146d0-6eab-4838-906f-f27b1975e443 does not exist
Oct 02 09:47:59 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3d4301ac-6346-400f-ad9a-95ab869bb585 does not exist
Oct 02 09:47:59 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev ea6c337e-54ad-4dbe-bac2-4623ad2147c4 does not exist
Oct 02 09:47:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:47:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:47:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:47:59 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:47:59 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:47:59 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:47:59 compute-0 sudo[472613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:47:59 compute-0 sudo[472613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:47:59 compute-0 sudo[472613]: pam_unix(sudo:session): session closed for user root
Oct 02 09:47:59 compute-0 sudo[472638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:47:59 compute-0 sudo[472638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:47:59 compute-0 sudo[472638]: pam_unix(sudo:session): session closed for user root
Oct 02 09:47:59 compute-0 sudo[472663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:47:59 compute-0 sudo[472663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:47:59 compute-0 sudo[472663]: pam_unix(sudo:session): session closed for user root
Oct 02 09:47:59 compute-0 sudo[472688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:47:59 compute-0 sudo[472688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:48:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:48:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:48:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:48:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:48:00 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:48:00 compute-0 podman[472751]: 2025-10-02 09:48:00.152267646 +0000 UTC m=+0.083970964 container create 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:48:00 compute-0 podman[472751]: 2025-10-02 09:48:00.101686326 +0000 UTC m=+0.033389674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:48:00 compute-0 systemd[1]: Started libpod-conmon-1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c.scope.
Oct 02 09:48:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:48:00 compute-0 podman[472751]: 2025-10-02 09:48:00.353689137 +0000 UTC m=+0.285392485 container init 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:48:00 compute-0 podman[472751]: 2025-10-02 09:48:00.369395348 +0000 UTC m=+0.301098696 container start 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 09:48:00 compute-0 great_margulis[472767]: 167 167
Oct 02 09:48:00 compute-0 systemd[1]: libpod-1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c.scope: Deactivated successfully.
Oct 02 09:48:00 compute-0 podman[472751]: 2025-10-02 09:48:00.399783427 +0000 UTC m=+0.331486775 container attach 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:48:00 compute-0 podman[472751]: 2025-10-02 09:48:00.400302763 +0000 UTC m=+0.332006081 container died 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 09:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2188d06bbc4b548809aafbd373b2931a0aa57a13a2b841206b6aa2f109abcc43-merged.mount: Deactivated successfully.
Oct 02 09:48:00 compute-0 nova_compute[260603]: 2025-10-02 09:48:00.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:00 compute-0 podman[472751]: 2025-10-02 09:48:00.5996782 +0000 UTC m=+0.531381578 container remove 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:48:00 compute-0 systemd[1]: libpod-conmon-1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c.scope: Deactivated successfully.
Oct 02 09:48:00 compute-0 podman[472791]: 2025-10-02 09:48:00.88047876 +0000 UTC m=+0.076179750 container create 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:48:00 compute-0 systemd[1]: Started libpod-conmon-7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113.scope.
Oct 02 09:48:00 compute-0 podman[472791]: 2025-10-02 09:48:00.85003416 +0000 UTC m=+0.045735160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:48:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:00 compute-0 podman[472791]: 2025-10-02 09:48:00.985284744 +0000 UTC m=+0.180985774 container init 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:48:01 compute-0 podman[472791]: 2025-10-02 09:48:01.004167134 +0000 UTC m=+0.199868104 container start 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:48:01 compute-0 podman[472791]: 2025-10-02 09:48:01.008416047 +0000 UTC m=+0.204117077 container attach 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:48:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:01 compute-0 ceph-mon[74477]: pgmap v3912: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:01 compute-0 nova_compute[260603]: 2025-10-02 09:48:01.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:02 compute-0 brave_agnesi[472808]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:48:02 compute-0 brave_agnesi[472808]: --> relative data size: 1.0
Oct 02 09:48:02 compute-0 brave_agnesi[472808]: --> All data devices are unavailable
Oct 02 09:48:02 compute-0 systemd[1]: libpod-7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113.scope: Deactivated successfully.
Oct 02 09:48:02 compute-0 systemd[1]: libpod-7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113.scope: Consumed 1.290s CPU time.
Oct 02 09:48:02 compute-0 podman[472791]: 2025-10-02 09:48:02.343714195 +0000 UTC m=+1.539415175 container died 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 02 09:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca-merged.mount: Deactivated successfully.
Oct 02 09:48:02 compute-0 podman[472791]: 2025-10-02 09:48:02.484212863 +0000 UTC m=+1.679913843 container remove 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:48:02 compute-0 systemd[1]: libpod-conmon-7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113.scope: Deactivated successfully.
Oct 02 09:48:02 compute-0 sudo[472688]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:02 compute-0 sudo[472851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:48:02 compute-0 sudo[472851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:02 compute-0 sudo[472851]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:02 compute-0 sudo[472876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:48:02 compute-0 sudo[472876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:02 compute-0 sudo[472876]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:02 compute-0 sudo[472901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:48:02 compute-0 sudo[472901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:02 compute-0 sudo[472901]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:02 compute-0 sudo[472926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:48:02 compute-0 sudo[472926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:03 compute-0 ceph-mon[74477]: pgmap v3913: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:03 compute-0 podman[472992]: 2025-10-02 09:48:03.368484592 +0000 UTC m=+0.064244398 container create 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:48:03 compute-0 systemd[1]: Started libpod-conmon-77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9.scope.
Oct 02 09:48:03 compute-0 podman[472992]: 2025-10-02 09:48:03.347950131 +0000 UTC m=+0.043709976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:48:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:48:03 compute-0 podman[472992]: 2025-10-02 09:48:03.483693221 +0000 UTC m=+0.179453076 container init 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 09:48:03 compute-0 podman[472992]: 2025-10-02 09:48:03.496438809 +0000 UTC m=+0.192198614 container start 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:48:03 compute-0 podman[472992]: 2025-10-02 09:48:03.501087244 +0000 UTC m=+0.196847079 container attach 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 09:48:03 compute-0 distracted_feynman[473008]: 167 167
Oct 02 09:48:03 compute-0 systemd[1]: libpod-77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9.scope: Deactivated successfully.
Oct 02 09:48:03 compute-0 podman[472992]: 2025-10-02 09:48:03.503712536 +0000 UTC m=+0.199472401 container died 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a576d1dc5ece41089f34d106f4a33c4957222de04fa77aac44278e2b8f4eaf9-merged.mount: Deactivated successfully.
Oct 02 09:48:03 compute-0 podman[472992]: 2025-10-02 09:48:03.549259889 +0000 UTC m=+0.245019694 container remove 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:48:03 compute-0 systemd[1]: libpod-conmon-77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9.scope: Deactivated successfully.
Oct 02 09:48:03 compute-0 podman[473031]: 2025-10-02 09:48:03.755083818 +0000 UTC m=+0.066980654 container create c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:48:03 compute-0 systemd[1]: Started libpod-conmon-c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb.scope.
Oct 02 09:48:03 compute-0 podman[473031]: 2025-10-02 09:48:03.724372548 +0000 UTC m=+0.036269454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:48:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:03 compute-0 podman[473031]: 2025-10-02 09:48:03.866366774 +0000 UTC m=+0.178263580 container init c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:48:03 compute-0 podman[473031]: 2025-10-02 09:48:03.877392518 +0000 UTC m=+0.189289314 container start c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:48:03 compute-0 podman[473031]: 2025-10-02 09:48:03.881369182 +0000 UTC m=+0.193266008 container attach c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 09:48:04 compute-0 nova_compute[260603]: 2025-10-02 09:48:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:04 compute-0 nova_compute[260603]: 2025-10-02 09:48:04.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:48:04 compute-0 crazy_buck[473047]: {
Oct 02 09:48:04 compute-0 crazy_buck[473047]:     "0": [
Oct 02 09:48:04 compute-0 crazy_buck[473047]:         {
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "devices": [
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "/dev/loop3"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             ],
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_name": "ceph_lv0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_size": "21470642176",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "name": "ceph_lv0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "tags": {
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cluster_name": "ceph",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.crush_device_class": "",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.encrypted": "0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osd_id": "0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.type": "block",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.vdo": "0"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             },
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "type": "block",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "vg_name": "ceph_vg0"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:         }
Oct 02 09:48:04 compute-0 crazy_buck[473047]:     ],
Oct 02 09:48:04 compute-0 crazy_buck[473047]:     "1": [
Oct 02 09:48:04 compute-0 crazy_buck[473047]:         {
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "devices": [
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "/dev/loop4"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             ],
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_name": "ceph_lv1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_size": "21470642176",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "name": "ceph_lv1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "tags": {
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cluster_name": "ceph",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.crush_device_class": "",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.encrypted": "0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osd_id": "1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.type": "block",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.vdo": "0"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             },
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "type": "block",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "vg_name": "ceph_vg1"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:         }
Oct 02 09:48:04 compute-0 crazy_buck[473047]:     ],
Oct 02 09:48:04 compute-0 crazy_buck[473047]:     "2": [
Oct 02 09:48:04 compute-0 crazy_buck[473047]:         {
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "devices": [
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "/dev/loop5"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             ],
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_name": "ceph_lv2",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_size": "21470642176",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "name": "ceph_lv2",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "tags": {
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.cluster_name": "ceph",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.crush_device_class": "",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.encrypted": "0",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osd_id": "2",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.type": "block",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:                 "ceph.vdo": "0"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             },
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "type": "block",
Oct 02 09:48:04 compute-0 crazy_buck[473047]:             "vg_name": "ceph_vg2"
Oct 02 09:48:04 compute-0 crazy_buck[473047]:         }
Oct 02 09:48:04 compute-0 crazy_buck[473047]:     ]
Oct 02 09:48:04 compute-0 crazy_buck[473047]: }
Oct 02 09:48:04 compute-0 systemd[1]: libpod-c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb.scope: Deactivated successfully.
Oct 02 09:48:04 compute-0 podman[473031]: 2025-10-02 09:48:04.675854437 +0000 UTC m=+0.987751263 container died c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 09:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c-merged.mount: Deactivated successfully.
Oct 02 09:48:04 compute-0 podman[473031]: 2025-10-02 09:48:04.755923349 +0000 UTC m=+1.067820155 container remove c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:48:04 compute-0 systemd[1]: libpod-conmon-c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb.scope: Deactivated successfully.
Oct 02 09:48:04 compute-0 sudo[472926]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:04 compute-0 sudo[473071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:48:04 compute-0 sudo[473071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:04 compute-0 sudo[473071]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:04 compute-0 sudo[473096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:48:05 compute-0 sudo[473096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:05 compute-0 sudo[473096]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:05 compute-0 sudo[473121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:48:05 compute-0 sudo[473121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:05 compute-0 sudo[473121]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:05 compute-0 sudo[473146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:48:05 compute-0 sudo[473146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:05 compute-0 ceph-mon[74477]: pgmap v3914: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:05 compute-0 nova_compute[260603]: 2025-10-02 09:48:05.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:05 compute-0 podman[473211]: 2025-10-02 09:48:05.577538971 +0000 UTC m=+0.070921206 container create 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 09:48:05 compute-0 podman[473211]: 2025-10-02 09:48:05.53556162 +0000 UTC m=+0.028943895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:48:05 compute-0 systemd[1]: Started libpod-conmon-37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d.scope.
Oct 02 09:48:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:48:05 compute-0 podman[473211]: 2025-10-02 09:48:05.731409478 +0000 UTC m=+0.224791753 container init 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 09:48:05 compute-0 podman[473211]: 2025-10-02 09:48:05.741223854 +0000 UTC m=+0.234606079 container start 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 09:48:05 compute-0 competent_raman[473228]: 167 167
Oct 02 09:48:05 compute-0 systemd[1]: libpod-37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d.scope: Deactivated successfully.
Oct 02 09:48:05 compute-0 conmon[473228]: conmon 37824e61506cd773132a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d.scope/container/memory.events
Oct 02 09:48:05 compute-0 podman[473211]: 2025-10-02 09:48:05.79585216 +0000 UTC m=+0.289234395 container attach 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 09:48:05 compute-0 podman[473211]: 2025-10-02 09:48:05.797223354 +0000 UTC m=+0.290605579 container died 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:48:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b33f29ea783cd0dd4a660ab659b09bbe250b9220120fe3f695051cbd39d34a7a-merged.mount: Deactivated successfully.
Oct 02 09:48:05 compute-0 podman[473211]: 2025-10-02 09:48:05.981929452 +0000 UTC m=+0.475311677 container remove 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:48:05 compute-0 systemd[1]: libpod-conmon-37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d.scope: Deactivated successfully.
Oct 02 09:48:06 compute-0 podman[473253]: 2025-10-02 09:48:06.165104873 +0000 UTC m=+0.027920462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:48:06 compute-0 podman[473253]: 2025-10-02 09:48:06.260461082 +0000 UTC m=+0.123276631 container create 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:48:06 compute-0 systemd[1]: Started libpod-conmon-69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0.scope.
Oct 02 09:48:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:48:06 compute-0 podman[473253]: 2025-10-02 09:48:06.445348407 +0000 UTC m=+0.308163966 container init 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 09:48:06 compute-0 podman[473253]: 2025-10-02 09:48:06.459498679 +0000 UTC m=+0.322314258 container start 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:48:06 compute-0 podman[473253]: 2025-10-02 09:48:06.476187861 +0000 UTC m=+0.339003440 container attach 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 09:48:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:06 compute-0 nova_compute[260603]: 2025-10-02 09:48:06.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:07 compute-0 stupefied_tu[473269]: {
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "osd_id": 2,
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "type": "bluestore"
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:     },
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "osd_id": 1,
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "type": "bluestore"
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:     },
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "osd_id": 0,
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:         "type": "bluestore"
Oct 02 09:48:07 compute-0 stupefied_tu[473269]:     }
Oct 02 09:48:07 compute-0 stupefied_tu[473269]: }
Oct 02 09:48:07 compute-0 systemd[1]: libpod-69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0.scope: Deactivated successfully.
Oct 02 09:48:07 compute-0 systemd[1]: libpod-69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0.scope: Consumed 1.037s CPU time.
Oct 02 09:48:07 compute-0 podman[473253]: 2025-10-02 09:48:07.488907472 +0000 UTC m=+1.351723081 container died 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:48:07 compute-0 ceph-mon[74477]: pgmap v3915: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd-merged.mount: Deactivated successfully.
Oct 02 09:48:07 compute-0 podman[473253]: 2025-10-02 09:48:07.850557679 +0000 UTC m=+1.713373248 container remove 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:48:07 compute-0 systemd[1]: libpod-conmon-69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0.scope: Deactivated successfully.
Oct 02 09:48:07 compute-0 sudo[473146]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:48:07 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:48:07 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:48:07 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:48:07 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev b3fbb71e-f003-42bc-bea6-9ac0847fb52b does not exist
Oct 02 09:48:07 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev edc28543-ccb6-4466-95d1-2cb9814ad39c does not exist
Oct 02 09:48:08 compute-0 sudo[473316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:48:08 compute-0 sudo[473316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:08 compute-0 sudo[473316]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:08 compute-0 sudo[473341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:48:08 compute-0 sudo[473341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:48:08 compute-0 sudo[473341]: pam_unix(sudo:session): session closed for user root
Oct 02 09:48:08 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:48:08 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:48:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:09 compute-0 ceph-mon[74477]: pgmap v3916: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:10 compute-0 nova_compute[260603]: 2025-10-02 09:48:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:11 compute-0 ceph-mon[74477]: pgmap v3917: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:11 compute-0 nova_compute[260603]: 2025-10-02 09:48:11.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:13 compute-0 ceph-mon[74477]: pgmap v3918: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:15 compute-0 ceph-mon[74477]: pgmap v3919: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:15 compute-0 nova_compute[260603]: 2025-10-02 09:48:15.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:16 compute-0 nova_compute[260603]: 2025-10-02 09:48:16.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:17 compute-0 ceph-mon[74477]: pgmap v3920: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:19 compute-0 ceph-mon[74477]: pgmap v3921: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:20 compute-0 podman[473367]: 2025-10-02 09:48:20.011275503 +0000 UTC m=+0.068695297 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Oct 02 09:48:20 compute-0 podman[473366]: 2025-10-02 09:48:20.059134658 +0000 UTC m=+0.117063607 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:48:20 compute-0 nova_compute[260603]: 2025-10-02 09:48:20.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:21 compute-0 ceph-mon[74477]: pgmap v3922: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:21 compute-0 nova_compute[260603]: 2025-10-02 09:48:21.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:48:22 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 372 writes, 991 keys, 372 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                           Interval WAL: 372 writes, 177 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:48:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:48:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1970290044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:48:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:48:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1970290044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:48:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1970290044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:48:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1970290044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:48:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:23 compute-0 ceph-mon[74477]: pgmap v3923: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:23 compute-0 podman[473409]: 2025-10-02 09:48:23.997053778 +0000 UTC m=+0.063631659 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 09:48:24 compute-0 podman[473410]: 2025-10-02 09:48:24.025665902 +0000 UTC m=+0.091387915 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 09:48:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:25 compute-0 ceph-mon[74477]: pgmap v3924: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:25 compute-0 nova_compute[260603]: 2025-10-02 09:48:25.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:48:26 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 189K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 18K syncs, 2.71 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 334 writes, 766 keys, 334 commit groups, 1.0 writes per commit group, ingest: 0.33 MB, 0.00 MB/s
                                           Interval WAL: 334 writes, 148 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:48:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:26 compute-0 nova_compute[260603]: 2025-10-02 09:48:26.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:27 compute-0 ceph-mon[74477]: pgmap v3925: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:48:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:48:28
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr']
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:48:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:48:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:29 compute-0 ceph-mon[74477]: pgmap v3926: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:30 compute-0 nova_compute[260603]: 2025-10-02 09:48:30.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:31 compute-0 ceph-mon[74477]: pgmap v3927: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:48:31 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 146K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.68 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 458 writes, 1050 keys, 458 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s
                                           Interval WAL: 458 writes, 208 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:48:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:31 compute-0 nova_compute[260603]: 2025-10-02 09:48:31.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:32 compute-0 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct 02 09:48:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:33 compute-0 ceph-mon[74477]: pgmap v3928: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:48:34.889 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:48:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:48:34.890 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:48:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:48:34.890 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:48:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:35 compute-0 ceph-mon[74477]: pgmap v3929: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:35 compute-0 nova_compute[260603]: 2025-10-02 09:48:35.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:36 compute-0 nova_compute[260603]: 2025-10-02 09:48:36.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:36 compute-0 nova_compute[260603]: 2025-10-02 09:48:36.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:37 compute-0 ceph-mon[74477]: pgmap v3930: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:38 compute-0 nova_compute[260603]: 2025-10-02 09:48:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:38 compute-0 nova_compute[260603]: 2025-10-02 09:48:38.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:48:38 compute-0 nova_compute[260603]: 2025-10-02 09:48:38.579 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:48:38 compute-0 nova_compute[260603]: 2025-10-02 09:48:38.579 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:48:38 compute-0 nova_compute[260603]: 2025-10-02 09:48:38.579 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:48:38 compute-0 nova_compute[260603]: 2025-10-02 09:48:38.580 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:48:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:48:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752737819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.051 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1752737819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.207 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.208 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3498MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.208 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.209 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.305 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.305 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.496 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:48:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:48:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:48:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171908827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.913 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:48:39 compute-0 nova_compute[260603]: 2025-10-02 09:48:39.917 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:48:40 compute-0 nova_compute[260603]: 2025-10-02 09:48:40.003 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:48:40 compute-0 nova_compute[260603]: 2025-10-02 09:48:40.005 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:48:40 compute-0 nova_compute[260603]: 2025-10-02 09:48:40.005 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:48:40 compute-0 ceph-mon[74477]: pgmap v3931: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1171908827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:48:40 compute-0 nova_compute[260603]: 2025-10-02 09:48:40.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:41 compute-0 nova_compute[260603]: 2025-10-02 09:48:41.006 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:41 compute-0 nova_compute[260603]: 2025-10-02 09:48:41.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:42 compute-0 ceph-mon[74477]: pgmap v3932: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:42 compute-0 nova_compute[260603]: 2025-10-02 09:48:42.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.313561) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523313630, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1736, "num_deletes": 251, "total_data_size": 2803502, "memory_usage": 2848080, "flush_reason": "Manual Compaction"}
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523342180, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2754286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80201, "largest_seqno": 81936, "table_properties": {"data_size": 2746302, "index_size": 4862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16211, "raw_average_key_size": 19, "raw_value_size": 2730342, "raw_average_value_size": 3362, "num_data_blocks": 217, "num_entries": 812, "num_filter_entries": 812, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759398337, "oldest_key_time": 1759398337, "file_creation_time": 1759398523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 28653 microseconds, and 6072 cpu microseconds.
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.342220) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2754286 bytes OK
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.342242) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.353604) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.353626) EVENT_LOG_v1 {"time_micros": 1759398523353620, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.353646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2796091, prev total WAL file size 2796091, number of live WAL files 2.
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.354430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2689KB)], [194(9435KB)]
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523354482, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 12416336, "oldest_snapshot_seqno": -1}
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 9600 keys, 10666603 bytes, temperature: kUnknown
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523424848, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 10666603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10607232, "index_size": 34298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24005, "raw_key_size": 253472, "raw_average_key_size": 26, "raw_value_size": 10440613, "raw_average_value_size": 1087, "num_data_blocks": 1315, "num_entries": 9600, "num_filter_entries": 9600, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759398523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.425201) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10666603 bytes
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.438831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.2 rd, 151.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 10117, records dropped: 517 output_compression: NoCompression
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.438876) EVENT_LOG_v1 {"time_micros": 1759398523438859, "job": 122, "event": "compaction_finished", "compaction_time_micros": 70464, "compaction_time_cpu_micros": 24412, "output_level": 6, "num_output_files": 1, "total_output_size": 10666603, "num_input_records": 10117, "num_output_records": 9600, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523439950, "job": 122, "event": "table_file_deletion", "file_number": 196}
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523443392, "job": 122, "event": "table_file_deletion", "file_number": 194}
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.354323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:48:43 compute-0 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 09:48:43 compute-0 nova_compute[260603]: 2025-10-02 09:48:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:44 compute-0 ceph-mon[74477]: pgmap v3933: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:44 compute-0 nova_compute[260603]: 2025-10-02 09:48:44.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:44 compute-0 nova_compute[260603]: 2025-10-02 09:48:44.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:48:44 compute-0 nova_compute[260603]: 2025-10-02 09:48:44.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:48:44 compute-0 nova_compute[260603]: 2025-10-02 09:48:44.598 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:48:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:45 compute-0 nova_compute[260603]: 2025-10-02 09:48:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:45 compute-0 nova_compute[260603]: 2025-10-02 09:48:45.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:46 compute-0 ceph-mon[74477]: pgmap v3934: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:46 compute-0 nova_compute[260603]: 2025-10-02 09:48:46.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:48 compute-0 ceph-mon[74477]: pgmap v3935: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:50 compute-0 ceph-mon[74477]: pgmap v3936: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:50 compute-0 nova_compute[260603]: 2025-10-02 09:48:50.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:51 compute-0 podman[473496]: 2025-10-02 09:48:51.039338408 +0000 UTC m=+0.078155472 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 09:48:51 compute-0 podman[473495]: 2025-10-02 09:48:51.058028482 +0000 UTC m=+0.108175589 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 09:48:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:51 compute-0 nova_compute[260603]: 2025-10-02 09:48:51.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:52 compute-0 ceph-mon[74477]: pgmap v3937: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:54 compute-0 ceph-mon[74477]: pgmap v3938: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:54 compute-0 podman[473540]: 2025-10-02 09:48:54.991962048 +0000 UTC m=+0.060236223 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 09:48:54 compute-0 podman[473541]: 2025-10-02 09:48:54.99777353 +0000 UTC m=+0.062271126 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 09:48:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:55 compute-0 nova_compute[260603]: 2025-10-02 09:48:55.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:56 compute-0 ceph-mon[74477]: pgmap v3939: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:48:56 compute-0 nova_compute[260603]: 2025-10-02 09:48:56.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:48:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:48:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:48:58 compute-0 ceph-mon[74477]: pgmap v3940: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:48:58 compute-0 nova_compute[260603]: 2025-10-02 09:48:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:48:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:00 compute-0 ceph-mon[74477]: pgmap v3941: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:00 compute-0 nova_compute[260603]: 2025-10-02 09:49:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:01 compute-0 nova_compute[260603]: 2025-10-02 09:49:01.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:02 compute-0 ceph-mon[74477]: pgmap v3942: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:04 compute-0 ceph-mon[74477]: pgmap v3943: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:05 compute-0 nova_compute[260603]: 2025-10-02 09:49:05.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:06 compute-0 nova_compute[260603]: 2025-10-02 09:49:06.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:06 compute-0 nova_compute[260603]: 2025-10-02 09:49:06.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:49:06 compute-0 ceph-mon[74477]: pgmap v3944: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:06 compute-0 nova_compute[260603]: 2025-10-02 09:49:06.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:08 compute-0 sudo[473578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:08 compute-0 sudo[473578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:08 compute-0 sudo[473578]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:08 compute-0 sudo[473603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:49:08 compute-0 sudo[473603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:08 compute-0 sudo[473603]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:08 compute-0 sudo[473628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:08 compute-0 sudo[473628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:08 compute-0 sudo[473628]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:08 compute-0 sudo[473653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:49:08 compute-0 sudo[473653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:08 compute-0 ceph-mon[74477]: pgmap v3945: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:08 compute-0 sudo[473653]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:08 compute-0 sudo[473709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:08 compute-0 sudo[473709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:08 compute-0 sudo[473709]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:08 compute-0 sudo[473734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:49:08 compute-0 sudo[473734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:08 compute-0 sudo[473734]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:09 compute-0 sudo[473759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:09 compute-0 sudo[473759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:09 compute-0 sudo[473759]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:09 compute-0 sudo[473784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 09:49:09 compute-0 sudo[473784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:09 compute-0 sudo[473784]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:49:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:49:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:49:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:49:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:49:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:49:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:49:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 7fb107ee-b12b-4b79-903a-a64dcf5eb7b6 does not exist
Oct 02 09:49:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev e44f36cb-d42d-43e9-8841-98a6917eb105 does not exist
Oct 02 09:49:09 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8bdc2f25-1820-4233-9e0a-1e1a85ebb01e does not exist
Oct 02 09:49:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:49:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:49:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:49:09 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:49:09 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:49:09 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:49:09 compute-0 sudo[473826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:09 compute-0 sudo[473826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:09 compute-0 sudo[473826]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:09 compute-0 sudo[473851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:49:09 compute-0 sudo[473851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:09 compute-0 sudo[473851]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:09 compute-0 sudo[473876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:09 compute-0 sudo[473876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:09 compute-0 sudo[473876]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:09 compute-0 sudo[473901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:49:09 compute-0 sudo[473901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:09 compute-0 podman[473967]: 2025-10-02 09:49:09.953094492 +0000 UTC m=+0.085390168 container create c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 09:49:09 compute-0 podman[473967]: 2025-10-02 09:49:09.893348566 +0000 UTC m=+0.025644272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:49:10 compute-0 systemd[1]: Started libpod-conmon-c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457.scope.
Oct 02 09:49:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:49:10 compute-0 podman[473967]: 2025-10-02 09:49:10.095546661 +0000 UTC m=+0.227842437 container init c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:49:10 compute-0 podman[473967]: 2025-10-02 09:49:10.104819601 +0000 UTC m=+0.237115307 container start c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:49:10 compute-0 mystifying_spence[473983]: 167 167
Oct 02 09:49:10 compute-0 systemd[1]: libpod-c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457.scope: Deactivated successfully.
Oct 02 09:49:10 compute-0 podman[473967]: 2025-10-02 09:49:10.14771262 +0000 UTC m=+0.280008306 container attach c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:49:10 compute-0 podman[473967]: 2025-10-02 09:49:10.149208708 +0000 UTC m=+0.281504384 container died c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eb2623a03257046f1d280d4b69f4a9d8deb521e2985043e6543c23ae6225916-merged.mount: Deactivated successfully.
Oct 02 09:49:10 compute-0 ceph-mon[74477]: pgmap v3946: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:49:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:49:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:49:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:49:10 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:49:10 compute-0 podman[473967]: 2025-10-02 09:49:10.411850071 +0000 UTC m=+0.544145777 container remove c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:49:10 compute-0 systemd[1]: libpod-conmon-c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457.scope: Deactivated successfully.
Oct 02 09:49:10 compute-0 nova_compute[260603]: 2025-10-02 09:49:10.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:10 compute-0 podman[474008]: 2025-10-02 09:49:10.675574578 +0000 UTC m=+0.106937850 container create 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:49:10 compute-0 podman[474008]: 2025-10-02 09:49:10.597309544 +0000 UTC m=+0.028672826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:49:10 compute-0 systemd[1]: Started libpod-conmon-6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6.scope.
Oct 02 09:49:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:10 compute-0 podman[474008]: 2025-10-02 09:49:10.865060017 +0000 UTC m=+0.296423339 container init 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 09:49:10 compute-0 podman[474008]: 2025-10-02 09:49:10.879670383 +0000 UTC m=+0.311033665 container start 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 09:49:11 compute-0 podman[474008]: 2025-10-02 09:49:11.006194535 +0000 UTC m=+0.437557807 container attach 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:49:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:11 compute-0 nova_compute[260603]: 2025-10-02 09:49:11.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:12 compute-0 trusting_golick[474025]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:49:12 compute-0 trusting_golick[474025]: --> relative data size: 1.0
Oct 02 09:49:12 compute-0 trusting_golick[474025]: --> All data devices are unavailable
Oct 02 09:49:12 compute-0 systemd[1]: libpod-6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6.scope: Deactivated successfully.
Oct 02 09:49:12 compute-0 systemd[1]: libpod-6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6.scope: Consumed 1.265s CPU time.
Oct 02 09:49:12 compute-0 podman[474054]: 2025-10-02 09:49:12.233707016 +0000 UTC m=+0.031708051 container died 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 09:49:12 compute-0 ceph-mon[74477]: pgmap v3947: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb-merged.mount: Deactivated successfully.
Oct 02 09:49:13 compute-0 podman[474054]: 2025-10-02 09:49:13.641369454 +0000 UTC m=+1.439370519 container remove 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:49:13 compute-0 systemd[1]: libpod-conmon-6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6.scope: Deactivated successfully.
Oct 02 09:49:13 compute-0 sudo[473901]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:13 compute-0 sudo[474069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:13 compute-0 sudo[474069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:13 compute-0 sudo[474069]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:13 compute-0 sudo[474094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:49:13 compute-0 sudo[474094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:13 compute-0 sudo[474094]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:13 compute-0 sudo[474119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:13 compute-0 sudo[474119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:13 compute-0 sudo[474119]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:14 compute-0 sudo[474144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:49:14 compute-0 sudo[474144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:14 compute-0 podman[474210]: 2025-10-02 09:49:14.412314284 +0000 UTC m=+0.026456187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:49:14 compute-0 podman[474210]: 2025-10-02 09:49:14.557824149 +0000 UTC m=+0.171966062 container create 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 09:49:14 compute-0 systemd[1]: Started libpod-conmon-50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90.scope.
Oct 02 09:49:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:49:14 compute-0 podman[474210]: 2025-10-02 09:49:14.948594425 +0000 UTC m=+0.562736388 container init 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 09:49:14 compute-0 podman[474210]: 2025-10-02 09:49:14.96061684 +0000 UTC m=+0.574758713 container start 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:49:14 compute-0 hardcore_feistel[474226]: 167 167
Oct 02 09:49:14 compute-0 systemd[1]: libpod-50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90.scope: Deactivated successfully.
Oct 02 09:49:15 compute-0 podman[474210]: 2025-10-02 09:49:15.056238427 +0000 UTC m=+0.670380290 container attach 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:49:15 compute-0 podman[474210]: 2025-10-02 09:49:15.057136805 +0000 UTC m=+0.671278678 container died 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 09:49:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:15 compute-0 ceph-mon[74477]: pgmap v3948: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:15 compute-0 nova_compute[260603]: 2025-10-02 09:49:15.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-82a441db44e138c0509b84c9ce5689a80b06a5c703a06cceedb37eac6d7c6a60-merged.mount: Deactivated successfully.
Oct 02 09:49:16 compute-0 podman[474210]: 2025-10-02 09:49:16.479948186 +0000 UTC m=+2.094090099 container remove 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 09:49:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:16 compute-0 systemd[1]: libpod-conmon-50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90.scope: Deactivated successfully.
Oct 02 09:49:16 compute-0 nova_compute[260603]: 2025-10-02 09:49:16.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:16 compute-0 podman[474252]: 2025-10-02 09:49:16.70129841 +0000 UTC m=+0.028983187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:49:16 compute-0 podman[474252]: 2025-10-02 09:49:16.831124874 +0000 UTC m=+0.158809621 container create 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:49:17 compute-0 systemd[1]: Started libpod-conmon-25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30.scope.
Oct 02 09:49:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:49:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:17 compute-0 ceph-mon[74477]: pgmap v3949: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:17 compute-0 podman[474252]: 2025-10-02 09:49:17.436939257 +0000 UTC m=+0.764623974 container init 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:49:17 compute-0 podman[474252]: 2025-10-02 09:49:17.444655548 +0000 UTC m=+0.772340265 container start 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:49:17 compute-0 podman[474252]: 2025-10-02 09:49:17.693061277 +0000 UTC m=+1.020746034 container attach 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 09:49:18 compute-0 kind_feynman[474268]: {
Oct 02 09:49:18 compute-0 kind_feynman[474268]:     "0": [
Oct 02 09:49:18 compute-0 kind_feynman[474268]:         {
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "devices": [
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "/dev/loop3"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             ],
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_name": "ceph_lv0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_size": "21470642176",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "name": "ceph_lv0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "tags": {
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cluster_name": "ceph",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.crush_device_class": "",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.encrypted": "0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osd_id": "0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.type": "block",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.vdo": "0"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             },
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "type": "block",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "vg_name": "ceph_vg0"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:         }
Oct 02 09:49:18 compute-0 kind_feynman[474268]:     ],
Oct 02 09:49:18 compute-0 kind_feynman[474268]:     "1": [
Oct 02 09:49:18 compute-0 kind_feynman[474268]:         {
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "devices": [
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "/dev/loop4"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             ],
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_name": "ceph_lv1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_size": "21470642176",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "name": "ceph_lv1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "tags": {
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cluster_name": "ceph",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.crush_device_class": "",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.encrypted": "0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osd_id": "1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.type": "block",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.vdo": "0"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             },
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "type": "block",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "vg_name": "ceph_vg1"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:         }
Oct 02 09:49:18 compute-0 kind_feynman[474268]:     ],
Oct 02 09:49:18 compute-0 kind_feynman[474268]:     "2": [
Oct 02 09:49:18 compute-0 kind_feynman[474268]:         {
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "devices": [
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "/dev/loop5"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             ],
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_name": "ceph_lv2",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_size": "21470642176",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "name": "ceph_lv2",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "tags": {
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.cluster_name": "ceph",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.crush_device_class": "",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.encrypted": "0",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osd_id": "2",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.type": "block",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:                 "ceph.vdo": "0"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             },
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "type": "block",
Oct 02 09:49:18 compute-0 kind_feynman[474268]:             "vg_name": "ceph_vg2"
Oct 02 09:49:18 compute-0 kind_feynman[474268]:         }
Oct 02 09:49:18 compute-0 kind_feynman[474268]:     ]
Oct 02 09:49:18 compute-0 kind_feynman[474268]: }
Oct 02 09:49:18 compute-0 systemd[1]: libpod-25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30.scope: Deactivated successfully.
Oct 02 09:49:18 compute-0 podman[474252]: 2025-10-02 09:49:18.274078185 +0000 UTC m=+1.601762952 container died 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 09:49:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3-merged.mount: Deactivated successfully.
Oct 02 09:49:18 compute-0 podman[474252]: 2025-10-02 09:49:18.890838119 +0000 UTC m=+2.218522866 container remove 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 09:49:18 compute-0 sudo[474144]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:18 compute-0 systemd[1]: libpod-conmon-25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30.scope: Deactivated successfully.
Oct 02 09:49:19 compute-0 sudo[474289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:19 compute-0 sudo[474289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:19 compute-0 sudo[474289]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:19 compute-0 sudo[474314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:49:19 compute-0 sudo[474314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:19 compute-0 sudo[474314]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:19 compute-0 sudo[474339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:19 compute-0 ceph-mon[74477]: pgmap v3950: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:19 compute-0 sudo[474339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:19 compute-0 sudo[474339]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:19 compute-0 sudo[474364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:49:19 compute-0 sudo[474364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:19 compute-0 podman[474429]: 2025-10-02 09:49:19.72551149 +0000 UTC m=+0.042357785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:49:19 compute-0 podman[474429]: 2025-10-02 09:49:19.858795553 +0000 UTC m=+0.175641828 container create 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 09:49:19 compute-0 systemd[1]: Started libpod-conmon-2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7.scope.
Oct 02 09:49:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:49:20 compute-0 podman[474429]: 2025-10-02 09:49:20.158962169 +0000 UTC m=+0.475808494 container init 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 09:49:20 compute-0 podman[474429]: 2025-10-02 09:49:20.166268436 +0000 UTC m=+0.483114751 container start 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:49:20 compute-0 systemd[1]: libpod-2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7.scope: Deactivated successfully.
Oct 02 09:49:20 compute-0 happy_mendeleev[474446]: 167 167
Oct 02 09:49:20 compute-0 conmon[474446]: conmon 2fd480bb586bba482809 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7.scope/container/memory.events
Oct 02 09:49:20 compute-0 podman[474429]: 2025-10-02 09:49:20.645525076 +0000 UTC m=+0.962371361 container attach 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:49:20 compute-0 podman[474429]: 2025-10-02 09:49:20.646802505 +0000 UTC m=+0.963648820 container died 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:49:20 compute-0 nova_compute[260603]: 2025-10-02 09:49:20.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:20 compute-0 ceph-mon[74477]: pgmap v3951: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:21 compute-0 nova_compute[260603]: 2025-10-02 09:49:21.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e1a7946a768281e447ca96f98c7ad79900033e60e9f3996f3a417dd73c9c001-merged.mount: Deactivated successfully.
Oct 02 09:49:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:49:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1775118086' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:49:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:49:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1775118086' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:49:22 compute-0 podman[474429]: 2025-10-02 09:49:22.305076681 +0000 UTC m=+2.621922996 container remove 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 02 09:49:22 compute-0 systemd[1]: libpod-conmon-2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7.scope: Deactivated successfully.
Oct 02 09:49:22 compute-0 podman[474463]: 2025-10-02 09:49:22.402203115 +0000 UTC m=+0.820834460 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 09:49:22 compute-0 podman[474462]: 2025-10-02 09:49:22.429522378 +0000 UTC m=+0.852647923 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 09:49:22 compute-0 podman[474517]: 2025-10-02 09:49:22.528817979 +0000 UTC m=+0.033650822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:49:23 compute-0 podman[474517]: 2025-10-02 09:49:23.043028889 +0000 UTC m=+0.547861712 container create 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 09:49:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:23 compute-0 ceph-mon[74477]: pgmap v3952: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1775118086' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:49:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/1775118086' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:49:23 compute-0 systemd[1]: Started libpod-conmon-69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c.scope.
Oct 02 09:49:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:49:24 compute-0 podman[474517]: 2025-10-02 09:49:24.026673295 +0000 UTC m=+1.531506168 container init 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 09:49:24 compute-0 podman[474517]: 2025-10-02 09:49:24.043108858 +0000 UTC m=+1.547941631 container start 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:49:24 compute-0 podman[474517]: 2025-10-02 09:49:24.229630463 +0000 UTC m=+1.734463276 container attach 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 09:49:24 compute-0 ceph-mon[74477]: pgmap v3953: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]: {
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "osd_id": 2,
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "type": "bluestore"
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:     },
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "osd_id": 1,
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "type": "bluestore"
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:     },
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "osd_id": 0,
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:         "type": "bluestore"
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]:     }
Oct 02 09:49:25 compute-0 vigilant_brattain[474534]: }
Oct 02 09:49:25 compute-0 systemd[1]: libpod-69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c.scope: Deactivated successfully.
Oct 02 09:49:25 compute-0 podman[474517]: 2025-10-02 09:49:25.151628352 +0000 UTC m=+2.656461175 container died 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 02 09:49:25 compute-0 systemd[1]: libpod-69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c.scope: Consumed 1.116s CPU time.
Oct 02 09:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e-merged.mount: Deactivated successfully.
Oct 02 09:49:25 compute-0 nova_compute[260603]: 2025-10-02 09:49:25.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:26 compute-0 podman[474517]: 2025-10-02 09:49:25.999728473 +0000 UTC m=+3.504561286 container remove 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 09:49:26 compute-0 sudo[474364]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:49:26 compute-0 systemd[1]: libpod-conmon-69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c.scope: Deactivated successfully.
Oct 02 09:49:26 compute-0 podman[474574]: 2025-10-02 09:49:26.070923656 +0000 UTC m=+0.858996512 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 09:49:26 compute-0 podman[474568]: 2025-10-02 09:49:26.109264403 +0000 UTC m=+0.910745828 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 09:49:26 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:49:26 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:26 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 53500d9a-d2fd-4c35-9da8-5929d455698c does not exist
Oct 02 09:49:26 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 02feecb4-032a-4c99-9aa2-7317029eac32 does not exist
Oct 02 09:49:26 compute-0 sudo[474621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:49:26 compute-0 sudo[474621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:26 compute-0 sudo[474621]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:26 compute-0 sudo[474646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:49:26 compute-0 sudo[474646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:49:26 compute-0 sudo[474646]: pam_unix(sudo:session): session closed for user root
Oct 02 09:49:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:26 compute-0 nova_compute[260603]: 2025-10-02 09:49:26.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:26 compute-0 ceph-mon[74477]: pgmap v3954: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:26 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:26 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:49:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:49:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:49:28
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'backups', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data']
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:49:28 compute-0 ceph-mon[74477]: pgmap v3955: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:49:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:49:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Oct 02 09:49:30 compute-0 ceph-mon[74477]: pgmap v3956: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Oct 02 09:49:30 compute-0 nova_compute[260603]: 2025-10-02 09:49:30.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Oct 02 09:49:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:31 compute-0 nova_compute[260603]: 2025-10-02 09:49:31.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:32 compute-0 ceph-mon[74477]: pgmap v3957: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Oct 02 09:49:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 02 09:49:34 compute-0 ceph-mon[74477]: pgmap v3958: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 02 09:49:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:49:34.890 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:49:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:49:34.891 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:49:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:49:34.891 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:49:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 02 09:49:35 compute-0 nova_compute[260603]: 2025-10-02 09:49:35.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:36 compute-0 ceph-mon[74477]: pgmap v3959: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 02 09:49:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:36 compute-0 nova_compute[260603]: 2025-10-02 09:49:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:36 compute-0 nova_compute[260603]: 2025-10-02 09:49:36.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 02 09:49:38 compute-0 ceph-mon[74477]: pgmap v3960: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:49:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:49:40 compute-0 ceph-mon[74477]: pgmap v3961: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 02 09:49:40 compute-0 nova_compute[260603]: 2025-10-02 09:49:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:40 compute-0 nova_compute[260603]: 2025-10-02 09:49:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:40 compute-0 nova_compute[260603]: 2025-10-02 09:49:40.591 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:49:40 compute-0 nova_compute[260603]: 2025-10-02 09:49:40.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:49:40 compute-0 nova_compute[260603]: 2025-10-02 09:49:40.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:49:40 compute-0 nova_compute[260603]: 2025-10-02 09:49:40.592 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:49:40 compute-0 nova_compute[260603]: 2025-10-02 09:49:40.592 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:49:40 compute-0 nova_compute[260603]: 2025-10-02 09:49:40.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:49:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456584274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.082 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:49:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.241 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.242 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3493MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.242 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.243 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.313 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.313 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.335 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:49:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/456584274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:49:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778361680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.755 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.760 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.783 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.785 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:49:41 compute-0 nova_compute[260603]: 2025-10-02 09:49:41.785 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:49:42 compute-0 ceph-mon[74477]: pgmap v3962: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Oct 02 09:49:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2778361680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:49:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Oct 02 09:49:44 compute-0 ceph-mon[74477]: pgmap v3963: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Oct 02 09:49:44 compute-0 nova_compute[260603]: 2025-10-02 09:49:44.781 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:44 compute-0 nova_compute[260603]: 2025-10-02 09:49:44.781 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct 02 09:49:45 compute-0 nova_compute[260603]: 2025-10-02 09:49:45.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:45 compute-0 nova_compute[260603]: 2025-10-02 09:49:45.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:49:45 compute-0 nova_compute[260603]: 2025-10-02 09:49:45.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:49:45 compute-0 nova_compute[260603]: 2025-10-02 09:49:45.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:49:45 compute-0 nova_compute[260603]: 2025-10-02 09:49:45.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:45 compute-0 nova_compute[260603]: 2025-10-02 09:49:45.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 09:49:45 compute-0 nova_compute[260603]: 2025-10-02 09:49:45.561 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 09:49:45 compute-0 nova_compute[260603]: 2025-10-02 09:49:45.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:46 compute-0 ceph-mon[74477]: pgmap v3964: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct 02 09:49:46 compute-0 nova_compute[260603]: 2025-10-02 09:49:46.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:46 compute-0 nova_compute[260603]: 2025-10-02 09:49:46.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct 02 09:49:48 compute-0 ceph-mon[74477]: pgmap v3965: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct 02 09:49:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct 02 09:49:49 compute-0 nova_compute[260603]: 2025-10-02 09:49:49.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:49:50 compute-0 ceph-mon[74477]: pgmap v3966: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct 02 09:49:50 compute-0 nova_compute[260603]: 2025-10-02 09:49:50.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:51 compute-0 nova_compute[260603]: 2025-10-02 09:49:51.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:52 compute-0 ceph-mon[74477]: pgmap v3967: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:53 compute-0 podman[474718]: 2025-10-02 09:49:53.0230004 +0000 UTC m=+0.078017267 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 09:49:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:53 compute-0 podman[474717]: 2025-10-02 09:49:53.117995177 +0000 UTC m=+0.176603807 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 09:49:54 compute-0 ceph-mon[74477]: pgmap v3968: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:55 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:55 compute-0 nova_compute[260603]: 2025-10-02 09:49:55.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:56 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:49:56 compute-0 ceph-mon[74477]: pgmap v3969: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:56 compute-0 nova_compute[260603]: 2025-10-02 09:49:56.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:49:57 compute-0 podman[474760]: 2025-10-02 09:49:57.011595562 +0000 UTC m=+0.075196390 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 09:49:57 compute-0 podman[474761]: 2025-10-02 09:49:57.029738648 +0000 UTC m=+0.079187414 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid)
Oct 02 09:49:57 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:49:57 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:49:58 compute-0 ceph-mon[74477]: pgmap v3970: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:49:59 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:00 compute-0 nova_compute[260603]: 2025-10-02 09:50:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:00 compute-0 ceph-mon[74477]: pgmap v3971: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:00 compute-0 nova_compute[260603]: 2025-10-02 09:50:00.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:01 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:01 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:01 compute-0 nova_compute[260603]: 2025-10-02 09:50:01.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:02 compute-0 ceph-mon[74477]: pgmap v3972: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:03 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:04 compute-0 ceph-mon[74477]: pgmap v3973: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:05 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:05 compute-0 nova_compute[260603]: 2025-10-02 09:50:05.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:06 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:06 compute-0 ceph-mon[74477]: pgmap v3974: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:06 compute-0 nova_compute[260603]: 2025-10-02 09:50:06.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:07 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:08 compute-0 nova_compute[260603]: 2025-10-02 09:50:08.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:08 compute-0 nova_compute[260603]: 2025-10-02 09:50:08.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 09:50:08 compute-0 ceph-mon[74477]: pgmap v3975: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:09 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:10 compute-0 ceph-mon[74477]: pgmap v3976: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:10 compute-0 nova_compute[260603]: 2025-10-02 09:50:10.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:11 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:11 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:11 compute-0 nova_compute[260603]: 2025-10-02 09:50:11.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:12 compute-0 nova_compute[260603]: 2025-10-02 09:50:12.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:12 compute-0 ceph-mon[74477]: pgmap v3977: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:13 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:13 compute-0 sshd-session[474798]: Accepted publickey for zuul from 192.168.122.10 port 56900 ssh2: ECDSA SHA256:QEnwbgBR1jglQLPp4vwsTS2MMzDakrR2dLJ/eEaCKUI
Oct 02 09:50:13 compute-0 systemd-logind[787]: New session 60 of user zuul.
Oct 02 09:50:13 compute-0 systemd[1]: Started Session 60 of User zuul.
Oct 02 09:50:13 compute-0 sshd-session[474798]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 09:50:13 compute-0 sudo[474802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 09:50:13 compute-0 sudo[474802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 09:50:14 compute-0 ceph-mon[74477]: pgmap v3978: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:15 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:15 compute-0 nova_compute[260603]: 2025-10-02 09:50:15.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:16 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23423 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:16 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:16 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23425 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:16 compute-0 ceph-mon[74477]: pgmap v3979: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:16 compute-0 nova_compute[260603]: 2025-10-02 09:50:16.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:17 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 09:50:17 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224406694' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 09:50:17 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:17 compute-0 ceph-mon[74477]: from='client.23423 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:17 compute-0 ceph-mon[74477]: from='client.23425 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:17 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2224406694' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 09:50:18 compute-0 ceph-mon[74477]: pgmap v3980: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:19 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:19 compute-0 ovs-vsctl[475086]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 09:50:20 compute-0 nova_compute[260603]: 2025-10-02 09:50:20.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:20 compute-0 ceph-mon[74477]: pgmap v3981: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:20 compute-0 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 09:50:20 compute-0 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 09:50:20 compute-0 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 09:50:21 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:21 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:21 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: cache status {prefix=cache status} (starting...)
Oct 02 09:50:21 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: client ls {prefix=client ls} (starting...)
Oct 02 09:50:21 compute-0 nova_compute[260603]: 2025-10-02 09:50:21.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:21 compute-0 lvm[475426]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 02 09:50:21 compute-0 lvm[475425]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 09:50:21 compute-0 lvm[475425]: VG ceph_vg0 finished
Oct 02 09:50:21 compute-0 lvm[475426]: VG ceph_vg1 finished
Oct 02 09:50:22 compute-0 lvm[475466]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 09:50:22 compute-0 lvm[475466]: VG ceph_vg2 finished
Oct 02 09:50:22 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23429 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 09:50:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/227274942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:50:22 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 09:50:22 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/227274942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:50:22 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 09:50:22 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 09:50:22 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23435 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:22 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 09:50:22 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 09:50:22 compute-0 ceph-mon[74477]: pgmap v3982: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/227274942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 09:50:22 compute-0 ceph-mon[74477]: from='client.? 192.168.122.10:0/227274942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 09:50:22 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 09:50:23 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 09:50:23 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 09:50:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711645371' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 09:50:23 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 09:50:23 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23441 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:23 compute-0 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:50:23.445+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 09:50:23 compute-0 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 09:50:23 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 09:50:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:50:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517926739' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:50:23 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: ops {prefix=ops} (starting...)
Oct 02 09:50:23 compute-0 ceph-mon[74477]: from='client.23429 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:23 compute-0 ceph-mon[74477]: from='client.23435 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1711645371' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 09:50:23 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2517926739' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:50:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 09:50:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541629165' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 09:50:23 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 09:50:23 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268134731' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 09:50:23 compute-0 podman[475714]: 2025-10-02 09:50:23.998043323 +0000 UTC m=+0.058766317 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:50:24 compute-0 podman[475713]: 2025-10-02 09:50:24.036539386 +0000 UTC m=+0.097494527 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 09:50:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 09:50:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906598908' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 09:50:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604482718' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: session ls {prefix=session ls} (starting...)
Oct 02 09:50:24 compute-0 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: status {prefix=status} (starting...)
Oct 02 09:50:24 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23453 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 09:50:24 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1092721279' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mon[74477]: pgmap v3983: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:24 compute-0 ceph-mon[74477]: from='client.23441 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/541629165' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1268134731' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3906598908' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1604482718' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 09:50:24 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1092721279' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 09:50:25 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:25 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23457 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 09:50:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704548908' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 09:50:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 09:50:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254031404' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 09:50:25 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 09:50:25 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214909565' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 09:50:25 compute-0 nova_compute[260603]: 2025-10-02 09:50:25.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:26 compute-0 ceph-mon[74477]: from='client.23453 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1704548908' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 09:50:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2254031404' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 09:50:26 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3214909565' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 09:50:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 09:50:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191865689' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 09:50:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 09:50:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925953199' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 09:50:26 compute-0 sudo[476078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:26 compute-0 sudo[476078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:26 compute-0 sudo[476078]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:26 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23469 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:26 compute-0 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 09:50:26 compute-0 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:50:26.498+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 09:50:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:26 compute-0 sudo[476106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:50:26 compute-0 sudo[476106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:26 compute-0 sudo[476106]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:26 compute-0 sudo[476133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:26 compute-0 sudo[476133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:26 compute-0 sudo[476133]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 09:50:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1730364769' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 09:50:26 compute-0 sudo[476160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 09:50:26 compute-0 sudo[476160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:26 compute-0 nova_compute[260603]: 2025-10-02 09:50:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:26 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 09:50:26 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968457732' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 09:50:27 compute-0 ceph-mon[74477]: pgmap v3984: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:27 compute-0 ceph-mon[74477]: from='client.23457 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4191865689' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 09:50:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1925953199' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 09:50:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1730364769' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 09:50:27 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/968457732' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23475 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:27 compute-0 sudo[476160]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:27 compute-0 sudo[476314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:27 compute-0 sudo[476314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:27 compute-0 sudo[476314]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:27 compute-0 podman[476351]: 2025-10-02 09:50:27.408208538 +0000 UTC m=+0.070715930 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 09:50:27 compute-0 podman[476346]: 2025-10-02 09:50:27.408716143 +0000 UTC m=+0.073074083 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 02 09:50:27 compute-0 sudo[476371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:50:27 compute-0 sudo[476371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:27 compute-0 sudo[476371]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 09:50:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606793835' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23479 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:27 compute-0 sudo[476446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:27 compute-0 sudo[476446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:27 compute-0 sudo[476446]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:27 compute-0 sudo[476491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- inventory --format=json-pretty --filter-for-batch
Oct 02 09:50:27 compute-0 sudo[476491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23481 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:27 compute-0 podman[476654]: 2025-10-02 09:50:27.903011093 +0000 UTC m=+0.042987084 container create e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:50:27 compute-0 systemd[1]: Started libpod-conmon-e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0.scope.
Oct 02 09:50:27 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 09:50:27 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581317241' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 09:50:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:50:27 compute-0 podman[476654]: 2025-10-02 09:50:27.880937753 +0000 UTC m=+0.020913744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:50:27 compute-0 podman[476654]: 2025-10-02 09:50:27.98840024 +0000 UTC m=+0.128376241 container init e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:50:27 compute-0 podman[476654]: 2025-10-02 09:50:27.996235895 +0000 UTC m=+0.136211886 container start e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 09:50:27 compute-0 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 09:50:28 compute-0 podman[476654]: 2025-10-02 09:50:28.000367224 +0000 UTC m=+0.140343235 container attach e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:50:28 compute-0 reverent_noyce[476693]: 167 167
Oct 02 09:50:28 compute-0 systemd[1]: libpod-e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0.scope: Deactivated successfully.
Oct 02 09:50:28 compute-0 podman[476654]: 2025-10-02 09:50:28.003010116 +0000 UTC m=+0.142986107 container died e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 09:50:28 compute-0 ceph-mon[74477]: from='client.23469 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:28 compute-0 ceph-mon[74477]: from='client.23475 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/606793835' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 09:50:28 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2581317241' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 09:50:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd21132eff86ccb978386cd9ffe2e95c40ba3c77cff38fb09b2c1b3b765603ad-merged.mount: Deactivated successfully.
Oct 02 09:50:28 compute-0 podman[476654]: 2025-10-02 09:50:28.046288588 +0000 UTC m=+0.186264579 container remove e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 09:50:28 compute-0 systemd[1]: libpod-conmon-e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0.scope: Deactivated successfully.
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64900400 session 0x562c62733a40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:50:28
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'vms', 'default.rgw.meta']
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:56.651690+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f3f800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:57.651886+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081975 data_alloc: 218103808 data_used: 3538944
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:58.652850+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:16:59.652950+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:00.653113+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:01.653231+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:02.653397+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087095 data_alloc: 218103808 data_used: 4190208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:03.653522+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:04.653665+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:05.653846+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:06.654062+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 39608320 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:07.654253+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087095 data_alloc: 218103808 data_used: 4190208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.632834435s of 12.789543152s, submitted: 4
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:08.654421+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:09.654583+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:10.654706+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:11.654893+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed507000/0x0/0x4ffc00000, data 0x21e7067/0x2377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277700608 unmapped: 38821888 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:12.655061+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3138289 data_alloc: 218103808 data_used: 4243456
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277708800 unmapped: 38813696 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:13.655215+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4cf000/0x0/0x4ffc00000, data 0x221f067/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:14.655389+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:15.655514+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:16.655646+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:17.655833+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3145465 data_alloc: 218103808 data_used: 4243456
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:18.655948+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4c7000/0x0/0x4ffc00000, data 0x2227067/0x23b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:19.656056+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.898307800s of 12.210510254s, submitted: 59
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:20.656202+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c628ede00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c61e56f00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 38780928 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f70800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:21.656346+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4c7000/0x0/0x4ffc00000, data 0x2227067/0x23b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 38780928 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:22.656478+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146537 data_alloc: 218103808 data_used: 4231168
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:23.656600+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4bf000/0x0/0x4ffc00000, data 0x222f067/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:24.656717+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:25.656878+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:26.657071+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f70800 session 0x562c646baf00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6067/0x1b76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:27.657192+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074399 data_alloc: 218103808 data_used: 3526656
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:28.657331+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:29.657528+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:30.657789+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:31.658031+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:32.658279+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074399 data_alloc: 218103808 data_used: 3526656
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:33.658458+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508622169s of 13.601557732s, submitted: 32
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c6475e780
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dcc00 session 0x562c628f0000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:34.658598+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649c5000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:35.658700+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:36.658880+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:37.659101+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074223 data_alloc: 218103808 data_used: 3526656
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd09000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:38.659239+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:39.659405+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c6367cd20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:40.659525+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:41.659662+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:42.659831+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:43.660559+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:44.660818+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:45.660972+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:46.661184+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:47.661363+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:48.661514+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:49.661742+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:50.662006+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:51.662163+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:52.662310+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:53.662475+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:54.662632+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:55.662834+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:56.663073+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:57.663263+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:58.663505+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:59.663745+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:00.664001+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 140K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.70 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1788 writes, 7086 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 8.42 MB, 0.01 MB/s
                                           Interval WAL: 1788 writes, 697 syncs, 2.57 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:01.664180+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets getting new tickets!
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:02.665128+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _finish_auth 0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:02.666327+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:03.665285+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:04.665444+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:05.665673+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:06.665895+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: mgrc ms_handle_reset ms_handle_reset con 0x562c64994400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:50:28 compute-0 ceph-osd[90385]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: get_auth_request con 0x562c62b25400 auth_method 0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:07.666010+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:08.666266+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:09.667177+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:10.667796+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:11.668055+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:12.668397+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:13.668623+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:14.668835+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:15.668967+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:16.669181+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:17.669377+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:18.669567+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:19.669721+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:20.669893+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:21.670055+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:22.670232+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f3f800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.216686249s of 49.199211121s, submitted: 27
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958270 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c6b307a40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6963f000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6963f000 session 0x562c6363ef00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649c5000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c646d6780
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661dcc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dcc00 session 0x562c628dc1e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd4c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c646d7860
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:23.670383+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:24.670722+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:25.670981+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:26.671219+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:27.671429+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003011 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:28.671663+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:29.671825+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:30.671986+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6315f0e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 46342144 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64901800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f41400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:31.672143+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274391040 unmapped: 46333952 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:32.672302+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274399232 unmapped: 46325760 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026672 data_alloc: 218103808 data_used: 3313664
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:33.672496+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:34.672673+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:35.672872+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:36.673106+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:37.673300+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045552 data_alloc: 218103808 data_used: 5947392
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:38.673509+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:39.673672+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:40.673891+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:41.674061+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:42.674233+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.794466019s of 20.170822144s, submitted: 32
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045624 data_alloc: 218103808 data_used: 5947392
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:43.674380+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276774912 unmapped: 43950080 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,20])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:44.674548+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275988480 unmapped: 44736512 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:45.674708+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276111360 unmapped: 44613632 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:46.674889+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:47.675026+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092030 data_alloc: 218103808 data_used: 6160384
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:48.675167+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd83000/0x0/0x4ffc00000, data 0x196e011/0x1afb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:49.675248+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:50.675414+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:51.675588+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:52.675834+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd62000/0x0/0x4ffc00000, data 0x198f011/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090850 data_alloc: 218103808 data_used: 6164480
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:53.675990+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:54.676157+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:55.676333+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.045855522s of 12.648342133s, submitted: 63
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:56.676489+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd62000/0x0/0x4ffc00000, data 0x198f011/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276332544 unmapped: 44392448 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54e400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:57.676621+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276340736 unmapped: 44384256 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092395 data_alloc: 218103808 data_used: 6164480
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:58.676744+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 40468480 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:59.676886+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e400 session 0x562c64ef6960
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54e400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e400 session 0x562c643a61e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6551b2c0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649c5000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:00.677040+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:01.677213+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:02.677380+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151015 data_alloc: 218103808 data_used: 6164480
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c6b3061e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f71400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f71400 session 0x562c6274ab40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:03.677576+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:04.677741+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:05.677917+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.307779789s of 10.068083763s, submitted: 53
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:06.678073+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62874c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:07.678206+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152489 data_alloc: 218103808 data_used: 6164480
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:08.678402+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276692992 unmapped: 48234496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874c00 session 0x562c646d9680
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:09.678533+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62874c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:10.678707+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:11.678831+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:12.678976+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279396352 unmapped: 45531136 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3210696 data_alloc: 234881024 data_used: 14290944
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:13.679136+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279412736 unmapped: 45514752 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:14.679292+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:15.679446+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:16.679618+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.747380257s of 11.106559753s, submitted: 63
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:17.679795+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3211048 data_alloc: 234881024 data_used: 14290944
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:18.679962+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:19.680162+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:20.680303+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279437312 unmapped: 45490176 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:21.680537+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279437312 unmapped: 45490176 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:22.680741+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282304512 unmapped: 42622976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228632 data_alloc: 234881024 data_used: 14290944
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:23.680915+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282304512 unmapped: 42622976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:24.681061+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 284680192 unmapped: 40247296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:25.681189+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebf33000/0x0/0x4ffc00000, data 0x261e011/0x27ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:26.681394+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.735648155s of 10.060455322s, submitted: 31
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:27.681551+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247268 data_alloc: 234881024 data_used: 14290944
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:28.681710+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebebb000/0x0/0x4ffc00000, data 0x2696011/0x2823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:29.681805+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:30.681978+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeb6000/0x0/0x4ffc00000, data 0x269b011/0x2828000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:31.682132+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:32.682276+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249180 data_alloc: 234881024 data_used: 14315520
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:33.682460+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:34.682615+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:35.682728+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:36.682953+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeb0000/0x0/0x4ffc00000, data 0x26a1011/0x282e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:37.683139+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249180 data_alloc: 234881024 data_used: 14315520
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.374366760s of 11.633337975s, submitted: 16
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:38.683307+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:39.683478+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:40.683657+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:41.684058+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:42.684172+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:43.684300+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:44.684415+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:45.684580+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:46.684785+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:47.685441+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:48.685561+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:49.685716+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:50.685885+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:51.686074+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:52.686289+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:53.686531+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.346201897s of 15.059863091s, submitted: 4
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874c00 session 0x562c64efe960
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c628dda40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c69f3f800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:54.686858+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:55.687064+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebea5000/0x0/0x4ffc00000, data 0x26ac011/0x2839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:56.687283+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:57.687445+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253376 data_alloc: 234881024 data_used: 14413824
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:58.687643+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:59.687843+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282861568 unmapped: 42065920 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x199c011/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:00.688007+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282861568 unmapped: 42065920 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:01.688265+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c64ef7860
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:02.688463+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099936 data_alloc: 218103808 data_used: 6164480
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:03.688622+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x199c011/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:04.688862+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:05.689010+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.364779472s of 12.359013557s, submitted: 40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c6b307a40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41400 session 0x562c6551bc20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:06.689162+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:07.689296+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276013056 unmapped: 48914432 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2973676 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed307000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:08.689464+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:09.689629+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c64effc20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:10.689805+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:11.689945+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:12.690079+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:13.690237+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:14.690401+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:15.690715+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:16.691040+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:17.691254+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:18.691652+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:19.691863+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:20.692186+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:21.692384+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:22.692526+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:23.692968+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:24.693216+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:25.693455+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:26.693672+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:27.693863+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:28.694000+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:29.694310+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:30.694457+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:31.694662+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:32.694946+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:33.695136+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:34.695368+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:35.695534+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:36.695708+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:37.695897+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:38.696041+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:39.696194+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:40.696362+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:41.696509+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:42.696727+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:43.696926+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:44.697095+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:45.697229+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:46.698328+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:47.698486+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:48.698700+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:49.698852+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64998000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c62a46f00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661dd800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dd800 session 0x562c6274a780
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6471a3c0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:50.699016+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64999000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999000 session 0x562c646d94a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c627dfc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.197753906s of 44.837074280s, submitted: 31
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6475f4a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f1400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6b3065a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64998000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c6467bc20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661dd800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dd800 session 0x562c6363f0e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c65477c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477c00 session 0x562c6315f860
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:51.699411+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:52.699558+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:53.699712+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:54.699947+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:55.700157+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:56.700406+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:57.700529+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:58.700683+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:59.700904+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:00.701129+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:01.701260+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661ddc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c646e2b40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:02.701407+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd4c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c645674a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:03.701550+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c64566780
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c68fe9000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.026555061s of 13.081507683s, submitted: 4
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c68fe9000 session 0x562c628ed4a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276045824 unmapped: 48881664 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:04.701692+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62874400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64e73c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:05.701889+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:06.702066+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:07.702241+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:08.702472+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030199 data_alloc: 218103808 data_used: 3846144
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:09.703229+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:10.703374+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:11.703531+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:12.703790+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:13.703967+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030199 data_alloc: 218103808 data_used: 3846144
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:14.704160+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:15.704320+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.270869255s of 12.386258125s, submitted: 6
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 48275456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:16.704555+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278601728 unmapped: 46325760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:17.704719+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278700032 unmapped: 46227456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:18.704885+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067793 data_alloc: 218103808 data_used: 4562944
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:19.705058+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:20.705216+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:21.705441+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:22.705622+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:23.705812+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071175 data_alloc: 218103808 data_used: 4702208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:24.705969+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:25.706093+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:26.706313+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:27.706480+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:28.706655+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071495 data_alloc: 218103808 data_used: 4710400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:29.706811+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:30.706940+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:31.707080+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:32.707233+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62863400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c6279b0e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62863400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c646ba000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661ddc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c61e57a40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c67dd4c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c64effe00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c68fe9000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.001466751s of 16.817237854s, submitted: 48
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279355392 unmapped: 45572096 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:33.707344+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c68fe9000 session 0x562c628ec000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088659 data_alloc: 218103808 data_used: 4710400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c62748000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c6363e1e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62863400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c64566d20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c661ddc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c61e57a40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:34.707517+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:35.707685+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:36.707859+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:37.719423+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:38.719622+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088819 data_alloc: 218103808 data_used: 4714496
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:39.719797+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:40.719952+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62862c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:41.720144+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:42.720308+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.341195107s of 10.009945869s, submitted: 15
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c6279b0e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:43.720514+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092880 data_alloc: 218103808 data_used: 4714496
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:44.720714+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f71800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:45.720925+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279543808 unmapped: 45383680 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:46.721111+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:47.721237+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:48.721421+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103412 data_alloc: 218103808 data_used: 6053888
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:49.721608+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:50.721803+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:51.721999+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:52.722183+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:53.722363+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103412 data_alloc: 218103808 data_used: 6053888
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:54.722698+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:55.722810+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:56.722960+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.366211891s of 13.564207077s, submitted: 1
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42450944 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:57.723087+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:58.723214+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x1da3fbf/0x1f31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172036 data_alloc: 218103808 data_used: 7139328
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:59.723386+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec790000/0x0/0x4ffc00000, data 0x1da7fbf/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:00.723534+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:01.723692+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:02.723818+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec790000/0x0/0x4ffc00000, data 0x1da7fbf/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:03.724001+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172664 data_alloc: 218103808 data_used: 7143424
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:04.724188+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:05.724315+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:06.724476+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec7a6000/0x0/0x4ffc00000, data 0x1daafbf/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:07.724602+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:08.724817+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166132 data_alloc: 218103808 data_used: 7143424
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:09.724963+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:10.725126+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:11.725269+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec7a6000/0x0/0x4ffc00000, data 0x1daafbf/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:12.725406+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:13.725534+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166772 data_alloc: 218103808 data_used: 7204864
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:14.725670+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902400 session 0x562c6363f0e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.399305344s of 18.292930603s, submitted: 109
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f71800 session 0x562c6551a780
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64902400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:15.725800+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 42246144 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902400 session 0x562c645665a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:16.726065+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:17.726283+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:18.726438+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 podman[476794]: 2025-10-02 09:50:28.260401766 +0000 UTC m=+0.088258458 container create eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080618 data_alloc: 218103808 data_used: 4771840
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:19.726571+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:20.726716+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:21.726820+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:22.726944+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:23.727096+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874400 session 0x562c62a46000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64e73c00 session 0x562c64efe1e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082950 data_alloc: 218103808 data_used: 4759552
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c643a4c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:24.727259+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280166400 unmapped: 44761088 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.822414398s of 10.087261200s, submitted: 43
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:25.727454+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280166400 unmapped: 44761088 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:26.727646+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:27.727838+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c643a4c00 session 0x562c64eff680
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:28.728055+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:29.728256+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:30.728418+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:31.728618+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:32.728834+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:33.728972+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:34.729101+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:35.729255+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:36.729624+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:37.729825+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:38.730089+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:39.730290+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:40.730435+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:41.730602+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:42.730846+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:43.731026+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:44.731188+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:45.731321+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:46.731470+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:47.731612+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:48.731851+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:49.731999+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:50.732120+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:51.732277+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:52.732436+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:53.732575+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:54.732712+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:55.732903+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:56.733135+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:57.733308+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:58.733487+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280182784 unmapped: 44744704 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:59.733652+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:00.733811+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:01.733985+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:02.734168+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:03.734358+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:04.734533+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:05.734718+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:06.734968+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:07.735164+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:08.735376+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:09.767737+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:10.767887+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:11.768135+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:12.768311+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:13.768606+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:14.768806+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:15.769060+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:16.769256+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:17.769452+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:18.769629+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:19.769808+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:20.769989+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:21.770182+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:22.770306+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:23.770511+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:24.770662+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:25.770783+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:26.770974+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:27.771111+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:28.771254+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:29.771380+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:30.771499+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:31.771681+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:32.771849+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:33.772020+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:34.772165+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:35.772270+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:36.772436+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:37.772648+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:38.772860+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:39.773035+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:40.773187+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:41.773364+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:42.773572+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:43.773675+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:44.773807+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:45.773927+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:46.774231+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:47.774385+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:48.774538+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:49.774729+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:50.774922+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:51.775056+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:52.775254+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:53.775428+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:54.775614+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:55.775817+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:56.775984+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:57.776171+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:58.776384+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:59.776553+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:00.776694+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:01.776821+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:02.776981+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:03.777181+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:04.777361+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:05.777503+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64901800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:06.778058+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.534408569s of 101.709854126s, submitted: 14
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c6551af00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c6456ad20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c6475e780
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f41000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:07.778223+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41000 session 0x562c628e2d20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f63800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f63800 session 0x562c646f8f00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c64ef61e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:08.778422+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017460 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfaa000/0x0/0x4ffc00000, data 0x1197faf/0x1324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64901800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c646ba960
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:09.778559+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64f41000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41000 session 0x562c61e4eb40
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54fc00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c62732000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:10.778798+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 42319872 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64998000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f0000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:11.778945+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282615808 unmapped: 42311680 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:12.779102+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfa8000/0x0/0x4ffc00000, data 0x1197fe2/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:13.779257+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3043767 data_alloc: 218103808 data_used: 3309568
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:14.779423+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfa8000/0x0/0x4ffc00000, data 0x1197fe2/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:15.779543+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c627330e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f0000 session 0x562c646d94a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64381c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:16.779692+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.375305176s of 10.585134506s, submitted: 14
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:17.779833+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c62474d20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:18.779989+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:19.780148+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:20.780297+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:21.780426+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:22.780560+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:23.780850+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:24.780975+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:25.781086+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:26.781269+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:27.781406+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:28.781622+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:29.781865+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:30.782043+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:31.782176+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:32.782352+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:33.782548+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:34.782696+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:35.782847+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:36.783042+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:37.783257+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:38.783414+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:39.783563+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:40.784213+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:41.784384+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:42.784540+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:43.784702+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:44.784849+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:45.784984+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:46.785198+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c68fe8800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:47.785315+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.151807785s of 30.341138840s, submitted: 13
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:48.785428+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 42262528 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 291 ms_handle_reset con 0x562c68fe8800 session 0x562c62475e00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997150 data_alloc: 218103808 data_used: 53248
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:49.785552+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 42262528 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:50.785834+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:51.785992+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:52.786132+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:53.786263+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997150 data_alloc: 218103808 data_used: 53248
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:54.786396+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:55.786516+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:56.786702+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:57.786843+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:58.787027+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:59.787214+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:00.787328+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:01.787477+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:02.787600+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:03.787832+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:04.788033+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:05.788209+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:06.788469+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:07.788727+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:08.788956+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:09.789116+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:10.789315+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:11.789492+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:12.789691+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:13.789852+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:14.790025+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:15.790175+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:16.790386+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:17.790560+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:18.790733+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:19.790921+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:20.791086+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:21.791254+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:22.791379+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:23.791567+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:24.791717+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:25.791830+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:26.791964+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:27.792072+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:28.792286+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:29.792449+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:30.792565+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:31.792690+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:32.792832+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:33.792962+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:34.793123+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:35.793301+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:36.793492+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:37.793654+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:38.793806+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:39.793935+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:40.794088+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:41.794255+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:42.794398+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:43.794523+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:44.794649+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:45.794832+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:46.795028+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:47.795215+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:48.795358+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:49.795502+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:50.795639+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:51.795790+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:52.796025+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:53.796177+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:54.796413+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:55.796778+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282796032 unmapped: 42131456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:56.796937+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282796032 unmapped: 42131456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:57.797079+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:58.797208+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:59.797373+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:00.797507+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:01.797657+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:02.797828+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:03.798058+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:04.798247+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:05.798399+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:06.798630+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:07.798815+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:08.799002+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:09.799175+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:10.799318+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:11.799515+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:12.799673+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:13.799840+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:14.800092+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:15.800265+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:16.800458+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:17.800590+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:18.800814+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:19.800983+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:20.801127+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:21.801262+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:22.801435+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:23.801586+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:24.801818+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:25.802062+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:26.802315+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:27.802465+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:28.802628+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:29.802801+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:30.802951+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:31.803120+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:32.803274+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:33.803482+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:34.803633+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:35.803856+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:36.804097+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:37.804297+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:38.804471+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:39.804691+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:40.804852+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:41.805018+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:42.805144+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:43.805269+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:44.805422+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:45.805543+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:46.805823+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:47.806025+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:48.806343+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:49.806522+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:50.806667+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64687c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 290488320 unmapped: 34439168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:51.806794+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 ms_handle_reset con 0x562c64687c00 session 0x562c6315e1e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:52.806970+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:53.807108+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:54.807237+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012284 data_alloc: 218103808 data_used: 6860800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:55.807380+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:56.807528+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:57.807842+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:58.808061+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:59.808226+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012284 data_alloc: 218103808 data_used: 6860800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:00.808345+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64900400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:01.808505+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 134.109100342s of 134.325363159s, submitted: 30
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 293 ms_handle_reset con 0x562c64900400 session 0x562c6315ed20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:02.808691+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 293 heartbeat osd_stat(store_statfs(0x4edaef000/0x0/0x4ffc00000, data 0x64f191/0x7de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:03.808886+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:04.809056+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941203 data_alloc: 218103808 data_used: 45056
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c630f0000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:05.809240+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 294 ms_handle_reset con 0x562c630f0000 session 0x562c64efe000
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 294 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:06.809477+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:07.809703+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:08.809867+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 294 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:09.810039+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902681 data_alloc: 218103808 data_used: 53248
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:10.810274+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:11.810528+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:12.810729+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64bd9c00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 294 ms_handle_reset con 0x562c64bd9c00 session 0x562c6315e780
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.945940018s of 11.156617165s, submitted: 55
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 295 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:13.810925+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:14.811086+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905655 data_alloc: 218103808 data_used: 53248
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:15.811151+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:16.811391+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:17.811491+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64687400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:18.811618+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf5a000/0x0/0x4ffc00000, data 0x1e27c5/0x374000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 ms_handle_reset con 0x562c64687400 session 0x562c6456af00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:19.811856+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:20.811994+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:21.812323+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:22.812475+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:23.812606+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:24.812688+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:25.812816+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:26.813036+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:27.813234+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:28.813367+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:29.813548+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:30.813732+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:31.813961+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:32.814120+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:33.814272+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:34.814450+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:35.814565+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:36.814745+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:37.814967+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:38.815142+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:39.815325+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:40.815435+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:41.815559+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:42.815682+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:43.815833+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:44.816026+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:45.816196+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:46.816411+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23485 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:47.816548+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:48.816723+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:49.816905+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:50.817039+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:51.817156+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:52.817313+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:53.817448+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:54.817606+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:55.817797+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:56.817987+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:57.818190+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:58.818348+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:59.818498+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:00.818706+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 37K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1178 writes, 4161 keys, 1178 commit groups, 1.0 writes per commit group, ingest: 5.22 MB, 0.01 MB/s
                                           Interval WAL: 1178 writes, 469 syncs, 2.51 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:01.818829+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:02.818998+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:03.819140+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:04.819839+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:05.820012+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:06.820211+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:07.820338+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:08.820542+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:09.820688+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:10.820836+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:11.820996+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:12.821171+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:13.821367+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:14.821516+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:15.821627+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:16.821848+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:17.822010+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:18.822142+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:19.822344+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:20.822461+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:21.822601+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:22.822798+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:23.822953+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:24.823041+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:25.823146+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:26.823348+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:27.823492+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:28.823642+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:29.823784+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:30.823885+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:31.824017+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:32.824143+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:33.824280+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:34.824385+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:35.824501+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:36.824624+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:37.824762+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:38.824913+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:39.825059+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:40.825212+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:41.825348+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:42.825466+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:43.825639+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:44.825778+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:45.825909+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:46.826122+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:47.826284+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:48.826472+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:49.826655+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:50.826788+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:51.826965+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286146560 unmapped: 38780928 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:52.827151+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:53.827284+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 systemd[1]: Started libpod-conmon-eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc.scope.
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:54.827412+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:55.827536+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 102.574424744s of 102.733085632s, submitted: 24
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:56.827683+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:57.827827+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286203904 unmapped: 38723584 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:58.827953+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:59.828104+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909566 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:00.828278+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:01.828433+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:02.828624+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:03.828822+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6547c800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 38666240 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:04.828964+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 38674432 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2968052 data_alloc: 218103808 data_used: 61440
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:05.829118+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 47063040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.070027351s of 10.197218895s, submitted: 103
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 297 ms_handle_reset con 0x562c6547c800 session 0x562c61dd25a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:06.829296+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6c54e800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286277632 unmapped: 47046656 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:07.829430+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 47038464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74f000/0x0/0x4ffc00000, data 0x9e5f38/0xb7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 ms_handle_reset con 0x562c6c54e800 session 0x562c646f8d20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:08.829557+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:09.829708+0000)
Oct 02 09:50:28 compute-0 podman[476794]: 2025-10-02 09:50:28.225200696 +0000 UTC m=+0.053057418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:10.829875+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:11.830019+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:12.830173+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:13.830379+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:14.830526+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:15.830881+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:16.831053+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:17.831194+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:18.831316+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:19.831438+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:20.831605+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:21.831866+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:22.832051+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:23.832172+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:24.832295+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:25.832420+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:26.832610+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:27.832885+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:28.833093+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:29.833263+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:30.833415+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:31.833732+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:32.833950+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:33.834091+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:34.834235+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:35.834351+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:36.834516+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:37.834646+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:38.834828+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:39.835038+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286343168 unmapped: 46981120 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c62874800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.268466949s of 34.596256256s, submitted: 25
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:40.835232+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286351360 unmapped: 46972928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74d000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:41.835389+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286359552 unmapped: 46964736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:42.835551+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286384128 unmapped: 46940160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed749000/0x0/0x4ffc00000, data 0x9e9686/0xb84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:43.835726+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286384128 unmapped: 46940160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:44.835906+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287432704 unmapped: 45891584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982217 data_alloc: 218103808 data_used: 77824
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:45.836035+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 299 ms_handle_reset con 0x562c62874800 session 0x562c624745a0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:46.836209+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:47.836371+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed74a000/0x0/0x4ffc00000, data 0x9e9686/0xb84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:48.836499+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:49.836663+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981401 data_alloc: 218103808 data_used: 77824
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.584912777s of 10.009610176s, submitted: 37
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:50.836853+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:51.837025+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:52.837253+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:53.837920+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:54.838084+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985575 data_alloc: 218103808 data_used: 86016
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:55.838249+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:56.838474+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:57.838782+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:58.839001+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:59.839315+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:00.839480+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:01.839727+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:02.839965+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:03.840221+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:04.840498+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:05.840640+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:06.840858+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:07.841077+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:08.841289+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:09.841513+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:10.841699+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:11.841901+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:12.842102+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:13.842262+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:14.842410+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:15.842554+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:16.842799+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:17.842917+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:18.843054+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:19.843249+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 46866432 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:20.843633+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:21.843897+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:22.844092+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:23.844282+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:24.844437+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:25.844600+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:26.844797+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:27.844986+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:28.845128+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:29.845288+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:30.845425+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:31.845580+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:32.845810+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:33.845933+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:34.846022+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:35.846184+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:36.846426+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:37.846609+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:38.846842+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:39.847031+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:40.847193+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:41.847376+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:42.847741+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:43.847970+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:44.848215+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:45.848355+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:46.848641+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:47.848875+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:48.849088+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:49.849247+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:50.849415+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:51.849610+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:52.849781+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:53.850137+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:54.850290+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:55.850408+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:56.850607+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:57.850803+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:58.850905+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:59.851037+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:00.851206+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:01.851386+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:02.851528+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:03.851683+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:04.851820+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:05.851921+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:06.852098+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:07.852207+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:08.852373+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:09.852563+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:10.852673+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:11.852812+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:12.852953+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:13.853085+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:14.853219+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:15.853414+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:16.853600+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:17.853714+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:18.853870+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:19.854032+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:20.854279+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:21.854439+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:22.854568+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:23.854690+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:24.854802+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:25.854917+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:26.855058+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:27.855238+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:28.855375+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:29.855521+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:30.855727+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:31.855827+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:32.855970+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:33.856118+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:34.856246+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:35.856383+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:36.856535+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:37.856668+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:38.856858+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:39.857036+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:40.857158+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:41.857284+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:42.857425+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:43.876518+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:44.876632+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:45.876872+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:46.877087+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:47.877224+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:48.877384+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:49.877537+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:50.877669+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:51.877836+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:52.877958+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:53.878092+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:54.878218+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:55.878363+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:56.878528+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:57.878692+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:58.878841+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:59.879210+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:00.880073+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:01.880312+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:02.881210+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:03.881861+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:04.882189+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:05.882675+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:06.883164+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:07.883571+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:08.884092+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:09.884394+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:10.884797+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:11.885110+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:12.885393+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:13.885626+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:14.885881+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:15.886056+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:16.886299+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:17.886517+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:18.886811+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:19.887054+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:20.887272+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:21.887485+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:22.887726+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:23.887915+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:24.888110+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:25.888287+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:26.888537+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:27.888790+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:28.888961+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:29.889131+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:30.889311+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:31.889520+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:32.889703+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:33.889955+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:34.890093+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:35.890245+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:36.890449+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:37.890639+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:38.890788+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:39.890953+0000)
Oct 02 09:50:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:40.891085+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:41.891253+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:42.891448+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:43.891896+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:44.892288+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:45.892573+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:46.892845+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:47.893079+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:48.893274+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:49.893520+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:50.893848+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:51.894150+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:52.894417+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:53.894626+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:54.894910+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:55.895090+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:56.895320+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:57.895499+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:58.895702+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:59.895883+0000)
Oct 02 09:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:00.896043+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:01.896238+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:02.896446+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:03.896614+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:04.896840+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:05.897020+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:06.897219+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: mgrc ms_handle_reset ms_handle_reset con 0x562c62b25400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:50:28 compute-0 ceph-osd[90385]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: get_auth_request con 0x562c62874800 auth_method 0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:07.897369+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:08.897547+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:09.897806+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:10.897944+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:11.898103+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:12.898324+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:13.898499+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:14.898659+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:15.898793+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:16.899006+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:17.899153+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:18.899348+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:19.899546+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:20.899683+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:21.899873+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:22.900090+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:23.900263+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:24.900443+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:25.900565+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:26.900732+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:27.900903+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:28.901052+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:29.901181+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:30.901370+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:31.901655+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:32.901857+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:33.901989+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:34.902130+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:35.902250+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:36.902446+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:37.902606+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:38.902946+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:39.903202+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:40.903342+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:41.903469+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:42.903589+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:43.903715+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:44.903831+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:45.903999+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 46522368 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:46.904190+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 46514176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:47.904332+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 46514176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:48.904517+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:49.904636+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:50.904807+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:51.905057+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:52.905215+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:53.905406+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:54.905641+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:55.905786+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:56.905970+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:57.906108+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:58.906349+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:59.906521+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:00.907056+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:01.907258+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:02.907410+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:03.907572+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:04.907827+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:05.908046+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:06.908276+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:07.908403+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:08.908595+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:09.908804+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:10.908978+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286851072 unmapped: 46473216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:11.909115+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286851072 unmapped: 46473216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:12.909299+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:13.909458+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:14.909612+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:15.909775+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:16.910065+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:17.910201+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:18.910380+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:19.910558+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:20.910735+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:21.910925+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:22.911099+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:23.911266+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:24.911675+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:25.911807+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:26.911998+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:27.912174+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:28.912368+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:29.912517+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:30.912678+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:31.912807+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:32.913004+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:33.913130+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:34.913318+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286900224 unmapped: 46424064 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:35.913470+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286908416 unmapped: 46415872 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:36.913661+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286908416 unmapped: 46415872 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:37.913826+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:38.914013+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:39.914188+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:40.914452+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:41.914661+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:42.914795+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:43.914920+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:44.915075+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:45.915246+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:46.915459+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:47.915599+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:48.915768+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:49.915914+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:50.916054+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:51.916178+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:52.916326+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:53.916456+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:54.916566+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:55.916679+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:56.916822+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:57.917280+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286957568 unmapped: 46366720 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:58.917440+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:59.917674+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:00.917890+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:01.918070+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:02.918214+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:03.918432+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:04.918633+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:05.918849+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:06.919106+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:07.919283+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:08.919450+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:09.919653+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:10.919784+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:11.919918+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:12.920106+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:13.920281+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:14.920448+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:15.920615+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:16.920887+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:17.921037+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:18.921223+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:19.921389+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:20.921564+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:21.921904+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:22.922058+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:23.922238+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:24.922398+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:25.922534+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:26.922732+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:27.922966+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:28.923182+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:29.923355+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:30.923540+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:31.923710+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:32.923971+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:33.924151+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:34.924285+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:35.924429+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:36.924629+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:37.924805+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:38.924974+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:39.925160+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:40.925324+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:41.925478+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:42.925646+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 09:50:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1299664923' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 09:50:28 compute-0 podman[476794]: 2025-10-02 09:50:28.371203677 +0000 UTC m=+0.199060379 container init eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:43.925803+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:44.925986+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:45.926134+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:46.926333+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:47.926472+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:48.926606+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:49.926730+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:50.926882+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:51.927127+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:52.927324+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:53.927536+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:54.927689+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:55.927854+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:56.928087+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:57.928263+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:58.928401+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:59.928533+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:00.928804+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:01.928942+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:02.929150+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:03.929320+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:04.929477+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:05.929626+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:06.929827+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:07.929981+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:08.930146+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:09.930296+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:10.930476+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 46211072 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:11.930636+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 46211072 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:12.930812+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:13.930998+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:14.931208+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:15.931331+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:16.931516+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:17.931672+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:18.931844+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:19.931997+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:20.932134+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:21.932311+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:22.932499+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:23.932665+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:24.932836+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:25.933004+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:26.933187+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:27.933564+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:28.933743+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:29.934024+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:30.934178+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:31.934315+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:32.934438+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:33.934569+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:34.934789+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:35.934955+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:36.935148+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:37.935270+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:38.935405+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:39.935556+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:40.935712+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:41.935873+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:42.936036+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:43.936179+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 46145536 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:44.936298+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 46145536 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:45.936432+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:46.936648+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:47.936806+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:48.936945+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:49.937145+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:50.937351+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 46129152 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:51.937503+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:52.937647+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:53.937835+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:54.938053+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:55.938237+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:56.938494+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:57.938673+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:58.938859+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:59.939091+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:00.939250+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:01.939418+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:02.939581+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:03.939829+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:04.939989+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:05.940178+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:06.940359+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:07.940512+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:08.940666+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:09.940834+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:10.940994+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:11.941124+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:12.941245+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:13.941375+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:14.941503+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:15.941632+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 46080000 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:16.941797+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 46080000 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:17.941932+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 46071808 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:18.942068+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:19.942190+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:20.942368+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:21.942562+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:22.942854+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:23.943021+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:24.943193+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:25.943433+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:26.943628+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:27.944050+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:28.944197+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:29.944340+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:30.944520+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:31.944737+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:32.944934+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:33.945073+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:34.945213+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:35.945407+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:36.945600+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:37.945832+0000)
Oct 02 09:50:28 compute-0 podman[476794]: 2025-10-02 09:50:28.380318702 +0000 UTC m=+0.208175394 container start eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 09:50:28 compute-0 podman[476794]: 2025-10-02 09:50:28.384194782 +0000 UTC m=+0.212051504 container attach eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:38.945999+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:39.946179+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:40.946365+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:41.946513+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:42.946716+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:43.946950+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:44.947102+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:45.959559+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:46.959799+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:47.959967+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:48.960111+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:49.960241+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:50.960369+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:51.960500+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:52.960690+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:53.960851+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:54.960995+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:55.961127+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:56.961298+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:57.961528+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:58.961808+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:59.961967+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:00.962122+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 145K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 338 writes, 741 keys, 338 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 338 writes, 158 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:01.962290+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:02.962412+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:03.962595+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:04.962732+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:05.962901+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:06.963094+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:07.963243+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:08.963373+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:09.963537+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:10.963719+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:11.964098+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:12.964260+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:13.964466+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:14.964619+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:15.964761+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:16.965033+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:17.965193+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:18.965362+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:19.965504+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287375360 unmapped: 45948928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:20.965655+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287375360 unmapped: 45948928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:21.965803+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:22.965956+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:23.966104+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:24.966245+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:25.966392+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:26.966541+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:27.966680+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:28.966802+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:29.966935+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:30.967052+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:31.967179+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:32.967306+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:33.967464+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 45924352 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:34.967613+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:35.967775+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:36.967939+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:37.968050+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:38.968158+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:39.968282+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:40.968423+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:41.968562+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:42.968689+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:43.968846+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:44.968947+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:45.969096+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:46.969299+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c64900800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 536.308471680s of 536.377807617s, submitted: 15
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 45899776 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985147 data_alloc: 218103808 data_used: 94208
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:47.969518+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 301 ms_handle_reset con 0x562c64900800 session 0x562c64d21c20
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:48.969689+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:49.969819+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 301 heartbeat osd_stat(store_statfs(0x4edf44000/0x0/0x4ffc00000, data 0x1ecc74/0x388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c647e6800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:50.969936+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 302 ms_handle_reset con 0x562c647e6800 session 0x562c6279be00
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:51.970036+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935656 data_alloc: 218103808 data_used: 110592
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:52.970168+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:53.970240+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 302 heartbeat osd_stat(store_statfs(0x4edf43000/0x0/0x4ffc00000, data 0x1ee812/0x389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:54.970363+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:55.970476+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c649f1800
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 45817856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:56.970659+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.243688583s of 10.019772530s, submitted: 100
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 45809664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942373 data_alloc: 218103808 data_used: 114688
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:57.970788+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 45760512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 ms_handle_reset con 0x562c649f1800 session 0x562c628ec3c0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:58.970956+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 45719552 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:59.971084+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:00.971249+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:01.971457+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:02.971593+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:03.971661+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:04.971792+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:05.971988+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:06.972122+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:07.972244+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:08.972348+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:09.972472+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:10.972597+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:11.972857+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:12.973063+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:13.973179+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:14.973318+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 45686784 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:15.973466+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 45686784 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:16.973662+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:17.973834+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:18.973982+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:19.974152+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:20.974352+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:21.974509+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:22.974678+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:23.974824+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:24.974991+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:25.975166+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:26.975343+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:27.975454+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:28.975592+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:29.975894+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:30.976024+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:31.976146+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:32.976271+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:33.976514+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:34.976669+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:35.976801+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:36.976972+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:37.977170+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:38.977317+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:39.977460+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:40.977578+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:41.977733+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:42.977922+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:43.978085+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:44.978214+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:45.978400+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:46.978563+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:47.978681+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:48.978865+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:49.979053+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:50.979248+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:51.979437+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:52.979570+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:53.979725+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:54.979900+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 45629440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:55.980004+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 45629440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: handle_auth_request added challenge on 0x562c6bb59400
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.478626251s of 59.437236786s, submitted: 96
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _renew_subs
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 304 handle_osd_map epochs [305,305], i have 305, src has [1,305]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:56.980203+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 305 ms_handle_reset con 0x562c6bb59400 session 0x562c628e30e0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949134 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:57.980327+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:58.980457+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf3b000/0x0/0x4ffc00000, data 0x1f39c3/0x392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:59.980619+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:00.980801+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf3b000/0x0/0x4ffc00000, data 0x1f39c3/0x392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:01.980948+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: get_auth_request con 0x562c61dbd400 auth_method 0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:02.981114+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 45555712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:03.981287+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 45555712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:04.981437+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:05.981589+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:06.981777+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:07.981872+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:08.981969+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:09.982130+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:10.982335+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:11.982460+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:12.982583+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:13.982810+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:14.982950+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:15.983095+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:16.983253+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:17.983382+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:18.983681+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:19.983865+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:20.984062+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:21.984254+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:22.984373+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:23.984495+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:24.984675+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:25.984809+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:26.985038+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:27.985213+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:28.985368+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:29.985570+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:30.985686+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:31.985813+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:32.985906+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:33.986037+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:34.986180+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:35.986415+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:36.986664+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:37.986826+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:38.987030+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:39.987367+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:40.987691+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:41.987986+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:42.988127+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:43.988333+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:44.988644+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:45.988978+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:46.989212+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:47.989329+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:48.989595+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:49.989796+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:50.990024+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:51.990187+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:52.990353+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:53.990797+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:54.990959+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:55.991182+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:56.991350+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:57.991597+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:58.991829+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 45457408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:59.991982+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 45457408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:00.992119+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 45449216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:01.992253+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 45449216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:02.992442+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:03.992594+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:04.992781+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:05.992979+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:06.993150+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:07.993281+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:08.993404+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:09.993528+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:10.993641+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:11.993770+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:12.993895+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 45293568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:13.994022+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'config show' '{prefix=config show}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 45817856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:14.994159+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 45850624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:15.994297+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 46301184 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:16.994485+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287055872 unmapped: 57311232 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:17.994600+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 57303040 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:18.994730+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 57303040 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:19.994865+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:20.994970+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:21.995125+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:22.995251+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:23.995429+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:24.995545+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:25.995713+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:26.995874+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:27.995971+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:28.996089+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:29.996247+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:30.996383+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:31.996511+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:32.996628+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:33.996764+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:34.996888+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:35.997013+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:36.997159+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:37.997272+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:38.997437+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:39.998003+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287096832 unmapped: 57270272 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:40.998148+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287096832 unmapped: 57270272 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:41.998382+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287096832 unmapped: 57270272 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:42.998573+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:43.999265+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:44.999405+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:45.999576+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:46.999827+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:47.999946+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:49.000106+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:50.000231+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:51.000459+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:52.000616+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:53.000786+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:54.001075+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:55.001292+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:56.001513+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 57245696 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:57.001783+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 57245696 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:58.002004+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 57245696 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:59.002178+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:00.002387+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:01.002523+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:02.002788+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:03.002930+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:04.003044+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:05.003174+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:06.003347+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:07.003524+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:08.003676+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:09.003914+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 57221120 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:10.004057+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 57221120 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:11.004185+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 57221120 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:12.004343+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 57221120 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:13.004480+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:14.004628+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:15.004774+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:16.004975+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:17.005152+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:18.005305+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 57204736 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:19.005428+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 57204736 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:20.005583+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:21.005732+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:22.005926+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:23.006053+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:24.006195+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:25.006325+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:26.006502+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:27.006679+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:28.006871+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:29.007008+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:30.007135+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:31.007316+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:32.007562+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:33.007696+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:34.007871+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:35.008066+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 57180160 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:36.008237+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:37.008371+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:38.008502+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:39.008627+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:40.008842+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:41.008978+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:42.009260+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:43.009414+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 57163776 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:44.009627+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 57163776 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:45.009782+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 57163776 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:46.009927+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:47.010067+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:48.010187+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:49.010321+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:50.010445+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:51.010587+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:52.010708+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:53.010825+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:54.010960+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:55.011091+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:56.011226+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:57.011397+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:58.011542+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:59.011674+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:00.011789+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:01.011930+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:02.012033+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:03.012155+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:04.012315+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:05.012455+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:06.012618+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57122816 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:07.012799+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57122816 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:08.012923+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57122816 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:09.013035+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57122816 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:10.013164+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:11.013277+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:12.013388+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:13.013553+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:14.013694+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:15.013827+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:16.013962+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:17.014175+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:18.014417+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:19.014726+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:20.015050+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:21.015258+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:22.015470+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:23.015688+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57090048 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:24.015862+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57090048 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:25.016074+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57090048 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:26.016349+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57090048 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:27.016696+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57081856 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:28.017025+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57081856 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:29.017280+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57081856 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:30.017541+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57081856 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:31.017947+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57073664 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:32.018152+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57073664 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:33.018388+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:34.018598+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:35.018772+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:36.018952+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:37.019144+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:38.019331+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:39.019473+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:40.019651+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:41.019791+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:42.019944+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:43.020082+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:44.020196+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:45.020325+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:46.020442+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:47.020597+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:48.020790+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:49.021055+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:50.021213+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:51.021371+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:52.021536+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:53.021713+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:54.021843+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:55.021994+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 57032704 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:56.022190+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 57032704 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:57.022416+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 57032704 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:58.022557+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:59.022693+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:00.022879+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:01.023022+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:02.023210+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:03.023385+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:04.023515+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:05.023712+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:06.023946+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:07.024141+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:08.024343+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:09.024500+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:10.024661+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:11.024901+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 57016320 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:12.025114+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 57016320 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:13.025284+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 57008128 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:14.025461+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 57008128 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:15.025615+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 57008128 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:16.025843+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287367168 unmapped: 56999936 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:17.026042+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287367168 unmapped: 56999936 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:18.026247+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287367168 unmapped: 56999936 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:19.026416+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:20.026622+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:21.026832+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:22.026993+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:23.027135+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:24.029477+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:25.029635+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:26.029787+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:27.029965+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 56975360 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:28.030131+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 56975360 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:29.030290+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 56975360 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:30.030494+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:31.030604+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:32.030828+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:33.031005+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:34.031145+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:35.031323+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:36.031514+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:37.031705+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:38.031924+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:39.032101+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:40.032268+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:41.032486+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:42.032634+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:43.032840+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 56950784 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:44.033002+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 56950784 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:45.033173+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:46.033386+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:47.033616+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:48.033865+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:49.034032+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:50.034203+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:51.034381+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 56926208 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:52.034573+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 56926208 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:53.034736+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 56926208 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:54.034993+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:55.035173+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:56.035354+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:57.035602+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:58.035827+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:59.035953+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:00.036101+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:01.036285+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:02.036470+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:03.036632+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:04.036871+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:05.037182+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:06.037322+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:07.037513+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:08.037858+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:09.038136+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:10.038374+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:11.038566+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:12.038702+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:13.038836+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:14.039545+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:15.039732+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:16.039884+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:17.040231+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:18.040462+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:19.040669+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:20.040849+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:21.041022+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:22.041248+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:23.041475+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 56860672 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:24.041888+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 56860672 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:25.042158+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 56860672 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:26.042478+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:27.042911+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:28.043248+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:29.043515+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:30.043800+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:31.044007+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287522816 unmapped: 56844288 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:32.044194+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287522816 unmapped: 56844288 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:33.044371+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287522816 unmapped: 56844288 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:34.044557+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:35.044892+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:36.045144+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:37.045395+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:38.045605+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:39.045934+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287539200 unmapped: 56827904 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:40.046224+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287539200 unmapped: 56827904 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:41.046418+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287539200 unmapped: 56827904 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:42.046633+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:43.046869+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:44.047129+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:45.047362+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:46.047526+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:47.047780+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:48.048058+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:49.048244+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:50.048467+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:51.048675+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:52.048895+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:53.049112+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:54.049271+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:55.049482+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 56803328 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:56.049664+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 56803328 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:57.049867+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 56803328 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:58.050048+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:59.050214+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:00.050564+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:01.050792+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:02.051034+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:03.051331+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287580160 unmapped: 56786944 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:04.051506+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287580160 unmapped: 56786944 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:05.051739+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287580160 unmapped: 56786944 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:06.051952+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:07.052172+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:08.052353+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:09.052531+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:10.052778+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:11.052995+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:12.053149+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:13.053319+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:14.053494+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:15.053663+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:16.053893+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:17.054095+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:18.054269+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:19.054466+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56762368 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:20.054671+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:21.054922+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:22.055107+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:23.055486+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:24.055696+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:25.055885+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:26.056074+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:27.056306+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:28.056479+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:29.056616+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:30.056724+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:31.056851+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:32.056977+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:33.057202+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:34.057337+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:35.057499+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:36.057682+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:37.058110+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:38.058333+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-10-02T09:46:39.058477+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _finish_auth 0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:39.059721+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:40.058642+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:41.058939+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:42.059088+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:43.059270+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:44.059384+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:45.059549+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:46.059691+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:47.059828+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:48.059956+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:49.060069+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:50.060168+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:51.060327+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:52.060481+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:53.060645+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:54.060857+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:55.060993+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:56.061137+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:57.061291+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:58.061427+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:59.061617+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:00.061766+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 56688640 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:01.062274+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 56688640 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:02.062613+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:03.063439+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:04.063688+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:05.064307+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:06.064644+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:07.064801+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:08.064916+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:09.065224+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:10.065581+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:11.065715+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:12.066213+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:13.066462+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:14.066672+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:15.066885+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:16.067106+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:17.067339+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:18.067567+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:19.067732+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:20.068101+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:21.068299+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:22.068546+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:23.068728+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:24.069047+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:25.069386+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:26.069644+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:27.069956+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:28.070169+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:29.070347+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:30.070539+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:31.070718+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:32.070890+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:33.071061+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:34.071246+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:35.071426+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:36.071612+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:37.071943+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 56647680 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:38.072152+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 56647680 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:39.072411+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:40.072607+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:41.072795+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:42.072974+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:43.073165+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:44.073413+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:45.073599+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:46.073893+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:47.074164+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:48.074319+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:49.074541+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:50.074706+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:51.075094+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:52.075350+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:53.075555+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:54.075778+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:55.076018+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:56.076841+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:57.077095+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:58.077426+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:59.077692+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:00.077945+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:01.078129+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 146K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.68 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 458 writes, 1050 keys, 458 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s
                                           Interval WAL: 458 writes, 208 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:02.078485+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:03.078801+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:04.079090+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:05.079360+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:06.079531+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:07.079814+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:08.080019+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:09.080172+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:10.080344+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:11.080491+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:12.080689+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:13.081110+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:14.081381+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:15.081831+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:16.082082+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:17.082430+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:18.082652+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:19.082870+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:20.083107+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:21.083229+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:22.083475+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:23.083819+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:24.084109+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:25.084337+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:26.084543+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:27.084856+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 56590336 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:28.085064+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 56590336 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:29.085256+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 56590336 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:30.085470+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:31.085681+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:32.085860+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:33.086088+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:34.086352+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:35.086626+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:36.086804+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:37.087079+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:38.087363+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:39.087485+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:40.087788+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:41.087949+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:42.088134+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:43.088311+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:44.088501+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:45.088939+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:46.089128+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:47.089357+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:48.089507+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:49.089665+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:50.089805+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:51.089982+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:52.090149+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:53.090363+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:54.090517+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:55.090699+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 538.921264648s of 539.136047363s, submitted: 54
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:56.090830+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:57.091069+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,0,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:58.091294+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287825920 unmapped: 56541184 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:59.091537+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 56532992 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951300 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,1,0,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:00.091789+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 56532992 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:01.092031+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 56532992 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:02.092252+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 56524800 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:03.092440+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:04.092710+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 56516608 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:05.092966+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:06.093223+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:07.093494+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:08.093708+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:09.093889+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:10.094051+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:11.094191+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:12.094408+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:13.094597+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:14.094714+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:15.094896+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:16.095108+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:17.095340+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:18.095705+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:19.095833+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:20.096066+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:21.096244+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:22.096426+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:23.096578+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:24.096825+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:25.096983+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:26.097160+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:27.097420+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:28.097571+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:29.097727+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:30.097923+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:31.098095+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:32.098324+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:33.098569+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:34.098717+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:35.098897+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:36.099073+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:37.099251+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:38.099412+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:39.099613+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:40.099827+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:41.100005+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:42.100184+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:43.100385+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:44.100631+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:45.100841+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:46.100985+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:47.101159+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:48.101379+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:49.101534+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:50.101677+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:51.101856+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:52.101981+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:53.102153+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:54.102296+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:28 compute-0 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:28 compute-0 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:55.102451+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'config show' '{prefix=config show}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 56336384 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:56.102651+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:57.102841+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: tick
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_tickets
Oct 02 09:50:28 compute-0 ceph-osd[90385]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:58.102970+0000)
Oct 02 09:50:28 compute-0 ceph-osd[90385]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23489 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:50:28 compute-0 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 09:50:28 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 09:50:28 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/250851339' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 09:50:29 compute-0 ceph-mon[74477]: pgmap v3985: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:29 compute-0 ceph-mon[74477]: from='client.23479 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:29 compute-0 ceph-mon[74477]: from='client.23481 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1299664923' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 09:50:29 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/250851339' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 09:50:29 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:29 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23493 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 09:50:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852882195' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 09:50:29 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23497 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:29 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 09:50:29 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1843826777' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 09:50:29 compute-0 charming_keldysh[476830]: [
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:     {
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         "available": false,
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         "ceph_device": false,
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         "lsm_data": {},
Oct 02 09:50:29 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         "lvs": [],
Oct 02 09:50:29 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         "path": "/dev/sr0",
Oct 02 09:50:29 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         "rejected_reasons": [
Oct 02 09:50:29 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "Has a FileSystem",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "Insufficient space (<5GB)"
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         ],
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         "sys_api": {
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "actuators": null,
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "device_nodes": "sr0",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "devname": "sr0",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "human_readable_size": "482.00 KB",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "id_bus": "ata",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "model": "QEMU DVD-ROM",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "nr_requests": "2",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "parent": "/dev/sr0",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "partitions": {},
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "path": "/dev/sr0",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "removable": "1",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "rev": "2.5+",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "ro": "0",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "rotational": "0",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "sas_address": "",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "sas_device_handle": "",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "scheduler_mode": "mq-deadline",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "sectors": 0,
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "sectorsize": "2048",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "size": 493568.0,
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "support_discard": "2048",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "type": "disk",
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:             "vendor": "QEMU"
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:         }
Oct 02 09:50:29 compute-0 charming_keldysh[476830]:     }
Oct 02 09:50:29 compute-0 charming_keldysh[476830]: ]
Oct 02 09:50:29 compute-0 systemd[1]: libpod-eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc.scope: Deactivated successfully.
Oct 02 09:50:29 compute-0 podman[476794]: 2025-10-02 09:50:29.91638238 +0000 UTC m=+1.744239072 container died eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 09:50:29 compute-0 systemd[1]: libpod-eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc.scope: Consumed 1.566s CPU time.
Oct 02 09:50:29 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23501 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:30 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 09:50:30 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069847584' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 09:50:30 compute-0 ceph-mon[74477]: from='client.23485 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:30 compute-0 ceph-mon[74477]: from='client.23489 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3852882195' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 09:50:30 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1843826777' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 09:50:30 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23505 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b-merged.mount: Deactivated successfully.
Oct 02 09:50:30 compute-0 nova_compute[260603]: 2025-10-02 09:50:30.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:31 compute-0 podman[476794]: 2025-10-02 09:50:31.079247881 +0000 UTC m=+2.907104603 container remove eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 09:50:31 compute-0 sudo[476491]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:31 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:50:31 compute-0 systemd[1]: libpod-conmon-eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc.scope: Deactivated successfully.
Oct 02 09:50:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 09:50:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064757745' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:50:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:31 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23513 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 09:50:31 compute-0 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:50:31.538+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 09:50:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:50:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 09:50:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 09:50:31 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 09:50:31 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803204799' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: pgmap v3986: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:31 compute-0 ceph-mon[74477]: from='client.23493 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: from='client.23497 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: from='client.23501 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2069847584' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: from='client.23505 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4064757745' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 09:50:31 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:31 compute-0 nova_compute[260603]: 2025-10-02 09:50:31.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:31 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:31 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 8ad3e822-a13a-4470-920d-644c3a6f059b does not exist
Oct 02 09:50:31 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 3e900b88-0d3b-4103-93c2-00efff6a8a23 does not exist
Oct 02 09:50:31 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 43575c45-6f8e-4858-a3c8-d2531eb1efd3 does not exist
Oct 02 09:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 09:50:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 09:50:32 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 09:50:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 09:50:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556137666' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 09:50:32 compute-0 sudo[479582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:32 compute-0 sudo[479582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:32 compute-0 sudo[479582]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 09:50:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717055950' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 09:50:32 compute-0 sudo[479620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:50:32 compute-0 sudo[479620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:32 compute-0 sudo[479620]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:32 compute-0 sudo[479669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:32 compute-0 sudo[479669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:32 compute-0 sudo[479669]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:32 compute-0 crontab[479757]: (root) LIST (root)
Oct 02 09:50:32 compute-0 sudo[479720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 09:50:32 compute-0 sudo[479720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 09:50:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3178428276' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 09:50:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219174880' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: pgmap v3987: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='client.23513 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2803204799' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3556137666' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1717055950' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3178428276' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 09:50:32 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1219174880' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 09:50:32 compute-0 podman[479855]: 2025-10-02 09:50:32.704818864 +0000 UTC m=+0.029847282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:50:32 compute-0 podman[479855]: 2025-10-02 09:50:32.833098301 +0000 UTC m=+0.158126719 container create b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 09:50:32 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 09:50:32 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972735957' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 09:50:33 compute-0 systemd[1]: Started libpod-conmon-b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b.scope.
Oct 02 09:50:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:50:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 09:50:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675472947' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:01.703203+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:02.703364+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659071 data_alloc: 234881024 data_used: 21536768
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:03.703477+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:04.703606+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:05.703721+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:06.703859+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:07.704087+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.032077789s of 21.478837967s, submitted: 37
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690449 data_alloc: 234881024 data_used: 21581824
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,18])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:08.704246+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 50298880 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85d6000/0x0/0x4ffc00000, data 0x2a8c83f/0x2c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:09.704380+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:10.704590+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:11.704831+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85b2000/0x0/0x4ffc00000, data 0x2aad83f/0x2c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:12.705000+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50233344 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733393 data_alloc: 234881024 data_used: 22216704
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:13.705216+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:14.705401+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:15.705548+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:16.705686+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:17.705880+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740617 data_alloc: 234881024 data_used: 22421504
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:18.706026+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:19.706182+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.158976555s of 12.613157272s, submitted: 94
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:20.706300+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da170be00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:21.706490+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:22.706610+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3738385 data_alloc: 234881024 data_used: 22425600
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:23.706823+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,3])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8a9b000/0x0/0x4ffc00000, data 0x192083f/0x1ab6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:24.706949+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:25.707101+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:26.707279+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da060d680
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:27.707417+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525679 data_alloc: 218103808 data_used: 12595200
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:28.707594+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:29.707824+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:30.707994+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:31.708234+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:32.708479+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525679 data_alloc: 218103808 data_used: 12595200
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:33.708666+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562d9fa010e0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.587030411s of 13.759822845s, submitted: 32
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562d9f4b05a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:34.708850+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06cf000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:35.709015+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:36.709197+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:37.709369+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413519 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x11d77dd/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:38.709508+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 51707904 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:39.709623+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f910f00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:40.709817+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:41.709985+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:42.710138+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:43.710337+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:44.710494+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:45.710774+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:46.710980+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:47.711161+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:48.711310+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:49.711475+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:50.711644+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:51.711826+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:52.712009+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:53.712190+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:54.712344+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:55.712588+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2280 writes, 9757 keys, 2280 commit groups, 1.0 writes per commit group, ingest: 10.53 MB, 0.02 MB/s
                                           Interval WAL: 2280 writes, 833 syncs, 2.74 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:56.712768+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets getting new tickets!
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:57.713031+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _finish_auth 0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:57.714480+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:58.713212+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:59.713451+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:00.713719+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1cb7800 session 0x562da1ad5680
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562d9f949c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:01.714046+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc ms_handle_reset ms_handle_reset con 0x562da328f400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: get_auth_request con 0x562da3171c00 auth_method 0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da40e3400 session 0x562da1f84960
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06d4400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da02c0d20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1965c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:02.714356+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:03.714524+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:04.714699+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:05.715136+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:06.716305+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:07.718483+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:08.718789+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:09.718993+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:10.719432+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:11.719951+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:12.720283+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:13.720519+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:14.720812+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:15.721264+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:16.721690+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:17.722046+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:18.722557+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:19.722717+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:20.723003+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:21.723288+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1a1a000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1700f00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da02fc960
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:22.723466+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562d9f8d9c20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1964000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.360366821s of 48.620044708s, submitted: 32
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 43769856 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da1e9be00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da03d0000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1f185a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473234 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1ae43c0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1e9a5a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:23.723704+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:24.724090+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:25.724302+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8904000/0x0/0x4ffc00000, data 0x15c57dd/0x175a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:26.724646+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:27.725345+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473010 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:28.725490+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3298400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3298400 session 0x562d9f8d6780
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8904000/0x0/0x4ffc00000, data 0x15c57dd/0x175a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1a1b2c0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:29.725681+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562d9f988f00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:30.725888+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da18b4b40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 50257920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:31.726089+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 50257920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:32.726243+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529512 data_alloc: 218103808 data_used: 15495168
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:33.726485+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:34.726694+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:35.726830+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:36.726991+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:37.727179+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529512 data_alloc: 218103808 data_used: 15495168
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:38.727337+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:39.727423+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:40.727599+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:41.727846+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:42.728001+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.683063507s of 20.067323685s, submitted: 24
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565370 data_alloc: 218103808 data_used: 15523840
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:43.728124+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 49455104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:44.728309+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350699520 unmapped: 50454528 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:45.728402+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:46.728547+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:47.728695+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600800 data_alloc: 218103808 data_used: 15843328
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:48.728848+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:49.728981+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:50.729373+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:51.729597+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:52.729793+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600800 data_alloc: 218103808 data_used: 15843328
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:53.729974+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:54.730190+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:55.730356+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.367254257s of 13.001447678s, submitted: 92
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 49397760 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:56.730579+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 49397760 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:57.730828+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da5c8fc00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 49389568 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677723 data_alloc: 218103808 data_used: 15843328
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:58.730977+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:59.731150+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562d9f8d7c20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:00.731287+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x2514816/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:01.731515+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:02.731682+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da170b2c0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3656683 data_alloc: 218103808 data_used: 15843328
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:03.731832+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:04.731985+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:05.732109+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.334874630s of 10.473359108s, submitted: 68
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:06.732220+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:07.733070+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658552 data_alloc: 218103808 data_used: 15843328
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:08.733218+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da2063860
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:09.733342+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:10.733494+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 52977664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:11.733689+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 52936704 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:12.733820+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 52928512 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:13.733990+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703584 data_alloc: 234881024 data_used: 22114304
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:14.734176+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:15.734357+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:16.734478+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:17.734629+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:18.734789+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703584 data_alloc: 234881024 data_used: 22114304
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:19.736138+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:20.736306+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:21.736534+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.022311211s of 15.682323456s, submitted: 60
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,2,2,2])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 52387840 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:22.736734+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 51822592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:23.736981+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744446 data_alloc: 234881024 data_used: 22147072
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74cd000/0x0/0x4ffc00000, data 0x29f9872/0x2b91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:24.737135+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:25.737270+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:26.758677+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 51077120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:27.758869+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 51740672 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:28.759085+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750872 data_alloc: 234881024 data_used: 22351872
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 51740672 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:29.759193+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e739e000/0x0/0x4ffc00000, data 0x2b28872/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e739e000/0x0/0x4ffc00000, data 0x2b28872/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:30.759355+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:31.759543+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.567046165s of 10.198050499s, submitted: 62
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:32.759683+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7359000/0x0/0x4ffc00000, data 0x2b6d872/0x2d05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:33.759913+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754768 data_alloc: 234881024 data_used: 22589440
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:34.760078+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:35.760222+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:36.760503+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7320000/0x0/0x4ffc00000, data 0x2ba6872/0x2d3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:37.760687+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:38.760862+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757992 data_alloc: 234881024 data_used: 22646784
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:39.761031+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e731b000/0x0/0x4ffc00000, data 0x2bab872/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:40.761155+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e731b000/0x0/0x4ffc00000, data 0x2bab872/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:41.761350+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:42.761485+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:43.761623+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762988 data_alloc: 234881024 data_used: 22687744
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:44.761885+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7317000/0x0/0x4ffc00000, data 0x2baf872/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7317000/0x0/0x4ffc00000, data 0x2baf872/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:45.762054+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:46.762231+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.851724625s of 15.161529541s, submitted: 25
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 50421760 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:47.762366+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:48.762541+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764976 data_alloc: 234881024 data_used: 22679552
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72fa000/0x0/0x4ffc00000, data 0x2bcc872/0x2d64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:49.762732+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:50.762975+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72fa000/0x0/0x4ffc00000, data 0x2bcc872/0x2d64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:51.763246+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:52.763431+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 50356224 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:53.763656+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762328 data_alloc: 234881024 data_used: 22679552
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da060ab40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 50356224 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:54.763820+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:55.764010+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72f2000/0x0/0x4ffc00000, data 0x2bd4872/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:56.764159+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.166407108s of 10.001169205s, submitted: 24
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72f2000/0x0/0x4ffc00000, data 0x2bd4872/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:57.764306+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:58.764493+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762224 data_alloc: 234881024 data_used: 22679552
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 50339840 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:59.764663+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8106000/0x0/0x4ffc00000, data 0x1dc0872/0x1f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 50331648 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:00.764822+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 50323456 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:01.765010+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da2582d20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 50323456 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:02.765176+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8106000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 50315264 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:03.765316+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606314 data_alloc: 218103808 data_used: 15831040
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:04.765524+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:05.765667+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1ad4d20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da068cb40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:06.765772+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.448636055s of 10.010467529s, submitted: 54
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8108000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 52862976 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:07.765945+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8fea000/0x0/0x4ffc00000, data 0xede7ed/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 52854784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:08.766106+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440388 data_alloc: 218103808 data_used: 8228864
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 52854784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:09.766250+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da03d1e00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:10.766382+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:11.766530+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:12.766678+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:13.766849+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:14.767018+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:15.767181+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:16.767567+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:17.767895+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:18.768352+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:19.768652+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:20.768909+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:21.769127+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:22.769334+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:23.769481+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:24.769854+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:25.770195+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:26.770544+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:27.770842+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:28.771023+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:29.771248+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:30.771417+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:31.771684+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:32.771914+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:33.772129+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:34.772299+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 52813824 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:35.772439+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:36.772577+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:37.772773+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:38.772951+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:39.773169+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:40.773370+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:41.773614+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:42.773868+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:43.774069+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:44.774227+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:45.774385+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:46.774530+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:47.774673+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:48.774880+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:49.775136+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:50.775332+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.401294708s of 43.381404877s, submitted: 17
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562d9f989a40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562d9f4b14a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562d9f989680
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da329a400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1e9b0e0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da16f41e0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:51.775498+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:52.775652+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:53.775845+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479417 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:54.776076+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6f000/0x0/0x4ffc00000, data 0x135983f/0x14ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:55.776285+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:56.776474+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:57.776677+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:58.776837+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479417 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:59.777049+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:00.777188+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6f000/0x0/0x4ffc00000, data 0x135983f/0x14ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:01.777376+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da202be00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:02.777552+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da16f52c0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:03.777707+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da170be00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3291000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.937417030s of 13.075335503s, submitted: 42
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1f85c20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481740 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:04.777858+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:05.778110+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:06.778226+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:07.778440+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:08.778683+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515152 data_alloc: 218103808 data_used: 12677120
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:09.778883+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:10.779068+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:11.779267+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:12.779424+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:13.779606+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515152 data_alloc: 218103808 data_used: 12677120
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:14.779824+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:15.779974+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 52584448 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.298130989s of 12.432414055s, submitted: 6
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:16.780195+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:17.780387+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8330000/0x0/0x4ffc00000, data 0x1b96872/0x1d2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:18.780521+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585360 data_alloc: 218103808 data_used: 13922304
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:19.780709+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:20.780886+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:21.781092+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82ea000/0x0/0x4ffc00000, data 0x1bdc872/0x1d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:22.781254+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:23.781438+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585680 data_alloc: 218103808 data_used: 13930496
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:24.781590+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:25.781716+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:26.781891+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:27.782080+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:28.782251+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583492 data_alloc: 218103808 data_used: 13934592
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:29.782418+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:30.782571+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:31.782831+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:32.782989+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.958930016s of 17.081628799s, submitted: 105
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:33.783119+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da14b4f00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642870 data_alloc: 218103808 data_used: 13934592
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:34.783268+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:35.783396+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:36.783527+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:37.783669+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:38.783855+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642870 data_alloc: 218103808 data_used: 13934592
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:39.784000+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:40.784340+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:41.784559+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:42.784715+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562d9f8d9a40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:43.784933+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643467 data_alloc: 218103808 data_used: 13934592
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:44.785118+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da5c8fc00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:45.785290+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:46.785398+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:47.785513+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:48.785650+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.500934601s of 15.522595406s, submitted: 17
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692123 data_alloc: 234881024 data_used: 20611072
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:49.785810+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:50.785970+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:51.786196+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:52.786392+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:53.786605+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692123 data_alloc: 234881024 data_used: 20611072
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:54.786842+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:55.787001+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:56.787157+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 45514752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:57.787231+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358465536 unmapped: 46366720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:58.787396+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:59.787538+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:00.787670+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:01.787841+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:02.788059+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:03.788214+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:04.788366+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:05.788496+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:06.788634+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:07.788818+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:08.788975+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:09.789132+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:10.789273+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:11.789446+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.707262039s of 23.078323364s, submitted: 83
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:12.789590+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:13.789731+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:14.789890+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562da16f4f00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:15.790015+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562da14b4b40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:16.790196+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c2000/0x0/0x4ffc00000, data 0x1c03872/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:17.790358+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:18.790497+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595381 data_alloc: 218103808 data_used: 13934592
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:19.790800+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:20.790983+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:21.791147+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:22.791272+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82bb000/0x0/0x4ffc00000, data 0x1c0a872/0x1da2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:23.791430+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.594208717s of 11.749836922s, submitted: 32
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da202a780
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da18b5680
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 51036160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594709 data_alloc: 218103808 data_used: 13930496
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:24.791556+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:25.791700+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:26.791917+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da1f18b40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:27.792200+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:28.792464+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:29.792659+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:30.792804+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:31.792994+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:32.793183+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:33.793313+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:34.793464+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:35.793598+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:36.793765+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:37.793925+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:38.794070+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:39.794267+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:40.794464+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:41.794687+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:42.794889+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:43.795123+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:44.795299+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:45.795466+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:46.795669+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:47.795849+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:48.796040+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:49.796223+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:50.796359+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:51.796563+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:52.796698+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:53.796863+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:54.797031+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:55.797211+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:56.797400+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:57.797574+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:58.797807+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:59.798004+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:00.798178+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:01.798388+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:02.798623+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:03.798842+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:04.799049+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:05.799267+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:06.799525+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:07.799711+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:08.800023+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:09.800171+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:10.800336+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:11.800559+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:12.800825+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:13.801008+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:14.801187+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:15.801390+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:16.801545+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:17.801739+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:18.801937+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:19.802110+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:20.802295+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:21.802522+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:22.802664+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:23.802943+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:24.803078+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:25.803271+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:26.803419+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:27.803596+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:28.803772+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:29.803943+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:30.804090+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:31.804288+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:32.804425+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:33.804584+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:34.805260+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:35.805432+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:36.805567+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:37.805834+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:38.805989+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:39.806137+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:40.806294+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:41.806505+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:42.806737+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:43.806914+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:44.807046+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:45.807208+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:46.807374+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:47.807539+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:48.807699+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:49.807828+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:50.807994+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:51.808186+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:52.808331+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:53.808510+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:54.808676+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:55.808818+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:56.808933+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:57.809117+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:58.809277+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:59.809440+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:00.809634+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:01.809847+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:02.809977+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 49831936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:03.810194+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 49823744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:04.810379+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 49823744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:05.810573+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1904f00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562d9f8da5a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da202a3c0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da1a1bc20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 100.757339478s of 102.810211182s, submitted: 53
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 49815552 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:06.810719+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1708960
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1770b40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562da25825a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da3297c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da14b4780
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:07.810829+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da03d0f00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:08.811016+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:09.811168+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495520 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8be5000/0x0/0x4ffc00000, data 0x12e37ed/0x1479000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da02fc000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:10.811318+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:11.811537+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:12.811703+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:13.811842+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:14.812014+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3522193 data_alloc: 218103808 data_used: 11350016
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:15.812187+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8bc1000/0x0/0x4ffc00000, data 0x13077ed/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da068c1e0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562d9f9885a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da5c8fc00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:16.812364+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.136798859s of 10.462966919s, submitted: 20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 52199424 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:17.812497+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562da2583860
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:18.812675+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:19.812891+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:20.813040+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:21.813249+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:22.813388+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:23.813554+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:24.813678+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:25.813801+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:26.813977+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:27.814130+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:28.814319+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:29.814508+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:30.814647+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:31.814864+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:32.815030+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:33.815182+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:34.815355+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 52166656 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:35.815513+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:36.815662+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:37.815869+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:38.816043+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:39.816230+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:40.816357+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:41.816528+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:42.816687+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:43.816862+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:44.816980+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:45.817126+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:46.817314+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.090339661s of 30.667470932s, submitted: 30
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:47.817429+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 52142080 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 290 handle_osd_map epochs [291,291], i have 291, src has [1,291]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:48.817591+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: get_auth_request con 0x562da0341000 auth_method 0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 291 ms_handle_reset con 0x562da18b8800 session 0x562d9f9112c0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:49.817714+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462964 data_alloc: 218103808 data_used: 8126464
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:50.817892+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:51.818096+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:52.818222+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:53.818339+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:54.818466+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462964 data_alloc: 218103808 data_used: 8126464
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:55.818614+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:56.818738+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:57.818886+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:58.819254+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:59.819439+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:00.819571+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:01.819779+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:02.819900+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:03.820089+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:04.820287+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:05.820464+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:06.820708+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:07.820864+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:08.821017+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:09.821193+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:10.821373+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:11.821533+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:12.821670+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:13.821798+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:14.821972+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:15.822115+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:16.822276+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:17.822416+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:18.822578+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:19.822727+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:20.822877+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:21.823058+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 52068352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:22.823180+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 52068352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:23.823348+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:24.823495+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:25.823628+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:26.823770+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:27.823919+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:28.824062+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:29.824237+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:30.824354+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:31.824500+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:32.824646+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:33.824786+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:34.824915+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:35.825074+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:36.825226+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:37.825399+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:38.825551+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:39.825688+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 52035584 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:40.825807+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 52035584 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:41.825991+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:42.826129+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:43.826247+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:44.826363+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:45.826530+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 52019200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:46.826705+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 52019200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:47.826843+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:48.826960+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:49.827113+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:50.827246+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:51.827428+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:52.827726+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:53.827952+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:54.828118+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:55.828257+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:56.828409+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:57.828547+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:58.828670+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:59.828862+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:00.829008+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:01.829217+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:02.829380+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:03.829636+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 51978240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:04.829812+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 51978240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:05.830007+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:06.830203+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:07.830670+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:08.830919+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:09.831201+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:10.831431+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:11.831643+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 51961856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:12.831864+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:13.832032+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:14.832193+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:15.832358+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:16.832501+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:17.832672+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:18.832838+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 51945472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:19.832982+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:20.833098+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:21.833247+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:22.833406+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:23.833541+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:24.833694+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:25.833828+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:26.833996+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:27.834227+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:28.834377+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:29.834532+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:30.834656+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:31.834842+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:32.835015+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:33.835230+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:34.835521+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:35.835676+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:36.835863+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:37.836014+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:38.836211+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:39.836369+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:40.836501+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:41.836665+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:42.836932+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:43.837112+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:44.837275+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:45.837429+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:46.837610+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:47.837874+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:48.838040+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:49.838242+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:50.838394+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352960512 unmapped: 51871744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.357223511s of 123.813163757s, submitted: 62
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 ms_handle_reset con 0x562da192d800 session 0x562da068c960
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:51.838613+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 ms_handle_reset con 0x562da1d55400 session 0x562da1ebbc20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 51855360 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:52.838780+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:53.838938+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:54.839097+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466082 data_alloc: 218103808 data_used: 9117696
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:55.839230+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:56.839394+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:57.839579+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:58.839765+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:59.840000+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466082 data_alloc: 218103808 data_used: 9117696
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:00.840194+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 51830784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:01.840446+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.466549873s of 10.502127647s, submitted: 11
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 293 ms_handle_reset con 0x562da230f800 session 0x562da060cb40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:02.840801+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:03.841024+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:04.841165+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9479000/0x0/0x4ffc00000, data 0xa4f97d/0xbe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429452 data_alloc: 218103808 data_used: 4472832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:05.841304+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 294 ms_handle_reset con 0x562da192d000 session 0x562d9f910f00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:06.841425+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:07.841609+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9c77000/0x0/0x4ffc00000, data 0x251508/0x3e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:08.841817+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:09.841983+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3358060 data_alloc: 218103808 data_used: 1130496
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9c77000/0x0/0x4ffc00000, data 0x251508/0x3e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:10.842143+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:11.842300+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:12.842436+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:13.842575+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:14.842693+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360842 data_alloc: 218103808 data_used: 1130496
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:15.842843+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e9c75000/0x0/0x4ffc00000, data 0x252f6b/0x3e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:16.843327+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:17.843643+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.849654198s of 16.112354279s, submitted: 77
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:18.843859+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 ms_handle_reset con 0x562da18b8800 session 0x562da193a5a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:19.844055+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:20.844160+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:21.844272+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:22.844381+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:23.844485+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:24.844599+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:25.844728+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:26.844906+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:27.845101+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:28.845273+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:29.845402+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:30.845542+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:31.845688+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:32.845801+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:33.845945+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:34.846055+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:35.846232+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:36.846354+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:37.846508+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:38.846659+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:39.846829+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:40.846931+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:41.847099+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:42.847250+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:43.847394+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:44.847568+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:45.847711+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:46.847913+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:47.848163+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:48.848355+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:49.848498+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:50.848651+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:51.848812+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:52.848966+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:53.849123+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:54.849243+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:55.849467+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 187K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1591 writes, 5946 keys, 1591 commit groups, 1.0 writes per commit group, ingest: 5.99 MB, 0.01 MB/s
                                           Interval WAL: 1591 writes, 616 syncs, 2.58 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:56.849600+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:57.849723+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:58.849868+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:59.849980+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:00.850096+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:01.850254+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:02.850449+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:03.850604+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:04.850853+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:05.851033+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:06.851237+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:07.851398+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:08.851573+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:09.851719+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:10.851895+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346030080 unmapped: 58802176 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:11.852107+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:12.852255+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:13.852423+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:14.852560+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:15.852714+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:16.852823+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:17.852950+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:18.853110+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:19.853312+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:20.853559+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:21.853810+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:22.853968+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:23.854092+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:24.854194+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:25.854330+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:26.854493+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:27.854675+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:28.854830+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:29.854939+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:30.855049+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:31.855204+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:32.855327+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:33.855474+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:34.855619+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 58744832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:35.917061+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:36.917235+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:37.917388+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:38.917566+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:39.917703+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:40.917859+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:41.918032+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:42.918179+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:43.918324+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:44.918452+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:45.918596+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:46.918731+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:47.918931+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:48.919179+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:49.919343+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:50.919539+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:51.919738+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:52.920018+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:53.920249+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 58703872 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:54.920457+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 58695680 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:55.920628+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.635398865s of 97.845252991s, submitted: 11
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 58662912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:56.920771+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 58638336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:57.920925+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:58.921064+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:59.921219+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:00.921410+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368383 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:01.921593+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:02.921777+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:03.921987+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:04.922146+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:05.922298+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370197 data_alloc: 218103808 data_used: 1138688
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c3c/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.818475723s of 10.272746086s, submitted: 91
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 297 ms_handle_reset con 0x562da192d800 session 0x562da202a960
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 58564608 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:06.922424+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:07.922546+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 297 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 ms_handle_reset con 0x562da1d55400 session 0x562da2063860
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:08.922710+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:09.922864+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:10.922997+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:11.923147+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:12.923266+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:13.923396+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:14.923522+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:15.923727+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:16.923886+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:17.924037+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:18.924281+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:19.924491+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:20.924699+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:21.924964+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:22.925141+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:23.925328+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:24.925450+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:25.925605+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:26.925812+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:27.925965+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:28.926122+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:29.926289+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:30.926465+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:31.926657+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:32.926800+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:33.926963+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:34.927136+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:35.927318+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:36.927497+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:37.927634+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:38.927854+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:39.927988+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:40.928117+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464650 data_alloc: 218103808 data_used: 1150976
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.875930786s of 35.106796265s, submitted: 17
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:41.928355+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:42.928506+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:43.928647+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:44.928821+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xec9f07/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:45.928969+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464947 data_alloc: 218103808 data_used: 1155072
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 299 ms_handle_reset con 0x562da230f800 session 0x562d9f4b0780
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:46.929194+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:47.929361+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec9dc3/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:48.929507+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:49.929681+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:50.929796+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:51.929946+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:52.930113+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:53.930223+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:54.930359+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:55.930522+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:56.930709+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:57.930839+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:58.931072+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:59.931222+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:00.931355+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:01.931540+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:02.931804+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:03.932043+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:04.932336+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:05.932553+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:06.932718+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:07.932947+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:08.933164+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:09.933399+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:10.933629+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:11.933821+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:12.933969+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:13.934187+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:14.934447+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:15.934613+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:16.934780+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:17.934899+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:18.935097+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:19.935256+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:20.935446+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:21.935643+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:22.935796+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:23.935925+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:24.936078+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:25.936227+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:26.936410+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:27.936678+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:28.936902+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:29.937048+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:30.937281+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:31.937503+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:32.937638+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:33.937854+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:34.938036+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:35.938228+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:36.938406+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:37.938627+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:38.938824+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:39.939033+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:40.939247+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:41.939439+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:42.939599+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:43.939874+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:44.940094+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:45.940300+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:46.940573+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:47.940864+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:48.941069+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:49.941224+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:50.941426+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:51.941586+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:52.941721+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:53.941897+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:54.942057+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:55.942601+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:56.942744+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:57.942925+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:58.943070+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346578944 unmapped: 66650112 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:59.943190+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:00.943280+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:01.943429+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:02.943571+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:03.943703+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:04.943838+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:05.943926+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:06.944079+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:07.944248+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:08.944378+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:09.944587+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:10.944910+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:11.945171+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:12.945450+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:13.946032+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:14.946173+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:15.946323+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:16.946420+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:17.946563+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 66600960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:18.946684+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 66600960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:19.946848+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:20.947020+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:21.947193+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:22.947346+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:23.947508+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:24.947631+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:25.947799+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:26.947922+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:27.948042+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:28.948169+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:29.948310+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:30.948448+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:31.948616+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:32.948727+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:33.948891+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:34.949014+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:35.949138+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:36.949371+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:37.949554+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:38.949794+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:39.951716+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:40.952488+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:41.952705+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:42.953007+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:43.953189+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:44.953345+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:45.953509+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:46.953643+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346710016 unmapped: 66519040 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:47.953800+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:48.953962+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:49.954133+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:50.954271+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:51.954446+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:52.954593+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:53.954724+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:54.954903+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:55.955082+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:56.955208+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:57.955341+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:58.955507+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:59.955677+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:00.955992+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:01.956981+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:02.957455+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:03.958010+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:04.958368+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:05.958609+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:06.958879+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:07.959256+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:08.959730+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:09.960201+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:10.960577+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:11.960915+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:12.961166+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:13.961378+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:14.961616+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:15.961807+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:16.962020+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:17.962175+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:18.962363+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:19.962555+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:20.962724+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:21.963075+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:22.963300+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346800128 unmapped: 66428928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:23.963463+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346800128 unmapped: 66428928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:24.963801+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 66420736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:25.964026+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 66420736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:26.964395+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:27.964536+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:28.964707+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:29.964935+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:30.965161+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:31.965467+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:32.965648+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:33.965888+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:34.966027+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346841088 unmapped: 66387968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:35.966140+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:36.966269+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:37.966423+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:38.966614+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:39.966818+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:40.966982+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:41.967256+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:42.967588+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:43.967856+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:44.968085+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:45.968262+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:46.968442+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:47.968635+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:48.968805+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:49.969030+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:50.969194+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:51.969367+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:52.969577+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 66338816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:53.969843+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 66338816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:54.969969+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:55.970141+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:56.970305+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:57.970473+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:58.970613+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:59.970793+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:00.970974+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562d9f949c00 session 0x562da1a1ad20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da18b8800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:01.971146+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc ms_handle_reset ms_handle_reset con 0x562da3171c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: get_auth_request con 0x562da3298400 auth_method 0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562da06d4400 session 0x562da1907c20
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da328e800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562da1965c00 session 0x562da060c5a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da40e3400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:02.971371+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:03.971548+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:04.971807+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:05.971959+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:06.972126+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:07.972277+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:08.972852+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:09.972990+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:10.973181+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:11.973407+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:12.973804+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 66297856 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:13.974090+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 66297856 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:14.974325+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:15.974457+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:16.974590+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:17.974807+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:18.974929+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 66281472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:19.975135+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:20.975323+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:21.975560+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:22.975822+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:23.975965+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:24.976174+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:25.976339+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:26.976540+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:27.976715+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 66256896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:28.976895+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 66256896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:29.977348+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 66248704 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:30.977508+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 66232320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:31.977663+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 66232320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:32.977868+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347004928 unmapped: 66224128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:33.978063+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347004928 unmapped: 66224128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:34.978235+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:35.978449+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:36.978668+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:37.978872+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:38.979111+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:39.979251+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:40.979447+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:41.979678+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:42.979858+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:43.979999+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:44.980155+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:45.980315+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:46.980441+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:47.980564+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:48.980836+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:49.980970+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:50.981216+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:51.981551+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:52.981853+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:53.982102+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:54.982303+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:55.982447+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:56.982595+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:57.982794+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:58.983598+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:59.983765+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:00.983906+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:01.984118+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:02.984305+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:03.984450+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:04.984676+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:05.984865+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:06.985055+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:07.985261+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:08.985494+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:09.985675+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:10.985812+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:11.986044+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:12.986293+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:13.986490+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:14.986720+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:15.986990+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:16.987179+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:17.987363+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:18.987557+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:19.987782+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:20.987978+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:21.988212+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:22.988381+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:23.988600+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:24.988813+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:25.989027+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:26.989238+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:27.989401+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:28.989588+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:29.989738+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:30.989904+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:31.990069+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:32.990296+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:33.990534+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:34.990814+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:35.991038+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:36.991264+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:37.991466+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:38.991687+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:39.991929+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:40.992164+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:41.992383+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:42.992548+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:43.992711+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:44.992883+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:45.993038+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:46.993264+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:47.993448+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:48.993624+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:49.993798+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:50.993936+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 66035712 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:51.994123+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:52.994298+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:53.994439+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:54.994569+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:55.994709+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:56.994831+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:57.994967+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:58.995130+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:59.995296+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:00.995433+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:01.995587+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:02.995908+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:03.996038+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:04.996222+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:05.996386+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:06.996538+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:07.996697+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:08.996896+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:09.997043+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:10.997865+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:11.998032+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:12.998174+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:13.998407+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:14.998560+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:15.998861+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:16.999026+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:17.999168+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:18.999319+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:19.999488+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:20.999651+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:21.999917+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:23.000103+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:24.000272+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:25.000442+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:26.000568+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:27.000735+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:28.000968+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:29.005625+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:30.005828+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:31.006007+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 66977792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:32.006214+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:33.006500+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:34.006645+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:35.006824+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:36.006966+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:37.007106+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 66961408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:38.007274+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 66961408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:39.007409+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:40.007580+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:41.007737+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:42.007913+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:43.008089+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:44.008238+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:45.008361+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:46.008608+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:47.008847+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:48.009088+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:49.009228+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:50.009358+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:51.009527+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:52.009725+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:53.009917+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:54.010128+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:55.010317+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:56.010464+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:57.010622+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:58.010824+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:59.010945+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:00.011101+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:01.011275+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:02.011819+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:03.012060+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:04.012211+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:05.012374+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:06.012522+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:07.012687+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:08.012914+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:09.013148+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:10.013345+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:11.013485+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:12.013670+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:13.013882+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:14.014054+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:15.014217+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:16.014371+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:17.014525+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:18.014701+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:19.014813+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:20.014947+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:21.015081+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:22.015289+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:23.015512+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:24.015680+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:25.015843+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:26.015982+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:27.016158+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:28.016311+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:29.016459+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:30.016607+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:31.016803+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 66846720 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:32.016983+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:33.017116+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:34.017307+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:35.017472+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:36.017620+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:37.017878+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:38.018009+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:39.018360+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:40.018490+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:41.018728+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:42.019459+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:43.019643+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:44.019794+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:45.019927+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:46.020074+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:47.020246+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:48.020379+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:49.020497+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:50.020680+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:51.020851+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:52.021089+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:53.021247+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:54.021438+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:55.021578+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:56.021714+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:57.021886+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:58.022048+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:59.022225+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:00.022391+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:01.022516+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:02.022675+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:03.022811+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:04.022944+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:05.023093+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:06.023229+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:07.023353+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:08.023500+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:09.023696+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:10.023857+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:11.023988+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:12.024159+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:13.024276+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:14.024443+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:15.024653+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:16.024791+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:17.024927+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:18.025100+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:19.025247+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:20.025371+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:21.025532+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:22.025800+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:23.026093+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:24.026248+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:25.026435+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:26.026582+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:27.026880+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:28.027047+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:29.027203+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:30.027406+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:31.027532+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:32.027694+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:33.027881+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:34.028021+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:35.028154+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:36.028292+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:37.028428+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:38.028575+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:39.028699+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:40.028860+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:41.029026+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:42.029324+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:43.029497+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:44.029644+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:45.029826+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:46.029976+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:47.030262+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:48.030418+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:49.030562+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:50.030707+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:51.031036+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:52.031297+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:53.031587+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:54.031779+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:55.031921+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:56.032047+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.71 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 278 writes, 624 keys, 278 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 278 writes, 128 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:57.032236+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:58.032426+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:59.032635+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:00.032845+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:01.032976+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:02.033170+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:03.033284+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:04.033455+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:05.033614+0000)
Oct 02 09:50:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 09:50:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/145436114' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:06.033778+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:07.033919+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:08.034069+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:09.034270+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:10.034446+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:11.034565+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:12.034802+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:13.035012+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:14.035152+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:15.035333+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:16.035518+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:17.035661+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:18.035812+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:19.035951+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:20.036076+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:21.036236+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:22.036450+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:23.036602+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:24.036823+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:25.036945+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:26.037056+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:27.037173+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:28.037294+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:29.037422+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:30.037526+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:31.037637+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:32.037784+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:33.037900+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:34.038051+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:35.038217+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:36.038356+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:37.038504+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:38.038638+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:39.038789+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:40.038976+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:41.039112+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:42.039275+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:43.039494+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:44.039636+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:45.039809+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:46.039974+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:47.040101+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1965c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:48.040237+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 545.098144531s of 546.776916504s, submitted: 42
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472079 data_alloc: 218103808 data_used: 1167360
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 301 ms_handle_reset con 0x562da1965c00 session 0x562da18b4780
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:49.040359+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:50.040481+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 66527232 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1d55400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:51.040604+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346710016 unmapped: 66519040 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 302 ms_handle_reset con 0x562da1d55400 session 0x562da202ab40
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:52.040799+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:53.040926+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390468 data_alloc: 218103808 data_used: 1171456
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:54.041065+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9c60000/0x0/0x4ffc00000, data 0x25efb8/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:55.041143+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:56.041228+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9c60000/0x0/0x4ffc00000, data 0x25efb8/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da230f800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:57.041375+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346750976 unmapped: 66478080 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:58.041502+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.232305527s of 10.012306213s, submitted: 80
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e9c5d000/0x0/0x4ffc00000, data 0x260a1b/0x400000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 ms_handle_reset con 0x562da230f800 session 0x562da02fc5a0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:59.041632+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347840512 unmapped: 65388544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:00.041899+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347840512 unmapped: 65388544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:01.042094+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:02.042323+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:03.042447+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:04.042580+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:05.042663+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:06.042813+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 podman[479855]: 2025-10-02 09:50:33.301730319 +0000 UTC m=+0.626758767 container init b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:07.042926+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:08.043046+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:09.043173+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:10.043296+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:11.043475+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:12.043630+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:13.043834+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:14.043973+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:15.044105+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:16.044281+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:17.044544+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:18.044707+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:19.044884+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:20.044976+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:21.045132+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:22.045301+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:23.045435+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:24.045591+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:25.045729+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:26.045869+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:27.046030+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:28.046189+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:29.046320+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:30.046507+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:31.046662+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:32.046815+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:33.046946+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:34.047093+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:35.047251+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:36.047459+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:37.047599+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:38.047792+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:39.047933+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 65314816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:40.048096+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 65314816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:41.048252+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:42.048523+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:43.048679+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:44.048863+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:45.049018+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:46.049227+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:47.049360+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:48.049491+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:49.049650+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:50.049842+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:51.049974+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:52.050145+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:53.050275+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:54.050418+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:55.050586+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 65290240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:56.050790+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 65282048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1a59c00
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _renew_subs
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.771060944s of 58.697120667s, submitted: 80
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:57.050914+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 305 ms_handle_reset con 0x562da1a59c00 session 0x562da17092c0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9847000/0x0/0x4ffc00000, data 0x264169/0x406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:58.051054+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399710 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:59.051205+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:00.051326+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 65249280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:01.051447+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 65249280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:02.051629+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da1eaf000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 305 ms_handle_reset con 0x562da1eaf000 session 0x562da18b4960
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 65241088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:03.051875+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 65241088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:04.052025+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:05.052144+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:06.052307+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:07.052421+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:08.052553+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:09.052684+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:10.052850+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:11.053044+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:12.053247+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:13.053399+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:14.053542+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:15.053665+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:16.053826+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:17.053975+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:18.054103+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:19.054303+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:20.054474+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:21.054698+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:22.054923+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:23.055043+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 conmon[479915]: conmon b3b402a26b5e1354a08c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b.scope/container/memory.events
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:24.055177+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:25.055326+0000)
Oct 02 09:50:33 compute-0 podman[479855]: 2025-10-02 09:50:33.314612961 +0000 UTC m=+0.639641379 container start b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:26.055529+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:27.055717+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:28.055843+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:29.056035+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:30.056329+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:31.056469+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:32.056672+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:33.056810+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:34.056944+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:35.057409+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:36.058228+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:37.058595+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:38.058776+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:39.058907+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:40.059289+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:41.059643+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:42.059871+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:43.060011+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:44.060304+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:45.060809+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:46.061208+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:47.061457+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:48.061637+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:49.061787+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:50.062113+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:51.062625+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:52.062908+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:53.063290+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:54.063944+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 65118208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:55.064306+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 65118208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:56.064587+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:57.064871+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:58.065029+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:59.065292+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:00.065734+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:01.066060+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 wonderful_gould[479915]: 167 167
Oct 02 09:50:33 compute-0 systemd[1]: libpod-b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b.scope: Deactivated successfully.
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:02.066234+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:03.066392+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:04.066549+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:05.066788+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:06.066970+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:07.067237+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:08.067377+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:09.067498+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:10.067637+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:11.067778+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 65077248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:12.067914+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:13.068028+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:14.068140+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:15.068281+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:16.068411+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:17.068551+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:18.068682+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'config show' '{prefix=config show}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:19.068810+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 65445888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:20.068951+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 65609728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:21.069083+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347643904 unmapped: 65585152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:22.069229+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:23.069359+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:24.069487+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:25.069652+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:26.069784+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:27.069922+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:28.070036+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:29.070162+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:30.070285+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:31.070404+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347054080 unmapped: 66174976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:32.070561+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347054080 unmapped: 66174976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:33.070682+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347054080 unmapped: 66174976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:34.070804+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:35.070927+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:36.071047+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:37.071199+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:38.071276+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:39.071406+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:40.071709+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:41.071831+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:42.071985+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:43.072117+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:44.072315+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:45.072445+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:46.072574+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:47.072707+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:48.072825+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:49.072958+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:50.073121+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:51.073269+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:52.073435+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:53.073607+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:54.073839+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:55.074006+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:56.074180+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 66125824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:57.074396+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 66125824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:58.074564+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:59.074711+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:00.074849+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:01.075061+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:02.075339+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:03.075506+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:04.075650+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:05.075813+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:06.075979+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:07.076116+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:08.076339+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:09.076555+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:10.083106+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:11.083237+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:12.083468+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:13.083685+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:14.083843+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:15.083989+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:16.084183+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:17.084311+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:18.084492+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:19.084679+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 66076672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:20.084854+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 66076672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:21.084981+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 66076672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:22.085163+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347160576 unmapped: 66068480 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:23.085358+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:24.085486+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:25.085618+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:26.085780+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:27.085914+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:28.086083+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:29.086209+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:30.086332+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:31.086526+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:32.086825+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:33.086976+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:34.087216+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:35.087411+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 66035712 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:36.087571+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 66035712 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:37.087787+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:38.087951+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:39.088071+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:40.088365+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:41.088547+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:42.088713+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:43.088834+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:44.088969+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:45.089105+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:46.089243+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:47.089420+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:48.089638+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:49.089791+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:50.089915+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:51.090038+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:52.090238+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:53.090369+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:54.090504+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:55.090622+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:56.090766+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:57.090926+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:58.091135+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:59.091535+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:00.091680+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:01.091823+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:02.091974+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:03.092159+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 65961984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:04.092274+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 65961984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:05.092473+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 65961984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:06.092617+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 65961984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:07.092770+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 65953792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:08.092922+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 65953792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:09.093050+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 65953792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:10.093232+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 65953792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:11.093373+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347283456 unmapped: 65945600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:12.093528+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347283456 unmapped: 65945600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:13.093671+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347283456 unmapped: 65945600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:14.093794+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347283456 unmapped: 65945600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:15.094013+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347291648 unmapped: 65937408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:16.094295+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347291648 unmapped: 65937408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:17.094501+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347299840 unmapped: 65929216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:18.094704+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347299840 unmapped: 65929216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:19.094881+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347299840 unmapped: 65929216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:20.095045+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347299840 unmapped: 65929216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:21.095223+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347299840 unmapped: 65929216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:22.095430+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347299840 unmapped: 65929216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:23.095590+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347316224 unmapped: 65912832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:24.096016+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347316224 unmapped: 65912832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:25.096211+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347316224 unmapped: 65912832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:26.096351+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347316224 unmapped: 65912832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:27.096529+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347316224 unmapped: 65912832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:28.096694+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347316224 unmapped: 65912832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:29.096872+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:30.097087+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347316224 unmapped: 65912832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:31.097247+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347316224 unmapped: 65912832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:32.097559+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347332608 unmapped: 65896448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:33.097690+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347332608 unmapped: 65896448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:34.097868+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347332608 unmapped: 65896448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:35.098099+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347332608 unmapped: 65896448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:36.098237+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347340800 unmapped: 65888256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:37.098371+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347340800 unmapped: 65888256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:38.098542+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347340800 unmapped: 65888256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:39.098681+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347340800 unmapped: 65888256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:40.098836+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347340800 unmapped: 65888256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:41.098993+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347340800 unmapped: 65888256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:42.099138+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347340800 unmapped: 65888256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:43.099311+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347348992 unmapped: 65880064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:44.099554+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347348992 unmapped: 65880064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:45.099813+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347348992 unmapped: 65880064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:46.100055+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347348992 unmapped: 65880064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:47.100227+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347348992 unmapped: 65880064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:48.100492+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347357184 unmapped: 65871872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:49.100668+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 65863680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:50.100905+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 65863680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:51.101028+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347365376 unmapped: 65863680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:52.101260+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347373568 unmapped: 65855488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:53.101471+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 65839104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:54.101706+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 65839104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:55.101880+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 65839104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:56.102174+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 67813376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:57.102424+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 67805184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:58.102559+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 67805184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:59.102710+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:00.102836+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:01.102979+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:02.103223+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:03.103369+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:04.103556+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:05.103689+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:06.103887+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:07.104047+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345440256 unmapped: 67788800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:08.104175+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 67780608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:09.104324+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 67780608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:10.104652+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345448448 unmapped: 67780608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:11.104776+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 67772416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:12.104934+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345473024 unmapped: 67756032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:13.105050+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345473024 unmapped: 67756032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:14.105173+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345473024 unmapped: 67756032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:15.105287+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 67747840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:16.105525+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 67747840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:17.105661+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 67747840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:18.105795+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 67747840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:19.106243+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345489408 unmapped: 67739648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:20.106414+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 67731456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:21.106557+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 67731456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:22.106738+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 67731456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:23.106944+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 67731456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:24.107096+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 67731456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:25.107286+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 67731456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:26.107458+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345497600 unmapped: 67731456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:27.107619+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 67723264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:28.107823+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 67715072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:29.108130+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 67715072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:30.108275+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 67715072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:31.108414+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 67715072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:32.108571+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 67715072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:33.108734+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 67715072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:34.108950+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345513984 unmapped: 67715072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:35.109134+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345530368 unmapped: 67698688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:36.109323+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 67690496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:37.109470+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 67690496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:38.109674+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345538560 unmapped: 67690496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:39.109869+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 67682304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:40.110099+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 67682304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:41.110343+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345546752 unmapped: 67682304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:42.110571+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 67665920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:43.110798+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 67665920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:44.111055+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 67657728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:45.111441+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 67657728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:46.111655+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 67657728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:47.111858+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 67657728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:48.112108+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 67657728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:49.112353+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 67657728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:50.112598+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345571328 unmapped: 67657728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:51.112884+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 67641344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:52.113180+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 67641344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:53.113405+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 67641344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:54.113706+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 67641344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:55.113878+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 67641344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:56.114046+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 67641344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:57.114175+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 67641344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:58.114310+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345587712 unmapped: 67641344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:59.114494+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345595904 unmapped: 67633152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:00.114672+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 67624960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:01.114862+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345604096 unmapped: 67624960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:02.115089+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 67608576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:03.115240+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 67600384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:04.115387+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 67600384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:05.115520+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 67600384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:06.115654+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345628672 unmapped: 67600384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:07.115835+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 67592192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:08.116025+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 67592192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:09.116257+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 67592192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:10.116424+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 67592192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:11.116551+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345636864 unmapped: 67592192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:12.116707+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 67584000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:13.116957+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 67584000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:14.117112+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345645056 unmapped: 67584000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:15.117259+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 67567616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:16.117389+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 67567616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:17.117544+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 67567616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:18.117784+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 67567616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:19.118004+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 67567616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:20.118147+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 67567616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:21.118318+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345661440 unmapped: 67567616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:22.118595+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345677824 unmapped: 67551232 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:23.118782+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 67534848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:24.118955+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345694208 unmapped: 67534848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:25.119136+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 67526656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:26.119363+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 67526656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:27.119591+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345702400 unmapped: 67526656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:28.119809+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 67518464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:29.120063+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 67518464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:30.120326+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345710592 unmapped: 67518464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:31.120609+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345718784 unmapped: 67510272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:32.120860+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345718784 unmapped: 67510272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:33.121030+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345718784 unmapped: 67510272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:34.121172+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345718784 unmapped: 67510272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:35.121360+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345718784 unmapped: 67510272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:36.121558+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345718784 unmapped: 67510272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:37.121775+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345718784 unmapped: 67510272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:38.121946+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345718784 unmapped: 67510272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:39.122149+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 67493888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:40.122344+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 67493888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:41.122493+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345735168 unmapped: 67493888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:42.122689+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 67477504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:43.122887+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 67477504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:44.123079+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 67477504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:45.123286+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 67477504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:46.123441+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 67477504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:47.123625+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 67452928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:48.123826+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345776128 unmapped: 67452928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:49.123970+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 67444736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:50.124119+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 67436544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:51.124239+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 67436544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:52.124461+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 67436544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:53.124617+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 67436544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:54.124824+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345792512 unmapped: 67436544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:55.125017+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 67428352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:56.125147+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 67428352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:57.125308+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 67428352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:58.125443+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345800704 unmapped: 67428352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:59.125567+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 67403776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:00.125919+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 67403776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:01.126122+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 67403776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:02.126401+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345841664 unmapped: 67387392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:03.126640+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 67379200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:04.126966+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 67379200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:05.127113+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 67371008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:06.127294+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 67371008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:07.127421+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 67371008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:08.127632+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 67371008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:09.127970+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 67371008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:10.128131+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345858048 unmapped: 67371008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:11.128292+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 67362816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:12.128495+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 67362816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:13.128630+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 67362816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:14.128837+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345866240 unmapped: 67362816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:15.128952+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 67354624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:16.129126+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 67354624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:17.129282+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 67354624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:18.129576+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345874432 unmapped: 67354624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:19.129774+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 67338240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:20.129941+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 67338240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:21.130095+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345890816 unmapped: 67338240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:22.130271+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 67313664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:23.130419+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 67313664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:24.130554+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 67313664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:25.130664+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 67313664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:26.130785+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345915392 unmapped: 67313664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:27.130953+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 67305472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:28.131098+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 67305472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:29.131258+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 67305472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:30.131402+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 67297280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:31.131558+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 67297280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:32.131869+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 67297280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:33.132098+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 67297280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:34.132259+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 67297280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:35.132396+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 67289088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:36.132590+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 67289088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:37.132744+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345939968 unmapped: 67289088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:38.132992+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 67280896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-10-02T09:46:39.133188+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _finish_auth 0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:39.134115+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 67280896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:40.133493+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 67280896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:41.133671+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345948160 unmapped: 67280896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:42.134041+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345964544 unmapped: 67264512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:43.134248+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 67239936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:44.134564+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 67239936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:45.134682+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 67239936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:46.134830+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 67239936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:47.134926+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 67239936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:48.135059+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 67239936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:49.135208+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 67239936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:50.135305+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 67239936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:51.135400+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 67231744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:52.135558+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 67223552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:53.135722+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 67215360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:54.135925+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 67215360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:55.136062+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 67215360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:56.136208+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 67215360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:57.136348+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 67215360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:58.136521+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346030080 unmapped: 67198976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:59.136686+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 67182592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:00.136847+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 67182592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:01.137354+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 67182592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:02.137570+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 67166208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:03.138006+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 67158016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:04.139255+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 67158016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:05.139616+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 67158016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:06.140525+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 67158016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:07.141351+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 67149824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:08.141506+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 67149824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:09.141629+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 67149824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:10.142032+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 67149824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:11.142287+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 67149824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:12.142850+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 67149824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:13.143166+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 67149824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:14.143415+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 67149824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:15.143613+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 67133440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:16.143830+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 67133440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:17.144043+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 67133440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:18.144371+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 67133440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:19.144782+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 67133440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:20.145167+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 67133440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:21.145353+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 67133440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:22.145527+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 67117056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:23.145901+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 67100672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:24.146113+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 67100672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:25.146312+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 67100672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:26.146530+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 67100672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:27.146732+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 67092480 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:28.146929+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 67092480 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:29.147208+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 67092480 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:30.147381+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 67092480 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:31.147599+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 67084288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:32.147883+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 67084288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:33.148058+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 67084288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:34.148287+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 67084288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:35.148439+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 67084288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:36.148580+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 67084288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:37.148736+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 67084288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:38.148914+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346144768 unmapped: 67084288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:39.149109+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346161152 unmapped: 67067904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:40.149286+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346161152 unmapped: 67067904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:41.149475+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346161152 unmapped: 67067904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:42.149711+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 67051520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:43.149953+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 67051520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:44.150178+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 67051520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:45.150347+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 67051520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:46.150516+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 67051520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:47.150740+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 67035136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:48.150954+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 67035136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:49.151131+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 67035136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:50.151351+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 67035136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:51.151589+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 67035136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:52.151855+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 67035136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:53.152085+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 67035136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:54.152267+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 67035136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:55.152427+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346210304 unmapped: 67018752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:56.152570+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 189K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 18K syncs, 2.71 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 334 writes, 766 keys, 334 commit groups, 1.0 writes per commit group, ingest: 0.33 MB, 0.00 MB/s
                                           Interval WAL: 334 writes, 148 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346210304 unmapped: 67018752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:57.152816+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346210304 unmapped: 67018752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:58.153029+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:59.153287+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:00.153468+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 ms_handle_reset con 0x562da18b8800 session 0x562da068c1e0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da5c90000
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:01.153662+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc ms_handle_reset ms_handle_reset con 0x562da3298400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: get_auth_request con 0x562da3291c00 auth_method 0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 66994176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:02.153856+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 ms_handle_reset con 0x562da328e800 session 0x562da16f5680
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da192d800
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 ms_handle_reset con 0x562da40e3400 session 0x562da060d2c0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: handle_auth_request added challenge on 0x562da06d4400
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346234880 unmapped: 66994176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:03.154005+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:04.154138+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:05.154323+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:06.154545+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 66977792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:07.154877+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 66961408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:08.155144+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:09.155302+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:10.155436+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:11.155593+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:12.155881+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:13.156048+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:14.156202+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:15.156361+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:16.156565+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:17.156856+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:18.157003+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:19.157213+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:20.157359+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:21.157526+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:22.157829+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:23.158025+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:24.158322+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:25.158529+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:26.158683+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:27.158923+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:28.159172+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:29.159324+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:30.159502+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:31.159919+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:32.160118+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:33.160442+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:34.160737+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:35.161523+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:36.161652+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:37.161832+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:38.162052+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:39.162268+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:40.162395+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:41.162673+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:42.163020+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:43.163357+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:44.163579+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:45.163864+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:46.164045+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:47.164272+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 66846720 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:48.164571+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:49.164827+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:50.164988+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:51.165192+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 66846720 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:52.165378+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:53.165534+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:54.165878+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 538.001525879s of 538.084411621s, submitted: 26
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:55.166124+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:56.166289+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:57.166484+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:58.166649+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:59.166782+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [0,0,0,0,0,1,1])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 66846720 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:00.166987+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:01.167222+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:02.167421+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:03.167554+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:04.167729+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.848669052s of 10.000719070s, submitted: 108
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:05.168011+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:06.168181+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:07.168407+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:08.168620+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:09.168863+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:10.169050+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:11.169202+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:12.169373+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:13.169575+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:14.169723+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:15.169885+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:16.170021+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:17.170157+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:18.170274+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:19.170490+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:20.170793+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:21.171025+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:22.171244+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:23.171416+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:24.171627+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:25.171929+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:26.172207+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:27.172417+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:28.172556+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:29.172847+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:30.173000+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:31.173212+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:32.173442+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:33.173606+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:34.173792+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:35.173971+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:36.174419+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:37.174610+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:38.174816+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:39.175106+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:40.175305+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:41.175476+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:42.175642+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:43.175883+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:44.176026+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:45.176150+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:46.176285+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:47.176411+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:48.176522+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:49.176624+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:50.176792+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:51.176905+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:52.177034+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:53.177236+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:54.177387+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:55.177555+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:56.177693+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:57.177830+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:58.177951+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:59.178074+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:00.178225+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'config show' '{prefix=config show}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:33 compute-0 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:33 compute-0 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401804 data_alloc: 218103808 data_used: 1179648
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:01.178388+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9845000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: tick
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_tickets
Oct 02 09:50:33 compute-0 ceph-osd[89321]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:02.178597+0000)
Oct 02 09:50:33 compute-0 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:33 compute-0 ceph-osd[89321]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:50:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 09:50:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196714675' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 09:50:33 compute-0 podman[479855]: 2025-10-02 09:50:33.662782036 +0000 UTC m=+0.987810454 container attach b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:50:33 compute-0 podman[479855]: 2025-10-02 09:50:33.663366175 +0000 UTC m=+0.988394593 container died b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:50:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 09:50:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581660439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 09:50:33 compute-0 rsyslogd[1004]: imjournal from <np0005465604:ceph-osd>: begin to drop messages due to rate-limiting
Oct 02 09:50:33 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 09:50:33 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412818012' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 09:50:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2972735957' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 09:50:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3675472947' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 09:50:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/145436114' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 09:50:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2196714675' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 09:50:33 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/581660439' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 09:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-708b157a566e74097d07c1d41bacc5a48670c7152fd9b827dea344c613c37030-merged.mount: Deactivated successfully.
Oct 02 09:50:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 09:50:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361733126' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 09:50:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 09:50:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/368073236' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 09:50:34 compute-0 podman[479855]: 2025-10-02 09:50:34.618919021 +0000 UTC m=+1.943947439 container remove b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 02 09:50:34 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 09:50:34 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2588287656' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 09:50:34 compute-0 systemd[1]: libpod-conmon-b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b.scope: Deactivated successfully.
Oct 02 09:50:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:50:34.892 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:50:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:50:34.893 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:50:34 compute-0 ovn_metadata_agent[162328]: 2025-10-02 09:50:34.893 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:50:34 compute-0 podman[480144]: 2025-10-02 09:50:34.833812913 +0000 UTC m=+0.044607655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:50:34 compute-0 nova_compute[260603]: 2025-10-02 09:50:34.977 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:34 compute-0 nova_compute[260603]: 2025-10-02 09:50:34.978 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 09:50:35 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23545 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:35 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 09:50:35 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2102507077' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 09:50:35 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:35 compute-0 podman[480144]: 2025-10-02 09:50:35.179780069 +0000 UTC m=+0.390574761 container create d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:50:35 compute-0 ceph-mon[74477]: pgmap v3988: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1412818012' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 09:50:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1361733126' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 09:50:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/368073236' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 09:50:35 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2588287656' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 09:50:35 compute-0 systemd[1]: Started libpod-conmon-d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778.scope.
Oct 02 09:50:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80dabce25e6e05ee5003789a085aca0d49eafbf124ea3b78a61a09d4eb943939/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80dabce25e6e05ee5003789a085aca0d49eafbf124ea3b78a61a09d4eb943939/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80dabce25e6e05ee5003789a085aca0d49eafbf124ea3b78a61a09d4eb943939/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80dabce25e6e05ee5003789a085aca0d49eafbf124ea3b78a61a09d4eb943939/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80dabce25e6e05ee5003789a085aca0d49eafbf124ea3b78a61a09d4eb943939/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:35 compute-0 podman[480144]: 2025-10-02 09:50:35.498668999 +0000 UTC m=+0.709463701 container init d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hodgkin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:50:35 compute-0 podman[480144]: 2025-10-02 09:50:35.508142986 +0000 UTC m=+0.718937678 container start d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hodgkin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:50:35 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23548 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:35 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23549 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:35 compute-0 podman[480144]: 2025-10-02 09:50:35.791216777 +0000 UTC m=+1.002011489 container attach d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hodgkin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 02 09:50:35 compute-0 nova_compute[260603]: 2025-10-02 09:50:35.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:35 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23551 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:36 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23555 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:36 compute-0 ceph-mon[74477]: from='client.23545 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:36 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2102507077' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 09:50:36 compute-0 ceph-mon[74477]: pgmap v3989: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:36 compute-0 ceph-mon[74477]: from='client.23548 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:36 compute-0 ceph-mon[74477]: from='client.23549 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:36 compute-0 ceph-mon[74477]: from='client.23551 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:36 compute-0 nova_compute[260603]: 2025-10-02 09:50:36.545 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:36 compute-0 dreamy_hodgkin[480232]: --> passed data devices: 0 physical, 3 LVM
Oct 02 09:50:36 compute-0 dreamy_hodgkin[480232]: --> relative data size: 1.0
Oct 02 09:50:36 compute-0 dreamy_hodgkin[480232]: --> All data devices are unavailable
Oct 02 09:50:36 compute-0 systemd[1]: libpod-d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778.scope: Deactivated successfully.
Oct 02 09:50:36 compute-0 systemd[1]: libpod-d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778.scope: Consumed 1.089s CPU time.
Oct 02 09:50:36 compute-0 podman[480144]: 2025-10-02 09:50:36.692033523 +0000 UTC m=+1.902828205 container died d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hodgkin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:50:36 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 09:50:36 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123275875' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 09:50:36 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23559 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:36 compute-0 nova_compute[260603]: 2025-10-02 09:50:36.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-80dabce25e6e05ee5003789a085aca0d49eafbf124ea3b78a61a09d4eb943939-merged.mount: Deactivated successfully.
Oct 02 09:50:37 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:37 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23563 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 09:50:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3955582369' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23565 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 09:50:37 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418259864' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-mon[74477]: from='client.23555 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4123275875' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-mon[74477]: from='client.23559 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3955582369' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23569 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:13.506188+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 57237504 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3943335 data_alloc: 234881024 data_used: 32980992
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:14.506318+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:15.506437+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:16.506552+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e538d000/0x0/0x4ffc00000, data 0x3f0c208/0x40a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:17.506717+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e538d000/0x0/0x4ffc00000, data 0x3f0c208/0x40a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:18.506847+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3944135 data_alloc: 234881024 data_used: 33067008
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:19.506994+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 57196544 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e538d000/0x0/0x4ffc00000, data 0x3f0c208/0x40a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:20.507128+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.364212990s of 12.667222977s, submitted: 85
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 57163776 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e807dde00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e8079d2c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:21.507278+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 57163776 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:22.507404+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 57163776 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:23.507530+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 57147392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714295 data_alloc: 234881024 data_used: 20889600
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:24.507686+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e67f1000/0x0/0x4ffc00000, data 0x2aa91f8/0x2c3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:25.507817+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:26.507941+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e807dd0e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:27.508133+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:28.508278+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704758 data_alloc: 234881024 data_used: 20774912
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:29.508437+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:30.508639+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:31.508786+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:32.508959+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:33.509123+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.611377716s of 13.367643356s, submitted: 45
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704666 data_alloc: 234881024 data_used: 20774912
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa10c00 session 0x555e8c878000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:34.510903+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6816000/0x0/0x4ffc00000, data 0x2a851e8/0x2c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:35.511048+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 60137472 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:36.511217+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 60129280 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:37.511389+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 60129280 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6f4e000/0x0/0x4ffc00000, data 0x15011d8/0x1693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:38.511538+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6f4e000/0x0/0x4ffc00000, data 0x15011d8/0x1693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:39.511694+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e7f6c6960
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:40.511841+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:41.511966+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:42.512142+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:43.512318+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:44.512542+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:45.512869+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:46.513145+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:47.556002+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:48.556241+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:49.556453+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 70459392 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:50.556608+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:51.556882+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 174K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 43K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2386 writes, 10K keys, 2386 commit groups, 1.0 writes per commit group, ingest: 12.81 MB, 0.02 MB/s
                                           Interval WAL: 2386 writes, 857 syncs, 2.78 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:52.557113+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets getting new tickets!
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:53.557394+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _finish_auth 0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:53.558250+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:54.557651+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:55.558065+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:56.558299+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:57.558546+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: mgrc ms_handle_reset ms_handle_reset con 0x555e816bc400
Oct 02 09:50:37 compute-0 ceph-osd[88314]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:50:37 compute-0 ceph-osd[88314]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: get_auth_request con 0x555e816cfc00 auth_method 0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:58.558870+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:17:59.559103+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:00.559332+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:01.559528+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85862c00 session 0x555e8061d2c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:02.559869+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:03.560111+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:04.560344+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:05.560576+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:06.560893+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:07.561362+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:08.561543+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:09.562041+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:10.562539+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:11.562969+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:12.563339+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:13.563616+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:14.563854+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:15.564084+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:16.564255+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:17.564545+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:18.565205+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420332 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:19.565622+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342335488 unmapped: 70451200 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:20.566110+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 70443008 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:21.566244+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e80c2000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 70443008 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e8063d0e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e817d63c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:22.566374+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e807b10e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e8077fa40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816c7000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.933547974s of 48.876960754s, submitted: 33
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 70443008 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816c7000 session 0x555e80650000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7eb1000/0x0/0x4ffc00000, data 0x13ea1e8/0x157d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e8063cd20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f4d000 session 0x555e807d0000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:23.566648+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e817b8b40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e8061c5a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492502 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:24.566846+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:25.567037+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:26.567322+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7860000/0x0/0x4ffc00000, data 0x1a3b1e8/0x1bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:27.567693+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7860000/0x0/0x4ffc00000, data 0x1a3b1e8/0x1bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 70025216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:28.568089+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 70017024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492502 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:29.568564+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 70017024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:30.568722+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865800 session 0x555e7e8663c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7860000/0x0/0x4ffc00000, data 0x1a3b1e8/0x1bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e785f000/0x0/0x4ffc00000, data 0x1a3b20b/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:31.568805+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:32.568953+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:33.569113+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547691 data_alloc: 218103808 data_used: 15233024
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:34.569272+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e785f000/0x0/0x4ffc00000, data 0x1a3b20b/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:35.569514+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:36.569674+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:37.569847+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:38.570002+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e785f000/0x0/0x4ffc00000, data 0x1a3b20b/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547691 data_alloc: 218103808 data_used: 15233024
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:39.570151+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:40.570322+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:41.570414+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e785f000/0x0/0x4ffc00000, data 0x1a3b20b/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:42.570585+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 70008832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.164596558s of 20.521892548s, submitted: 23
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:43.570814+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 64430080 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615039 data_alloc: 218103808 data_used: 15269888
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:44.574113+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 64012288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:45.574328+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:46.574523+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:47.574695+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa11800 session 0x555e7e867680
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f4d000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c6b000/0x0/0x4ffc00000, data 0x262620b/0x27ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:48.574955+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3661025 data_alloc: 218103808 data_used: 16412672
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:49.575220+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:50.575459+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 63848448 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:51.575600+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 64651264 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:52.575882+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c71000/0x0/0x4ffc00000, data 0x262920b/0x27bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 64651264 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:53.576087+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6c71000/0x0/0x4ffc00000, data 0x262920b/0x27bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 64651264 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652529 data_alloc: 218103808 data_used: 16412672
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:54.576319+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 64651264 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:55.576498+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.104067802s of 12.720571518s, submitted: 138
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 64643072 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:56.576640+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 64643072 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e7f66c960
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e7e8bd860
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7f9b32c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e807ebc20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:57.576843+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa43400
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 64634880 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:58.577264+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 64618496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e665b000/0x0/0x4ffc00000, data 0x2c3e21b/0x2dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3708561 data_alloc: 218103808 data_used: 16412672
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:18:59.577401+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e7fa43400 session 0x555e7fa7da40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7e8672c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e7e8670e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 64610304 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:00.577558+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 64602112 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:01.577721+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 64602112 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:02.577841+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e665b000/0x0/0x4ffc00000, data 0x2c3e21b/0x2dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 64602112 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e8079da40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e7f66da40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:03.577995+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e873f7000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e873f7000 session 0x555e7ed80000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 64577536 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707505 data_alloc: 218103808 data_used: 16412672
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:04.578149+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e807eab40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 64577536 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:05.578283+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.773425579s of 10.016960144s, submitted: 49
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 64569344 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:06.578434+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e817d7860
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:07.578614+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e769b000/0x0/0x4ffc00000, data 0x2c3e21b/0x2dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:08.578792+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e817d63c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712259 data_alloc: 218103808 data_used: 16412672
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:09.578941+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7676000/0x0/0x4ffc00000, data 0x2c6222b/0x2df8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816cc000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:10.579078+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7676000/0x0/0x4ffc00000, data 0x2c6222b/0x2df8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 64561152 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:11.579216+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 64528384 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:12.579416+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:13.579595+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:14.579789+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757843 data_alloc: 234881024 data_used: 22790144
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:15.579937+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7676000/0x0/0x4ffc00000, data 0x2c6222b/0x2df8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:16.580076+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.123291016s of 10.906085968s, submitted: 74
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:17.580268+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:18.580425+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:19.580622+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758679 data_alloc: 234881024 data_used: 22790144
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:20.580811+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7674000/0x0/0x4ffc00000, data 0x2c6322b/0x2df9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 63954944 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:21.580987+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351117312 unmapped: 61669376 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:22.581200+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7340000/0x0/0x4ffc00000, data 0x2f9722b/0x312d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,37,11])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 59514880 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:23.581426+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 59170816 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:24.581658+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826649 data_alloc: 234881024 data_used: 22818816
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 57294848 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:25.581802+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6b95000/0x0/0x4ffc00000, data 0x374322b/0x38d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,12])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 57999360 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:26.582056+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.919890881s of 10.145005226s, submitted: 99
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 57769984 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:27.582296+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 57769984 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:28.582452+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6963000/0x0/0x4ffc00000, data 0x396f22b/0x3b05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:29.582636+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862983 data_alloc: 234881024 data_used: 23425024
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6969000/0x0/0x4ffc00000, data 0x396f22b/0x3b05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:30.582817+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6969000/0x0/0x4ffc00000, data 0x396f22b/0x3b05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:31.583052+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:32.583241+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:33.583382+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6944000/0x0/0x4ffc00000, data 0x399222b/0x3b28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:34.583537+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3867965 data_alloc: 234881024 data_used: 23457792
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6944000/0x0/0x4ffc00000, data 0x399222b/0x3b28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:35.583710+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:36.583883+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.491674423s of 10.077248573s, submitted: 14
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:37.584108+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:38.584258+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6943000/0x0/0x4ffc00000, data 0x399522b/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 57753600 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:39.584434+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3867095 data_alloc: 234881024 data_used: 23453696
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 57745408 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:40.584585+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 57745408 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:41.584778+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:42.585007+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6933000/0x0/0x4ffc00000, data 0x39a522b/0x3b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:43.585152+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:44.585314+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3875927 data_alloc: 234881024 data_used: 23781376
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:45.585547+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6933000/0x0/0x4ffc00000, data 0x39a522b/0x3b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:46.585733+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 57737216 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:47.585958+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.368193626s of 10.428758621s, submitted: 12
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6933000/0x0/0x4ffc00000, data 0x39a522b/0x3b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:48.586141+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:49.586357+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874275 data_alloc: 234881024 data_used: 23781376
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:50.586542+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6931000/0x0/0x4ffc00000, data 0x39a722b/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:51.586732+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:52.586993+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6931000/0x0/0x4ffc00000, data 0x39a722b/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:53.587170+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e817b8b40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816cc000 session 0x555e7f669e00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:54.587342+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874747 data_alloc: 234881024 data_used: 23781376
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 57729024 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:55.587478+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 57720832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:56.587606+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 57720832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e692f000/0x0/0x4ffc00000, data 0x39a922b/0x3b3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:57.587798+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.071995735s of 10.076697350s, submitted: 6
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 57720832 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:58.587941+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:19:59.588112+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674879 data_alloc: 218103808 data_used: 16527360
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6930000/0x0/0x4ffc00000, data 0x334b21b/0x34e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:00.588247+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:01.588398+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7fa7dc20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:02.588533+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 65871872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:03.588675+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65863680 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:04.588867+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666796 data_alloc: 218103808 data_used: 16412672
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e7caf000/0x0/0x4ffc00000, data 0x262b20b/0x27bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65863680 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:05.589027+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e8c878d20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65863680 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:06.590787+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:07.590988+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 74260480 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.774965286s of 10.106699944s, submitted: 31
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:08.591169+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:09.591328+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450222 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1fb/0x136d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e7f9b30e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:10.591510+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:11.591701+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:12.591919+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:13.592083+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:14.592328+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:15.592740+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:16.593000+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:17.593321+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:18.593498+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:19.593806+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:20.594174+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:21.594325+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:22.594538+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:23.594733+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:24.594905+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:25.595146+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:26.595443+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:27.595698+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:28.595856+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:29.596025+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:30.596171+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:31.596503+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:32.596734+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:33.597029+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:34.597267+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:35.597533+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:36.597738+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:37.598214+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:38.598379+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:39.598588+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:40.598818+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:41.599082+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:42.599391+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:43.599671+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:44.600005+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:45.600179+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:46.600417+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:47.600675+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:48.600876+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:49.601071+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449462 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e8064a5a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bf000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bf000 session 0x555e81553c20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:50.601242+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7f8ed860
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e807685a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 74252288 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.617778778s of 43.052780151s, submitted: 10
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9101000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,2,0,4])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e7ebdcd20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e8061d860
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e817d6960
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e7fa7cf00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7f9b3680
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:51.601380+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:52.601584+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:53.601845+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:54.602025+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553286 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:55.602208+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8402000/0x0/0x4ffc00000, data 0x1ed91e8/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:56.602404+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:57.602578+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:58.602805+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:20:59.602997+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 74227712 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553286 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:00.603195+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 74219520 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8402000/0x0/0x4ffc00000, data 0x1ed91e8/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:01.603388+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 74219520 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:02.603576+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 74219520 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e8402000/0x0/0x4ffc00000, data 0x1ed91e8/0x206c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:03.603848+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338567168 unmapped: 74219520 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.918078423s of 13.026695251s, submitted: 19
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7ebdcb40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:04.603972+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 74063872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e83de000/0x0/0x4ffc00000, data 0x1efd1e8/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3555290 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:05.604117+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 338722816 unmapped: 74063872 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:06.604231+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:07.604404+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:08.604656+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:09.604814+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e83de000/0x0/0x4ffc00000, data 0x1efd1e8/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643290 data_alloc: 234881024 data_used: 20152320
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:10.605045+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:11.605182+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:12.605361+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:13.605521+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e83de000/0x0/0x4ffc00000, data 0x1efd1e8/0x2090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:14.605685+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643290 data_alloc: 234881024 data_used: 20152320
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:15.605836+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 72810496 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.514795303s of 12.579584122s, submitted: 2
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:16.606046+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 67002368 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:17.606285+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 66961408 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76ae000/0x0/0x4ffc00000, data 0x2c241e8/0x2db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:18.606457+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345825280 unmapped: 66961408 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:19.606613+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756342 data_alloc: 234881024 data_used: 20328448
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:20.606783+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:21.606967+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76ad000/0x0/0x4ffc00000, data 0x2c2e1e8/0x2dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:22.607150+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76ad000/0x0/0x4ffc00000, data 0x2c2e1e8/0x2dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:23.607298+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:24.607384+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76ad000/0x0/0x4ffc00000, data 0x2c2e1e8/0x2dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756358 data_alloc: 234881024 data_used: 20328448
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:25.607564+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:26.607811+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:27.608018+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:28.608156+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76aa000/0x0/0x4ffc00000, data 0x2c311e8/0x2dc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:29.608349+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756358 data_alloc: 234881024 data_used: 20328448
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:30.608599+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:31.608793+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:32.608937+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e8061d2c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81069000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81069000 session 0x555e8c8792c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 67624960 heap: 412786688 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e817d74a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e80494f00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.623231888s of 16.666929245s, submitted: 92
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e76aa000/0x0/0x4ffc00000, data 0x2c311e8/0x2dc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,1,4,8])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:33.609099+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e817d6f00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e7f9b3680
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345341952 unmapped: 71647232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f47400
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f47400 session 0x555e817d6960
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7ebdcd20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7f9b30e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:34.609262+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a36000/0x0/0x4ffc00000, data 0x38a4211/0x3a38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 71639040 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855272 data_alloc: 234881024 data_used: 20328448
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:35.609398+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 71639040 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:36.609533+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 71639040 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:37.609718+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345350144 unmapped: 71639040 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:38.609868+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e7f669e00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a36000/0x0/0x4ffc00000, data 0x38a424a/0x3a38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:39.610077+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e817d63c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855272 data_alloc: 234881024 data_used: 20328448
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:40.610274+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f47000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80f47000 session 0x555e817d7860
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:41.610447+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a35000/0x0/0x4ffc00000, data 0x38a425a/0x3a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:42.610624+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345358336 unmapped: 71630848 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e807eab40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:43.610852+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 71622656 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:44.610993+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.369200706s of 11.610096931s, submitted: 47
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 71622656 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857066 data_alloc: 234881024 data_used: 20328448
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:45.611098+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 345366528 unmapped: 71622656 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:46.611239+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a35000/0x0/0x4ffc00000, data 0x38a425a/0x3a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 64831488 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:47.611432+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353525760 unmapped: 63463424 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:48.611590+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a35000/0x0/0x4ffc00000, data 0x38a425a/0x3a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:49.611722+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3947278 data_alloc: 234881024 data_used: 33079296
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:50.611982+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:51.612230+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:52.612400+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:53.612562+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6a33000/0x0/0x4ffc00000, data 0x38a525a/0x3a3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:54.612880+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3947454 data_alloc: 234881024 data_used: 33079296
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:55.613046+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 63455232 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.134132385s of 11.804743767s, submitted: 3
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:56.613167+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 61513728 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:57.613305+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e6521000/0x0/0x4ffc00000, data 0x3daf25a/0x3f44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 59662336 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:58.613409+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:21:59.613562+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4005096 data_alloc: 234881024 data_used: 33775616
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:00.613701+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:01.613830+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:02.613972+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e64b0000/0x0/0x4ffc00000, data 0x3e2925a/0x3fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:03.614163+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:04.614326+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004220 data_alloc: 234881024 data_used: 33779712
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:05.614481+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e648f000/0x0/0x4ffc00000, data 0x3e4a25a/0x3fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:06.614592+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:07.614807+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:08.614973+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:09.615115+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004220 data_alloc: 234881024 data_used: 33779712
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:10.615248+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e648f000/0x0/0x4ffc00000, data 0x3e4a25a/0x3fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357179392 unmapped: 59809792 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:11.615381+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357203968 unmapped: 59785216 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:12.615557+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357203968 unmapped: 59785216 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:13.615800+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357203968 unmapped: 59785216 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:14.615927+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e648f000/0x0/0x4ffc00000, data 0x3e4a25a/0x3fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.914283752s of 18.503477097s, submitted: 87
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7f66da40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e7f6c94a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 357203968 unmapped: 59785216 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004532 data_alloc: 234881024 data_used: 33849344
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:15.616214+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e8064b4a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:16.616359+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e72a8000/0x0/0x4ffc00000, data 0x2c321e8/0x2dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:17.617482+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:18.617646+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:19.617791+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e72a7000/0x0/0x4ffc00000, data 0x2c331e8/0x2dc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768085 data_alloc: 234881024 data_used: 20328448
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:20.621845+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e72a7000/0x0/0x4ffc00000, data 0x2c331e8/0x2dc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:21.621977+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:22.622106+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:23.622291+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e81011800 session 0x555e807dc5a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e7f668b40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 354230272 unmapped: 62758912 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:24.622422+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483899 data_alloc: 218103808 data_used: 7856128
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90de000/0x0/0x4ffc00000, data 0x11fe1d8/0x1390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:25.622582+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:26.622717+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.506937981s of 11.834042549s, submitted: 56
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:27.622914+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e7ed9cd20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:28.623101+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:29.623318+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:30.623466+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:31.623677+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:32.623874+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:33.624020+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:34.624197+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:35.624327+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:36.624464+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:37.624635+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:38.624851+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:39.624994+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:40.625174+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:41.625398+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:42.625575+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:43.625717+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:44.625861+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:45.626045+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:46.626275+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:47.626468+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:48.626641+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:49.626821+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:50.626957+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:51.627153+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:52.627321+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:53.627480+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:54.627667+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:55.627860+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:56.628046+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:57.628274+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:58.628456+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:22:59.628666+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:00.628873+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:01.629049+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66715648 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:02.629256+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:03.629459+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:04.629628+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:05.629853+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:06.630037+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 66707456 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:07.630237+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:08.630425+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:09.630617+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:10.630817+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:11.631017+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:12.631205+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:13.631360+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:14.631577+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66699264 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:15.631737+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66691072 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:16.631906+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66691072 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:17.632082+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66691072 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:18.632220+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:19.632401+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:20.632719+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:21.632881+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:22.633031+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:23.633166+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:24.633370+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 66682880 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:25.633552+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:26.633797+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:27.634000+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:28.634163+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:29.634311+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:30.634501+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66674688 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:31.634699+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:32.634967+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:33.635209+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:34.635370+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:35.635626+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:36.635825+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:37.636031+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:38.636250+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66666496 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:39.636467+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 66658304 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:40.636622+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:41.636824+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:42.637010+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:43.637139+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:44.637481+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:45.637612+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:46.637817+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66650112 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:47.637995+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:48.638146+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:49.638309+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:50.638464+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:51.638714+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:52.638937+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:53.639213+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:54.639405+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66641920 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:55.639551+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66633728 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:56.639804+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66633728 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:57.640041+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66633728 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:58.640163+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:23:59.640342+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:00.640510+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:01.640692+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:02.640841+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66617344 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:03.640976+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 66609152 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:04.641112+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 66609152 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:05.641308+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477583 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 66609152 heap: 416989184 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.549827576s of 99.685768127s, submitted: 2
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:06.641464+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e7ebd9680
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e7f667e00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e7ed9c960
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e85865c00
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:07.641629+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e85865c00 session 0x555e7f9b32c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e03000 session 0x555e807ebc20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:08.641837+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e833a000/0x0/0x4ffc00000, data 0x1fa123a/0x2134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:09.642016+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:10.642145+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e807eb4a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3588099 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:11.642304+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e833a000/0x0/0x4ffc00000, data 0x1fa123a/0x2134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:12.642430+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 67952640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:13.642546+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 67952640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:14.642682+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e833a000/0x0/0x4ffc00000, data 0x1fa123a/0x2134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 67952640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:15.642810+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689991 data_alloc: 234881024 data_used: 22016000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e8100f000 session 0x555e8c8785a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e816bc000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.975601196s of 10.013640404s, submitted: 48
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 67952640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:16.642945+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:17.643104+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e816bc000 session 0x555e7e8bcb40
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:18.643312+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:19.643455+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:20.643590+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:21.643703+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:22.643853+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:23.644029+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:24.644151+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:25.644334+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:26.644502+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:27.644692+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:28.644880+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:29.645477+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:30.645674+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:31.645886+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:32.646071+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:33.646205+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:34.646357+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:35.646570+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:36.646774+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:37.647026+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:38.647163+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:39.647316+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:40.701221+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:41.701367+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:42.701517+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:43.701709+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:44.701873+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:45.702082+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487613 data_alloc: 218103808 data_used: 7745536
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:46.702214+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:47.702411+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:48.702550+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e90e3000/0x0/0x4ffc00000, data 0x11da1d8/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e63800
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 ms_handle_reset con 0x555e80e63800 session 0x555e822210e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.237102509s of 32.277904510s, submitted: 31
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 291 ms_handle_reset con 0x555e80e03000 session 0x555e817b9c20
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:49.702730+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:50.703004+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490903 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e90ff000/0x0/0x4ffc00000, data 0x11dbc98/0x136e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:51.703306+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:52.703464+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:53.703653+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e90ff000/0x0/0x4ffc00000, data 0x11dbc98/0x136e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:54.703793+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e90ff000/0x0/0x4ffc00000, data 0x11dbc98/0x136e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:55.703947+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:56.704085+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 70549504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:57.704262+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 70549504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:58.704458+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 70549504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:24:59.704647+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 70549504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:00.704843+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:01.705060+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:02.705204+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 70541312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:03.705429+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:04.705633+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:05.705880+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:06.706062+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:07.706276+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 70533120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:08.706489+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:09.706629+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:10.706845+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:11.706996+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:12.707138+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:13.707318+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:14.707508+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:15.707664+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 70524928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:16.707849+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 70516736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:17.708050+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 70516736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:18.708229+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 70516736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:19.708421+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:20.708629+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:21.708817+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:22.709124+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:23.709310+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 70508544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:24.709445+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:25.709598+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:26.709854+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:27.710020+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:28.710219+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:29.710404+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:30.710588+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:31.710796+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 70500352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:32.710963+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:33.711091+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:34.711229+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:35.711445+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:36.711564+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:37.711810+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:38.711977+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:39.712118+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 70483968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:40.712260+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 70475776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:41.712425+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 70475776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:42.712582+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350183424 unmapped: 70475776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:43.712739+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:44.712980+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:45.713155+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:46.713336+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:47.713515+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 70467584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:48.713665+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 70451200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:49.713820+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 70451200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:50.713947+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 70451200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:51.714078+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 70443008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:52.714261+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 70443008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:53.714426+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 70434816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:54.714648+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 70434816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:55.714869+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350224384 unmapped: 70434816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:56.715074+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:57.715298+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:58.715439+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:25:59.715628+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:00.715816+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:01.715970+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:02.716165+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:03.716402+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 70426624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:04.716606+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:05.716846+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:06.717064+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:07.717395+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:08.717570+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:09.717816+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:10.718002+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:11.718194+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 70410240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:12.718419+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:13.718585+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:14.718816+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:15.718963+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:16.719129+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 70402048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:17.719314+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 70393856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:18.719476+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 70393856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:19.719646+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 70393856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:20.719823+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:21.720015+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:22.720200+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:23.720403+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:24.720580+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:25.720780+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:26.720919+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:27.721098+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 70377472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:28.721256+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:29.721430+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:30.721558+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:31.721739+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:32.721961+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:33.722164+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:34.722310+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 70369280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:35.722462+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 70361088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:36.722713+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:37.723018+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:38.723207+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:39.723400+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:40.723568+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:41.723804+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:42.724032+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:43.724356+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 70344704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:44.724512+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 70336512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:45.724647+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 70336512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:46.724828+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 70336512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:47.725007+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 70328320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:48.725198+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 70328320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:49.725386+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 70328320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:50.725576+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 70328320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7753728
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:51.725775+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 ms_handle_reset con 0x555e8100f000 session 0x555e807eb0e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 70631424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:52.725941+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 70623232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:53.726084+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 70623232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:54.726244+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 70623232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:55.726402+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493877 data_alloc: 218103808 data_used: 7819264
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:56.726558+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e90fc000/0x0/0x4ffc00000, data 0x11dd6fb/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:57.726834+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:58.727058+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:26:59.727267+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 70615040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:00.727465+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 70606848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e87eca400
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 132.208267212s of 132.352478027s, submitted: 24
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493245 data_alloc: 218103808 data_used: 7819264
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:01.727591+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 70598656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 293 ms_handle_reset con 0x555e87eca400 session 0x555e7f6692c0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:02.727781+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e98fa000/0x0/0x4ffc00000, data 0x9df2a9/0xb73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:03.727982+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:04.728200+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011400
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:05.728351+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 294 ms_handle_reset con 0x555e81011400 session 0x555e8063c780
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371250 data_alloc: 218103808 data_used: 1015808
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:06.728505+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 294 heartbeat osd_stat(store_statfs(0x4ea0f8000/0x0/0x4ffc00000, data 0x1e0e6a/0x375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:07.728718+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:08.728934+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:09.729129+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 294 heartbeat osd_stat(store_statfs(0x4ea0f8000/0x0/0x4ffc00000, data 0x1e0e6a/0x375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:10.729318+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371250 data_alloc: 218103808 data_used: 1015808
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 294 heartbeat osd_stat(store_statfs(0x4ea0f8000/0x0/0x4ffc00000, data 0x1e0e6a/0x375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.180046082s of 10.362761497s, submitted: 42
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:11.729486+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:12.729617+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 295 heartbeat osd_stat(store_statfs(0x4ea0f5000/0x0/0x4ffc00000, data 0x1e28cd/0x378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: get_auth_request con 0x555e7ec18c00 auth_method 0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:13.729828+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:14.729974+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:15.730115+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374224 data_alloc: 218103808 data_used: 1015808
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:16.730408+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:17.730678+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 74194944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 295 heartbeat osd_stat(store_statfs(0x4ea0f5000/0x0/0x4ffc00000, data 0x1e28cd/0x378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:18.730801+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100e400
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 ms_handle_reset con 0x555e8100e400 session 0x555e8061c5a0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:19.731027+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:20.731159+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379014 data_alloc: 218103808 data_used: 1015808
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:21.731315+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:22.731484+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:23.731656+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 74186752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:24.731835+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 74170368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:25.732017+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 74170368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:26.732198+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 74170368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:27.732465+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:28.732636+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:29.732814+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:30.732958+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:31.733068+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 74162176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:32.733189+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:33.733347+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:34.733506+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:35.733633+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:36.733824+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:37.734024+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:38.734197+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:39.734371+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 74145792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:40.734538+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 74137600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:41.734669+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 74137600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:42.734784+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:43.734931+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:44.735032+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:45.735179+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:46.735385+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 74129408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:47.735567+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 74121216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:48.735725+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:49.735837+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:50.735992+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:51.736113+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1442 writes, 6120 keys, 1442 commit groups, 1.0 writes per commit group, ingest: 6.56 MB, 0.01 MB/s
                                           Interval WAL: 1442 writes, 522 syncs, 2.76 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:52.736209+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:53.736354+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:54.736492+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:55.736717+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 74113024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:56.736861+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 74104832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:57.737009+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 74096640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:58.737129+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 74096640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:27:59.737308+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 74096640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:00.737489+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 74096640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:01.737626+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 74088448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:02.737858+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 74088448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:03.738029+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 74088448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:04.738173+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 74072064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:05.738347+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:06.738494+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:07.738674+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:08.738811+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:09.738950+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:10.739143+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 74063872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:11.739315+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 74055680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:12.739499+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 74055680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:13.739668+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 74055680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:14.739840+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 74055680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:15.739974+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 74047488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:16.740121+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 74047488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:17.740284+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 74047488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:18.740452+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 74047488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:19.740563+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 74039296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:20.740693+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 74031104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:21.740819+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 74031104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:22.741030+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:23.741228+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:24.741397+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:25.741567+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:26.741710+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:27.741916+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 74022912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:28.742057+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 74014720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:29.742212+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 74014720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:30.742330+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 74006528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:31.742471+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 74006528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:32.742611+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 74006528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:33.742812+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 73998336 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:34.742983+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 73998336 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:35.743109+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:36.743232+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:37.743371+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:38.743521+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:39.743687+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:40.743860+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:41.744037+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:42.744202+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 73990144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:43.744346+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 73981952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:44.744489+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 73981952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:45.744627+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 73973760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:46.744796+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:47.745021+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:48.745213+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:49.745410+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:50.745558+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379174 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:51.745712+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:52.745908+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:53.746066+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f1000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:54.746222+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 73965568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:55.746379+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 104.105834961s of 104.461563110s, submitted: 11
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 73957376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378118 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:56.746548+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 73957376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:57.746823+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 73916416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:58.747038+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f2000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346750976 unmapped: 73908224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:28:59.747187+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f2000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346750976 unmapped: 73908224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:00.747323+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378118 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:01.747468+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:02.747661+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:03.747824+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f2000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:04.747964+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:05.748147+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.567169189s of 10.098338127s, submitted: 107
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 heartbeat osd_stat(store_statfs(0x4ea0f2000/0x0/0x4ffc00000, data 0x1e446d/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 73900032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 ms_handle_reset con 0x555e80e03000 session 0x555e7e7db0e0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432966 data_alloc: 218103808 data_used: 1019904
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:06.748258+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e8100f000
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 69214208 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:07.748380+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x9e446d/0xb7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e8c7e000/0x0/0x4ffc00000, data 0x1655fea/0x17ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 298 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 72810496 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:08.748555+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 298 ms_handle_reset con 0x555e8100f000 session 0x555e7ed80960
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 72802304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:09.748708+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 72802304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:10.748863+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:11.748997+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:12.749123+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:13.749246+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:14.749329+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 72794112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:15.749437+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:16.749560+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:17.749796+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:18.749980+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:19.750150+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:20.750373+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:21.750565+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:22.750679+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 72785920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:23.750815+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 72777728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:24.750978+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 72777728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:25.751153+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:26.751568+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:37 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:27.751805+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:28.751996+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:29.752195+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:30.752333+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:37 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 72761344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:37 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:31.752473+0000)
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:37 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 72753152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:32.752634+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 72753152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:33.752897+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 72753152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:34.753034+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:35.753197+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:36.753338+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:37.753513+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:38.753697+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 72744960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:39.753827+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 72736768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:40.754047+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e81011400
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 72736768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:41.754169+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1657b8a/0x17f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528426 data_alloc: 218103808 data_used: 1028096
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.684211731s of 35.786945343s, submitted: 6
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 72736768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:42.754335+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:43.754507+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:44.754626+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:45.754783+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 299 ms_handle_reset con 0x555e81011400 session 0x555e8079d4a0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:46.754961+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528958 data_alloc: 218103808 data_used: 1028096
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 72728576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:47.755191+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:48.755342+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 72720384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e8c79000/0x0/0x4ffc00000, data 0x1659738/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:49.755564+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 72720384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:50.755737+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 72720384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:51.755922+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533132 data_alloc: 218103808 data_used: 1036288
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:52.756070+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:53.756262+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:54.756467+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:55.756642+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 72712192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:56.757206+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:57.757473+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:58.757590+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:29:59.757790+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:00.757966+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:01.758114+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:02.758448+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:03.758620+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 72695808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:04.758952+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:05.759198+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:06.759330+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:07.759575+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:08.759810+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:09.760142+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 72679424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:10.760395+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:11.760816+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:12.761105+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:13.761384+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:14.761609+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 72671232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:15.761802+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:16.762012+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:17.762173+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:18.762319+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:19.762532+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 72663040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:20.762854+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 72654848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:21.763077+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 72654848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:22.763315+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 72654848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:23.763511+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:24.763696+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:25.763923+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:26.764138+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:27.764393+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 72646656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:28.764630+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:29.764827+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:30.765171+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:31.765391+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:32.765629+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:33.765799+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:34.765953+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:35.766166+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 72630272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:36.766332+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 72622080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:37.766527+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 72622080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:38.766822+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 72622080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:39.767007+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:40.767215+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:41.767376+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:42.767529+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:43.767672+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 72613888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:44.767820+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 72605696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:45.768002+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 72605696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:46.768211+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 72605696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:47.768416+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:48.768606+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:49.768889+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:50.769097+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:51.769234+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 72597504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:52.769360+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 72589312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:53.769486+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 72589312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:54.769610+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 72589312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:55.769744+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:56.769953+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:57.770179+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:58.770289+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:30:59.770374+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 72581120 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:00.770512+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:01.770640+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:02.770771+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:03.770906+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:04.771028+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:05.771148+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 72572928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:06.771273+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 72564736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:07.771438+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 72556544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:08.771577+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 72548352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:09.771701+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:10.771867+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:11.772115+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:12.772302+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:13.772439+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:14.772606+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:15.772791+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 72540160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:16.772940+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 72531968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:17.773131+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 72531968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:18.773269+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 72531968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:19.773402+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:20.773565+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:21.773729+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:22.773896+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:23.774047+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 72515584 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:24.774214+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 72507392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:25.774340+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 72507392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:26.774547+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:27.774741+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:28.774885+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:29.775032+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:30.775127+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:31.775249+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 72499200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:32.775409+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 72482816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:33.775957+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 72482816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:34.776069+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 72482816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:35.776212+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 72482816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:36.776370+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 72474624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:37.776584+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:38.776796+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:39.776943+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:40.777112+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:41.777289+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:42.777459+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:43.777588+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:44.777709+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:45.777877+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 72466432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:46.778015+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348200960 unmapped: 72458240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:47.778174+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348200960 unmapped: 72458240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:48.778339+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 72441856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:49.778519+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 72441856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:50.778674+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 72441856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:51.778821+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:52.778967+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:53.779164+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:54.779416+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:55.779585+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 72433664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:56.779926+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 72425472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:57.780111+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 72425472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:58.780314+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 72425472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:31:59.780509+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:00.780994+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:01.781454+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:02.781651+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:03.781961+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 72417280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:04.782490+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 72409088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:05.782707+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:06.782882+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:07.783144+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:08.783519+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:09.783910+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:10.784216+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 72400896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:11.784936+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 72384512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:12.785376+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 72384512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:13.785529+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 72384512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:14.785855+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:15.786081+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:16.786395+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:17.786725+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:18.786983+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 72376320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:19.787221+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 72368128 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:20.787434+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 72368128 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:21.787695+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 72368128 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:22.787961+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:23.788124+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:24.788315+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:25.788537+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:26.788696+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 72359936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:27.788932+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:28.789071+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:29.789226+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:30.789390+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:31.789530+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:32.789690+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:33.790017+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:34.790195+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 72343552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:35.790480+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:36.790651+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:37.790932+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:38.791057+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:39.791213+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:40.791442+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:41.791587+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:42.791721+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 72327168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:43.791935+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 72318976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:44.792163+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 72318976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:45.792331+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 72318976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:46.792509+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:47.792732+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:48.792912+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:49.793092+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:50.793309+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:51.793565+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 72310784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:52.793738+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 72302592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:53.793950+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 72302592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:54.794099+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:55.794248+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:56.794389+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:57.794551+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:58.794682+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 72294400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:32:59.794915+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348381184 unmapped: 72278016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:00.795073+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348381184 unmapped: 72278016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:01.795257+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 ms_handle_reset con 0x555e7fa10c00 session 0x555e7ebd81e0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e87eca400
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348381184 unmapped: 72278016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:02.795459+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348381184 unmapped: 72278016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:03.795706+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 72269824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:04.795903+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 72269824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:05.796045+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 72269824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:06.796199+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 72269824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:07.796362+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348397568 unmapped: 72261632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:08.796500+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348397568 unmapped: 72261632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:09.796668+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 72253440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:10.796857+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 72253440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:11.796992+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 72253440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:12.797238+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 72245248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:13.797416+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 72245248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:14.797551+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 72245248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:15.797697+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348430336 unmapped: 72228864 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:16.797886+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348430336 unmapped: 72228864 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:17.798150+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 72220672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:18.798277+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 72220672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:19.798413+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 72220672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:20.798540+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 72212480 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:21.798656+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 72212480 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:22.798806+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 72204288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:23.798970+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 72196096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:24.799155+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 72196096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:25.799291+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 72196096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:26.799475+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:27.799688+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:28.799971+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:29.800161+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:30.800302+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 72187904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:31.800488+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:32.800611+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:33.800806+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:34.800948+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:35.801069+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:36.801286+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:37.801481+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:38.801679+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 72179712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:39.801903+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348495872 unmapped: 72163328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:40.802160+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 72155136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:41.802309+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 72155136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:42.802442+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:43.802581+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:44.802740+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:45.802935+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:46.803054+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 ms_handle_reset con 0x555e80f4d000 session 0x555e807dd4a0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7fa10c00
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 72146944 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:47.803601+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348520448 unmapped: 72138752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:48.803736+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348520448 unmapped: 72138752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:49.803866+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348520448 unmapped: 72138752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:50.804067+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348520448 unmapped: 72138752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:51.804244+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 72130560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:52.804457+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 72130560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:53.804640+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 72130560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:54.804809+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 72130560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:55.804950+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 72122368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:56.805077+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 72122368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:57.805302+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 72122368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:58.805484+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 72122368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:33:59.805642+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348545024 unmapped: 72114176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:00.805771+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348545024 unmapped: 72114176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:01.805881+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348545024 unmapped: 72114176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:02.806018+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348545024 unmapped: 72114176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:03.806141+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:04.806264+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:05.806418+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:06.806581+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:07.806862+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:08.807242+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:09.807402+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:10.807536+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:11.807650+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348561408 unmapped: 72097792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:12.807806+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348569600 unmapped: 72089600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:13.807944+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348569600 unmapped: 72089600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:14.808080+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348577792 unmapped: 72081408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:15.808259+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348577792 unmapped: 72081408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:16.808465+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348577792 unmapped: 72081408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:17.808725+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348577792 unmapped: 72081408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:18.808963+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 72073216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:19.809140+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 72073216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:20.809329+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:21.809721+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:22.809912+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:23.810057+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:24.810291+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:25.810466+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:26.810678+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:27.810896+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 72065024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:28.811080+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348610560 unmapped: 72048640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:29.811247+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:30.811409+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:31.811574+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:32.811730+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:33.811928+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:34.812134+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:35.812293+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348618752 unmapped: 72040448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:36.812430+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:37.812626+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:38.812855+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:39.812997+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:40.813122+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:41.813255+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:42.813383+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:43.813516+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 72032256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:44.813706+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 72024064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:45.813849+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348635136 unmapped: 72024064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:46.814005+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 72015872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:47.814162+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 72015872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:48.814284+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 72015872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:49.814398+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 72007680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:50.814542+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 72007680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:51.814667+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348651520 unmapped: 72007680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:52.814792+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 71999488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:53.814926+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 71999488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:54.815067+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 71999488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:55.815198+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:56.815326+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:57.815482+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:58.815616+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:34:59.815803+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 71991296 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:00.815937+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:01.816080+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:02.816302+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:03.816470+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:04.816646+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348676096 unmapped: 71983104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:05.816839+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348684288 unmapped: 71974912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:06.817090+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348684288 unmapped: 71974912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:07.817371+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348684288 unmapped: 71974912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:08.817557+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348692480 unmapped: 71966720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:09.817697+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348692480 unmapped: 71966720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:10.817815+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:11.817949+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:12.818107+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:13.818257+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:14.818381+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 71958528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:15.818585+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348708864 unmapped: 71950336 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:16.818727+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:17.818961+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:18.819087+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:19.819233+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:20.819345+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 71942144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:21.819526+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348725248 unmapped: 71933952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:22.819672+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348725248 unmapped: 71933952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:23.819818+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348725248 unmapped: 71933952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:24.820013+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 71925760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:25.820264+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 71925760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:26.820475+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:27.820705+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:28.820847+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:29.821126+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:30.821335+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:31.821588+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 71917568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:32.821772+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348758016 unmapped: 71901184 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:33.822008+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348758016 unmapped: 71901184 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:34.822203+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:35.822347+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:36.822469+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:37.822706+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:38.822843+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:39.823008+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 71892992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:40.823156+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:41.823284+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:42.823446+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:43.823856+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:44.824029+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:45.824180+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:46.824316+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:47.824491+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 71884800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:48.824684+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348798976 unmapped: 71860224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:49.824833+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348798976 unmapped: 71860224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:50.824980+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348798976 unmapped: 71860224 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:51.825110+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:52.825241+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:53.825386+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:54.825576+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:55.825736+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 71852032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:56.825915+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:57.826129+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:58.826295+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:35:59.826470+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:00.826786+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:01.826919+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:02.827070+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:03.827191+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 71843840 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:04.827329+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:05.827476+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:06.827671+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:07.827839+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:08.828132+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 71827456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:09.828300+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 71819264 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:10.828430+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 71819264 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:11.828597+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 71819264 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:12.828741+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348848128 unmapped: 71811072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:13.828939+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348848128 unmapped: 71811072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:14.829194+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348848128 unmapped: 71811072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:15.829331+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:16.829528+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:17.829799+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:18.829952+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:19.830126+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 71802880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:20.830301+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 71786496 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:21.830485+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 71786496 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:22.830696+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:23.830901+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:24.831064+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:25.831235+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:26.831396+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:27.831605+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 71778304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:28.831826+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 71770112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:29.832035+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 71770112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:30.832210+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 71761920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:31.832468+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 71761920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:32.832669+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 71761920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:33.832814+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 71761920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:34.832962+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 71753728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:35.833138+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 71745536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:36.833332+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 71745536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:37.833541+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 71745536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:38.833680+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:39.833832+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:40.833981+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:41.834172+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:42.834361+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:43.834508+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:44.834694+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 71737344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:45.834889+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:46.835113+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:47.835378+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:48.835523+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:49.835663+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:50.835816+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 71729152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:51.836059+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 71720960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:52.836294+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 71720960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:53.836516+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 71720960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:54.836727+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:55.836949+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:56.837131+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:57.837320+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:58.837510+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 71712768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:36:59.837728+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 71704576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:00.837916+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 71704576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:01.838082+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 71704576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:02.838239+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:03.838397+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:04.838542+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:05.838687+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:06.838829+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 71696384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:07.839010+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 71688192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:08.839138+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 71688192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:09.839254+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 71688192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:10.839395+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:11.839537+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:12.839680+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:13.839824+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:14.839963+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 71680000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:15.840098+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:16.840271+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:17.840473+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:18.840622+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:19.841016+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:20.841215+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:21.841367+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:22.841526+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 71663616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:23.841806+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:24.841990+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:25.842233+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:26.842391+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:27.842530+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:28.842678+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:29.842812+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 71655424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:30.842969+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 71647232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:31.843146+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:32.843288+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:33.843429+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:34.843551+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:35.843704+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 71630848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:36.843926+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 71622656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:37.844129+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 71622656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:38.844279+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 71622656 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:39.844460+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:40.844611+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:41.844777+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:42.844952+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:43.845093+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:44.845232+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:45.845349+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:46.845496+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 71614464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:47.845695+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:48.845825+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:49.845974+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:50.846109+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:51.846380+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 597 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 125 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:52.846559+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:53.846829+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:54.847033+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 71589888 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:55.847173+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 71606272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:56.847399+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 71606272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:57.847589+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 71606272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:58.847719+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 71606272 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:37:59.847850+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:00.848031+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:01.848185+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:02.848320+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 71598080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:03.848478+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 71581696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:04.848626+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 71581696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:05.848770+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 71581696 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:06.848958+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:07.849193+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:08.849349+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:09.849517+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:10.849701+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 71573504 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:11.849840+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:12.849965+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:13.850079+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:14.850191+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:15.850325+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:16.850465+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:17.850675+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:18.850838+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:19.850985+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 71565312 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:20.851151+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 71548928 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:21.851350+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:22.851517+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:23.851743+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:24.852017+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:25.852176+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:26.852324+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:27.852564+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 71532544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:28.852740+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349134848 unmapped: 71524352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:29.852999+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349134848 unmapped: 71524352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:30.853147+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:31.853303+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:32.853429+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:33.853586+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:34.853727+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:35.853874+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349143040 unmapped: 71516160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:36.854025+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 71507968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:37.854156+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 71507968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:38.854322+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 71507968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:39.854560+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 71507968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:40.854846+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 71499776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:41.855083+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 71499776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:42.855265+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 71499776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533292 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:43.855402+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 71499776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:44.855568+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349175808 unmapped: 71483392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:45.855731+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349175808 unmapped: 71483392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:46.855915+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349175808 unmapped: 71483392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f49000
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e8c75000/0x0/0x4ffc00000, data 0x165b19b/0x17f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:47.856138+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349175808 unmapped: 71483392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 545.605590820s of 546.408325195s, submitted: 26
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 301 ms_handle_reset con 0x555e80f49000 session 0x555e82221e00
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:48.856318+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482690 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349200384 unmapped: 71458816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:49.856452+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349200384 unmapped: 71458816 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e886ea800
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e9473000/0x0/0x4ffc00000, data 0xe5cd49/0xffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:50.856627+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 71434240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e9473000/0x0/0x4ffc00000, data 0xe5cd49/0xffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 302 ms_handle_reset con 0x555e886ea800 session 0x555e7ed9cf00
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:51.856835+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 71434240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:52.857019+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 71426048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:53.857218+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3401510 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 71426048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:54.857368+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 71426048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 302 heartbeat osd_stat(store_statfs(0x4ea0e0000/0x0/0x4ffc00000, data 0x1ee91a/0x38d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:55.857484+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 71426048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80e03000
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:56.857612+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 71393280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:57.857906+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 71393280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 303 handle_osd_map epochs [304,304], i have 304, src has [1,304]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.549840927s of 10.031791687s, submitted: 62
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 ms_handle_reset con 0x555e80e03000 session 0x555e7ebdcb40
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:58.858048+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519807 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 69287936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:38:59.858188+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 69263360 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:00.858297+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:01.858426+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:02.858546+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:03.858680+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:04.858861+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:05.858971+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:06.859087+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:07.859289+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 69238784 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:08.859401+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:09.859720+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:10.859851+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:11.859973+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:12.860093+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:13.860229+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:14.860371+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:15.860516+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 69214208 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:16.860647+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:17.860811+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:18.860971+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:19.861085+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:20.861200+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:21.861317+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:22.861454+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:23.861662+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:24.861948+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:25.862067+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:26.862207+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:27.862356+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:28.862483+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:29.862600+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:30.862795+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:31.862937+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:32.863052+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:33.863172+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:34.863440+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:35.863705+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:36.863824+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:37.863992+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:38.864120+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:39.864398+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 69189632 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:40.864572+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:41.864770+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:42.865404+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:43.865531+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:44.865893+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:45.866039+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:46.866230+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:47.866406+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:48.866546+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:49.866802+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 69165056 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:50.866959+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:51.867147+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:52.867339+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e7f3a000/0x0/0x4ffc00000, data 0x11f1f1d/0x1394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:53.867487+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517895 data_alloc: 218103808 data_used: 1040384
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:54.867663+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:55.867877+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 69148672 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e80f49000
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.483512878s of 58.355438232s, submitted: 97
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _renew_subs
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:56.867985+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 305 ms_handle_reset con 0x555e80f49000 session 0x555e8003cd20
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f36000/0x0/0x4ffc00000, data 0x1f3acb/0x396000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:57.868124+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:58.868297+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414397 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:39:59.868418+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f36000/0x0/0x4ffc00000, data 0x1f3acb/0x396000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:00.868551+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351526912 unmapped: 69132288 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:01.868696+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:02.868890+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:03.869015+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:04.869209+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:05.869336+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:06.869494+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:07.869649+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:08.869806+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:09.869940+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:10.870058+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:11.870174+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:12.870294+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 podman[480144]: 2025-10-02 09:50:38.081971717 +0000 UTC m=+3.292766409 container remove d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hodgkin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:13.870397+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:14.870541+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:15.870677+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:16.870795+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:17.870956+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:18.871073+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:19.871214+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 69091328 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:20.871370+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:21.871520+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:22.871648+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:23.871785+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:24.871930+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:25.872091+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:26.872235+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:27.872475+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:28.872694+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:29.872839+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:30.872961+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:31.873073+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:32.873180+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:33.873298+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:34.873423+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:35.873597+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 69050368 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:36.874221+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 69042176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:37.874980+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:38.875712+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:39.875998+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:40.876569+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:41.877046+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:42.877443+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:43.877899+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:44.878172+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:45.878364+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:46.878528+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:47.878833+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:48.879086+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 69017600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:49.879213+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 69017600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:50.879567+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 69017600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:51.879714+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 69017600 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:52.879880+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 69001216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:53.880038+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 69001216 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:54.880186+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 68993024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:55.880342+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:56.880512+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 systemd[1]: libpod-conmon-d759a40702de76622ad8328379170e3e60567dde5863cb41cce91d7d73bce778.scope: Deactivated successfully.
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:57.880680+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:58.880866+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:40:59.880994+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:00.881125+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:01.881245+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:02.881402+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:03.881568+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:04.881675+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:05.881877+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:06.882020+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:07.882195+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 68968448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:08.882515+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 68968448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:09.882647+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 68968448 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:10.882774+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:11.882898+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351707136 unmapped: 68952064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:12.883093+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351707136 unmapped: 68952064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:13.883218+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351707136 unmapped: 68952064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:14.883348+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351707136 unmapped: 68952064 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:15.883507+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:16.883929+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:17.884082+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:18.884223+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:19.884350+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:20.884468+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:21.884595+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 68935680 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:22.884782+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'config show' '{prefix=config show}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:23.886662+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:24.888717+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 68886528 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:25.891908+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351182848 unmapped: 69476352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:26.892051+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 69435392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:27.892241+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 69435392 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:28.892417+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 69427200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:29.892600+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 69427200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:30.892731+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 69427200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:31.892923+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:32.893171+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:33.893304+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:34.893429+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:35.893591+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:36.893723+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:37.893918+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:38.894067+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:39.894286+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 69402624 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:40.894418+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 69394432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:41.894551+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 69394432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:42.894687+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 69386240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:43.894826+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 69386240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:44.894965+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 69386240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:45.895148+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 69386240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:46.895288+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 69386240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:47.895437+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 69386240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:48.895566+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 69386240 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:49.895700+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 69378048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:50.895857+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 69378048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:51.895980+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 69378048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:52.896325+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 69378048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:53.896617+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 69378048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:54.896781+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 69378048 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:55.896971+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 69369856 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:56.897097+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 69361664 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:57.897289+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 69353472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:58.897558+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 69353472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:41:59.897820+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 69353472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:00.898074+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 69353472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:01.898264+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 69353472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:02.898431+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 69353472 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:03.898560+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 69345280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:04.898834+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 69345280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:05.899033+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 69345280 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:06.899329+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 69337088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:07.899621+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 69337088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:08.899851+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 69337088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:09.900091+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 69337088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:10.900257+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 69337088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:11.900423+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 69337088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:12.900576+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 69337088 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:13.900724+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 69328896 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:14.900893+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 69320704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:15.901034+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 69320704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:16.901158+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 69320704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:17.901355+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 69320704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:18.901485+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 69320704 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:19.901648+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 69312512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:20.901800+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 69312512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:21.901947+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 69312512 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:22.902114+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 69304320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:23.902278+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 69304320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:24.902414+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 69304320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:25.902562+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 69304320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:26.902689+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 69304320 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:27.902993+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 69287936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:28.903176+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 69287936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:29.903340+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 69287936 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:30.903543+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:31.903688+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:32.903861+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:33.903979+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:34.904194+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:35.904437+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:36.904632+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:37.904915+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:38.905116+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 69279744 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:39.905274+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 69271552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:40.905494+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 69271552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:41.905678+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 69271552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:42.905851+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 69271552 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:43.906084+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 69255168 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:44.906277+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 69246976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:45.906499+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 69246976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:46.906632+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 69246976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:47.906903+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 69246976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:48.907028+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 69246976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:49.907297+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 69246976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:50.907447+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 69246976 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:51.907579+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:52.907847+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:53.907966+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 69230592 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:54.908124+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:55.908253+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:56.908484+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:57.908723+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:58.908893+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 69222400 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:42:59.909048+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 69214208 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:00.909169+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:01.909302+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 69214208 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:02.909493+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 69214208 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:03.909637+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:04.909779+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:05.909967+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:06.910093+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:07.910345+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:08.910472+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 69206016 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:09.910599+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:10.910791+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:11.910913+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:12.911088+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:13.911247+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:14.911396+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:15.911534+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 69197824 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:16.911661+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:17.911893+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:18.912031+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:19.912269+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:20.912492+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:21.912625+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:22.912833+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:23.913015+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 69181440 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:24.913187+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:25.913330+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:26.913548+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:27.913823+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:28.913998+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:29.914147+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 69173248 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:30.914293+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 69140480 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:31.914451+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 69140480 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:32.914707+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 69124096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:33.915048+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 69124096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:34.915443+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 69124096 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:35.915685+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:36.915823+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:37.916029+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:38.916234+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:39.916480+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 69115904 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:40.916698+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351551488 unmapped: 69107712 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:41.916883+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:42.917050+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:43.917214+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:44.917379+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:45.917523+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:46.917662+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:47.917835+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 69099520 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:48.917974+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 69083136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:49.918120+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 69083136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:50.918278+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 69083136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:51.918451+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 69083136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:52.918660+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 69083136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:53.918870+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 69083136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:54.919091+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 69083136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:55.919274+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 69083136 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:56.919481+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:57.919815+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 69066752 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:58.919940+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:43:59.920184+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:00.920376+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:01.920513+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:02.920687+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:03.920994+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 69058560 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:04.921166+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 69042176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:05.921378+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 69042176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:06.921517+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 69042176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:07.921684+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 69042176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:08.921813+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 69042176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:09.921973+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 69042176 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:10.922205+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:11.922394+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 69033984 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:12.922571+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:13.922817+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:14.923026+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:15.923254+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:16.923462+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:17.923695+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:18.923859+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:19.923994+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 69025792 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:20.924143+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 69009408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:21.924292+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 69009408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:22.924465+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 69009408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:23.924610+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 69009408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:24.924850+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 69009408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:25.925137+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 69009408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:26.925400+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 69009408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:27.925718+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 69009408 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:28.926204+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 68993024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:29.936489+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 68993024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:30.936678+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 68993024 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:31.936852+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:32.937009+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:33.937181+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:34.937357+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:35.937506+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:36.937692+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:37.937991+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 68984832 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:38.938194+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:39.938347+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:40.938498+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:41.938905+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:42.939147+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:43.939362+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 68976640 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:44.939549+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:45.939718+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:46.939870+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:47.940091+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:48.940373+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:49.940539+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:50.940728+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:51.940982+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 68960256 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:52.941125+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:53.941237+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:54.941442+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:55.941640+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:56.941824+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:57.942048+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:58.942176+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:44:59.942334+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 68943872 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:00.942479+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 68927488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:01.942610+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 68927488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:02.942830+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 68927488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:03.943174+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 68927488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:04.943326+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 68927488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:05.943460+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 68927488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:06.943624+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 68927488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:07.943804+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 68927488 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:08.943999+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 68911104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:09.944137+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 68911104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:10.944558+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 68911104 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:11.944689+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 68902912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:12.944823+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 68902912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:13.944991+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 68902912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:14.945218+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 68902912 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:15.945389+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 68894720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:16.945534+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 68894720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:17.945704+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 68894720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:18.945895+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 68894720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:19.946049+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 68894720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:20.946262+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 68894720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:21.946469+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 68894720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:22.946665+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 68894720 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:23.946869+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 68870144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:24.947056+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 68870144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:25.947231+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 68870144 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:26.947415+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 68861952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:27.947681+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 68861952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:28.948086+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 68861952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:29.948267+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 68861952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:30.948430+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 68861952 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:31.948658+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 68853760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:32.948859+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 68853760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:33.949084+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 68853760 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:34.949233+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 68845568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:35.949439+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 68845568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:36.949619+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 68845568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:37.949817+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 68845568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:38.949955+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 68845568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:39.950096+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 68837376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:40.951114+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 68837376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:41.951422+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 68837376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:42.951578+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 68837376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:43.951697+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 68837376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:44.951880+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 68837376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:45.952001+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 68837376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:46.952127+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 68837376 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:47.952311+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 68820992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:48.952439+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 68820992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:49.952571+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 68820992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:50.952735+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 68820992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:51.952962+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 68820992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:52.953131+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 68820992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:53.953261+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 68812800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:54.953405+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 68812800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:55.953559+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 68804608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:56.953693+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 68804608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:57.953868+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 68804608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:58.953989+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 68804608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:45:59.954251+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 68804608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:00.954490+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 68804608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:01.954917+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 68804608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:02.955128+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 68796416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:03.955345+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 68780032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:04.955556+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 68780032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:05.955781+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 68780032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:06.955944+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 68763648 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:07.956199+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 68763648 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:08.956325+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 68763648 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:09.956527+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 68763648 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:10.956788+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 68763648 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:11.956986+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 68755456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:12.957120+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 68755456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:13.957244+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 68755456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:14.957423+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 68755456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:15.960060+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 68755456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:16.960186+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 68755456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:17.960363+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 68755456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:18.960516+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 68755456 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:19.960649+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 68739072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:20.960866+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 68739072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:21.960993+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 68739072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:22.961313+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 68739072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:23.961465+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 68739072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:24.961648+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 68739072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:25.961828+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 68739072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:26.961972+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 68730880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:27.962215+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 68722688 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:28.962359+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 68722688 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:29.962473+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 68714496 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:30.962652+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 68706304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:31.962818+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 68706304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:32.962949+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 68706304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:33.963074+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 68706304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:34.963287+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 68706304 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:35.963415+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 68698112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:36.963617+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 68698112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:37.963784+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 68698112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-10-02T09:46:38.963860+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _finish_auth 0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:38.964934+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 68689920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:39.964074+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 68689920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:40.964282+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 68689920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:41.964491+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 68689920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:42.964673+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 68689920 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:43.964773+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 68681728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:44.964916+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 68681728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:45.965062+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 68681728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:46.965170+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 68673536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:47.965302+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 68673536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:48.965430+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 68673536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:49.965556+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 68673536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:50.965691+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 68673536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:51.965821+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 68673536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:52.965989+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 68665344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:53.966170+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 68665344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:54.966345+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 68657152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:55.966448+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 68657152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:56.966549+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 68657152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:57.966724+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 68657152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:58.966991+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 68657152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:46:59.967161+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 68640768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:00.967948+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 68640768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:01.968172+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 68640768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:02.969323+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 68640768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:03.970267+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 68640768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:04.970984+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 68640768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:05.971269+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 68640768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:06.971578+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 68640768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:07.971901+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 68632576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:08.972088+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 68632576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:09.972226+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 68632576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:10.972398+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 68624384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:11.972813+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 68624384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:12.973150+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 68624384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:13.973496+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 68624384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:14.973968+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 68624384 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:15.974119+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 68608000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:16.974339+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 68608000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:17.974539+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 68608000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:18.974689+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 68608000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:19.974930+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 68608000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:20.975204+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 68608000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:21.975362+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 68608000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:22.975492+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 68608000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:23.975645+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 68599808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:24.975957+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 68591616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:25.976211+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 68591616 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:26.976401+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 68583424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:27.976630+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 68583424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:28.976821+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 68583424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:29.977010+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 68583424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:30.977205+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 68583424 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:31.977484+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 68575232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:32.977697+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:33.977920+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 68575232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:34.978141+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 68575232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:35.978446+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 68575232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:36.978723+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 68575232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:37.979064+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 68575232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:38.979253+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 68575232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:39.979505+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 68575232 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:40.979729+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352092160 unmapped: 68567040 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:41.980031+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 68558848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:42.980295+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 68558848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:43.980640+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 68558848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:44.980885+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 68558848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:45.981226+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 68558848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:46.981539+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 68558848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:47.981960+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 68558848 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:48.982222+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 68542464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:49.982401+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 68542464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:50.982727+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 68542464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:51.983177+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 68542464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 372 writes, 991 keys, 372 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                           Interval WAL: 372 writes, 177 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:52.983529+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 68542464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:53.983697+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 68542464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:54.984018+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 68542464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:55.984292+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 68542464 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:56.984620+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 68526080 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: mgrc ms_handle_reset ms_handle_reset con 0x555e816cfc00
Oct 02 09:50:38 compute-0 ceph-osd[88314]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct 02 09:50:38 compute-0 ceph-osd[88314]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: get_auth_request con 0x555e886ea800 auth_method 0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: mgrc handle_mgr_configure stats_period=5
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:57.984856+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 68468736 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:58.985202+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 68460544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:47:59.985487+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 68460544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:00.985706+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 68460544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:01.985894+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 68460544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 ms_handle_reset con 0x555e87eca400 session 0x555e7f66c960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e7e72f800
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:02.986438+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 68460544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:03.986576+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 68460544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:04.986824+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 68460544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:05.986998+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 68460544 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:06.987134+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 68452352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:07.987318+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 68452352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:08.987493+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 68452352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:09.987631+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 68452352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:10.987820+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 68452352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:11.987972+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 68452352 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:12.988149+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 68444160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:13.988296+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 68444160 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:14.988474+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 68435968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:15.988617+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 68435968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:16.988815+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 68435968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:17.989041+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 68435968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:18.989230+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 68435968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:19.989438+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 68435968 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:20.989594+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 68427776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:21.989781+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 68427776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:22.989997+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 68427776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 sudo[479720]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:23.990156+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 68427776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:24.990304+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 68427776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:25.990455+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 68427776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:26.990621+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 68427776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:27.990867+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 68427776 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:28.991089+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 68403200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:29.991308+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 68403200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:30.991557+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 68403200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:31.991852+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 68403200 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:32.992048+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 69869568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:33.992217+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 69869568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:34.992449+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 69869568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:35.992636+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 69869568 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:36.992822+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 69853184 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:37.993141+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 69844992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:38.993305+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 69844992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:39.993471+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 69844992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:40.993627+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 69844992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:41.993866+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 69844992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:42.994071+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 69844992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:43.994214+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 69844992 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:44.994272+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:45.994504+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 69836800 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:46.994882+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 ms_handle_reset con 0x555e7fa10c00 session 0x555e7ebdc000
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: handle_auth_request added challenge on 0x555e87eca400
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:47.995381+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:48.995533+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:49.995671+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:50.995959+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417179 data_alloc: 218103808 data_used: 1048576
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:51.996182+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 69828608 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:52.996380+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f34000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 69820416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:53.996867+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 69820416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 537.806762695s of 537.973632812s, submitted: 36
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:54.997066+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 69820416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:55.997277+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416459 data_alloc: 218103808 data_used: 1052672
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 69820416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:56.997469+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 69820416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f35000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:57.997723+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f35000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 69820416 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:58.997937+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 69804032 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:48:59.998138+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 69763072 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:00.998382+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 69754880 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:01.998562+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350912512 unmapped: 69746688 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:02.998787+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 69722112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:03.999023+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 69722112 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.726120472s of 10.053056717s, submitted: 112
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:04.999203+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:05.999416+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:06.999662+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:08.000082+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:09.000245+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:10.000419+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:11.000578+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:12.000716+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:13.000836+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:14.000957+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:15.001115+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:16.001295+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 69705728 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:17.001578+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:18.001864+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:19.002036+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:20.002242+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:21.002419+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:22.002622+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:23.002772+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:24.002985+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:25.003138+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:26.003366+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 69697536 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:27.003596+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 69689344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:28.003883+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 69689344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:29.004055+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 69689344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:30.004190+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 69689344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:31.004395+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 69689344 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:32.004569+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350978048 unmapped: 69681152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:33.004816+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350978048 unmapped: 69681152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:34.004953+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350978048 unmapped: 69681152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:35.005148+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350978048 unmapped: 69681152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:36.005309+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350978048 unmapped: 69681152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:37.005467+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350978048 unmapped: 69681152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:38.005660+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350978048 unmapped: 69681152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:39.005797+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350978048 unmapped: 69681152 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:40.005961+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 69672960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:41.006116+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 69672960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:42.006269+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 69672960 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:43.006490+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 69664768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:44.007178+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 69664768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:45.007302+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 69664768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:46.007379+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 69664768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:47.007490+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 69664768 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:48.007670+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 69656576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:49.007806+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 69656576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:50.007917+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 69656576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:51.008062+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 69656576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:52.008190+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 69656576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:53.008311+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 69656576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:54.008444+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 69656576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:55.008583+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 69656576 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:56.008714+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 69640192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:57.008802+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 69640192 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:58.010071+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 69632000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:49:59.010209+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 69632000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:00.010413+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 69632000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:01.010608+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 69632000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:02.010776+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 69632000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:03.010910+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 69632000 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:04.011063+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351035392 unmapped: 69623808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:05.011249+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'config diff' '{prefix=config diff}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'config show' '{prefix=config show}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351035392 unmapped: 69623808 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 09:50:38 compute-0 ceph-osd[88314]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b25000/0x0/0x4ffc00000, data 0x1f552e/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:06.011388+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 09:50:38 compute-0 ceph-osd[88314]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 09:50:38 compute-0 ceph-osd[88314]: bluestore.MempoolThread(0x555e7d3b7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416779 data_alloc: 218103808 data_used: 1064960
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 69419008 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: tick
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_tickets
Oct 02 09:50:38 compute-0 ceph-osd[88314]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T09:50:07.011517+0000)
Oct 02 09:50:38 compute-0 ceph-osd[88314]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 69394432 heap: 420659200 old mem: 2845415832 new mem: 2845415832
Oct 02 09:50:38 compute-0 ceph-osd[88314]: do_command 'log dump' '{prefix=log dump}'
Oct 02 09:50:38 compute-0 sudo[480573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:38 compute-0 sudo[480573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:38 compute-0 sudo[480573]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:38 compute-0 sudo[480602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:50:38 compute-0 sudo[480602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:38 compute-0 sudo[480602]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:38 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 09:50:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3213308472' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 09:50:38 compute-0 sudo[480627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:38 compute-0 sudo[480627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:38 compute-0 sudo[480627]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:38 compute-0 sudo[480658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- lvm list --format json
Oct 02 09:50:38 compute-0 sudo[480658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 09:50:38 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 09:50:38 compute-0 podman[480777]: 2025-10-02 09:50:38.820803725 +0000 UTC m=+0.025334412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:50:39 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 09:50:39 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504234564' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 09:50:39 compute-0 podman[480777]: 2025-10-02 09:50:39.091046176 +0000 UTC m=+0.295576843 container create 22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_murdock, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:39 compute-0 ceph-mon[74477]: pgmap v3990: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:39 compute-0 ceph-mon[74477]: from='client.23563 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:39 compute-0 ceph-mon[74477]: from='client.23565 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2418259864' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:50:39 compute-0 ceph-mon[74477]: from='client.23569 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:39 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3213308472' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 09:50:39 compute-0 ceph-mon[74477]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 09:50:39 compute-0 ceph-mon[74477]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 09:50:39 compute-0 systemd[1]: Started libpod-conmon-22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b.scope.
Oct 02 09:50:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:50:39 compute-0 podman[480777]: 2025-10-02 09:50:39.774526234 +0000 UTC m=+0.979056921 container init 22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 09:50:39 compute-0 podman[480777]: 2025-10-02 09:50:39.788595863 +0000 UTC m=+0.993126530 container start 22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 09:50:39 compute-0 upbeat_murdock[480825]: 167 167
Oct 02 09:50:39 compute-0 systemd[1]: libpod-22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b.scope: Deactivated successfully.
Oct 02 09:50:39 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23579 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:39 compute-0 podman[480777]: 2025-10-02 09:50:39.93737498 +0000 UTC m=+1.141905647 container attach 22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:50:39 compute-0 podman[480777]: 2025-10-02 09:50:39.938685761 +0000 UTC m=+1.143216428 container died 22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:50:40 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 09:50:40 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986331808' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 09:50:40 compute-0 nova_compute[260603]: 2025-10-02 09:50:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:40 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1504234564' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 09:50:40 compute-0 ceph-mon[74477]: pgmap v3991: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:40 compute-0 ceph-mon[74477]: from='client.23579 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:40 compute-0 nova_compute[260603]: 2025-10-02 09:50:40.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-535fd6852b5a98eb5fba6d858764c0d9a84955ae3b7752782adf130da0fa1c2f-merged.mount: Deactivated successfully.
Oct 02 09:50:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 09:50:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367925642' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 09:50:41 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:41 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 09:50:41 compute-0 podman[480777]: 2025-10-02 09:50:41.166034708 +0000 UTC m=+2.370565375 container remove 22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_murdock, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 09:50:41 compute-0 systemd[1]: libpod-conmon-22e92007c12ee0088ae9c777afbea3a65052388dd74d02f1dc0813e6cd13fa3b.scope: Deactivated successfully.
Oct 02 09:50:41 compute-0 systemd[1]: Started Hostname Service.
Oct 02 09:50:41 compute-0 podman[481004]: 2025-10-02 09:50:41.415072696 +0000 UTC m=+0.088631580 container create 5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:50:41 compute-0 podman[481004]: 2025-10-02 09:50:41.356884648 +0000 UTC m=+0.030443552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:50:41 compute-0 systemd[1]: Started libpod-conmon-5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040.scope.
Oct 02 09:50:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a0beb84170f84c32e69eeb2afabd87d443265fb64b2d10c6cb62b68ec7a9e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a0beb84170f84c32e69eeb2afabd87d443265fb64b2d10c6cb62b68ec7a9e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a0beb84170f84c32e69eeb2afabd87d443265fb64b2d10c6cb62b68ec7a9e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a0beb84170f84c32e69eeb2afabd87d443265fb64b2d10c6cb62b68ec7a9e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:41 compute-0 nova_compute[260603]: 2025-10-02 09:50:41.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:41 compute-0 podman[481004]: 2025-10-02 09:50:41.564194873 +0000 UTC m=+0.237753787 container init 5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:50:41 compute-0 podman[481004]: 2025-10-02 09:50:41.576493388 +0000 UTC m=+0.250052272 container start 5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:50:41 compute-0 podman[481004]: 2025-10-02 09:50:41.583971441 +0000 UTC m=+0.257530325 container attach 5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 09:50:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3986331808' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 09:50:41 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1367925642' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 09:50:41 compute-0 nova_compute[260603]: 2025-10-02 09:50:41.615 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:50:41 compute-0 nova_compute[260603]: 2025-10-02 09:50:41.616 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:50:41 compute-0 nova_compute[260603]: 2025-10-02 09:50:41.616 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:50:41 compute-0 nova_compute[260603]: 2025-10-02 09:50:41.616 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 09:50:41 compute-0 nova_compute[260603]: 2025-10-02 09:50:41.617 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:50:41 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 09:50:41 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089971065' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 09:50:41 compute-0 nova_compute[260603]: 2025-10-02 09:50:41.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 09:50:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2202586296' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 09:50:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:50:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946240488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.103 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.319 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.321 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3182MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.322 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.322 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]: {
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:     "0": [
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:         {
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "devices": [
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "/dev/loop3"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             ],
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_name": "ceph_lv0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_size": "21470642176",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "name": "ceph_lv0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "tags": {
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cluster_name": "ceph",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.crush_device_class": "",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.encrypted": "0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osd_id": "0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.type": "block",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.vdo": "0"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             },
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "type": "block",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "vg_name": "ceph_vg0"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:         }
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:     ],
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:     "1": [
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:         {
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "devices": [
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "/dev/loop4"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             ],
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_name": "ceph_lv1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_size": "21470642176",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "name": "ceph_lv1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "tags": {
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cluster_name": "ceph",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.crush_device_class": "",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.encrypted": "0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osd_id": "1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.type": "block",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.vdo": "0"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             },
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "type": "block",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "vg_name": "ceph_vg1"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:         }
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:     ],
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:     "2": [
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:         {
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "devices": [
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "/dev/loop5"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             ],
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_name": "ceph_lv2",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_size": "21470642176",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "name": "ceph_lv2",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "tags": {
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.cluster_name": "ceph",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.crush_device_class": "",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.encrypted": "0",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osd_id": "2",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.type": "block",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:                 "ceph.vdo": "0"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             },
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "type": "block",
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:             "vg_name": "ceph_vg2"
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:         }
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]:     ]
Oct 02 09:50:42 compute-0 nifty_satoshi[481041]: }
Oct 02 09:50:42 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23591 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:42 compute-0 systemd[1]: libpod-5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040.scope: Deactivated successfully.
Oct 02 09:50:42 compute-0 conmon[481041]: conmon 5980e4e17fd6baa98b81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040.scope/container/memory.events
Oct 02 09:50:42 compute-0 podman[481004]: 2025-10-02 09:50:42.483471977 +0000 UTC m=+1.157030891 container died 5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:50:42 compute-0 ceph-mon[74477]: pgmap v3992: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/4089971065' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 09:50:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2202586296' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 09:50:42 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2946240488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:50:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-18a0beb84170f84c32e69eeb2afabd87d443265fb64b2d10c6cb62b68ec7a9e2-merged.mount: Deactivated successfully.
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.657 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.659 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 09:50:42 compute-0 podman[481004]: 2025-10-02 09:50:42.76050662 +0000 UTC m=+1.434065534 container remove 5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 09:50:42 compute-0 systemd[1]: libpod-conmon-5980e4e17fd6baa98b81774da755314f5135fadd1c985ebd8052ef745ba68040.scope: Deactivated successfully.
Oct 02 09:50:42 compute-0 sudo[480658]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.822 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.846 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.846 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.877 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 09:50:42 compute-0 sudo[481207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:42 compute-0 sudo[481207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:42 compute-0 sudo[481207]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.911 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 09:50:42 compute-0 nova_compute[260603]: 2025-10-02 09:50:42.942 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 09:50:42 compute-0 sudo[481238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 09:50:42 compute-0 sudo[481238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:42 compute-0 sudo[481238]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:42 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 09:50:42 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922612142' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 09:50:43 compute-0 sudo[481268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:43 compute-0 sudo[481268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:43 compute-0 sudo[481268]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:43 compute-0 sudo[481297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -- raw list --format json
Oct 02 09:50:43 compute-0 sudo[481297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:43 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 09:50:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633381236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:50:43 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 09:50:43 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284759916' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 09:50:43 compute-0 nova_compute[260603]: 2025-10-02 09:50:43.498 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 09:50:43 compute-0 nova_compute[260603]: 2025-10-02 09:50:43.510 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 09:50:43 compute-0 podman[481403]: 2025-10-02 09:50:43.541204765 +0000 UTC m=+0.028548453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:50:43 compute-0 podman[481403]: 2025-10-02 09:50:43.669795571 +0000 UTC m=+0.157139209 container create 21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 09:50:43 compute-0 ceph-mon[74477]: from='client.23591 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3922612142' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 09:50:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1633381236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 09:50:43 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1284759916' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 09:50:43 compute-0 systemd[1]: Started libpod-conmon-21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3.scope.
Oct 02 09:50:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:50:43 compute-0 nova_compute[260603]: 2025-10-02 09:50:43.806 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 09:50:43 compute-0 nova_compute[260603]: 2025-10-02 09:50:43.808 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 09:50:43 compute-0 nova_compute[260603]: 2025-10-02 09:50:43.808 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 09:50:43 compute-0 podman[481403]: 2025-10-02 09:50:43.81384971 +0000 UTC m=+0.301193438 container init 21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 09:50:43 compute-0 podman[481403]: 2025-10-02 09:50:43.824789283 +0000 UTC m=+0.312132921 container start 21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 09:50:43 compute-0 quirky_shamir[481448]: 167 167
Oct 02 09:50:43 compute-0 systemd[1]: libpod-21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3.scope: Deactivated successfully.
Oct 02 09:50:43 compute-0 podman[481403]: 2025-10-02 09:50:43.952069207 +0000 UTC m=+0.439412885 container attach 21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 09:50:43 compute-0 podman[481403]: 2025-10-02 09:50:43.953977867 +0000 UTC m=+0.441321505 container died 21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 09:50:44 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23599 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc2e74a7f1699731de59c722e42a0e7924d2cd56ffeb7cb869ccc062c414dee2-merged.mount: Deactivated successfully.
Oct 02 09:50:44 compute-0 podman[481403]: 2025-10-02 09:50:44.075226475 +0000 UTC m=+0.562570113 container remove 21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 09:50:44 compute-0 systemd[1]: libpod-conmon-21914dfaec230a89a19065f54d718e237bb431b5a7005df09e1cc07aa2e2c3e3.scope: Deactivated successfully.
Oct 02 09:50:44 compute-0 podman[481529]: 2025-10-02 09:50:44.311279137 +0000 UTC m=+0.071834104 container create 4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:50:44 compute-0 podman[481529]: 2025-10-02 09:50:44.272588319 +0000 UTC m=+0.033143296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 09:50:44 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 09:50:44 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513466169' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 09:50:44 compute-0 systemd[1]: Started libpod-conmon-4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c.scope.
Oct 02 09:50:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5cb16bc4ac0444481037f8d3ec68cdc55a23358ca2596b3d8b09899ff3bdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5cb16bc4ac0444481037f8d3ec68cdc55a23358ca2596b3d8b09899ff3bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5cb16bc4ac0444481037f8d3ec68cdc55a23358ca2596b3d8b09899ff3bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5cb16bc4ac0444481037f8d3ec68cdc55a23358ca2596b3d8b09899ff3bdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 09:50:44 compute-0 podman[481529]: 2025-10-02 09:50:44.56175081 +0000 UTC m=+0.322305797 container init 4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:50:44 compute-0 podman[481529]: 2025-10-02 09:50:44.581146567 +0000 UTC m=+0.341701534 container start 4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:50:44 compute-0 podman[481529]: 2025-10-02 09:50:44.586436232 +0000 UTC m=+0.346991189 container attach 4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 09:50:44 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23603 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:44 compute-0 ceph-mon[74477]: pgmap v3993: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:44 compute-0 ceph-mon[74477]: from='client.23599 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:44 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3513466169' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 09:50:45 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:45 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23605 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:45 compute-0 compassionate_cray[481553]: {
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:     "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "osd_id": 2,
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "type": "bluestore"
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:     },
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:     "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "osd_id": 1,
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "type": "bluestore"
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:     },
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:     "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "osd_id": 0,
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:         "type": "bluestore"
Oct 02 09:50:45 compute-0 compassionate_cray[481553]:     }
Oct 02 09:50:45 compute-0 compassionate_cray[481553]: }
Oct 02 09:50:45 compute-0 systemd[1]: libpod-4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c.scope: Deactivated successfully.
Oct 02 09:50:45 compute-0 systemd[1]: libpod-4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c.scope: Consumed 1.086s CPU time.
Oct 02 09:50:45 compute-0 podman[481529]: 2025-10-02 09:50:45.665464645 +0000 UTC m=+1.426019602 container died 4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 09:50:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 09:50:45 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016890733' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 09:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8e5cb16bc4ac0444481037f8d3ec68cdc55a23358ca2596b3d8b09899ff3bdc-merged.mount: Deactivated successfully.
Oct 02 09:50:45 compute-0 podman[481529]: 2025-10-02 09:50:45.766523332 +0000 UTC m=+1.527078289 container remove 4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 09:50:45 compute-0 systemd[1]: libpod-conmon-4c79ca5adc1906a569dfdafbed608934b414d67c8ba59f0980bb821118b5936c.scope: Deactivated successfully.
Oct 02 09:50:45 compute-0 nova_compute[260603]: 2025-10-02 09:50:45.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:45 compute-0 sudo[481297]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 09:50:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:45 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 09:50:45 compute-0 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 5dd0e416-673f-4652-925f-e40a6c376a67 does not exist
Oct 02 09:50:45 compute-0 ceph-mgr[74774]: [progress WARNING root] complete: ev 62142fde-afae-4b3e-bcc5-9925c0f13201 does not exist
Oct 02 09:50:45 compute-0 ceph-mon[74477]: from='client.23603 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:45 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3016890733' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 09:50:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:45 compute-0 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct 02 09:50:45 compute-0 sudo[481751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 09:50:45 compute-0 sudo[481751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:45 compute-0 sudo[481751]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:46 compute-0 sudo[481810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 09:50:46 compute-0 sudo[481810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 09:50:46 compute-0 sudo[481810]: pam_unix(sudo:session): session closed for user root
Oct 02 09:50:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 09:50:46 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/121166550' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23611 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:46 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:46 compute-0 nova_compute[260603]: 2025-10-02 09:50:46.804 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:46 compute-0 nova_compute[260603]: 2025-10-02 09:50:46.805 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:46 compute-0 ceph-mon[74477]: pgmap v3994: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:46 compute-0 ceph-mon[74477]: from='client.23605 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:46 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/121166550' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23613 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 09:50:46 compute-0 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 09:50:46 compute-0 nova_compute[260603]: 2025-10-02 09:50:46.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:47 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 09:50:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1382237480' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 09:50:47 compute-0 nova_compute[260603]: 2025-10-02 09:50:47.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:47 compute-0 nova_compute[260603]: 2025-10-02 09:50:47.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 09:50:47 compute-0 nova_compute[260603]: 2025-10-02 09:50:47.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 09:50:47 compute-0 nova_compute[260603]: 2025-10-02 09:50:47.608 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 09:50:47 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 02 09:50:47 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2122896597' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 09:50:47 compute-0 ceph-mon[74477]: from='client.23611 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:47 compute-0 ceph-mon[74477]: from='client.23613 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/1382237480' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 09:50:47 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2122896597' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 09:50:48 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23619 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:48 compute-0 nova_compute[260603]: 2025-10-02 09:50:48.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 09:50:48 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23621 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:48 compute-0 ceph-mon[74477]: pgmap v3995: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 09:50:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3640354985' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 09:50:49 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 02 09:50:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885335738' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 09:50:49 compute-0 ovs-appctl[482752]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 09:50:49 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 02 09:50:49 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103822057' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 09:50:49 compute-0 ovs-appctl[482759]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 09:50:49 compute-0 ceph-mon[74477]: from='client.23619 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:49 compute-0 ceph-mon[74477]: from='client.23621 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 09:50:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/3640354985' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 09:50:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/885335738' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 09:50:49 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/2103822057' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 09:50:49 compute-0 ovs-appctl[482766]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 09:50:50 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23629 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:50 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 09:50:50 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832418817' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:50:50 compute-0 nova_compute[260603]: 2025-10-02 09:50:50.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:51 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:51 compute-0 ceph-mon[74477]: pgmap v3996: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:51 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/832418817' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 09:50:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 09:50:51 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 02 09:50:51 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437170415' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 09:50:51 compute-0 nova_compute[260603]: 2025-10-02 09:50:51.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 09:50:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 02 09:50:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592707855' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 09:50:52 compute-0 ceph-mon[74477]: from='client.23629 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:52 compute-0 ceph-mon[74477]: pgmap v3997: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct 02 09:50:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/437170415' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 09:50:52 compute-0 ceph-mon[74477]: from='client.? 192.168.122.100:0/592707855' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 09:50:52 compute-0 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 02 09:50:52 compute-0 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1247291033' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 09:50:53 compute-0 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23639 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 09:50:53 compute-0 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
